WO2021107169A1 - Method for interactively learning and updating deep learning network in edge-cloud system - Google Patents


Info

Publication number
WO2021107169A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
learning
deep learning
cloud server
learning network
Prior art date
Application number
PCT/KR2019/016334
Other languages
French (fr)
Korean (ko)
Inventor
이상설
장성준
박종희
Original Assignee
전자부품연구원 (Korea Electronics Technology Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 전자부품연구원 (Korea Electronics Technology Institute)
Publication of WO2021107169A1 publication Critical patent/WO2021107169A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/045 — Combinations of networks
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 — Digital computers in general; Data processing equipment in general
    • G06F15/16 — Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163 — Interprocessor communication
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 — Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 — Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 — Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 — Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0816 — Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
    • H04L9/0852 — Quantum cryptography
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2209/00 — Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
    • H04L2209/30 — Compression, e.g. Merkle-Damgard construction

Definitions

  • the present invention relates to image processing and SoC (System on Chip) technology using artificial intelligence technology, and more particularly, to a method for mutual learning and updating of a deep learning network between an edge device and a cloud server.
  • At present, edge devices simply perform inference. Recently, however, attempts have been made to bring edge computing into the learning of deep learning networks as well.
  • The present invention was devised to solve the above problems. Its object is to provide a method for updating the deep learning network of the entire system by transmitting, from an edge device to a cloud server, the result of deep learning performed on a newly input image.
  • According to an embodiment of the present invention, a deep learning network mutual learning method includes: extracting, by an edge device, feature data from newly input image data; training, by the edge device, its deep learning network with the extracted feature data; compressing, by the edge device, the extracted feature data and the weight data of the deep learning network updated by the training into training data; encrypting, by the edge device, the compressed training data; and transmitting, by the edge device, the encrypted training data to a cloud server.
  • The compression step may perform lossless compression on the feature data and the updated weight data.
  • The encryption step may perform quantum encryption on the compressed training data.
  • The method may further include: decrypting, by the cloud server, the received encrypted training data; decompressing, by the cloud server, the decrypted training data; and verifying, by the cloud server, the training data.
  • The method may further include additionally updating, by the cloud server, the updated weight data constituting the training data and the configuration of the deep learning network.
  • The method may further include updating, by the cloud server, its own deep learning network according to the update result.
  • The method may further include transmitting, by the cloud server, update information containing the update result to the edge devices.
  • According to another aspect of the present invention, a device is provided that includes: a processor that extracts feature data from newly input image data, trains a deep learning network with the extracted feature data, compresses the extracted feature data and the updated weight data into training data, and encrypts the compressed training data; and a communication unit that transmits the encrypted training data to a cloud server.
  • Accordingly, in a network system in which edge devices and a cloud server are connected, the edge devices can learn from newly input data, so that the deep learning network is mutually learned and updated across the entire system.
  • FIG. 1 illustrates an edge-cloud system to which the present invention is applicable
  • FIG. 2 illustrates the operation of the edge device shown in FIG. 1
  • FIG. 3 illustrates the operation of the cloud server shown in FIG. 1
  • FIG. 4 shows the data transmitted and received between an edge device and the cloud server
  • FIG. 5 illustrates deep learning acceleration hardware configuration information
  • FIG. 6 illustrates deep learning network configuration information
  • FIG. 7 illustrates training data
  • FIG. 8 illustrates Accurate Trace information
  • FIG. 9 conceptually illustrates the configuration of the edge devices and the cloud server shown in FIG. 1
  • a method for mutually learning, sharing, and updating a deep learning network between an edge device and a cloud server is provided in a system in which an edge device and a cloud server are connected through a network.
  • Specifically, the result of learning performed on a newly input image at an edge device is transmitted to a central cloud server for verification and then propagated to the other edge devices.
  • FIG. 1 is a diagram illustrating an edge-cloud system to which the present invention is applicable.
  • As shown in FIG. 1, the edge devices 110-1, 110-2, ..., 110-n and the cloud server 200 are connected so as to communicate with one another through a network.
  • The network may be wired or wireless; its specific type does not matter.
  • In this system, the deep learning network is mutually learned between the edge devices 110-1, 110-2, ..., 110-n and the cloud server 200, and the learning results are transmitted to update the deep learning network.
  • That is, in addition to the cloud server 200, the edge devices 110-1, 110-2, ..., 110-n are configured to be able to train the deep learning network on new inputs.
  • After the learning result is verified by the cloud server 200, it is shared with the edge devices 110-1, 110-2, ..., 110-n, so that the deep learning network is updated on every device constituting the edge-cloud system.
  • The procedures performed by the edge devices 110-1, 110-2, ..., 110-n in this process are shown in FIG. 2.
  • For ease of understanding and explanation, FIG. 2 assumes that edge device-1 (110-1) performs the learning; the same procedure applies to the other edge devices 110-2, ..., 110-n.
  • To train the deep learning network on a new input image, edge device-1 (110-1) first extracts feature data from the input image data.
  • Edge device-1 (110-1) then uses the extracted feature data to train the deep learning network it holds, a low-speed learning & inference engine, and updates the weight data.
  • Edge device-1 (110-1) then compresses the training data.
  • The training data includes the feature data extracted from the input image data and the updated weight data obtained from the training.
  • Data compression mitigates the power constraints of edge device-1 (110-1) and prevents network congestion.
  • Data compression can be lossy or lossless; since training data is important, lossless encoding is preferable.
  • Next, edge device-1 (110-1) encrypts the compressed training data. Since image data may contain personal information, the training data must be encrypted before transmission.
  • Quantum encryption can be applied as the encryption method.
  • The secret key for quantum encryption is continuously updated.
  • Edge device-1 (110-1) then transmits the encrypted training data to the cloud server 200.
  • The cloud server 200 verifies the training data received from edge device-1 (110-1) and updates the deep learning network of the edge-cloud system as a whole. The procedures performed by the cloud server 200 for this purpose are shown in FIG. 3.
  • The cloud server 200 first decrypts the encrypted training data received from edge device-1 (110-1), using quantum decryption.
  • Next, the cloud server 200 decompresses the decrypted training data, using lossless decoding, to recover the training data.
  • Then, using its own deep learning network, a high-speed learning & inference engine, the cloud server 200 verifies the updated weight data against the feature data used by edge device-1 (110-1).
  • Thereafter, the cloud server 200 updates its own deep learning network, losslessly compresses the update information (weight data and network configuration information), applies quantum encryption, and delivers the result to the edge devices 110-1, 110-2, ..., 110-n.
  • The edge devices 110-1, 110-2, ..., 110-n then update their own deep learning networks through the procedure shown in the lower part of FIG. 2: they quantum-decrypt the encrypted update information received from the cloud server 200 and then losslessly decode it to restore the update information.
  • Next, the edge devices 110-1, 110-2, ..., 110-n update their deep learning networks according to the restored update information.
  • FIG. 4 shows the data exchanged between the edge devices 110-1, 110-2, ..., 110-n and the cloud server 200, expressed from the edge devices' perspective.
  • As shown, the edge devices 110-1, 110-2, ..., 110-n receive deep learning acceleration hardware configuration information and deep learning network configuration information from the cloud server 200.
  • As shown in FIG. 5, the deep learning acceleration hardware configuration information includes the size, channel count, and bit width of the input feature map; information on the PEs (Processing Elements); the operation frequency; information on the buffers that store the input and output feature maps; and information on the external memory and the transmission standard.
  • the deep learning network configuration information includes layer processing information, kernel information, channel information, and the like of the deep learning network, as shown in FIG. 6 .
  • The edge devices 110-1, 110-2, ..., 110-n also transmit the training data and Accurate Trace information to the cloud server 200.
  • The training data, the result of learning by the edge devices, includes GT (Ground Truth) data, feature data, and the updated weight data of the deep learning network, as shown in FIG. 7.
  • The Accurate Trace information provided by the edge devices 110-1, 110-2, ..., 110-n is illustrated in FIG. 8. It is referred to when deciding whether to transmit training data, by predicting the performance improvement expected from the new learning.
  • FIG. 9 conceptually illustrates the configuration of the edge devices 110-1, 110-2, ..., 110-n and the cloud server 200 shown in FIG. 1.
  • As shown, the devices 110-1, 110-2, ..., 110-n and 200 are implemented as computing devices of various sizes, each including a communication unit 310, a processor 320, and a storage unit 330.
  • The communication unit 310 is a communication means for exchanging data with the other devices.
  • the processor 320 performs learning and inference using a deep learning network, and performs compression/expansion and encryption/decryption of transmission/reception data.
  • the storage unit 330 provides a storage space necessary for the processor 320 to function and operate.
  • In the embodiments described above, data compression and encryption are applied when transmitting data for deep learning network updates in the edge-cloud system; network configuration information is delivered for flexible control of the deep learning accelerator at the edge; and whether to transmit data is decided by predicting the performance of the newly learned data.
  • As a result, edge-to-cloud transmission of training data becomes possible, security is preserved while network resource usage is reduced, and the edge device's deep learning network can be flexibly reconfigured and maintained.
  • Furthermore, since the edge device has hardware capable of low-speed learning on new inputs, the approach can be applied to other devices connected to the network, and the deep learning network can be remotely controlled and updated for new objects.
  • The technical idea of the present invention may also be applied to a computer-readable recording medium containing a computer program that performs the functions of the apparatus and method according to this embodiment.
  • the technical ideas according to various embodiments of the present invention may be implemented in the form of computer-readable codes recorded on a computer-readable recording medium.
  • the computer-readable recording medium may be any data storage device readable by the computer and capable of storing data.
  • the computer-readable recording medium may be a ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical disk, hard disk drive, or the like.
  • the computer-readable code or program stored in the computer-readable recording medium may be transmitted through a network connected between computers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract

Provided is a method in which an edge device transmits the result of deep learning performed on a newly input image to a cloud server, thereby updating the deep learning network of the entire system. A method for interactively learning a deep learning network according to an embodiment of the present invention comprises the steps of: extracting, by an edge device, feature data from newly input image data; training, by the edge device, a deep learning network with the extracted feature data; compressing, by the edge device, the extracted feature data and the weight data of the deep learning network updated by the training into training data; encrypting, by the edge device, the compressed training data; and transmitting, by the edge device, the encrypted training data to a cloud server. Accordingly, in a network system in which edge devices and a cloud server are connected, the edge devices can learn from newly input data, so that the deep learning network is interactively learned and updated across the entire system.

Description

Method for mutually learning and updating a deep learning network in an edge-cloud system
The present invention relates to image processing and SoC (System on Chip) technology using artificial intelligence, and more particularly, to a method for mutually learning and updating a deep learning network between an edge device and a cloud server.
In an edge-cloud system consisting of edge devices and a cloud server, when images are processed with a deep learning network, it is common to perform learning only on the cloud server, where large-scale resources are available offline; learning of the deep learning network on the edge devices is rarely considered.
In other words, edge devices currently perform only inference. Recently, however, attempts have been made to bring edge computing into the learning of deep learning networks as well.
Accordingly, there is a need for a way for an edge device to handle learning of the deep learning network in addition to inference, and to share the results with the cloud server and the other edge devices.
The present invention was devised to solve the above problems. Its object is to provide a method for updating the deep learning network of the entire system by transmitting, from an edge device to a cloud server, the result of deep learning performed on a newly input image.
According to an embodiment of the present invention for achieving this object, a deep learning network mutual learning method includes: extracting, by an edge device, feature data from newly input image data; training, by the edge device, its deep learning network with the extracted feature data; compressing, by the edge device, the extracted feature data and the weight data of the deep learning network updated by the training into training data; encrypting, by the edge device, the compressed training data; and transmitting, by the edge device, the encrypted training data to a cloud server.
The compression step may perform lossless compression on the feature data and the updated weight data.
The encryption step may perform quantum encryption on the compressed training data.
The method according to the present invention may further include: decrypting, by the cloud server, the received encrypted training data; decompressing, by the cloud server, the decrypted training data; and verifying, by the cloud server, the training data.
The method may further include additionally updating, by the cloud server, the updated weight data constituting the training data and the configuration of the deep learning network.
The method may further include updating, by the cloud server, its own deep learning network according to the update result.
The method may further include transmitting, by the cloud server, update information containing the update result to the edge devices.
According to another aspect of the present invention, a device is provided that includes: a processor that extracts feature data from newly input image data, trains a deep learning network with the extracted feature data, compresses the extracted feature data and the updated weight data into training data, and encrypts the compressed training data; and a communication unit that transmits the encrypted training data to a cloud server.
As described above, according to embodiments of the present invention, in a network system in which edge devices and a cloud server are connected, the edge devices can learn from newly input data, so that the deep learning network is mutually learned and updated across the entire system.
In addition, according to embodiments of the present invention, transmitting the training data compressed and encrypted protects personal information and reduces the network load and the power consumption of the edge devices.
FIG. 1 illustrates an edge-cloud system to which the present invention is applicable;
FIG. 2 illustrates the operation of the edge device shown in FIG. 1;
FIG. 3 illustrates the operation of the cloud server shown in FIG. 1;
FIG. 4 shows the data transmitted and received between an edge device and the cloud server;
FIG. 5 illustrates deep learning acceleration hardware configuration information;
FIG. 6 illustrates deep learning network configuration information;
FIG. 7 illustrates training data;
FIG. 8 illustrates Accurate Trace information;
FIG. 9 conceptually illustrates the configuration of the edge devices and the cloud server shown in FIG. 1.
Hereinafter, the present invention will be described in more detail with reference to the drawings.
An embodiment of the present invention presents a method for mutually learning, sharing, and updating a deep learning network between an edge device and a cloud server in a system in which the edge device and the cloud server are connected through a network.
Specifically, the result of learning performed on a newly input image at an edge device is transmitted to a central cloud server for verification and then propagated to the other edge devices.
FIG. 1 illustrates an edge-cloud system to which the present invention is applicable. As shown in FIG. 1, the edge devices 110-1, 110-2, ..., 110-n and the cloud server 200 are connected so as to communicate with one another through a network. The network may be wired or wireless; its specific type does not matter.
In the edge-cloud system to which the present invention is applicable, the deep learning network is mutually learned between the edge devices 110-1, 110-2, ..., 110-n and the cloud server 200, and the learning results are transmitted to update the deep learning network.
That is, in this system, in addition to the cloud server 200, the edge devices 110-1, 110-2, ..., 110-n are configured to be able to train the deep learning network on new inputs.
After the learning result is verified by the cloud server 200, it is shared with the edge devices 110-1, 110-2, ..., 110-n, so that the deep learning network is updated on every device constituting the edge-cloud system.
The procedures performed by the edge devices 110-1, 110-2, ..., 110-n in this process are shown in FIG. 2. For ease of understanding and explanation, FIG. 2 assumes that edge device-1 (110-1) performs the learning; the same procedure applies to the other edge devices 110-2, ..., 110-n.
As shown, to train the deep learning network on a new input image, edge device-1 (110-1) first extracts feature data from the input image data.
Next, edge device-1 (110-1) uses the extracted feature data to train the deep learning network it holds, a low-speed learning & inference engine, and updates the weight data.
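These two edge-side steps can be sketched in pure Python. The toy model, feature rule, and learning rate below are placeholder assumptions, since the patent specifies no concrete algorithm:

```python
def extract_features(image_rows):
    # Toy feature extraction: mean intensity per row of a grayscale image.
    # A real edge device would use the convolutional front-end of its network.
    return [sum(row) / len(row) for row in image_rows]

def train_step(weights, features, target, lr=0.01):
    # One gradient-descent step on a linear model -- an illustrative
    # stand-in for the on-device "low-speed learning & inference engine".
    pred = sum(w * f for w, f in zip(weights, features))
    err = pred - target
    return [w - lr * err * f for w, f in zip(weights, features)]

image = [[0, 64, 128], [128, 192, 255]]      # hypothetical 2x3 grayscale input
features = extract_features(image)
weights = [0.0, 0.0]                         # current on-device weights
updated = train_step(weights, features, target=1.0)
```

The `updated` weights, together with `features`, make up the training data that the subsequent compression and encryption steps operate on.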
Edge device-1 (110-1) then compresses the training data. The training data includes the feature data extracted from the input image data and the updated weight data obtained from the training.
Data compression mitigates the power constraints of edge device-1 (110-1) and prevents network congestion.
Data compression can be lossy or lossless; since training data is important, lossless encoding is preferable.
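A lossless round trip of such a payload can be sketched with Python's standard zlib (DEFLATE) codec; the field names and values are hypothetical, since the patent names no specific codec or schema:

```python
import json
import zlib

# Hypothetical training payload: extracted features plus updated weights.
training_data = {
    "features": [64.0, 191.7, 12.3],
    "weights": [0.64, 1.92, -0.05],
}

raw = json.dumps(training_data).encode("utf-8")
compressed = zlib.compress(raw, level=9)          # lossless DEFLATE
restored = json.loads(zlib.decompress(compressed))

assert restored == training_data                  # bit-exact round trip
```

Because the codec is lossless, the server recovers the features and weights exactly as the edge device produced them.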
Next, edge device-1 (110-1) encrypts the compressed training data. Since image data may contain personal information, the training data must be encrypted before transmission.
Quantum encryption can be applied as the encryption method. The secret key for quantum encryption is continuously updated.
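True quantum encryption cannot be reproduced in ordinary software, so the sketch below substitutes a toy XOR stream cipher keyed by a shared secret, with the secret standing in for the continuously updated quantum key. It is illustrative only, not a secure cipher:

```python
import hashlib

def keystream(secret: bytes, length: int) -> bytes:
    # Expand the shared secret into a keystream by hashing a counter.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_crypt(data: bytes, secret: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    ks = keystream(secret, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"session-key-0001"      # rotated continuously in the real system
payload = b"compressed training data"
ciphertext = xor_crypt(payload, secret)
```

Rotating `secret` per session mimics the patent's continual key update; applying `xor_crypt` again with the same secret recovers the payload.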
Edge device-1 (110-1) then transmits the encrypted training data to the cloud server 200.
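The patent does not specify a wire format for this transfer; as one illustrative possibility, the encrypted blob could be sent with a 4-byte length prefix so the receiver knows where the message ends:

```python
import struct

def frame(blob: bytes) -> bytes:
    # Prepend a big-endian 4-byte length before the encrypted training data.
    return struct.pack(">I", len(blob)) + blob

def unframe(msg: bytes) -> bytes:
    # Read the length prefix, then extract exactly that many payload bytes.
    (length,) = struct.unpack(">I", msg[:4])
    body = msg[4:4 + length]
    if len(body) != length:
        raise ValueError("truncated message")
    return body

msg = frame(b"encrypted training data")
```

The same framing would apply in the reverse direction, when the cloud server broadcasts encrypted update information back to the edge devices.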
The cloud server 200 verifies the training data received from edge device-1 (110-1) and updates the deep learning network of the edge-cloud system as a whole. The procedures performed by the cloud server 200 for this purpose are shown in FIG. 3.
As shown, the cloud server 200 first decrypts the encrypted training data received from edge device-1 (110-1), using quantum decryption.
Next, the cloud server 200 decompresses the decrypted training data, using lossless decoding, to recover the training data.
Then, using its own deep learning network, a high-speed learning & inference engine, the cloud server 200 verifies the updated weight data against the feature data used by edge device-1 (110-1).
During verification, in addition to the weight data constituting the training data, additional updates may be made to the configuration of the deep learning network.
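A hedged sketch of such a verification gate: the server re-evaluates the edge-trained weights on the submitted (feature, GT) pairs and accepts them only if accuracy does not degrade. The linear model and accept-if-not-worse rule are illustrative stand-ins for the server's high-speed engine:

```python
def accuracy(weights, samples):
    # Fraction of samples whose sign prediction matches the GT label.
    correct = 0
    for features, label in samples:
        score = sum(w * f for w, f in zip(weights, features))
        correct += int((score > 0) == (label > 0))
    return correct / len(samples)

def verify_update(old_w, new_w, samples, margin=0.0):
    # Accept edge-trained weights only if they are at least as accurate.
    return accuracy(new_w, samples) >= accuracy(old_w, samples) + margin

samples = [([1.0, 0.5], 1), ([-1.0, 0.2], -1)]   # hypothetical (features, GT)
old_w, new_w = [0.0, 0.0], [1.0, 0.1]
accepted = verify_update(old_w, new_w, samples)
```

Only weights that pass such a gate would be folded into the system-wide update and propagated to the other edge devices.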
이후, 클라우드 서버(200)는 자신의 딥러닝 네트워크를 업데이트 한다. 그리고, 클라우드 서버(200)는 업데이트 정보(웨이트 데이터, 네트워크 구성정보)를 무손실 압축한 후 양자 암호화하여, 엣지 디바이스들(110-1, 110-2, ... 100-n)에 전달한다.Thereafter, the cloud server 200 updates its own deep learning network. Then, the cloud server 200 losslessly compresses the update information (weight data, network configuration information), performs quantum encryption, and delivers it to the edge devices 110-1, 110-2, ... 100-n.
그러면, 엣지 디바이스들(110-1, 110-2, ... 100-n)은 도 2의 하부에 나타난 절차를 통해 자신의 딥러닝 네트워크를 업데이트 한다. 구체적으로, 엣지 디바이스들(110-1, 110-2, ... 100-n)은 클라우드 서버(200)로부터 수신한 암호화된 업데이트 정보를 양자 복호화한 후 무손실 복호화하여 업데이트 정보를 복원한다.Then, the edge devices 110-1, 110-2, ... 100-n update their deep learning networks through the procedure shown in the lower part of FIG. Specifically, the edge devices 110 - 1 , 110 - 2 , ... 100 - n perform quantum decryption on the encrypted update information received from the cloud server 200 , and then perform lossless decryption to restore the update information.
다음, 엣지 디바이스들(110-1, 110-2, ... 100-n)은 복원된 업데이트 정보에 따라 자신의 딥러닝 네트워크를 업데이트한다.Next, the edge devices 110-1, 110-2, ... 100-n update their deep learning networks according to the restored update information.
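The distribution of restored update information to the edge devices can be sketched as follows; the dictionary-based device and update representations are hypothetical simplifications for illustration, not the patent's data formats.

```python
import copy

def broadcast_update(update_info: dict, edge_devices: list) -> None:
    """Cloud pushes the restored update information (weight data plus
    network configuration) to every edge device, which applies it to
    its local deep learning network."""
    for device in edge_devices:
        # Deep-copy so one device's later changes cannot affect another.
        device["weights"] = copy.deepcopy(update_info["weights"])
        device["network_config"] = copy.deepcopy(update_info["config"])

devices = [{"id": i} for i in range(3)]
broadcast_update({"weights": [0.1, 0.2], "config": {"layers": 4}}, devices)
assert all(d["weights"] == [0.1, 0.2] for d in devices)
```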
FIG. 4 shows the data exchanged between the edge devices 110-1, 110-2, ..., 110-n and the cloud server 200, presented from the perspective of the edge devices.
As shown, the edge devices 110-1, 110-2, ..., 110-n receive deep learning acceleration hardware configuration information and deep learning network configuration information from the cloud server 200.
As shown in FIG. 5, the deep learning acceleration hardware configuration information includes the size, channel count, and bit width of the input feature map; information on the processing elements (PEs); the operation frequency; information on the buffers that store the input and output feature maps; and information on the external memory and its transfer standard.
As shown in FIG. 6, the deep learning network configuration information includes layer processing information, kernel information, channel information, and the like for the deep learning network.
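The two kinds of configuration information can be represented, for illustration, as simple data structures; the field names below are assumptions derived from the descriptions of FIG. 5 and FIG. 6, not identifiers from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class AcceleratorConfig:
    # Fields follow the FIG. 5 description; names are illustrative.
    input_fmap_size: tuple        # (height, width) of the input feature map
    input_fmap_channels: int
    input_fmap_bits: int          # bit width per element
    num_pes: int                  # processing elements
    operation_frequency_mhz: int
    fmap_buffer_bytes: int        # input/output feature map buffers
    external_memory: str          # external memory / transfer standard

@dataclass
class NetworkConfig:
    # Fields follow the FIG. 6 description: per-layer processing,
    # kernel, and channel information.
    layers: list = field(default_factory=list)

cfg = AcceleratorConfig((224, 224), 3, 8, 256, 400, 1 << 20, "LPDDR4")
net = NetworkConfig(layers=[{"kernel": 3, "channels": 64}])
assert cfg.num_pes == 256
```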
In addition, as shown, the edge devices 110-1, 110-2, ..., 110-n transmit learning data and accuracy trace information to the cloud server 200.
The learning data is the result of learning performed by the edge devices 110-1, 110-2, ..., 110-n; as shown in FIG. 7, it includes ground truth (GT) data, feature data, and the weight data of the updated deep learning network.
FIG. 8 illustrates the accuracy trace information provided by the edge devices 110-1, 110-2, ..., 110-n. This information predicts the performance improvement expected from new learning and is referenced when deciding whether to transmit the learning data.
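One minimal way to use such an accuracy trace for the transmission decision is sketched below; the threshold value and the gain computation are assumptions for illustration, since the description does not specify the exact criterion.

```python
def should_transmit(trace: list, threshold: float = 0.01) -> bool:
    """Transmit the newly learned data only if the predicted accuracy
    gain over the previous entry in the trace exceeds a threshold.
    The 0.01 default is an illustrative assumption."""
    if len(trace) < 2:
        return True  # no history yet; send by default
    predicted_gain = trace[-1] - trace[-2]
    return predicted_gain > threshold

assert should_transmit([0.80, 0.85]) is True    # clear improvement
assert should_transmit([0.85, 0.851]) is False  # negligible gain
```

This keeps the bandwidth-saving intent of the accuracy trace: updates that are predicted to change performance only marginally are never sent to the cloud.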
FIG. 9 conceptually illustrates the configuration of the edge devices 110-1, 110-2, ..., 110-n and the cloud server 200 shown in FIG. 1.
As shown, the devices 110-1, 110-2, ..., 110-n and 200 according to embodiments of the present invention are implemented as computing devices, large or small, each including a communication unit 310, a processor 320, and a storage unit 330.
The communication unit 310 is a communication means connected so as to communicate with other devices in order to transmit and receive data.
The processor 320 performs learning and inference using the deep learning network, and performs compression/decompression and encryption/decryption of transmitted and received data.
The storage unit 330 provides the storage space necessary for the processor 320 to function and operate.
So far, a method for the mutual learning and updating of deep learning networks in an edge-cloud system has been described in detail with reference to preferred embodiments.
In the above embodiments, data compression and encryption are applied when transmitting data for deep learning network updates in the edge-cloud system; network configuration information is delivered for flexible control of the deep learning accelerator at the edge; and whether to transmit data can be decided by predicting the performance of newly learned data.
According to embodiments of the present invention, edge-cloud transmission of learning data becomes possible, security is strengthened while network resource usage is reduced, and flexible reconfiguration and maintenance of the deep learning network on the edge device become possible.
By applying hardware capable of learning from low-rate new inputs on the edge device, remote control of the deep learning network and updates such as new objects become possible, in a form applicable to other devices connected to the network.
Meanwhile, the technical idea of the present invention may of course also be applied to a computer-readable recording medium containing a computer program for performing the functions of the apparatus and method according to the present embodiments. The technical ideas according to various embodiments of the present invention may also be implemented in the form of computer-readable code recorded on a computer-readable recording medium. The computer-readable recording medium may be any data storage device that can be read by a computer and can store data; for example, it may be a ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical disk, hard disk drive, or the like. Computer-readable code or programs stored on the computer-readable recording medium may also be transmitted over a network connecting computers.
In addition, although preferred embodiments of the present invention have been illustrated and described above, the present invention is not limited to the specific embodiments described above. Various modifications may be made by those of ordinary skill in the art to which the present invention pertains without departing from the gist of the invention as claimed in the claims, and such modifications should not be understood separately from the technical spirit or prospect of the present invention.

Claims (8)

  1. A deep learning network mutual learning method, comprising:
    extracting, by an edge device, feature data from newly input image data;
    performing, by the edge device, learning of a deep learning network with the extracted feature data;
    compressing, by the edge device, the extracted feature data and the weight data of the deep learning network updated by the learning, as learning data;
    encrypting, by the edge device, the compressed learning data; and
    transmitting, by the edge device, the encrypted learning data to a cloud server.
  2. The method according to claim 1, wherein the compressing comprises performing lossless compression on the feature data and the updated weight data.
  3. The method according to claim 2, wherein the encrypting comprises performing quantum encryption on the compressed learning data.
  4. The method according to claim 1, further comprising:
    decrypting, by the cloud server, the received encrypted learning data;
    decompressing, by the cloud server, the decrypted compressed learning data; and
    verifying, by the cloud server, the learning data.
  5. The method according to claim 4, further comprising additionally updating, by the cloud server, the updated weight data constituting the learning data and the configuration of the deep learning network.
  6. The method according to claim 5, further comprising updating, by the cloud server, its own deep learning network according to the result of the updating.
  7. The method according to claim 6, further comprising transmitting, by the cloud server, update information containing the update result to the edge devices.
  8. A device comprising:
    a processor configured to extract feature data from newly input image data, perform learning of a deep learning network with the extracted feature data, compress the extracted feature data and the weight data of the deep learning network updated by the learning as learning data, and encrypt the compressed learning data; and
    a communication unit configured to transmit the encrypted learning data to a cloud server.
PCT/KR2019/016334 2019-11-26 2019-11-26 Method for interactively learning and updating deep learning network in edge-cloud system WO2021107169A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0152980 2019-11-26
KR1020190152980A KR102512684B1 (en) 2019-11-26 2019-11-26 Mutual Learning and Updating Method for Deep Learning Network in Edge-Cloud System

Publications (1)

Publication Number Publication Date
WO2021107169A1 true WO2021107169A1 (en) 2021-06-03

Family

ID=76129656

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/016334 WO2021107169A1 (en) 2019-11-26 2019-11-26 Method for interactively learning and updating deep learning network in edge-cloud system

Country Status (2)

Country Link
KR (1) KR102512684B1 (en)
WO (1) WO2021107169A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180131836A (en) * 2017-06-01 2018-12-11 한국전자통신연구원 Parameter server and method for sharing distributed deep learning parameter using the same
KR20190064862A (en) * 2017-12-01 2019-06-11 주식회사 코이노 Client terminal that improves the efficiency of machine learning through cooperation with a server and a machine learning system including the same
KR20190083127A (en) * 2018-01-03 2019-07-11 한국과학기술원 System and method for trainning convolution neural network model using image in terminal cluster
CN110399742A (en) * 2019-07-29 2019-11-01 深圳前海微众银行股份有限公司 A kind of training, prediction technique and the device of federation's transfer learning model
US20190340493A1 (en) * 2018-05-01 2019-11-07 Semiconductor Components Industries, Llc Neural network accelerator

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101957648B1 (en) 2018-11-16 2019-03-12 주식회사 아이코어이앤씨 The module identification and assistant system based on artificial intelligence


Also Published As

Publication number Publication date
KR102512684B1 (en) 2023-03-23
KR20210064588A (en) 2021-06-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19953723

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19953723

Country of ref document: EP

Kind code of ref document: A1