WO2022124729A1 - Apparatus and method for signal transmission in a wireless communication system - Google Patents
Apparatus and method for signal transmission in a wireless communication system
- Publication number
- WO2022124729A1 (PCT/KR2021/018356)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- local model
- federated learning
- information
- response message
- terminal
Classifications
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06N3/098 — Distributed learning, e.g. federated learning
- G06N3/04 — Neural network architecture, e.g. interconnection topology
- H04L63/00 — Network architectures or network communication protocols for network security
- H04L65/40 — Support for services or applications in real-time data packet communication
- H04L67/01 — Protocols for supporting network services or applications
- H04L67/1044 — Group management mechanisms in peer-to-peer [P2P] networks
- H04L67/1078 — Resource delivery mechanisms in peer-to-peer [P2P] networks
- H04W72/04 — Wireless resource allocation
- H04W72/23 — Control channels or signalling for resource management in the downlink direction of a wireless link, i.e. towards a terminal
Definitions
- the following description relates to a wireless communication system, and to an apparatus and method for signal transmission in a wireless communication system.
- a wireless access system is a multiple access system that can support communication with multiple users by sharing available system resources (bandwidth, transmission power, etc.).
- Examples of the multiple access system include a code division multiple access (CDMA) system, a frequency division multiple access (FDMA) system, a time division multiple access (TDMA) system, an orthogonal frequency division multiple access (OFDMA) system, and a single carrier frequency division multiple access (SC-FDMA) system.
- as more communication devices have required larger communication capacity, an enhanced mobile broadband (eMBB) communication technology has been proposed compared to the existing radio access technology (RAT). In addition, communication systems considering services/user equipment (UE) sensitive to reliability and latency, as well as massive machine type communications (mMTC) connecting multiple devices and objects, have been proposed.
- the present disclosure may provide an apparatus and method for signal transmission in a wireless communication system.
- the present disclosure may provide a signal transmission apparatus and method for federated learning in a wireless communication system.
- the present disclosure may provide an efficient federated learning method based on grouping.
- the present disclosure may provide an efficient federated learning method based on a split local model.
- a method of operating a terminal in a wireless communication system includes: receiving, by the terminal, federated learning-related configuration information; learning a local model based on the federated learning-related configuration information; receiving a local model weight request message; transmitting a first response message based on the received weight request message; receiving total local model related information based on the first response message; transmitting a second response message based on the received total local model related information; receiving resource allocation related information based on the second response message; and performing federated learning based on the received resource allocation related information.
- the total local model includes local model information of other terminals participating in the federated learning, and a group related to federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- the first response message may include split local model information.
- the total local model related information may include split local model information.
- the terminal may replace some layers of its local model with the split local model, based on the received total local model related information including the split local model information.
- the second response message may include comparison information between the local model-related data of other terminals participating in the federated learning and the local model-related data of the terminal. The terminal and all other terminals in the group to which the terminal belongs may perform federated learning on the same resource.
- the group may be determined based on the comparison information between the local model-related data of other terminals participating in the federated learning and the local model-related data of the terminal. The difference in data distribution between terminals within a determined group may be greater than the difference in data distribution between the determined groups.
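- the flow above amounts to one round of grouped federated learning driven by network signalling. The following is a minimal terminal-side sketch of that flow, assuming a toy in-memory channel; all message names, the stub `Channel` class, and the distance-based comparison metric are hypothetical illustrations, not part of the disclosure or any 3GPP interface.

```python
import numpy as np

class Channel:
    """Toy stand-in for the air interface; purely illustrative."""
    def __init__(self, inbox):
        self.inbox = inbox   # queued downlink "messages"
        self.sent = []       # uplink "messages", kept for inspection

    def receive(self, name):
        return self.inbox.pop(name)

    def send(self, name, payload):
        self.sent.append((name, payload))

def train_local_model(data, lr=0.1, epochs=5):
    # Toy local model: gradient descent toward the local data mean,
    # i.e. minimizing 0.5 * ||w - mean(data)||^2.
    w = np.zeros(data.shape[1])
    for _ in range(epochs):
        w -= lr * (w - data.mean(axis=0))
    return w

def terminal_round(channel, local_data):
    config = channel.receive("fl_config")              # configuration info
    w = train_local_model(local_data, **config)        # learn local model
    channel.receive("weight_request")                  # weight request msg
    channel.send("first_response", {"weights": w})     # first response
    total = channel.receive("total_local_model")       # other UEs' models
    # Comparison info between this terminal's model and the others',
    # which the network can use to determine the group.
    comparison = {ue: float(np.linalg.norm(w - v)) for ue, v in total.items()}
    channel.send("second_response", comparison)        # second response
    alloc = channel.receive("resource_allocation")     # resource allocation
    channel.send("local_update", {"weights": w, "resource": alloc})

rng = np.random.default_rng(0)
ch = Channel({
    "fl_config": {"lr": 0.1, "epochs": 5},
    "weight_request": None,
    "total_local_model": {"ue2": rng.normal(size=4), "ue3": rng.normal(size=4)},
    "resource_allocation": {"resource_block": 7},
})
terminal_round(ch, rng.normal(loc=1.0, size=(32, 4)))
print([name for name, _ in ch.sent])  # ['first_response', 'second_response', 'local_update']
```

- in this sketch the network would use the reported distances to form groups whose members share a transmission resource, matching the grouping step described above.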
- a terminal in a wireless communication system, includes a transceiver and a processor connected to the transceiver.
- the processor controls the transceiver to receive federated learning related configuration information, learns a local model based on the federated learning related configuration information, controls the transceiver to receive a local model weight request message, controls the transceiver to transmit a first response message based on the received weight request message, controls the transceiver to receive total local model related information based on the first response message, controls the transceiver to transmit a second response message based on the received total local model related information, controls the transceiver to receive resource allocation related information based on the second response message, and performs federated learning based on the received resource allocation related information.
- the total local model includes local model information of other terminals participating in the federated learning, and a group related to federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- the first response message may include split local model information.
- the total local model related information may include split local model information.
- the processor may replace some layers of the local model of the terminal with the split local model, based on the received total local model related information including the split local model information.
- the second response message may include comparison information between the local model-related data of other terminals participating in the federated learning and the local model-related data of the terminal. The terminal and all other terminals in the group to which the terminal belongs may perform federated learning on the same resource.
- the group may be determined based on the comparison information between the local model-related data of other terminals participating in the federated learning and the local model-related data of the terminal. The difference in data distribution between terminals within a determined group may be greater than the difference in data distribution between the determined groups.
- a communication device includes at least one processor and at least one computer memory coupled to the at least one processor and storing instructions that, when executed by the at least one processor, cause the communication device to perform operations.
- the processor controls the communication device to receive federated learning-related configuration information, to learn a local model based on the federated learning-related configuration information, to receive a local model weight request message, to transmit a first response message based on the received weight request message, to receive total local model related information based on the first response message, to transmit a second response message based on the received total local model related information, to receive resource allocation related information based on the second response message, and to perform federated learning based on the received resource allocation related information.
- the total local model includes local model information of other terminals participating in the federated learning, a group related to the federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- a non-transitory computer-readable medium stores at least one instruction executable by a processor.
- the at least one instruction instructs the device to receive federated learning-related configuration information, to learn a local model based on the federated learning-related configuration information, to receive a local model weight request message, to transmit a first response message based on the received weight request message, to receive total local model related information based on the first response message, to transmit a second response message based on the received total local model related information, to receive resource allocation related information based on the second response message, and to perform federated learning based on the received resource allocation related information.
- the total local model includes local model information of other terminals participating in the federated learning, and a group related to federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- a method of operating a base station in a wireless communication system includes: transmitting, by the base station, federated learning related configuration information; transmitting a local model weight request message; receiving a first response message based on the weight request message; transmitting total local model related information based on the first response message; receiving a second response message based on the transmitted total local model related information; transmitting resource allocation related information based on the second response message; and performing federated learning based on the resource allocation related information.
- a local model is learned based on the federated learning related configuration information, the total local model includes local model information of other terminals participating in the federated learning, a group related to the federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- a base station includes a transceiver and a processor connected to the transceiver.
- the processor controls the transceiver to transmit federated learning related configuration information, controls the transceiver to transmit a local model weight request message, controls the transceiver to receive a first response message based on the weight request message, controls the transceiver to transmit total local model related information based on the first response message, controls the transceiver to receive a second response message based on the transmitted total local model related information, controls the transceiver to transmit resource allocation related information based on the second response message, and performs federated learning based on the resource allocation related information.
- a local model is learned based on the federated learning related configuration information, the total local model includes local model information of other terminals participating in the federated learning, a group related to the federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- since the base station and the terminal perform federated learning, the overhead incurred when the base station and the terminal transmit data can be reduced.
- a plurality of terminals can efficiently perform federated learning.
- traffic for identifying data characteristics between terminals may be reduced.
- FIG. 1 shows an example of a communication system applicable to the present disclosure.
- FIG. 2 shows an example of a wireless device applicable to the present disclosure.
- FIG. 3 shows another example of a wireless device applicable to the present disclosure.
- FIG. 4 shows an example of a portable device applicable to the present disclosure.
- FIG. 5 shows an example of a vehicle or autonomous vehicle applicable to the present disclosure.
- FIG. 6 shows an example of an artificial intelligence (AI) device applicable to the present disclosure.
- FIG. 7 illustrates a method of processing a transmission signal applicable to the present disclosure.
- FIG. 8 illustrates the structure of a perceptron included in an artificial neural network applicable to the present disclosure.
- FIG. 9 shows an artificial neural network structure applicable to the present disclosure.
- FIG. 10 illustrates a deep neural network applicable to the present disclosure.
- FIG. 11 shows a convolutional neural network applicable to the present disclosure.
- FIG. 12 illustrates a filter operation of a convolutional neural network applicable to the present disclosure.
- FIG. 13 illustrates a neural network structure containing a cyclic loop applicable to the present disclosure.
- FIG. 14 shows an operation structure of a recurrent neural network applicable to the present disclosure.
- FIGS. 17 and 18 show an example of a federated learning process of terminals applicable to the present disclosure.
- FIG. 21 shows an example of a terminal operation procedure applicable to the present disclosure.
- each component or feature may be considered optional unless explicitly stated otherwise.
- Each component or feature may be implemented in a form that is not combined with other components or features.
- some components and/or features may be combined to configure an embodiment of the present disclosure.
- the order of operations described in embodiments of the present disclosure may be changed. Some configurations or features of one embodiment may be included in other embodiments, or may be replaced with corresponding configurations or features of other embodiments.
- the base station refers to a terminal node of a network that communicates directly with the mobile station.
- a specific operation described as being performed by the base station in this document may be performed by an upper node of the base station in some cases.
- the 'base station' may be replaced by terms such as fixed station, Node B, eNode B (eNB), gNode B (gNB), ng-eNB, advanced base station (ABS), or access point.
- a terminal may be replaced by terms such as user equipment (UE), mobile station (MS), subscriber station (SS), mobile subscriber station (MSS), mobile terminal, or advanced mobile station (AMS).
- a transmitting end refers to a fixed and/or mobile node that provides a data service or a voice service, and a receiving end refers to a fixed and/or mobile node that receives a data service or a voice service.
- accordingly, in the uplink, the mobile station may be the transmitting end and the base station may be the receiving end. Likewise, in the downlink, the mobile station may be the receiving end and the base station may be the transmitting end.
- Embodiments of the present disclosure may be supported by standard documents disclosed in at least one of the IEEE 802.xx system, the 3rd Generation Partnership Project (3GPP) system, the 3GPP Long Term Evolution (LTE) system, the 3GPP 5G (5th generation) NR (New Radio) system, and the 3GPP2 system, which are wireless access systems. In particular, embodiments of the present disclosure may be supported by the 3GPP technical specification (TS) 38.211, 3GPP TS 38.212, 3GPP TS 38.213, 3GPP TS 38.321, and 3GPP TS 38.331 documents.
- embodiments of the present disclosure may be applied to other wireless access systems, and are not limited to the above-described system. As an example, it may be applicable to a system applied after the 3GPP 5G NR system, and is not limited to a specific system.
- in this document, LTE refers to technology after 3GPP TS 36.xxx Release 8. LTE technology after 3GPP TS 36.xxx Release 10 may be referred to as LTE-A, and LTE technology after 3GPP TS 36.xxx Release 13 may be referred to as LTE-A pro. 3GPP NR means technology after TS 38.xxx Release 15, and 3GPP 6G may mean technology after TS Release 17 and/or Release 18. Here, "xxx" denotes the detailed number of a standard document. LTE/NR/6G may be collectively referred to as a 3GPP system.
- FIG. 1 is a diagram illustrating an example of a communication system applied to the present disclosure.
- a communication system 100 applied to the present disclosure includes a wireless device, a base station, and a network.
- the wireless device means a device that performs communication using a wireless access technology (eg, 5G NR, LTE), and may be referred to as a communication/wireless/5G device.
- the wireless device may include a robot 100a, vehicles 100b-1 and 100b-2, an extended reality (XR) device 100c, a hand-held device 100d, a home appliance 100e, an Internet of Things (IoT) device 100f, and an artificial intelligence (AI) device/server 100g.
- the vehicle may include a vehicle equipped with a wireless communication function, an autonomous driving vehicle, a vehicle capable of performing inter-vehicle communication, and the like.
- the vehicles 100b-1 and 100b-2 may include an unmanned aerial vehicle (UAV) (eg, a drone).
- the XR device 100c includes augmented reality (AR)/virtual reality (VR)/mixed reality (MR) devices, and may be implemented in the form of a head-mounted device (HMD), a head-up display (HUD) provided in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, digital signage, a vehicle, a robot, and the like.
- the portable device 100d may include a smart phone, a smart pad, a wearable device (eg, smart watch, smart glasses), and a computer (eg, a laptop computer).
- the home appliance 100e may include a TV, a refrigerator, a washing machine, and the like.
- the IoT device 100f may include a sensor, a smart meter, and the like.
- the base station 120 and the network 130 may be implemented as a wireless device, and a specific wireless device 120a may operate as a base station/network node to other wireless devices.
- the wireless devices 100a to 100f may be connected to the network 130 through the base station 120 .
- AI technology may be applied to the wireless devices 100a to 100f , and the wireless devices 100a to 100f may be connected to the AI server 100g through the network 130 .
- the network 130 may be configured using a 3G network, a 4G (eg, LTE) network, or a 5G (eg, NR) network.
- the wireless devices 100a to 100f may communicate with each other through the base station 120/network 130, or may communicate directly (eg, sidelink communication) without going through the base station 120/network 130.
- the vehicles 100b-1 and 100b-2 may perform direct communication (eg, vehicle to vehicle (V2V)/vehicle to everything (V2X) communication).
- the IoT device 100f (eg, a sensor) may communicate directly with another IoT device (eg, a sensor) or another wireless device 100a to 100f.
- Wireless communication/connections 150a, 150b, and 150c may be established between the wireless devices 100a to 100f and the base station 120, between the wireless devices 100a to 100f, and between the base stations 120.
- the wireless communication/connection includes uplink/downlink communication 150a, sidelink communication 150b (or D2D communication), and communication between base stations 150c (eg, relay, integrated access backhaul (IAB)), and may be achieved through radio access technology (eg, 5G NR).
- the wireless device and the base station/wireless device, and the base station and the base station may transmit/receive wireless signals to each other.
- the wireless communication/connection 150a , 150b , 150c may transmit/receive signals through various physical channels.
- for transmission/reception of wireless signals, at least a part of various configuration information setting processes, various signal processing processes (eg, channel encoding/decoding, modulation/demodulation, resource mapping/demapping, etc.), and resource allocation processes may be performed.
- FIG. 2 is a diagram illustrating an example of a wireless device applicable to the present disclosure.
- a first wireless device 200a and a second wireless device 200b may transmit/receive wireless signals through various wireless access technologies (eg, LTE, NR).
- {the first wireless device 200a, the second wireless device 200b} may correspond to {the wireless device 100x, the base station 120} and/or {the wireless device 100x, the wireless device 100x} of FIG. 1.
- the first wireless device 200a includes one or more processors 202a and one or more memories 204a, and may further include one or more transceivers 206a and/or one or more antennas 208a.
- the processor 202a controls the memory 204a and/or the transceiver 206a and may be configured to implement the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
- the processor 202a may process information in the memory 204a to generate first information/signal, and then transmit a wireless signal including the first information/signal through the transceiver 206a.
- the processor 202a may receive the radio signal including the second information/signal through the transceiver 206a, and then store the information obtained from the signal processing of the second information/signal in the memory 204a.
- the memory 204a may be connected to the processor 202a and may store various information related to the operation of the processor 202a.
- the memory 204a may store software code including instructions for performing some or all of the processes controlled by the processor 202a, or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
- the processor 202a and the memory 204a may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
- the transceiver 206a may be coupled to the processor 202a and may transmit and/or receive wireless signals via one or more antennas 208a.
- the transceiver 206a may include a transmitter and/or a receiver.
- the transceiver 206a may be used interchangeably with a radio frequency (RF) unit.
- a wireless device may refer to a communication modem/circuit/chip.
- the second wireless device 200b includes one or more processors 202b, one or more memories 204b, and may further include one or more transceivers 206b and/or one or more antennas 208b.
- the processor 202b controls the memory 204b and/or the transceiver 206b and may be configured to implement the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
- the processor 202b may process information in the memory 204b to generate third information/signal, and then transmit a wireless signal including the third information/signal through the transceiver 206b.
- the processor 202b may receive the radio signal including the fourth information/signal through the transceiver 206b, and then store information obtained from signal processing of the fourth information/signal in the memory 204b.
- the memory 204b may be connected to the processor 202b and may store various information related to the operation of the processor 202b.
- the memory 204b may store software code including instructions for performing some or all of the processes controlled by the processor 202b, or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
- the processor 202b and the memory 204b may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
- the transceiver 206b may be coupled to the processor 202b and may transmit and/or receive wireless signals via one or more antennas 208b.
- Transceiver 206b may include a transmitter and/or receiver.
- Transceiver 206b may be used interchangeably with an RF unit.
- a wireless device may refer to a communication modem/circuit/chip.
- one or more protocol layers may be implemented by one or more processors 202a, 202b.
- one or more processors 202a and 202b may implement one or more layers (eg, functional layers such as PHY (physical), MAC (media access control), RLC (radio link control), PDCP (packet data convergence protocol), RRC (radio resource control), and SDAP (service data adaptation protocol)).
- the one or more processors 202a and 202b may generate one or more protocol data units (PDUs) and/or one or more service data units (SDUs) according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein. The one or more processors 202a and 202b may generate messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein. The one or more processors 202a and 202b may generate a signal (eg, a baseband signal) including a PDU, an SDU, a message, control information, data, or information according to the functions, procedures, proposals, and/or methods disclosed herein, and provide it to the one or more transceivers 206a and 206b.
- one or more processors 202a and 202b may receive signals (eg, baseband signals) from one or more transceivers 206a and 206b, and may acquire PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
- One or more processors 202a, 202b may be referred to as a controller, microcontroller, microprocessor, or microcomputer.
- One or more processors 202a, 202b may be implemented by hardware, firmware, software, or a combination thereof.
- as an example, one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), or field programmable gate arrays (FPGAs) may be included in the one or more processors 202a and 202b.
- the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be implemented using firmware or software, and the firmware or software may be implemented to include modules, procedures, functions, and the like.
- firmware or software configured to perform the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be included in one or more processors 202a and 202b, or may be stored in one or more memories 204a and 204b and driven by the one or more processors 202a and 202b.
- the descriptions, functions, procedures, proposals, methods, and/or flowcharts of operations disclosed herein may be implemented using firmware or software in the form of code, instructions, and/or a set of instructions.
- One or more memories 204a and 204b may be coupled to one or more processors 202a and 202b and may store various types of data, signals, messages, information, programs, codes, instructions, and/or commands.
- One or more memories 204a and 204b may be composed of read only memory (ROM), random access memory (RAM), erasable programmable read only memory (EPROM), flash memory, hard drives, registers, cache memory, computer-readable storage media, and/or combinations thereof.
- One or more memories 204a, 204b may be located inside and/or external to one or more processors 202a, 202b. Additionally, one or more memories 204a, 204b may be coupled to one or more processors 202a, 202b through various technologies, such as wired or wireless connections.
- the one or more transceivers 206a, 206b may transmit user data, control information, radio signals/channels, etc. referred to in the methods and/or operational flowcharts of this document to one or more other devices.
- the one or more transceivers 206a and 206b may receive, from one or more other devices, user data, control information, radio signals/channels, etc. referred to in the descriptions, functions, procedures, suggestions, methods, and/or flow charts disclosed herein.
- one or more transceivers 206a , 206b may be coupled to one or more processors 202a , 202b and may transmit and receive wireless signals.
- one or more processors 202a and 202b may control one or more transceivers 206a and 206b to transmit user data, control information, or wireless signals to one or more other devices. Additionally, one or more processors 202a and 202b may control one or more transceivers 206a and 206b to receive user data, control information, or wireless signals from one or more other devices. Further, one or more transceivers 206a and 206b may be coupled with one or more antennas 208a and 208b, and may be set to transmit and receive user data, control information, radio signals/channels, etc. through the one or more antennas 208a and 208b.
- one or more antennas may be a plurality of physical antennas or a plurality of logical antennas (eg, antenna ports).
- the one or more transceivers 206a and 206b may convert received radio signals/channels from RF band signals into baseband signals in order to process received user data, control information, radio signals/channels, etc. using the one or more processors 202a and 202b.
- One or more transceivers 206a, 206b may convert user data, control information, radio signals/channels, etc. processed using one or more processors 202a, 202b from baseband signals to RF band signals.
- one or more transceivers 206a, 206b may include (analog) oscillators and/or filters.
- FIG. 3 is a diagram illustrating another example of a wireless device applied to the present disclosure.
- a wireless device 300 corresponds to the wireless devices 200a and 200b of FIG. 2 and may be composed of various elements, components, units, and/or modules.
- the wireless device 300 may include a communication unit 310 , a control unit 320 , a memory unit 330 , and an additional element 340 .
- the communication unit may include communication circuitry 312 and transceiver(s) 314 .
- communication circuitry 312 may include one or more processors 202a, 202b and/or one or more memories 204a, 204b of FIG. 2 .
- the transceiver(s) 314 may include one or more transceivers 206a , 206b and/or one or more antennas 208a , 208b of FIG. 2 .
- the control unit 320 is electrically connected to the communication unit 310 , the memory unit 330 , and the additional element 340 and controls general operations of the wireless device.
- the controller 320 may control the electrical/mechanical operation of the wireless device based on the program/code/command/information stored in the memory unit 330 .
- the control unit 320 may transmit information stored in the memory unit 330 to the outside (eg, another communication device) through the communication unit 310 via a wireless/wired interface, or may store, in the memory unit 330, information received from the outside (eg, another communication device) through the communication unit 310 via a wireless/wired interface.
- the additional element 340 may be configured in various ways according to the type of the wireless device.
- the additional element 340 may include at least one of a power unit/battery, an input/output unit, a driving unit, and a computing unit.
- the wireless device 300 may be implemented in the form of a robot (FIG. 1, 100a), vehicles (FIG. 1, 100b-1 and 100b-2), an XR device (FIG. 1, 100c), a portable device (FIG. 1, 100d), a home appliance (FIG. 1, 100e), an IoT device (FIG. 1, 100f), and the like.
- the wireless device may be mobile or used in a fixed location depending on the use-example/service.
- various elements, components, units/units, and/or modules in the wireless device 300 may be all interconnected through a wired interface, or at least some may be wirelessly connected through the communication unit 310 .
- as an example, within the wireless device 300, the control unit 320 and the communication unit 310 may be connected by wire, while the control unit 320 and a first unit (eg, 330, 340) may be connected wirelessly through the communication unit 310.
- each element, component, unit/unit, and/or module within the wireless device 300 may further include one or more elements.
- the controller 320 may include one or more processor sets.
- control unit 320 may be configured as a set of a communication control processor, an application processor, an electronic control unit (ECU), a graphic processing processor, a memory control processor, and the like.
- the memory unit 330 may be composed of RAM, dynamic RAM (DRAM), ROM, flash memory, volatile memory, non-volatile memory, and/or a combination thereof.
- FIG. 4 is a diagram illustrating an example of a mobile device applied to the present disclosure.
- the portable device may include a smart phone, a smart pad, a wearable device (eg, a smart watch, smart glasses), and a portable computer (eg, a laptop computer).
- the mobile device may be referred to as a mobile station (MS), a user terminal (UT), a mobile subscriber station (MSS), a subscriber station (SS), an advanced mobile station (AMS), or a wireless terminal (WT).
- the mobile device 400 may include an antenna unit 408, a communication unit 410, a control unit 420, a memory unit 430, a power supply unit 440a, an interface unit 440b, and an input/output unit 440c.
- the antenna unit 408 may be configured as a part of the communication unit 410 .
- Blocks 410 to 430/440a to 440c respectively correspond to blocks 310 to 330/340 of FIG. 3 .
- the communication unit 410 may transmit and receive signals (eg, data, control signals, etc.) with other wireless devices and base stations.
- the controller 420 may control components of the portable device 400 to perform various operations.
- the controller 420 may include an application processor (AP).
- the memory unit 430 may store data/parameters/programs/codes/commands necessary for driving the portable device 400 . Also, the memory unit 430 may store input/output data/information.
- the power supply unit 440a supplies power to the portable device 400 and may include a wired/wireless charging circuit, a battery, and the like.
- the interface unit 440b may support a connection between the portable device 400 and other external devices.
- the interface unit 440b may include various ports (eg, an audio input/output port and a video input/output port) for connection with an external device.
- the input/output unit 440c may receive or output image information/signal, audio information/signal, data, and/or information input from a user.
- the input/output unit 440c may include a camera, a microphone, a user input unit, a display unit 440d, a speaker, and/or a haptic module.
- the input/output unit 440c may acquire information/signals (eg, touch, text, voice, image, video) input from the user, and the acquired information/signals may be stored in the memory unit 430.
- the communication unit 410 may convert the information/signal stored in the memory into a wireless signal, and transmit the converted wireless signal directly to another wireless device or to a base station. Also, after receiving a radio signal from another radio device or base station, the communication unit 410 may restore the received radio signal to original information/signal.
- the restored information/signal may be stored in the memory unit 430 and output in various forms (eg, text, voice, image, video, haptic) through the input/output unit 440c.
- FIG. 5 is a diagram illustrating an example of a vehicle or autonomous driving vehicle applied to the present disclosure.
- the vehicle or autonomous driving vehicle may be implemented as a mobile robot, a vehicle, a train, an aerial vehicle (AV), a ship, and the like, but is not limited to the shape of the vehicle.
- the vehicle or autonomous driving vehicle 500 may include an antenna unit 508, a communication unit 510, a control unit 520, a driving unit 540a, a power supply unit 540b, a sensor unit 540c, and an autonomous driving unit 540d.
- the antenna unit 508 may be configured as a part of the communication unit 510.
- Blocks 510/530/540a to 540d respectively correspond to blocks 410/430/440 of FIG. 4 .
- the communication unit 510 may transmit/receive signals (eg, data, control signals, etc.) to and from external devices such as other vehicles, base stations (eg, base stations, roadside units, etc.), and servers.
- the controller 520 may control elements of the vehicle or the autonomous driving vehicle 500 to perform various operations.
- the controller 520 may include an electronic control unit (ECU).
- the AI device may be implemented as a fixed device or a mobile device, such as a TV, a projector, a smartphone, a PC, a laptop, a digital broadcasting terminal, a tablet PC, a wearable device, a set-top box (STB), a radio, a washing machine, a refrigerator, digital signage, a robot, or a vehicle.
- the AI device 600 may include a communication unit 610, a control unit 620, a memory unit 630, input/output units 640a/640b, a learning processor unit 640c, and a sensor unit 640d. Blocks 610 to 630/640a to 640d correspond to blocks 310 to 330/340 of FIG. 3, respectively.
- the communication unit 610 may transmit and receive wired/wireless signals (eg, sensor information, user input, learning model, control signal, etc.) with external devices such as another AI device (eg, FIG. 1, 100x, 120, 140) or an AI server (FIG. 1, 140) using wired/wireless communication technology. To this end, the communication unit 610 may transmit information in the memory unit 630 to an external device or transmit a signal received from an external device to the memory unit 630.
- the controller 620 may determine at least one executable operation of the AI device 600 based on information determined or generated using a data analysis algorithm or a machine learning algorithm, and may control the components of the AI device 600 to perform the determined operation. For example, the control unit 620 may request, search, receive, or utilize the data of the learning processor unit 640c or the memory unit 630, and may control the components of the AI device 600 to execute a predicted operation or an operation determined to be desirable among the at least one executable operation. In addition, the control unit 620 may collect history information including user feedback on the operation contents or the operation of the AI device 600, store it in the memory unit 630 or the learning processor unit 640c, or transmit it to an external device such as the AI server (FIG. 1, 140). The collected history information may be used to update the learning model.
- the memory unit 630 may store data supporting various functions of the AI device 600 .
- the memory unit 630 may store data obtained from the input unit 640a, data obtained from the communication unit 610, output data of the learning processor unit 640c, and data obtained from the sensor unit 640d.
- the memory unit 630 may store control information and/or software codes necessary for the operation/execution of the control unit 620 .
- the input unit 640a may acquire various types of data from the outside of the AI device 600 .
- the input unit 640a may obtain training data for model learning, input data to which the learning model is applied, and the like.
- the input unit 640a may include a camera, a microphone, and/or a user input unit.
- the output unit 640b may generate an output related to sight, hearing, or touch.
- the output unit 640b may include a display unit, a speaker, and/or a haptic module.
- the sensor unit 640d may obtain at least one of internal information of the AI device 600, surrounding environment information of the AI device 600, and user information by using various sensors.
- the sensor unit 640d may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
- the learning processor unit 640c may train a model composed of an artificial neural network by using the training data.
- the learning processor unit 640c may perform AI processing together with the learning processor unit of the AI server ( FIGS. 1 and 140 ).
- the learning processor unit 640c may process information received from an external device through the communication unit 610 and/or information stored in the memory unit 630 . Also, the output value of the learning processor unit 640c may be transmitted to an external device through the communication unit 610 and/or stored in the memory unit 630 .
- the transmission signal may be processed by a signal processing circuit.
- the signal processing circuit 700 may include a scrambler 710 , a modulator 720 , a layer mapper 730 , a precoder 740 , a resource mapper 750 , and a signal generator 760 .
- the operation/function of FIG. 7 may be performed by the processors 202a and 202b and/or the transceivers 206a and 206b of FIG. 2 .
- blocks 710 to 760 may be implemented in the processors 202a and 202b and/or the transceivers 206a and 206b of FIG. 2 .
- blocks 710 to 760 may be implemented in the processors 202a and 202b of FIG. 2 .
- blocks 710 to 750 may be implemented in the processors 202a and 202b of FIG. 2
- block 760 may be implemented in the transceivers 206a and 206b of FIG. 2 , and the embodiment is not limited thereto.
- the codeword may be converted into a wireless signal through the signal processing circuit 700 of FIG. 7 .
- the codeword is a coded bit sequence of an information block.
- the information block may include a transport block (eg, a UL-SCH transport block, a DL-SCH transport block).
- the radio signal may be transmitted through various physical channels (eg, PUSCH, PDSCH).
- the codeword may be converted into a scrambled bit sequence by the scrambler 710 .
- a scramble sequence used for scrambling is generated based on an initialization value, and the initialization value may include ID information of a wireless device, and the like.
- the scrambled bit sequence may be modulated by a modulator 720 into a modulation symbol sequence.
- the modulation method may include pi/2-binary phase shift keying (pi/2-BPSK), m-phase shift keying (m-PSK), m-quadrature amplitude modulation (m-QAM), and the like.
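- as an illustration of the modulator step, the sketch below maps a scrambled bit sequence to QPSK symbols using the Gray mapping of 3GPP TS 38.211, d(i) = ((1-2b(2i)) + j(1-2b(2i+1)))/sqrt(2); the bit values are made up for the example, and the other constellations (pi/2-BPSK, m-QAM) follow the same pattern with more bits per symbol.

```python
import numpy as np

def qpsk_modulate(bits):
    # TS 38.211 QPSK mapping: d(i) = ((1-2*b(2i)) + 1j*(1-2*b(2i+1))) / sqrt(2)
    bits = np.asarray(bits).reshape(-1, 2)
    return ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

scrambled = np.array([0, 1, 1, 0, 0, 0, 1, 1])  # example scrambled bit sequence
symbols = qpsk_modulate(scrambled)
print(symbols)   # four complex modulation symbols on the unit circle
```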
- the complex modulation symbol sequence may be mapped to one or more transport layers by a layer mapper 730 .
- Modulation symbols of each transport layer may be mapped to corresponding antenna port(s) by a precoder 740 (precoding).
- the output z of the precoder 740 may be obtained by multiplying the output y of the layer mapper 730 by the N*M precoding matrix W, where N is the number of antenna ports and M is the number of transport layers.
- the precoder 740 may perform precoding after performing transform precoding (eg, discrete fourier transform (DFT) transform) on the complex modulation symbols. Also, the precoder 740 may perform precoding without performing transform precoding.
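- a minimal numerical sketch of the precoder step is shown below; the matrix W is a made-up example rather than a codebook entry from the NR specification, and the optional DFT transform precoding is applied to the layer symbols first.

```python
import numpy as np

M, N = 2, 4                                  # M transport layers, N antenna ports
y = np.array([1 + 1j, 1 - 1j]) / np.sqrt(2)  # one modulation symbol per layer

# Optional transform precoding (DFT spreading) applied before precoding.
y_tp = np.fft.fft(y) / np.sqrt(M)

# Hypothetical N x M precoding matrix W; a real one would come from the
# NR codebook. Each column maps one layer onto the N antenna ports.
W = np.array([[1, 1],
              [1, -1],
              [1, 1j],
              [1, -1j]]) / np.sqrt(N * M)

z = W @ y_tp          # precoder output: one value per antenna port
print(z.shape)        # (4,)
```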
- the resource mapper 750 may map modulation symbols of each antenna port to a time-frequency resource.
- the time-frequency resource may include a plurality of symbols (eg, a CP-OFDMA symbol, a DFT-s-OFDMA symbol) in the time domain and a plurality of subcarriers in the frequency domain.
- the signal generator 760 generates a radio signal from the mapped modulation symbols, and the generated radio signal may be transmitted to another device through each antenna.
- the signal generator 760 may include an inverse fast fourier transform (IFFT) module, a cyclic prefix (CP) inserter, a digital-to-analog converter (DAC), a frequency up-converter, and the like.
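- the IFFT-plus-CP structure of the signal generator can be sketched in a few lines; the FFT size, CP length, and random QPSK payload below are toy values chosen for illustration, not NR numerology.

```python
import numpy as np

n_fft, n_cp = 64, 16     # toy FFT size and CP length (not NR numerology)
rng = np.random.default_rng(1)

# Frequency-domain symbols placed on subcarriers by the resource mapper
# (random QPSK on every subcarrier, for illustration only).
freq = (rng.choice([-1, 1], n_fft) + 1j * rng.choice([-1, 1], n_fft)) / np.sqrt(2)

time = np.fft.ifft(freq) * np.sqrt(n_fft)       # IFFT module
symbol = np.concatenate([time[-n_cp:], time])   # CP inserter: copy of the tail
print(symbol.shape)                             # (80,) — DAC/up-conversion follow
```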
- the signal processing process for the received signal in the wireless device may be configured in reverse of the signal processing processes 710 to 760 of FIG. 7 .
- in the wireless device (eg, 200a or 200b of FIG. 2), the received radio signal may be converted into a baseband signal through a signal restorer.
- the signal restorer may include a frequency down-converter, an analog-to-digital converter (ADC), a CP remover, and a fast fourier transform (FFT) module.
- the baseband signal may be restored to a codeword through a resource de-mapper process, a postcoding process, a demodulation process, and a descrambling process.
- the codeword may be restored to the original information block through decoding.
- the signal processing circuit (not shown) for the received signal may include a signal restorer, a resource de-mapper, a post coder, a demodulator, a descrambler, and a decoder.
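- for completeness, a toy receive-side counterpart of the transmit sketch above removes the CP and applies the FFT; real receivers additionally perform down-conversion, ADC, synchronization, and equalization, which are omitted here.

```python
import numpy as np

def ofdm_demodulate(rx, n_fft=64, n_cp=16):
    no_cp = rx[n_cp:n_cp + n_fft]               # CP remover
    return np.fft.fft(no_cp) / np.sqrt(n_fft)   # FFT module -> subcarrier symbols

# Round trip against the transmit sketch above.
rng = np.random.default_rng(1)
freq = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
tx = np.fft.ifft(freq) * np.sqrt(64)
tx = np.concatenate([tx[-16:], tx])
assert np.allclose(ofdm_demodulate(tx), freq)   # original symbols recovered
```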
- The most important and newly introduced technology for 6G systems is AI.
- AI was not involved in the 4G system.
- 5G systems will support partial or very limited AI.
- the 6G system will be AI-enabled for full automation.
- Advances in machine learning will create more intelligent networks for real-time communication in 6G.
- Incorporating AI into communications can simplify and enhance real-time data transmission.
- AI can use numerous analytics to determine how complex target tasks are performed. In other words, AI can increase efficiency and reduce processing delays.
- AI can also play an important role in M2M, machine-to-human and human-to-machine communication.
- AI can also enable rapid communication in the brain computer interface (BCI).
- AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
- AI-based physical layer transmission means applying a signal processing and communication mechanism based on an AI driver, rather than a traditional communication framework, in fundamental signal processing and communication mechanisms.
- for example, it may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, a deep learning-based multiple input multiple output (MIMO) mechanism, and AI-based resource scheduling and allocation.
- Machine learning may be used for channel estimation and channel tracking, and may be used for power allocation, interference cancellation, and the like in a physical layer of a downlink (DL). In addition, machine learning may be used for antenna selection, power control, symbol detection, and the like in a MIMO system.
- Deep learning-based AI algorithms require large amounts of training data to optimize training parameters.
- however, a lot of training data is used offline. This is because static training on training data for a specific channel environment may conflict with the dynamic characteristics and diversity of the wireless channel.
- signals of the physical layer of wireless communication are complex signals.
- further research on a neural network for detecting a complex domain signal is needed.
- Machine learning refers to a set of operations that trains a machine to create a machine that can perform tasks that humans can or cannot do.
- Machine learning requires data and a learning model.
- data learning methods can be roughly divided into three types: supervised learning, unsupervised learning, and reinforcement learning.
- The purpose of neural network learning is to minimize output error. Neural network learning repeatedly inputs training data into the neural network, calculates the error between the network output and the target for the training data, backpropagates the error from the output layer toward the input layer in the direction that reduces it, and updates the weight of each node in the neural network.
- Supervised learning uses training data in which the correct answer is labeled, whereas in unsupervised learning the correct answer may not be labeled in the training data. For example, training data for supervised learning of a data classification task may be data in which a category is labeled for each training example.
- the labeled training data is input to the neural network, and an error can be calculated by comparing the output (category) of the neural network with the label of the training data.
- the calculated error is back propagated in the reverse direction (ie, from the output layer to the input layer) in the neural network, and the connection weight of each node of each layer of the neural network may be updated according to the back propagation.
- the change amount of the connection weight of each node to be updated may be determined according to a learning rate.
- the computation of the neural network on the input data and the backpropagation of errors can constitute a learning cycle (epoch).
- the learning rate may be applied differently depending on the number of repetitions of the learning cycle of the neural network. For example, in the early stage of learning a neural network, a high learning rate can be used to increase the efficiency by allowing the neural network to quickly obtain a certain level of performance, and in the late learning period, a low learning rate can be used to increase the accuracy.
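- As a concrete illustration of the learning cycle and learning-rate scheduling described above, the minimal sketch below (a single-layer network trained with plain gradient descent; the data, sizes, and schedule are illustrative assumptions) runs repeated epochs with a high learning rate early and a low one later:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))          # training data
y = (X.sum(axis=1) > 0).astype(float)  # labeled targets (supervised learning)

W = rng.normal(size=4)                 # connection weights of a single-layer model
b = 0.0

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

for epoch in range(100):               # one learning cycle: forward pass + backpropagation
    lr = 0.5 if epoch < 50 else 0.05   # high rate early for speed, low rate late for accuracy
    out = sigmoid(X @ W + b)           # network output
    err = out - y                      # output error vs. the labels
    # backpropagate: gradient of the mean squared error w.r.t. each weight
    grad_W = X.T @ (err * out * (1 - out)) / len(X)
    grad_b = np.mean(err * out * (1 - out))
    W -= lr * grad_W                   # the update amount is scaled by the learning rate
    b -= lr * grad_b
```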
- the learning method may vary depending on the characteristics of the data. For example, when the purpose is for a receiver to accurately predict data transmitted from a transmitter in a communication system, it is preferable to perform learning using supervised learning rather than unsupervised learning or reinforcement learning.
- the learning model roughly corresponds to the human brain. The most basic model is the linear model, and a machine learning paradigm that uses a highly complex neural network structure as the learning model is called deep learning.
- the neural network core used as a learning method is largely divided into deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN) methods, and such learning models can be applied.
- FIG. 8 illustrates a structure of a perceptron included in an artificial neural network applicable to the present disclosure. Also, FIG. 9 shows an artificial neural network structure applicable to the present disclosure.
- an artificial intelligence system may be applied in the 6G system.
- the artificial intelligence system may operate based on a learning model corresponding to the human brain, as described above.
- a paradigm of machine learning that uses a neural network structure with high complexity such as artificial neural networks as a learning model can be called deep learning.
- as described above, the neural network core used as a learning method is largely divided into a deep neural network (DNN), a convolutional neural network (CNN), and a recurrent neural network (RNN).
- the artificial neural network may be composed of several perceptrons.
- If the simplified perceptron structure shown in FIG. 8 is extended into a large artificial neural network, the input vector can be applied to a number of multidimensional perceptrons. For convenience of description, an input value or an output value is referred to as a node.
- the perceptron structure shown in FIG. 8 may be described as being composed of a total of three layers based on the input values and the output value.
- An artificial neural network in which H (d+1)-dimensional perceptrons exist between the first layer and the second layer, and K (H+1)-dimensional perceptrons exist between the second layer and the third layer, can be expressed as shown in FIG. 9.
- the layer where the input vector is located is called an input layer
- the layer where the final output value is located is called the output layer
- all layers located between the input layer and the output layer are called hidden layers.
- accordingly, the artificial neural network illustrated in FIG. 9 can be understood as consisting of a total of two layers of perceptrons.
- the artificial neural network is constructed by connecting the perceptrons of the basic blocks in two dimensions.
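- The perceptron computation of FIG. 8 and its extension in FIG. 9 can be sketched as follows (d, H, and K are arbitrary illustrative sizes; each perceptron computes a weighted sum plus a bias followed by an activation function):

```python
import numpy as np

d, H, K = 4, 8, 3                      # input, hidden, and output node counts
rng = np.random.default_rng(1)

W1 = rng.normal(size=(H, d + 1))       # H perceptrons, each (d+1)-dimensional (weights + bias)
W2 = rng.normal(size=(K, H + 1))       # K perceptrons, each (H+1)-dimensional

def layer(W, x):
    x = np.append(x, 1.0)              # append 1 so the last column of W acts as the bias
    return np.tanh(W @ x)              # weighted sum followed by an activation function

x = rng.normal(size=d)                 # input-layer vector
z = layer(W1, x)                       # hidden-layer nodes
out = layer(W2, z)                     # output-layer nodes
```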
- the aforementioned input layer, hidden layer, and output layer can be applied jointly not only to the multilayer perceptron but also to various artificial neural network structures such as the CNN and RNN described later.
- as the number of hidden layers increases, the artificial neural network becomes deeper, and a machine learning paradigm that uses a sufficiently deep artificial neural network as a learning model can be called deep learning.
- an artificial neural network used for deep learning may be referred to as a deep neural network (DNN).
- FIG. 10 illustrates a deep neural network applicable to the present disclosure.
- referring to FIG. 10, the deep neural network may be a multilayer perceptron composed of eight hidden layers plus an output layer.
- the multilayer perceptron structure may be expressed as a fully-connected neural network.
- a connection relationship does not exist between nodes located in the same layer, and a connection relationship can exist only between nodes located in adjacent layers.
- DNN has a fully connected neural network structure and is composed of a combination of a number of hidden layers and activation functions, so it can be usefully applied to figure out the correlation between input and output.
- the correlation characteristic may mean a joint probability of input/output.
- FIG. 11 shows a convolutional neural network applicable to the present disclosure.
- FIG. 12 shows a filter operation of a convolutional neural network applicable to the present disclosure.
- various artificial neural network structures different from the above-described DNN may be formed.
- whereas the nodes inside one layer of the DNN described above are arranged in a one-dimensional vertical line, in the convolutional neural network of FIG. 11 the nodes are arranged two-dimensionally, with w nodes horizontally and h nodes vertically.
- since a weight is attached to each connection from one input node to the hidden layer, a total of h×w weights must be considered per hidden node; and since there are h×w nodes in the input layer, a total of h²w² weights may be required between two adjacent layers.
- this fully connected structure has the problem that the number of weights increases rapidly with the number of connections, so instead of considering connections between all nodes of adjacent layers, it is assumed that a filter of small size exists.
- a weighted sum and activation function operation may be performed on a portion where the filters overlap.
- one filter has as many weights as its size, and the weights can be learned so that a specific feature on the image is extracted and output.
- for example, when a 3×3 filter is applied to the upper-left 3×3 region of the input layer, the output value obtained by performing the weighted sum and activation function operations on the corresponding nodes may be stored in z22.
- the above-described filter performs the weighted sum and activation function operations while scanning the input layer, moving horizontally and vertically at regular intervals, and its output value is placed at the current filter position. Since this operation method is similar to the convolution operation on images in the field of computer vision, a deep neural network with such a structure is called a convolutional neural network (CNN), and a hidden layer generated as the result of the convolution operation may be referred to as a convolutional layer. Also, a neural network having a plurality of convolutional layers may be referred to as a deep convolutional neural network (DCNN).
- the number of weights can be reduced by computing the weighted sum over only the nodes located in the region covered by the filter at its current position. Because of this, one filter can be used to focus on features of a local area. Accordingly, the CNN can be effectively applied to image data processing, in which physical distance in a two-dimensional domain is an important criterion. Meanwhile, in the CNN, a plurality of filters may be applied in a convolutional layer, and a plurality of output results may be generated through the convolution operation of each filter.
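- The filter operation of FIG. 12 reduces to the following sketch: a small k×k filter slides over the h×w input at regular intervals, and at each position only the overlapped nodes enter the weighted sum, so one filter needs k² weights instead of h²w² (the sizes and activation are illustrative assumptions):

```python
import numpy as np

h, w, k = 6, 6, 3
rng = np.random.default_rng(2)
inp = rng.normal(size=(h, w))          # two-dimensionally arranged input nodes
filt = rng.normal(size=(k, k))         # one filter: k*k weights, far fewer than h^2 * w^2

out = np.zeros((h - k + 1, w - k + 1)) # convolutional layer output
for i in range(h - k + 1):
    for j in range(w - k + 1):
        patch = inp[i:i + k, j:j + k]  # region currently covered by the filter
        out[i, j] = np.tanh(np.sum(patch * filt))  # weighted sum + activation here
# out[0, 0] corresponds to the upper-left application of the filter (e.g., z22 above)
```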
- meanwhile, there are cases where the order of the input data matters. A structure in which the output of a previous time step is fed back into the artificial neural network can be called a recurrent neural network structure.
- FIG. 13 illustrates a neural network structure with a cyclic loop applicable to the present disclosure.
- FIG. 14 shows an operation structure of a recurrent neural network applicable to the present disclosure.
- referring to FIG. 13, the recurrent neural network has a structure in which, when the elements {x_1(t), x_2(t), ..., x_d(t)} at a time point t are input to the fully connected neural network, the hidden vector {z_1(t-1), z_2(t-1), ..., z_H(t-1)} of the immediately preceding time point t-1 is input together, and a weighted sum and an activation function are applied.
- the reason why the hidden vector is transferred to the next time in this way is because it is considered that information in the input vector at the previous time is accumulated in the hidden vector of the current time.
- the recurrent neural network may operate in a predetermined time sequence with respect to an input data sequence.
- referring to FIG. 14, when the input vector {x_1(1), x_2(1), ..., x_d(1)} at time point 1 is input, the hidden vector {z_1(1), z_2(1), ..., z_H(1)} is determined. The hidden vector at time point 1 is then input together with the input vector {x_1(2), x_2(2), ..., x_d(2)} at time point 2 to determine the hidden-layer vector {z_1(2), z_2(2), ..., z_H(2)}. This process is repeated iteratively for time point 2, time point 3, ..., up to time point T.
- when a plurality of hidden layers are arranged in a recurrent neural network, this is called a deep recurrent neural network (DRNN).
- the recurrent neural network is designed to be usefully applied to sequence data (eg, natural language processing).
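- The recurrence of FIGS. 13 and 14 can be sketched as follows: at each time point t, the input vector x(t) and the immediately preceding hidden vector z(t-1) are combined by a weighted sum and an activation function (the sizes are illustrative assumptions):

```python
import numpy as np

d, H, T = 4, 8, 5                      # input size, hidden size, sequence length
rng = np.random.default_rng(3)
Wx = rng.normal(size=(H, d))           # weights applied to the input vector x(t)
Wz = rng.normal(size=(H, H))           # weights applied to the previous hidden vector z(t-1)

xs = rng.normal(size=(T, d))           # input data sequence for time points 1..T
z = np.zeros(H)                        # hidden vector before time point 1

for t in range(T):                     # repeated for time points 1, 2, ..., T
    # weighted sum of the current input and the immediately preceding hidden vector
    z = np.tanh(Wx @ xs[t] + Wz @ z)   # z accumulates information from earlier inputs
```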
- in addition to the DNN, CNN, and RNN, the neural network core used as a learning method includes various deep learning techniques such as the restricted Boltzmann machine (RBM), deep belief networks (DBN), and the deep Q-Network, and can be applied to fields such as computer vision, speech recognition, natural language processing, and voice/signal processing.
- Federated learning is one of the distributed machine learning techniques.
- Federated learning is a technique in which multiple devices that are the subjects of learning share model parameters with a server. For example, in federated learning, the devices share the weights or gradients of their local models with the server. The server collects the local model parameters of each device and updates a global parameter. The raw data of each device is not shared with the server or with other devices. Accordingly, federated learning can reduce the communication overhead of the data transmission process and can protect personal information.
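- A minimal sketch of this collect-and-average cycle (plain federated averaging over toy least-squares models; the data, step counts, and names are illustrative assumptions, not the procedure defined in the present disclosure):

```python
import numpy as np

def local_update(global_w, data, lr=0.1):
    """Each device trains on its own raw data and returns only the weights."""
    X, y = data
    w = global_w.copy()
    for _ in range(5):                         # a few local gradient steps
        grad = X.T @ (X @ w - y) / len(X)      # least-squares gradient on local data
        w -= lr * grad
    return w                                   # raw data never leaves the device

rng = np.random.default_rng(4)
datasets = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
global_w = np.zeros(3)

for round_ in range(10):
    local_ws = [local_update(global_w, d) for d in datasets]  # devices share parameters only
    global_w = np.mean(local_ws, axis=0)       # server updates the global parameter by averaging
```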
- Federated learning based on orthogonal multiple access operates as shown in FIG. 15 .
- Devices 1502a, 1502b, and 1502c transmit local parameters in their respective allocated resources.
- the server 1504 performs offline aggregation on the parameters received from the devices. In general, the server derives the global parameters by averaging all local parameters and then transmits the derived global parameters back to the devices. However, as the number of devices participating in learning increases under limited radio resources, the time for the server to update the global parameters is delayed. Moreover, in a non-IID (not independently and identically distributed) environment, the raw data distribution of each device may differ, so the transmission frequency of each device's local parameters should be increased.
- in the present disclosure, a server may refer to a base station and may perform federated learning with a plurality of terminals. A terminal may also be referred to as a user.
- AirComp-based federated learning is a method in which all devices 1602a , 1602b , and 1602c transmit local parameters to the server 1604 using the same resource.
- the server may obtain the sum of local parameters by the superposition characteristic of the analog waveform of the received signal.
- the number of devices participating in learning does not significantly affect latency because local parameters are transmitted through the same resource.
- however, the devices must be synchronized. When multiple devices perform federated learning, a strict synchronization process is required so that all devices transmit their parameters at the same time.
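- The over-the-air aggregation idea can be sketched as follows, assuming ideal unit-gain channels and perfect synchronization (both assumptions of this sketch): simultaneous analog transmissions superpose into a sum, which the server rescales into the average:

```python
import numpy as np

rng = np.random.default_rng(5)
K = 3                                            # devices transmitting in the same resource
local_params = [rng.normal(size=8) for _ in range(K)]

noise = 0.01 * rng.normal(size=8)                # receiver noise
received = np.sum(local_params, axis=0) + noise  # analog waveforms superpose on the air

global_estimate = received / K                   # server recovers the average directly
```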
- a technique of grouping devices participating in learning in order for a plurality of devices to efficiently perform federated learning in a non-IID environment may be used. Accordingly, devices that want to participate in federated learning are grouped, and the grouped devices can perform air-comp based federated learning within the group. As the data distribution of the device group is similar to the global data distribution, when the devices learn the group model, a model similar to the global model may be obtained. When devices acquire a group model similar to the global model, a load on parameter transmission and reception may be reduced when devices perform aggregation on the group model.
- the present disclosure proposes a method for efficiently performing air-computation-based federated learning by multiple devices with strong non-IID characteristics.
- the present disclosure proposes a method of setting a device group, which is a unit for averaging local parameters when devices perform federated learning.
- the present disclosure proposes a method for setting a device group so that the distribution of group data is similar to global data.
- the present disclosure proposes a method of determining the degree of the non-IID characteristic of each device.
- referring to FIG. 17, when each device is to perform federated learning, each device may transmit the weight parameter W_k of a local model learned from its own raw data to the server 1702.
- alternatively, the weights corresponding to a split layer of the model may be transmitted to the server 1702. In the present disclosure, the server and the base station may be used interchangeably. Specifically, when the size of the local model is large and the local models are trained under the same initial-value condition, a device may transmit the weights corresponding to a split layer of the model to the server.
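- Split-layer reporting can be sketched as sending only the weights up to a server-chosen split point rather than the whole model; the layer names and the split rule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
# A local model stored as an ordered mapping of layer name -> weight array.
local_model = {
    "layer1": rng.normal(size=(8, 4)),
    "layer2": rng.normal(size=(8, 8)),
    "layer3": rng.normal(size=(3, 8)),
}

def split_weights(model, split_layer):
    """Return only the layers up to the split point for reporting to the server."""
    names = list(model)
    idx = names.index(split_layer) + 1
    return {n: model[n] for n in names[:idx]}

report = split_weights(local_model, split_layer="layer2")  # server-chosen split point
```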
- referring to FIG. 18, the server 1802 may transmit all of the local models (W_tot) or the split local models to the devices 1804.
- then, each device may perform an accuracy test on its raw data based on the local models of the other devices. The present disclosure refers to this process as model traveling. As the non-IID characteristic of the raw data possessed by each device becomes stronger, the probability of an inference error in model traveling may increase.
- the device when receiving the split local model from the server, the device may replace some layers in its local model with the received split local model. Accordingly, the device may replace some layers with the received split-local model and perform model traveling.
- the device may transmit the accuracy table generated by the device to the server.
- by transmitting the accuracy table to the server, a device can provide information about the data non-IID characteristics of the devices that want to participate in federated learning. Equation 1 below expresses an example of an accuracy table, which may be written as a matrix of inference accuracies over the N participating devices:

[Equation 1] P = [p_i,j], 1 ≤ i, j ≤ N

- here, p_i,j represents the inference accuracy when the i-th device performs model traveling using the local model of the j-th device.
- the server may measure the degree of data non-IID of devices that want to perform federated learning through the above-described model traveling process.
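- The model traveling step can be sketched as each device i evaluating every received local model j on its own raw data, producing row i of the accuracy table p_i,j (the linear classifiers and the data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 3
# Each device holds its own labeled raw data (non-IID across devices).
datas = [(rng.normal(size=(30, 4)), rng.integers(0, 2, size=30)) for _ in range(N)]
# Each device's local model, here just a weight vector for a linear classifier.
models = [rng.normal(size=4) for _ in range(N)]

def accuracy(w, X, y):
    pred = (X @ w > 0).astype(int)
    return float(np.mean(pred == y))

# Row i: device i tests every local model j on its own raw data.
table = np.array([[accuracy(models[j], *datas[i]) for j in range(N)]
                  for i in range(N)])          # table[i, j] = p_{i,j}
```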
- the server may allocate resources by grouping devices in order for devices to perform efficient air-computation-based federated learning.
- when devices perform air-computation-based federated learning, it is difficult to carry out a strict synchronization process if the number of devices is too large.
- in addition, since the non-IID characteristic between raw data becomes stronger as the number of devices grows, the period at which the devices transmit their local models to the server during federated learning should be short.
- the server may allocate resources by grouping devices.
- the base station may divide the terminals into group 1 (1902), group 2 (1904) and group 3 (1906).
- the base station may allocate resources so that the terminals in each group perform federated learning based on the same resource. That is, the base station may allocate resources so that the terminals in each group perform air-computation-based federated learning.
- the devices can perform efficient federated learning.
- device grouping may be performed by grouping devices having strong non-IID characteristics of raw data. Accordingly, the distribution of group data may be set to be similar to the distribution of global data.
- specifically, the server may use the received accuracy table to determine the device groups based on the inter-device inference accuracy p_i,j, for example by evaluating the sum of the inference accuracies over candidate device groups.
- the device grouping described above may reduce a model reporting period between the device and the server. That is, when devices in a group perform air-computation-based federated learning, devices can learn while having a short model report cycle between the device and the server.
- the server can secure the global model by averaging the results of the local model learned within the group.
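- One illustrative grouping rule consistent with this description (an assumption of this sketch, not the rule prescribed by the present disclosure) is to greedily pair the devices whose mutual inference accuracy is lowest, i.e., whose raw data differ most, so that each group's combined data approaches the global distribution:

```python
import numpy as np

def group_devices(table, group_size=2):
    """Greedy grouping: repeatedly group the pair with the lowest mutual accuracy."""
    n = len(table)
    score = (table + table.T) / 2              # symmetric cross-device inference accuracy
    np.fill_diagonal(score, np.inf)            # a device is never paired with itself
    free, groups = set(range(n)), []
    while len(free) >= group_size:
        i, j = min(((a, b) for a in free for b in free if a < b),
                   key=lambda p: score[p])     # most dissimilar (strongly non-IID) pair
        groups.append([i, j])
        free -= {i, j}
    if free:
        groups.append(sorted(free))
    return groups

example = np.array([[0.9, 0.2, 0.8],
                    [0.3, 0.9, 0.4],
                    [0.7, 0.5, 0.9]])
print(group_devices(example))                  # e.g. [[0, 1], [2]]
```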
- the base station 2004 may request a local model weight from the terminal 2002.
- the base station may request weights of the local model from the terminals in order to measure non-IID characteristics.
- the base station may request a local model weight from a device that wants to learn the global model.
- the server may request a split model weight from the terminal.
- the server may determine a specific point of the local model based on the size of the entire model. Then, the server may request the split model weight based on the determined specific point of the local model.
- the terminal 2002 may report the weights of the local model to the base station 2004 through orthogonal resources. That is, the terminal may transmit the local model or the split local model learned from the respective data to the server through separate resources.
- the base station 2004 may broadcast total local models to the terminal 2002 for model traveling. That is, the base station may transmit a set of local models to the device in order to understand the non-IID characteristics of the raw data possessed by each terminal.
- the terminal may perform model traveling.
- the terminal may perform model traveling based on the received set of local models.
- the terminal may generate an accuracy table based on model traveling. That is, the terminal may generate the accuracy table based on the received set of local models.
- in step S2009, the terminal may transmit the accuracy table to the base station.
- the base station may perform device grouping based on the received table. For example, the base station may group terminals with strong non-IID characteristics based on the accuracy information of each terminal.
- the base station may allocate resources for federated learning. For example, the base station may allocate the same resource to one group of terminals. That is, the base station may allocate resources for air-computation-based federated learning to each group.
- the base station and the terminals may perform federated learning based on the allocated resources.
- the base station and the terminals may perform air-computation-based federated learning based on a short-period model reporting. That is, the terminal and the base station may perform air-computation-based federated learning based on a frequent model report.
- the terminal may learn the global model together with the terminals of the same group based on the allocated resource.
- devices participate in federated learning by sending the learned local model to the server.
- the devices may transmit a split layer model.
- in addition, a device may perform model traveling by receiving another device's local model from the server.
- here, model traveling may be performed by replacing a part of the device's own local model with the received local model.
- This split-layer-based model traveling method can reduce traffic for identifying non-IID characteristics between devices.
- the base station may group devices with strong non-IID characteristics by using the table obtained through model traveling. Also, the base station may allocate the same resource to the terminals of one group. Accordingly, the terminals in the group may perform air-computation-based federated learning. When the terminals in a group perform federated learning, the transmission period of the local model can be shortened. Accordingly, even if terminals having non-IID data perform federated learning, the accuracy of the model obtained through federated learning can be secured.
- Table 1 shows whether short-period transmission of model parameters is possible according to the federated learning technique.

Federated learning technique | Short-period model parameter transmission
---|---
Orthogonal resource-based federated learning | Difficult under limited resources
Air-computation-based federated learning | Possible

- in orthogonal resource-based federated learning, terminals may take time to transmit a model due to limited resources. Therefore, it is difficult for the terminals to transmit their local models to the server in a short period.
- the terminal learns the local model.
- the terminal may receive federated learning-related configuration information from a base station or a server, and the terminal may learn a local model based on the federated learning-related configuration information.
- the terminal may receive a local model weight request message from the base station or the server.
- the terminal may transmit a response message based on the received weight request message.
- the terminal may transmit local model related information including split local model information to the base station or the server. That is, the response message may include split local model information.
- the terminal receives the total local model related information and transmits a response message thereto.
- the total local model may include local model information of other terminals participating in the federated learning.
- the total local model related information may include split local model information.
- in addition, the terminal may receive the total local model related information and perform model traveling, as described in detail with reference to FIGS. 17 to 20.
- the terminal may change some layers of the local model of the terminal to the split local model based on the received total local model information including the split local model information. Also, the terminal may change some layers of the local model to a split local model, and perform model traveling based on this. That is, the terminal may compare the data of its own local model with the local model-related data of other terminals participating in federated learning based on the local model in which some layers are changed to the layer of the split local model. Accordingly, the response message may include comparison information between the local model-related data of another terminal participating in the federated learning and the local model-related data of the terminal.
- the base station or the server may determine a group related to federated learning based on the response message.
- the group may be determined based on comparison information between local model-related data of other terminals participating in the federated learning and local model-related data of the terminal.
- the base station or the server may determine that data of terminals in the group have non-IID characteristics. Accordingly, a difference in data distribution between terminals within the determined group may be greater than a difference in data distribution between the determined groups.
- thereafter, the base station or the server may allocate resources so that each terminal group performs air-computation-based federated learning, and may transmit the resource allocation related information to the terminals.
- in step S2105, the terminal receives the resource allocation related information and performs federated learning based on it. The terminal and all the terminals of the group to which the terminal belongs may perform federated learning based on the same resource. That is, the terminal may perform air-computation-based federated learning.
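- Putting the steps of FIG. 21 together, the following toy simulation (the training step, error metric, and fixed two-device groups are all illustrative stand-ins, not the defined procedure) traces the flow from local learning through model traveling to in-group aggregation:

```python
import numpy as np

rng = np.random.default_rng(8)
N = 4                                          # terminals participating in federated learning

# Local learning (S2101-S2103): each terminal fits a toy model to its own raw data.
datas = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(N)]
local = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in datas]

# Weight report and broadcast: the base station gathers and redistributes the local models.
total = list(local)

# Model traveling + second response: each terminal scores every model on its own data.
def err(w, X, y):
    return float(np.mean((X @ w - y) ** 2))
table = np.array([[err(total[j], *datas[i]) for j in range(N)] for i in range(N)])

# Grouping + resource allocation + federated learning on the shared group resource.
for group in ([0, 1], [2, 3]):                 # illustrative fixed grouping
    group_model = np.mean([local[i] for i in group], axis=0)  # AirComp-style average
```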
- the base station transmits total local model related information and receives a response message thereto.
- the base station may transmit federated learning related configuration information to the terminal.
- the terminal may learn the local model based on the received configuration information.
- the base station may transmit a local model weight request message to the terminal.
- the terminal may transmit a response message to the base station based on the local model weight request message.
- the base station may receive the response message including the local model related information and generate total local model related information based thereon. That is, the base station may generate the total local model related information by receiving the local models from a plurality of terminals.
- the base station may transmit total local model related information to the terminal.
- the terminal may perform model traveling and transmit a response message, and the base station may receive the response message.
- the base station performs device grouping based on the response message.
- the base station may perform device grouping based on an accuracy table.
- the base station may allocate resources so that each terminal group performs air-computation-based federated learning, and may transmit the resource allocation related information to the terminals.
- the base station transmits resource allocation related information.
- the terminal receives the resource allocation related information and performs federated learning based on it. The terminal and all the terminals of the group to which the terminal belongs may perform federated learning based on the same resource. That is, the terminal may perform air-computation-based federated learning.
- since examples of the above-described proposed methods may also be included as one of the implementation methods of the present disclosure, it is clear that they may be regarded as a kind of proposed method.
- the above-described proposed methods may be implemented independently, or may be implemented in the form of a combination (or merger) of some of them.
- a rule may be defined so that the base station informs the terminal of whether the proposed methods are applied (or information about the rules of the proposed methods) through a predefined signal (e.g., a physical layer signal or a higher layer signal).
- Embodiments of the present disclosure may be applied to various wireless access systems.
- examples of the various radio access systems include 3rd Generation Partnership Project (3GPP) and 3GPP2 systems.
- Embodiments of the present disclosure may be applied not only to the various radio access systems, but also to all technical fields to which the various radio access systems are applied. Furthermore, the proposed method can be applied to mmWave and THz communication systems using very high frequency bands.
- furthermore, embodiments of the present disclosure may be applied to various applications such as autonomous vehicles and drones.
Claims (18)
- A method of operating a terminal in a wireless communication system, the method comprising: receiving, by the terminal, federated learning related configuration information; learning, by the terminal, a local model based on the federated learning related configuration information; receiving, by the terminal, a local model weight request message; transmitting a first response message based on the received weight request message; receiving total local model related information based on the first response message; transmitting a second response message based on the received total local model related information; receiving resource allocation related information based on the second response message; and performing federated learning based on the received resource allocation related information, wherein the total local model includes local model information of other terminals participating in the federated learning, a group related to the federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- The method of claim 1, wherein the first response message includes split local model information.
- The method of claim 1, wherein the total local model related information includes split local model information.
- The method of claim 3, wherein the terminal changes some layers of the local model of the terminal to the split local model based on the total local model information including the received split local model information.
- The method of claim 1, wherein the second response message includes comparison information between local model related data of another terminal participating in the federated learning and local model related data of the terminal.
- The method of claim 1, wherein the terminal and all the terminals of the group to which the terminal belongs perform the federated learning based on the same resource.
- The method of claim 5, wherein the group is determined based on the comparison information between the local model related data of the other terminal participating in the federated learning and the local model related data of the terminal, and a difference in data distribution between terminals within the determined group is greater than a difference in data distribution between the determined groups.
- A terminal in a wireless communication system, the terminal comprising: a transceiver; and a processor connected to the transceiver, wherein the processor controls the transceiver to receive federated learning related configuration information, learns a local model based on the federated learning related configuration information, controls the transceiver to receive a local model weight request message, controls the transceiver to transmit a first response message based on the received weight request message, controls the transceiver to receive total local model related information based on the first response message, controls the transceiver to transmit a second response message based on the received total local model related information, controls the transceiver to receive resource allocation related information based on the second response message, and performs federated learning based on the received resource allocation related information, wherein the total local model includes local model information of other terminals participating in the federated learning, a group related to the federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- The terminal of claim 8, wherein the first response message includes split local model information.
- The terminal of claim 8, wherein the total local model related information includes split local model information.
- The terminal of claim 10, wherein the processor changes some layers of the local model of the terminal to the split local model based on the total local model information including the received split local model information.
- The terminal of claim 8, wherein the second response message includes comparison information between local model related data of another terminal participating in the federated learning and local model related data of the terminal.
- The terminal of claim 8, wherein the terminal and all the terminals of the group to which the terminal belongs perform the federated learning based on the same resource.
- The terminal of claim 12, wherein the group is determined based on the comparison information between the local model related data of the other terminal participating in the federated learning and the local model related data of the terminal, and a difference in data distribution between terminals within the determined group is greater than a difference in data distribution between the determined groups.
- A communication device comprising: at least one processor; and at least one computer memory connected to the at least one processor and storing instructions that direct operations when executed by the at least one processor, wherein the processor controls the communication device to: receive federated learning related configuration information; learn a local model based on the federated learning related configuration information; receive a local model weight request message; transmit a first response message based on the received weight request message; receive total local model related information based on the first response message; transmit a second response message based on the received total local model related information; receive resource allocation related information based on the second response message; and perform federated learning based on the received resource allocation related information, wherein the total local model includes local model information of other terminals participating in the federated learning, a group related to the federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- A non-transitory computer-readable medium storing at least one instruction, the medium comprising the at least one instruction executable by a processor, wherein the at least one instruction directs a device to: receive federated learning related configuration information; learn a local model based on the federated learning related configuration information; receive a local model weight request message; transmit a first response message based on the received weight request message; receive total local model related information based on the first response message; transmit a second response message based on the received total local model related information; receive resource allocation related information based on the second response message; and perform federated learning based on the received resource allocation related information, wherein the total local model includes local model information of other terminals participating in the federated learning, a group related to the federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- A method of operating a base station in a wireless communication system, the method comprising: transmitting, by the base station, federated learning related configuration information; transmitting, by the base station, a local model weight request message; receiving a first response message based on the weight request message; transmitting total local model related information based on the first response message; receiving a second response message based on the received total local model related information; transmitting resource allocation related information based on the second response message; and performing federated learning based on the resource allocation related information, wherein a local model is learned based on the federated learning related configuration information, the total local model includes local model information of other terminals participating in the federated learning, a group related to the federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.
- A base station in a wireless communication system, the base station comprising: a transceiver; and a processor connected to the transceiver, wherein the processor controls the transceiver to transmit federated learning related configuration information, controls the transceiver to transmit a local model weight request message, controls the transceiver to receive a first response message based on the weight request message, controls the transceiver to transmit total local model related information based on the first response message, controls the transceiver to receive a second response message based on the received total local model related information, controls the transceiver to transmit resource allocation related information based on the second response message, and performs federated learning based on the resource allocation related information, wherein a local model is learned based on the federated learning related configuration information, the total local model includes local model information of other terminals participating in the federated learning, a group related to the federated learning is determined based on the second response message, and the federated learning is performed based on the determined group.