WO2021261611A1 - Method and device for performing federated learning in a wireless communication system - Google Patents


Info

Publication number
WO2021261611A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
server
connection
weight
information related
Prior art date
Application number
PCT/KR2020/008203
Other languages
English (en)
Korean (ko)
Inventor
이명희
김일환
이종구
김성진
Original Assignee
엘지전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사
Priority to PCT/KR2020/008203
Publication of WO2021261611A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/08: Learning methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W16/00: Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W16/18: Network planning tools

Definitions

  • the following description relates to a wireless communication system, and to a method and apparatus for performing federated learning in a wireless communication system.
  • a wireless access system is a multiple access system that can support communication with multiple users by sharing available system resources (bandwidth, transmission power, etc.).
  • Examples of the multiple access system include a code division multiple access (CDMA) system, a frequency division multiple access (FDMA) system, a time division multiple access (TDMA) system, an orthogonal frequency division multiple access (OFDMA) system, and a single carrier frequency division multiple access (SC-FDMA) system.
  • an enhanced mobile broadband (eMBB) communication technology has been proposed compared to the existing radio access technology (RAT).
  • In addition, massive machine type communications (mMTC), which connects a large number of devices, and services/user equipment (UE) sensitive to reliability and latency are also considered in next-generation communication system design.
  • the present disclosure relates to a method and apparatus for efficiently performing federated learning in a wireless communication system.
  • the present disclosure relates to a method and apparatus for reducing a resource of a radio link required for federated learning in a wireless communication system.
  • a method of operating a terminal in a wireless communication system may include receiving information related to an initial network model from a server, configuring a dense network based on the initial network model, changing at least one weight of at least one connection by performing training on the dense network, and transmitting, to the server, information related to a weight change amount for at least one connection selected based on the change amount of the at least one weight.
  • a method of operating a server in a wireless communication system may include transmitting information related to an initial network model to a terminal, receiving information related to a weight change amount for at least one connection from the terminal, updating weights of the initial network model based on the information related to the weight change amount, and removing at least one connection based on the updated weights.
  • a terminal in a wireless communication system may include a transceiver and a processor connected to the transceiver.
  • the processor may be configured to receive information related to an initial network model from a server, configure a dense network based on the initial network model, change at least one weight of at least one connection by performing training on the dense network, and transmit, to the server, information related to a weight change amount for at least one connection selected based on the change amount of the at least one weight.
  • a server may include a transceiver and a processor connected to the transceiver.
  • the processor may be configured to transmit information related to the initial network model to the terminal, receive information related to a weight change amount for at least one connection from the terminal, update weights of the initial network model based on the information related to the weight change amount, and remove at least one connection based on the updated weights.
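  • As an illustration of the procedure summarized in the preceding paragraphs, the following is a minimal Python sketch of one round of compressed federated learning; the function names (eg, local_train), the top-k selection of connections by weight-change magnitude, the simple averaging at the server, and the pruning threshold are assumptions made for illustration, not the claimed implementation.

```python
import numpy as np

def local_train(weights, data):
    """Stand-in for on-device training; a real terminal would run SGD on its local data."""
    return weights + 0.01 * np.random.randn(*weights.shape)

def terminal_round(initial_weights, local_data, top_k):
    """Terminal side: configure a dense network from the initial model, train it, and
    report only the connections whose weights changed the most."""
    weights = initial_weights.copy()               # dense network based on the initial model
    trained = local_train(weights, local_data)     # training changes at least one weight
    delta = (trained - weights).ravel()            # weight change amount per connection
    selected = np.argsort(np.abs(delta))[-top_k:]  # connections selected by change magnitude
    return selected, delta[selected]               # information related to the weight change amount

def server_round(initial_weights, reports, prune_threshold):
    """Server side: update the model from the reported weight changes, then remove
    (prune) connections whose updated weights are small."""
    weights = initial_weights.ravel().copy()
    for indices, deltas in reports:                # reports gathered from multiple terminals
        weights[indices] += deltas / len(reports)  # simple averaging of reported changes
    keep = np.abs(weights) >= prune_threshold      # keep only sufficiently large weights
    weights[~keep] = 0.0
    return weights.reshape(initial_weights.shape), keep.reshape(initial_weights.shape)

# Toy usage: three terminals report their largest weight changes for an 8x8 weight matrix.
model = np.random.randn(8, 8)
reports = [terminal_round(model, local_data=None, top_k=10) for _ in range(3)]
updated_model, kept_connections = server_round(model, reports, prune_threshold=0.05)
```

  • In the actual procedure the selection criterion, the reporting packet format (cf. FIGS. 46 and 47), and the pruning schedule are defined by the disclosure; the sketch above only fixes the ordering of the terminal-side and server-side steps.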
  • federated learning can be performed more effectively in a wireless communication system.
  • FIG. 1 is a diagram illustrating an example of a communication system applied to the present disclosure.
  • FIG. 2 is a diagram illustrating an example of a wireless device applicable to the present disclosure.
  • FIG. 3 is a diagram illustrating another example of a wireless device applied to the present disclosure.
  • FIG. 4 is a diagram illustrating an example of a mobile device applied to the present disclosure.
  • FIG. 5 is a diagram illustrating an example of a vehicle or autonomous driving vehicle applied to the present disclosure.
  • FIG. 6 is a diagram illustrating an example of a movable body applied to the present disclosure.
  • FIG. 7 is a diagram illustrating an example of an XR device applied to the present disclosure.
  • FIG. 8 is a diagram illustrating an example of a robot applied to the present disclosure.
  • FIG. 9 is a diagram illustrating an example of an artificial intelligence (AI) device applied to the present disclosure.
  • FIG. 10 is a diagram illustrating physical channels applied to the present disclosure and a signal transmission method using the same.
  • FIG. 11 is a diagram illustrating a control plane and a user plane structure of a radio interface protocol applied to the present disclosure.
  • FIG. 12 is a diagram illustrating a method of processing a transmission signal applied to the present disclosure.
  • FIG. 13 is a diagram illustrating a structure of a radio frame applicable to the present disclosure.
  • FIG. 14 is a diagram illustrating a slot structure applicable to the present disclosure.
  • FIG. 15 is a diagram illustrating an example of a communication structure that can be provided in a 6G system applicable to the present disclosure.
  • FIG. 16 is a diagram illustrating an electromagnetic spectrum applicable to the present disclosure.
  • FIG. 17 is a diagram illustrating a THz communication method applicable to the present disclosure.
  • FIG. 18 is a diagram illustrating a THz wireless communication transceiver applicable to the present disclosure.
  • FIG. 19 is a diagram illustrating a method for generating a THz signal applicable to the present disclosure.
  • FIG. 20 is a diagram illustrating a wireless communication transceiver applicable to the present disclosure.
  • FIG. 21 is a diagram illustrating a structure of a transmitter applicable to the present disclosure.
  • FIG. 22 is a diagram illustrating a modulator structure applicable to the present disclosure.
  • FIG. 23 is a diagram illustrating a structure of a perceptron included in an artificial neural network applicable to the present disclosure.
  • FIG. 24 is a diagram illustrating an artificial neural network structure applicable to the present disclosure.
  • FIG. 25 is a diagram illustrating a deep neural network applicable to the present disclosure.
  • FIG. 26 is a diagram illustrating a convolutional neural network applicable to the present disclosure.
  • FIG. 27 is a diagram illustrating a filter operation of a convolutional neural network applicable to the present disclosure.
  • FIG. 28 is a diagram illustrating a neural network structure in which a cyclic loop exists, applicable to the present disclosure.
  • FIG. 29 is a diagram illustrating an operation structure of a recurrent neural network applicable to the present disclosure.
  • FIG. 30 is a diagram illustrating the concept of federated learning applicable to the present disclosure.
  • FIG. 31 is a diagram showing an example of a protocol of federated learning applicable to the present disclosure.
  • FIG. 32 is a diagram illustrating the concept of connection and weight learning applicable to the present disclosure.
  • FIG. 33 is a diagram illustrating examples of networks before and after pruning according to connection and weight learning applicable to the present disclosure.
  • FIGS. 34A and 34B are diagrams illustrating pruning sensitivity of an AlexNet network.
  • FIG. 35 is a diagram illustrating an embodiment of a procedure for performing federated learning in a terminal applicable to the present disclosure.
  • FIG. 36 is a diagram illustrating an embodiment of a procedure for performing federated learning in a server applicable to the present disclosure.
  • FIG. 37 is a diagram illustrating an embodiment of a procedure for performing compressed federated learning in a server applicable to the present disclosure.
  • FIG. 38 is a diagram illustrating an embodiment of a procedure for performing federated collection during each iteration step in a server applicable to the present disclosure.
  • FIG. 39 is a diagram illustrating an embodiment of a procedure for performing federated collection during each iteration in a terminal applicable to the present disclosure.
  • FIG. 40 is a diagram illustrating an embodiment of a procedure for performing pruning in a server applicable to the present disclosure.
  • FIG. 41 is a diagram illustrating an example of a protocol of the first two iterations in compressed federated learning applicable to the present disclosure.
  • FIG. 42 is a diagram illustrating an example of a protocol of the latter two iterations in compressed federated learning applicable to the present disclosure.
  • FIG. 43 is a diagram illustrating an example of a protocol for restarting a federated collection operation in compressed federated learning applicable to the present disclosure.
  • FIG. 44 is a diagram illustrating an example of signal exchange in the first half of federated learning applicable to the present disclosure.
  • FIG. 45 is a diagram illustrating an example of signal exchange in the second half of compressed federated learning applicable to the present disclosure.
  • FIG. 46 is a diagram illustrating an example of a packet format for transmitting information related to weights applicable to the present disclosure.
  • FIG. 47 is a diagram illustrating another example of a packet format for transmitting information related to weights applicable to the present disclosure.
  • each component or feature may be considered optional unless explicitly stated otherwise.
  • Each component or feature may be implemented in a form that is not combined with other components or features.
  • some components and/or features may be combined to configure an embodiment of the present disclosure.
  • the order of operations described in embodiments of the present disclosure may be changed. Some configurations or features of one embodiment may be included in other embodiments, or may be replaced with corresponding configurations or features of other embodiments.
  • the base station refers to a terminal node of the network that communicates directly with the mobile station.
  • a specific operation described as being performed by the base station in this document may be performed by an upper node of the base station in some cases.
  • the term 'base station' may be replaced by terms such as a fixed station, a Node B, an eNB (eNode B), a gNB (gNode B), an ng-eNB, an advanced base station (ABS), or an access point.
  • a terminal may be replaced by terms such as a user equipment (UE), a mobile station (MS), a subscriber station (SS), a mobile subscriber station (MSS), a mobile terminal, or an advanced mobile station (AMS).
  • a transmitting end refers to a fixed and/or mobile node that provides a data service or a voice service
  • a receiving end refers to a fixed and/or mobile node that receives a data service or a voice service.
  • the mobile station may be a transmitting end, and the base station may be a receiving end.
  • the mobile station may be the receiving end, and the base station may be the transmitting end.
  • Embodiments of the present disclosure may be supported by standard documents disclosed in at least one of the IEEE 802.xx system, the 3GPP (3rd Generation Partnership Project) access system, the 3GPP LTE (Long Term Evolution) system, the 3GPP 5G (5th generation) NR (New Radio) system, and the 3GPP2 system, which are wireless access systems. In particular, embodiments of the present disclosure may be supported by the 3GPP technical specification (TS) 38.211, 3GPP TS 38.212, 3GPP TS 38.213, 3GPP TS 38.321, and 3GPP TS 38.331 documents.
  • embodiments of the present disclosure may be applied to other wireless access systems, and are not limited to the above-described system. As an example, it may be applicable to a system applied after the 3GPP 5G NR system, and is not limited to a specific system.
  • LTE may mean 3GPP TS 36.xxx Release 8 or later technology.
  • LTE technology after 3GPP TS 36.xxx Release 10 may be referred to as LTE-A
  • LTE technology after 3GPP TS 36.xxx Release 13 may be referred to as LTE-A pro.
  • 3GPP NR may refer to technology after TS 38.xxx Release 15.
  • 3GPP 6G may refer to technology after TS Release 17 and/or Release 18.
  • "xxx" stands for standard document detail number.
  • LTE/NR/6G may be collectively referred to as a 3GPP system.
  • FIG. 1 is a diagram illustrating an example of a communication system applied to the present disclosure.
  • a communication system 100 applied to the present disclosure includes a wireless device, a base station, and a network.
  • the wireless device means a device that performs communication using a wireless access technology (eg, 5G NR, LTE), and may be referred to as a communication/wireless/5G device.
  • the wireless device may include a robot 100a, vehicles 100b-1 and 100b-2, an extended reality (XR) device 100c, a hand-held device 100d, a home appliance 100e, an Internet of Things (IoT) device 100f, and an artificial intelligence (AI) device/server 100g.
  • the vehicle may include a vehicle equipped with a wireless communication function, an autonomous driving vehicle, a vehicle capable of performing inter-vehicle communication, and the like.
  • the vehicles 100b-1 and 100b-2 may include an unmanned aerial vehicle (UAV) (eg, a drone).
  • the XR device 100c includes augmented reality (AR)/virtual reality (VR)/mixed reality (MR) devices and may be implemented in the form of a head-mounted device (HMD), a head-up display (HUD) provided in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a robot, and the like.
  • the portable device 100d may include a smart phone, a smart pad, a wearable device (eg, smart watch, smart glasses), and a computer (eg, a laptop computer).
  • the home appliance 100e may include a TV, a refrigerator, a washing machine, and the like.
  • the IoT device 100f may include a sensor, a smart meter, and the like.
  • the base station 120 and the network 130 may be implemented as a wireless device, and a specific wireless device 120a may operate as a base station/network node to other wireless devices.
  • the wireless devices 100a to 100f may be connected to the network 130 through the base station 120 .
  • AI technology may be applied to the wireless devices 100a to 100f , and the wireless devices 100a to 100f may be connected to the AI server 100g through the network 130 .
  • the network 130 may be configured using a 3G network, a 4G (eg, LTE) network, or a 5G (eg, NR) network.
  • the wireless devices 100a to 100f may communicate with each other through the base station 120/network 130, but may also communicate directly (eg, sidelink communication) without going through the base station 120/network 130.
  • the vehicles 100b-1 and 100b-2 may perform direct communication (eg, vehicle to vehicle (V2V)/vehicle to everything (V2X) communication).
  • the IoT device 100f (eg, a sensor) may communicate directly with other IoT devices (eg, sensors) or other wireless devices 100a to 100f.
  • Wireless communication/connections 150a, 150b, and 150c may be established between the wireless devices 100a to 100f and the base station 120, between the wireless devices 100a to 100f, and between the base stations 120.
  • the wireless communication/connection includes uplink/downlink communication 150a, sidelink communication 150b (or D2D communication), and communication between base stations 150c (eg, relay, integrated access backhaul (IAB)), and may be achieved through a radio access technology (eg, 5G NR).
  • the wireless device and the base station/wireless device, and the base station and the base station may transmit/receive wireless signals to each other.
  • the wireless communication/connection 150a , 150b , 150c may transmit/receive signals through various physical channels.
  • for transmission/reception of wireless signals, at least a part of various configuration information setting processes, various signal processing processes (eg, channel encoding/decoding, modulation/demodulation, resource mapping/demapping, etc.), and resource allocation processes may be performed.
  • FIG. 2 is a diagram illustrating an example of a wireless device applicable to the present disclosure.
  • a first wireless device 200a and a second wireless device 200b may transmit/receive wireless signals through various wireless access technologies (eg, LTE, NR).
  • {the first wireless device 200a, the second wireless device 200b} may correspond to {the wireless device 100x, the base station 120} and/or {the wireless device 100x, the wireless device 100x} of FIG. 1.
  • the first wireless device 200a includes one or more processors 202a and one or more memories 204a, and may further include one or more transceivers 206a and/or one or more antennas 208a.
  • the processor 202a controls the memory 204a and/or the transceiver 206a and may be configured to implement the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
  • the processor 202a may process information in the memory 204a to generate first information/signal, and then transmit a wireless signal including the first information/signal through the transceiver 206a.
  • the processor 202a may receive the radio signal including the second information/signal through the transceiver 206a, and then store the information obtained from the signal processing of the second information/signal in the memory 204a.
  • the memory 204a may be connected to the processor 202a and may store various information related to the operation of the processor 202a.
  • the memory 204a may store software code including instructions for performing some or all of the processes controlled by the processor 202a or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
  • the processor 202a and the memory 204a may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • the transceiver 206a may be coupled with the processor 202a and may transmit and/or receive wireless signals via one or more antennas 208a.
  • the transceiver 206a may include a transmitter and/or a receiver.
  • the transceiver 206a may be used interchangeably with a radio frequency (RF) unit.
  • a wireless device may refer to a communication modem/circuit/chip.
  • the second wireless device 200b includes one or more processors 202b, one or more memories 204b, and may further include one or more transceivers 206b and/or one or more antennas 208b.
  • the processor 202b controls the memory 204b and/or the transceiver 206b and may be configured to implement the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
  • the processor 202b may process information in the memory 204b to generate third information/signal, and then transmit a wireless signal including the third information/signal through the transceiver 206b.
  • the processor 202b may receive the radio signal including the fourth information/signal through the transceiver 206b, and then store information obtained from signal processing of the fourth information/signal in the memory 204b.
  • the memory 204b may be connected to the processor 202b and may store various information related to the operation of the processor 202b.
  • the memory 204b may store software code including instructions for performing some or all of the processes controlled by the processor 202b or for performing the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
  • the processor 202b and the memory 204b may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • the transceiver 206b may be coupled to the processor 202b and may transmit and/or receive wireless signals via one or more antennas 208b.
  • Transceiver 206b may include a transmitter and/or receiver.
  • Transceiver 206b may be used interchangeably with an RF unit.
  • a wireless device may refer to a communication modem/circuit/chip.
  • one or more protocol layers may be implemented by one or more processors 202a, 202b.
  • one or more processors 202a, 202b may implement one or more layers (eg, functional layers such as PHY (physical), MAC (media access control), RLC (radio link control), PDCP (packet data convergence protocol), RRC (radio resource control), and SDAP (service data adaptation protocol)).
  • the one or more processors 202a, 202b may generate one or more protocol data units (PDUs) and/or one or more service data units (SDUs) according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein. The one or more processors 202a, 202b may generate messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein. The one or more processors 202a, 202b may generate a signal (eg, a baseband signal) including a PDU, SDU, message, control information, data, or information according to the functions, procedures, proposals, and/or methods disclosed herein.
  • the one or more processors 202a, 202b may receive signals (eg, baseband signals) from the one or more transceivers 206a, 206b and acquire PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
  • One or more processors 202a, 202b may be referred to as a controller, microcontroller, microprocessor, or microcomputer.
  • One or more processors 202a, 202b may be implemented by hardware, firmware, software, or a combination thereof.
  • as an example, one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), or field programmable gate arrays (FPGAs) may be included in the one or more processors 202a, 202b.
  • the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein may be implemented using firmware or software, and the firmware or software may be implemented to include modules, procedures, functions, and the like.
  • firmware or software configured to perform the descriptions, functions, procedures, proposals, methods, and/or flow charts disclosed in this document may be included in the one or more processors 202a, 202b or stored in the one or more memories 204a, 204b and driven by the one or more processors 202a, 202b.
  • the descriptions, functions, procedures, suggestions, methods, and/or flowcharts of operations disclosed herein may be implemented using firmware or software in the form of code, instructions, and/or a set of instructions.
  • One or more memories 204a, 204b may be coupled to one or more processors 202a, 202b and may store various types of data, signals, messages, information, programs, codes, instructions, and/or instructions.
  • One or more memories 204a, 204b may be composed of read only memory (ROM), random access memory (RAM), erasable programmable read only memory (EPROM), flash memory, hard drives, registers, cache memory, computer readable storage media, and/or a combination thereof.
  • One or more memories 204a, 204b may be located inside and/or external to one or more processors 202a, 202b. Additionally, one or more memories 204a, 204b may be coupled to one or more processors 202a, 202b through various technologies, such as wired or wireless connections.
  • the one or more transceivers 206a, 206b may transmit user data, control information, radio signals/channels, etc. referred to in the methods and/or operational flowcharts of this document to one or more other devices.
  • the one or more transceivers 206a, 206b may receive user data, control information, radio signals/channels, etc. referred to in the descriptions, functions, procedures, suggestions, methods, and/or flow charts disclosed herein from one or more other devices.
  • one or more transceivers 206a , 206b may be coupled to one or more processors 202a , 202b and may transmit and receive wireless signals.
  • one or more processors 202a, 202b may control the one or more transceivers 206a, 206b to transmit user data, control information, or wireless signals to one or more other devices. Additionally, one or more processors 202a, 202b may control the one or more transceivers 206a, 206b to receive user data, control information, or wireless signals from one or more other devices. Further, the one or more transceivers 206a, 206b may be coupled with the one or more antennas 208a, 208b and may be configured to transmit and receive user data, control information, radio signals/channels, etc. through the one or more antennas 208a, 208b.
  • one or more antennas may be a plurality of physical antennas or a plurality of logical antennas (eg, antenna ports).
  • the one or more transceivers 206a, 206b may convert received radio signals/channels, etc. from RF band signals into baseband signals in order to process the received user data, control information, radio signals/channels, etc. using the one or more processors 202a, 202b.
  • One or more transceivers 206a, 206b may convert user data, control information, radio signals/channels, etc. processed using one or more processors 202a, 202b from baseband signals to RF band signals.
  • one or more transceivers 206a, 206b may include (analog) oscillators and/or filters.
  • FIG. 3 is a diagram illustrating another example of a wireless device applied to the present disclosure.
  • a wireless device 300 corresponds to the wireless devices 200a and 200b of FIG. 2 and may be composed of various elements, components, units, and/or modules.
  • the wireless device 300 may include a communication unit 310 , a control unit 320 , a memory unit 330 , and an additional element 340 .
  • the communication unit may include communication circuitry 312 and transceiver(s) 314 .
  • communication circuitry 312 may include one or more processors 202a, 202b and/or one or more memories 204a, 204b of FIG. 2 .
  • the transceiver(s) 314 may include one or more transceivers 206a , 206b and/or one or more antennas 208a , 208b of FIG. 2 .
  • the control unit 320 is electrically connected to the communication unit 310 , the memory unit 330 , and the additional element 340 and controls general operations of the wireless device.
  • the controller 320 may control the electrical/mechanical operation of the wireless device based on the program/code/command/information stored in the memory unit 330 .
  • the control unit 320 may transmit information stored in the memory unit 330 to the outside (eg, another communication device) through the communication unit 310 via a wireless/wired interface, or may store, in the memory unit 330, information received from the outside (eg, another communication device) through the communication unit 310 via a wireless/wired interface.
  • the additional element 340 may be configured in various ways according to the type of the wireless device.
  • the additional element 340 may include at least one of a power unit/battery, an input/output unit, a driving unit, and a computing unit.
  • the wireless device 300 may be implemented in the form of a robot (FIG. 1, 100a), vehicles (FIG. 1, 100b-1 and 100b-2), an XR device (FIG. 1, 100c), a mobile device (FIG. 1, 100d), a home appliance (FIG. 1, 100e), an IoT device (FIG. 1, 100f), and the like.
  • the wireless device may be mobile or used in a fixed location depending on the use-example/service.
  • various elements, components, units, and/or modules in the wireless device 300 may all be interconnected through a wired interface, or at least some of them may be wirelessly connected through the communication unit 310.
  • the control unit 320 and the communication unit 310 are connected by wire, and the control unit 320 and the first unit (eg, 130 , 140 ) are connected wirelessly through the communication unit 310 .
  • each element, component, unit, and/or module within the wireless device 300 may further include one or more elements.
  • the controller 320 may include one or more processor sets.
  • control unit 320 may be configured as a set of a communication control processor, an application processor, an electronic control unit (ECU), a graphic processing processor, a memory control processor, and the like.
  • the memory unit 330 may be configured with RAM, dynamic RAM (DRAM), ROM, flash memory, volatile memory, non-volatile memory, and/or a combination thereof.
  • FIG. 4 is a diagram illustrating an example of a mobile device applied to the present disclosure.
  • the portable device may include a smart phone, a smart pad, a wearable device (eg, a smart watch, smart glasses), and a portable computer (eg, a laptop computer).
  • the mobile device may be referred to as a mobile station (MS), a user terminal (UT), a mobile subscriber station (MSS), a subscriber station (SS), an advanced mobile station (AMS), or a wireless terminal (WT).
  • the mobile device 400 includes an antenna unit 408 , a communication unit 410 , a control unit 420 , a memory unit 430 , a power supply unit 440a , an interface unit 440b , and an input/output unit 440c .
  • the antenna unit 408 may be configured as a part of the communication unit 410 .
  • Blocks 410 to 430/440a to 440c respectively correspond to blocks 310 to 330/340 of FIG. 3 .
  • the communication unit 410 may transmit and receive signals (eg, data, control signals, etc.) with other wireless devices and base stations.
  • the controller 420 may control components of the portable device 400 to perform various operations.
  • the controller 420 may include an application processor (AP).
  • the memory unit 430 may store data/parameters/programs/codes/commands necessary for driving the portable device 400 . Also, the memory unit 430 may store input/output data/information.
  • the power supply unit 440a supplies power to the portable device 400 and may include a wired/wireless charging circuit, a battery, and the like.
  • the interface unit 440b may support a connection between the portable device 400 and other external devices.
  • the interface unit 440b may include various ports (eg, an audio input/output port and a video input/output port) for connection with an external device.
  • the input/output unit 440c may receive or output image information/signal, audio information/signal, data, and/or information input from a user.
  • the input/output unit 440c may include a camera, a microphone, a user input unit, a display unit 440d, a speaker, and/or a haptic module.
  • the input/output unit 440c may obtain information/signals (eg, touch, text, voice, image, video) input from the user, and the obtained information/signals may be stored in the memory unit 430.
  • the communication unit 410 may convert the information/signal stored in the memory into a wireless signal, and transmit the converted wireless signal directly to another wireless device or to a base station. Also, after receiving a radio signal from another radio device or base station, the communication unit 410 may restore the received radio signal to original information/signal.
  • the restored information/signal may be stored in the memory unit 430 and output in various forms (eg, text, voice, image, video, haptic) through the input/output unit 440c.
  • FIG. 5 is a diagram illustrating an example of a vehicle or autonomous driving vehicle applied to the present disclosure.
  • the vehicle or autonomous driving vehicle may be implemented as a mobile robot, a vehicle, a train, an aerial vehicle (AV), a ship, and the like, but is not limited to the shape of the vehicle.
  • the vehicle or autonomous driving vehicle 500 may include an antenna unit 508, a communication unit 510, a control unit 520, a driving unit 540a, a power supply unit 540b, a sensor unit 540c, and an autonomous driving unit 540d.
  • the antenna unit 508 may be configured as a part of the communication unit 510.
  • Blocks 510/530/540a to 540d respectively correspond to blocks 410/430/440 of FIG. 4 .
  • the communication unit 510 may transmit/receive signals (eg, data, control signals, etc.) to and from external devices such as other vehicles, base stations (eg, base stations, roadside units, etc.), and servers.
  • the controller 520 may control elements of the vehicle or the autonomous driving vehicle 500 to perform various operations.
  • the controller 520 may include an electronic control unit (ECU).
  • the driving unit 540a may cause the vehicle or the autonomous driving vehicle 500 to run on the ground.
  • the driving unit 540a may include an engine, a motor, a power train, a wheel, a brake, a steering device, and the like.
  • the power supply unit 540b supplies power to the vehicle or the autonomous driving vehicle 500 , and may include a wired/wireless charging circuit, a battery, and the like.
  • the sensor unit 540c may obtain vehicle state, surrounding environment information, user information, and the like.
  • the sensor unit 540c may include an inertial measurement unit (IMU) sensor, a collision sensor, a wheel sensor, a speed sensor, an inclination sensor, a weight sensor, a heading sensor, a position module, a vehicle forward/reverse sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor, a temperature sensor, a humidity sensor, an ultrasonic sensor, an illuminance sensor, a pedal position sensor, and the like.
  • the autonomous driving unit 540d may implement a technology for maintaining a driving lane, a technology for automatically adjusting speed such as adaptive cruise control, a technology for automatically driving along a predetermined route, and a technology for automatically setting a route when a destination is set.
  • the communication unit 510 may receive map data, traffic information data, and the like from an external server.
  • the autonomous driving unit 540d may generate an autonomous driving route and a driving plan based on the acquired data.
  • the controller 520 may control the driving unit 540a to move the vehicle or the autonomous driving vehicle 500 along the autonomous driving path (eg, speed/direction adjustment) according to the driving plan.
  • the communication unit 510 may obtain the latest traffic information data from an external server periodically or aperiodically, and may acquire surrounding traffic information data from surrounding vehicles.
  • the sensor unit 540c may acquire vehicle state and surrounding environment information.
  • the autonomous driving unit 540d may update the autonomous driving route and driving plan based on the newly acquired data/information.
  • the communication unit 510 may transmit information about a vehicle location, an autonomous driving route, a driving plan, and the like to an external server.
  • the external server may predict traffic information data in advance using AI technology or the like based on information collected from the vehicle or autonomous vehicles, and may provide the predicted traffic information data to the vehicle or autonomous vehicles.
  • FIG. 6 is a diagram illustrating an example of a movable body applied to the present disclosure.
  • the moving object applied to the present disclosure may be implemented as at least any one of means of transport, train, aircraft, and ship.
  • the movable body applied to the present disclosure may be implemented in other forms, and is not limited to the above-described embodiment.
  • the mobile unit 600 may include a communication unit 610 , a control unit 620 , a memory unit 630 , an input/output unit 640a , and a position measurement unit 640b .
  • blocks 610 to 630/640a to 640b correspond to blocks 310 to 330/340 of FIG. 3 , respectively.
  • the communication unit 610 may transmit/receive signals (eg, data, control signals, etc.) to/from other mobile devices or external devices such as a base station.
  • the controller 620 may perform various operations by controlling the components of the movable body 600 .
  • the memory unit 630 may store data/parameters/programs/codes/commands supporting various functions of the mobile unit 600 .
  • the input/output unit 640a may output an AR/VR object based on information in the memory unit 630 .
  • the input/output unit 640a may include a HUD.
  • the position measuring unit 640b may acquire position information of the moving object 600 .
  • the location information may include absolute location information of the moving object 600, location information within a driving line, acceleration information, and relative location information with respect to surrounding vehicles.
  • the location measuring unit 640b may include a GPS and various sensors.
  • the communication unit 610 of the mobile unit 600 may receive map information, traffic information, and the like from an external server and store it in the memory unit 630 .
  • the position measurement unit 640b may obtain information about the location of the moving object through GPS and various sensors and store it in the memory unit 630 .
  • the controller 620 may generate a virtual object based on map information, traffic information, and location information of a moving object, and the input/output unit 640a may display the generated virtual object on a window inside the moving object (651, 652). Also, the control unit 620 may determine whether the moving object 600 is normally operating within the driving line based on the moving object location information.
  • the control unit 620 may display a warning on the glass window of the moving object through the input/output unit 640a. Also, the control unit 620 may broadcast a warning message regarding the driving abnormality to surrounding moving objects through the communication unit 610 . Depending on the situation, the control unit 620 may transmit the location information of the moving object and information on the driving/moving object abnormality to the related organization through the communication unit 610 .
  • the XR device may be implemented as an HMD, a head-up display (HUD) provided in a vehicle, a television, a smart phone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a robot, and the like.
  • the XR device 700a may include a communication unit 710 , a control unit 720 , a memory unit 730 , an input/output unit 740a , a sensor unit 740b , and a power supply unit 740c .
  • blocks 710 to 730/740a to 740c may correspond to blocks 310 to 330/340 of FIG. 3 , respectively.
  • the communication unit 710 may transmit/receive signals (eg, media data, control signals, etc.) to/from external devices such as other wireless devices, portable devices, or media servers.
  • Media data may include video, images, and sound.
  • the controller 720 may perform various operations by controlling the components of the XR device 700a.
  • the controller 720 may be configured to control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation and processing.
  • the memory unit 730 may store data/parameters/programs/codes/commands necessary for driving the XR device 700a/creating an XR object.
  • the input/output unit 740a may obtain control information, data, etc. from the outside, and may output the generated XR object.
  • the input/output unit 740a may include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module.
  • the sensor unit 740b may obtain an XR device state, surrounding environment information, user information, and the like.
  • the sensor unit 740b may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, a red green blue (RGB) sensor, an infrared (IR) sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
  • the power supply unit 740c supplies power to the XR device 700a, and may include a wired/wireless charging circuit, a battery, and the like.
  • the memory unit 730 of the XR device 700a may include information (eg, data, etc.) necessary for generating an XR object (eg, AR/VR/MR object).
  • the input/output unit 740a may obtain a command to operate the XR device 700a from the user, and the controller 720 may drive the XR device 700a according to the user's driving command. For example, when the user intends to watch a movie or news through the XR device 700a, the controller 720 may transmit content request information to another device (eg, the mobile device 700b) or a media server through the communication unit 730.
  • the communication unit 730 may download/stream contents such as movies and news from another device (eg, the portable device 700b) or a media server to the memory unit 730 .
  • the controller 720 may control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation/processing for the content, and may generate/output an XR object based on information about a surrounding space or a real object acquired through the input/output unit 740a/sensor unit 740b.
  • the XR device 700a is wirelessly connected to the portable device 700b through the communication unit 710 , and the operation of the XR device 700a may be controlled by the portable device 700b.
  • the portable device 700b may operate as a controller for the XR device 700a.
  • the XR device 700a may obtain 3D location information of the portable device 700b, and then generate and output an XR object corresponding to the portable device 700b.
  • the robot 800 may include a communication unit 810 , a control unit 820 , a memory unit 830 , an input/output unit 840a , a sensor unit 840b , and a driving unit 840c .
  • blocks 810 to 830/840a to 840c may correspond to blocks 310 to 330/340 of FIG. 3 , respectively.
  • the communication unit 810 may transmit and receive signals (eg, driving information, control signals, etc.) with external devices such as other wireless devices, other robots, or control servers.
  • the controller 820 may control components of the robot 800 to perform various operations.
  • the memory unit 830 may store data/parameters/programs/codes/commands supporting various functions of the robot 800 .
  • the input/output unit 840a may obtain information from the outside of the robot 800 and may output information to the outside of the robot 800 .
  • the input/output unit 840a may include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module.
  • the sensor unit 840b may obtain internal information, surrounding environment information, user information, and the like of the robot 800 .
  • the sensor unit 840b may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, a radar, and the like.
  • the driving unit 840c may perform various physical operations, such as moving a robot joint. Also, the driving unit 840c may cause the robot 800 to travel on the ground or to fly in the air.
  • the driving unit 840c may include an actuator, a motor, a wheel, a brake, a propeller, and the like.
  • the AI device may be implemented as a fixed device or a mobile device, such as a TV, a projector, a smartphone, a PC, a laptop, a digital broadcasting terminal, a tablet PC, a wearable device, a set-top box (STB), a radio, a washing machine, a refrigerator, a digital signage, a robot, a vehicle, and the like.
  • the AI device 900 includes a communication unit 910 , a control unit 920 , a memory unit 930 , input/output units 940a/940b , a learning processor unit 940c and a sensor unit 940d.
  • the communication unit 910 may transmit and receive wired/wireless signals (eg, sensor information, user input, learning models, control signals, etc.) with external devices such as other AI devices (eg, FIG. 1, 100x, 120, 140) or an AI server (FIG. 1, 140) using wired/wireless communication technology. To this end, the communication unit 910 may transmit information in the memory unit 930 to an external device or transmit a signal received from an external device to the memory unit 930.
  • the controller 920 may determine at least one executable operation of the AI device 900 based on information determined or generated using a data analysis algorithm or a machine learning algorithm. In addition, the controller 920 may control the components of the AI device 900 to perform the determined operation. For example, the control unit 920 may request, search, receive, or utilize data of the learning processor unit 940c or the memory unit 930, and may control the components of the AI device 900 to execute a predicted operation or an operation determined to be preferable among the at least one executable operation.
  • the control unit 920 may collect history information including user feedback on the operation contents or operation of the AI device 900 and store it in the memory unit 930 or the learning processor unit 940c, or transmit it to an external device such as the AI server (FIG. 1, 140).
  • the collected historical information may be used to update the learning model.
  • the memory unit 930 may store data supporting various functions of the AI device 900 .
  • the memory unit 930 may store data obtained from the input unit 940a , data obtained from the communication unit 910 , output data of the learning processor unit 940c , and data obtained from the sensing unit 940 .
  • the memory unit 930 may store control information and/or software codes necessary for the operation/execution of the control unit 920 .
  • the input unit 940a may acquire various types of data from the outside of the AI device 900 .
  • the input unit 920 may obtain training data for model learning, input data to which the learning model is applied, and the like.
  • the input unit 940a may include a camera, a microphone, and/or a user input unit.
  • the output unit 940b may generate an output related to sight, hearing, or touch.
  • the output unit 940b may include a display unit, a speaker, and/or a haptic module.
  • the sensing unit 940 may obtain at least one of internal information of the AI device 900 , surrounding environment information of the AI device 900 , and user information by using various sensors.
  • the sensing unit 940 may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
  • the learning processor unit 940c may train a model composed of an artificial neural network by using the training data.
  • the learning processor unit 940c may perform AI processing together with the learning processor unit of the AI server ( FIGS. 1 and 140 ).
  • the learning processor unit 940c may process information received from an external device through the communication unit 910 and/or information stored in the memory unit 930 . Also, the output value of the learning processor unit 940c may be transmitted to an external device through the communication unit 910 and/or stored in the memory unit 930 .
  • a terminal may receive information from a base station through downlink (DL) and transmit information to a base station through uplink (UL).
  • Information transmitted and received between the base station and the terminal includes general data information and various control information, and various physical channels exist according to the type/use of the information they transmit and receive.
  • FIG. 10 is a diagram illustrating physical channels applied to the present disclosure and a signal transmission method using the same.
  • the terminal receives a primary synchronization channel (P-SCH) and a secondary synchronization channel (S-SCH) from the base station, synchronizes with the base station, and obtains information such as a cell ID.
  • the terminal may receive a physical broadcast channel (PBCH) signal from the base station to obtain intra-cell broadcast information.
  • the UE may receive a downlink reference signal (DL RS) in the initial cell search step to check the downlink channel state.
  • in step S1012, the UE may receive a physical downlink control channel (PDCCH) and a physical downlink shared channel (PDSCH) according to the physical downlink control channel information to obtain more specific system information.
  • the terminal may perform a random access procedure, such as steps S1013 to S1016, to complete access to the base station.
  • the UE may transmit a preamble through a physical random access channel (PRACH) (S1013) and receive a random access response (RAR) for the preamble through a physical downlink control channel and a corresponding physical downlink shared channel (S1014).
  • the UE may transmit a physical uplink shared channel (PUSCH) using the scheduling information in the RAR (S1015) and perform a contention resolution procedure, such as reception of a physical downlink control channel signal and a corresponding physical downlink shared channel signal (S1016).
  • after performing the above-described procedure, the terminal may receive a physical downlink control channel signal and/or a physical downlink shared channel signal (S1017) and transmit a physical uplink shared channel (PUSCH) signal and/or a physical uplink control channel (PUCCH) signal (S1018) as a general uplink/downlink signal transmission procedure.
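  • As a compact restatement of the sequence described above, the sketch below lists the steps and the physical channels involved; the Step data structure is an illustrative assumption, and the S1011 label for the initial cell search is inferred from the surrounding step numbering.

```python
from dataclasses import dataclass

@dataclass
class Step:
    label: str        # step identifier from the description (S1011 is inferred)
    direction: str    # "DL": base station -> terminal, "UL": terminal -> base station
    channels: tuple   # physical channels involved in the step

INITIAL_ACCESS_SEQUENCE = [
    Step("S1011 cell search / synchronization", "DL", ("P-SCH", "S-SCH", "PBCH", "DL RS")),
    Step("S1012 system information reception",  "DL", ("PDCCH", "PDSCH")),
    Step("S1013 random access preamble",        "UL", ("PRACH",)),
    Step("S1014 random access response (RAR)",  "DL", ("PDCCH", "PDSCH")),
    Step("S1015 scheduled uplink transmission", "UL", ("PUSCH",)),
    Step("S1016 contention resolution",         "DL", ("PDCCH", "PDSCH")),
    Step("S1017 downlink data/control",         "DL", ("PDCCH", "PDSCH")),
    Step("S1018 uplink data/control",           "UL", ("PUSCH", "PUCCH")),
]

for step in INITIAL_ACCESS_SEQUENCE:
    print(f"{step.label:40s} {step.direction}  {', '.join(step.channels)}")
```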
  • uplink control information (UCI) transmitted from the terminal to the base station may include a hybrid automatic repeat and request acknowledgment/negative-ACK (HARQ-ACK/NACK), a scheduling request (SR), channel quality indication (CQI), precoding matrix indication (PMI), rank indication (RI), beam indication (BI), and the like.
  • the UCI is generally transmitted periodically through the PUCCH, but may be transmitted through the PUSCH according to an embodiment (eg, when control information and traffic data are to be transmitted at the same time).
  • the UE may aperiodically transmit the UCI through the PUSCH.
  • FIG. 11 is a diagram illustrating a control plane and a user plane structure of a radio interface protocol applied to the present disclosure.
  • entity 1 may be a user equipment (UE).
  • the term "terminal" may be at least one of a wireless device, a portable device, a vehicle, a mobile body, an XR device, a robot, and an AI to which the present disclosure is applied in FIGS. 1 to 9 described above.
  • the terminal refers to a device to which the present disclosure can be applied and may not be limited to a specific device or device.
  • Entity 2 may be a base station.
  • the base station may be at least one of an eNB, a gNB, and an ng-eNB.
  • the base station may refer to an apparatus for transmitting a downlink signal to the terminal, and may not be limited to a specific type or apparatus. That is, the base station may be implemented in various forms or types, and may not be limited to a specific form.
  • Entity 3 may be a network device or a device performing a network function.
  • the network device may be a core network node (eg, a mobility management entity (MME), an access and mobility management function (AMF), etc.) that manages mobility.
  • the network function may mean a function implemented to perform a network function
  • entity 3 may be a device to which the function is applied. That is, the entity 3 may refer to a function or device that performs a network function, and is not limited to a specific type of device.
  • the control plane may refer to a path through which control messages used by a user equipment (UE) and a network to manage a call are transmitted.
  • the user plane may mean a path through which data generated in the application layer, for example, voice data or Internet packet data, is transmitted.
  • the physical layer which is the first layer, may provide an information transfer service to a higher layer by using a physical channel.
  • the physical layer is connected to the upper medium access control layer through a transport channel.
  • data may be moved between the medium access control layer and the physical layer through the transport channel.
  • Data can be moved between the physical layers of the transmitting side and the receiving side through a physical channel.
  • the physical channel uses time and frequency as radio resources.
  • a medium access control (MAC) layer of the second layer provides a service to a radio link control (RLC) layer, which is an upper layer, through a logical channel.
  • the RLC layer of the second layer may support reliable data transmission.
  • the function of the RLC layer may be implemented as a function block inside the MAC.
  • the packet data convergence protocol (PDCP) layer of the second layer may perform a header compression function that reduces unnecessary control information in order to efficiently transmit IP packets such as IPv4 or IPv6 in a narrow-bandwidth air interface.
  • a radio resource control (RRC) layer located at the bottom of the third layer is defined only in the control plane.
  • the RRC layer may be in charge of controlling logical channels, transport channels and physical channels in relation to configuration, re-configuration, and release of radio bearers (RBs).
  • RB may mean a service provided by the second layer for data transfer between the terminal and the network.
  • the UE and the RRC layer of the network may exchange RRC messages with each other.
  • a non-access stratum (NAS) layer above the RRC layer may perform functions such as session management and mobility management.
  • One cell constituting the base station may be set to one of various bandwidths to provide downlink or uplink transmission services to multiple terminals. Different cells may be configured to provide different bandwidths.
• Downlink transport channels for transmitting data from the network to the terminal include a broadcast channel (BCH) for transmitting system information, a paging channel (PCH) for transmitting a paging message, and a downlink shared channel (SCH) for transmitting user traffic or control messages.
• Uplink transport channels for transmitting data from the terminal to the network include a random access channel (RACH) for transmitting an initial control message and an uplink shared channel (SCH) for transmitting user traffic or control messages.
• Logical channels that are located above the transport channels and are mapped to the transport channels include a broadcast control channel (BCCH), a paging control channel (PCCH), a common control channel (CCCH), a multicast control channel (MCCH), and a multicast traffic channel (MTCH).
  • the transmission signal may be processed by a signal processing circuit.
  • the signal processing circuit 1200 may include a scrambler 1210 , a modulator 1220 , a layer mapper 1230 , a precoder 1240 , a resource mapper 1250 , and a signal generator 1260 .
  • the operation/function of FIG. 12 may be performed by the processors 202a and 202b and/or the transceivers 206a and 206b of FIG. 2 .
• For example, blocks 1210 to 1250 may be implemented in the processors 202a and 202b of FIG. 2, and block 1260 may be implemented in the transceivers 206a and 206b of FIG. 2; however, the embodiment is not limited thereto.
  • the codeword may be converted into a wireless signal through the signal processing circuit 1200 of FIG. 12 .
  • the codeword is a coded bit sequence of an information block.
  • the information block may include a transport block (eg, a UL-SCH transport block, a DL-SCH transport block).
  • the radio signal may be transmitted through various physical channels (eg, PUSCH, PDSCH) of FIG. 10 .
  • the codeword may be converted into a scrambled bit sequence by the scrambler 1210 .
  • a scramble sequence used for scrambling is generated based on an initialization value, and the initialization value may include ID information of a wireless device, and the like.
  • the scrambled bit sequence may be modulated by a modulator 1220 into a modulation symbol sequence.
  • the modulation method may include pi/2-binary phase shift keying (pi/2-BPSK), m-phase shift keying (m-PSK), m-quadrature amplitude modulation (m-QAM), and the like.
  • the complex modulation symbol sequence may be mapped to one or more transport layers by a layer mapper 1230 .
  • Modulation symbols of each transport layer may be mapped to corresponding antenna port(s) by the precoder 1240 (precoding).
• the output z of the precoder 1240 may be obtained by multiplying the output y of the layer mapper 1230 by the N*M precoding matrix W, where N is the number of antenna ports and M is the number of transport layers.
  • the precoder 1240 may perform precoding after performing transform precoding (eg, discrete fourier transform (DFT) transform) on the complex modulation symbols. Also, the precoder 1240 may perform precoding without performing transform precoding.
  • the resource mapper 1250 may map modulation symbols of each antenna port to a time-frequency resource.
  • the time-frequency resource may include a plurality of symbols (eg, a CP-OFDMA symbol, a DFT-s-OFDMA symbol) in the time domain and a plurality of subcarriers in the frequency domain.
  • the signal generator 1260 generates a radio signal from the mapped modulation symbols, and the generated radio signal may be transmitted to another device through each antenna.
• the signal generator 1260 may include an inverse fast fourier transform (IFFT) module, a cyclic prefix (CP) inserter, a digital-to-analog converter (DAC), a frequency up-converter, and the like.
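• For illustration only, the following minimal sketch models the transmit-side processing described above (scrambling, modulation, layer mapping, precoding with z = W*y, resource mapping, and IFFT/CP-based signal generation). The parameter values (QPSK modulation, 2 transport layers, 4 antenna ports, a 64-point IFFT, and a random precoding matrix) are assumptions introduced for the example and are not part of the present disclosure.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed toy parameters: M = 2 transport layers, N = 4 antenna ports,
    # 64 subcarriers (one CP-OFDM symbol), CP of 16 samples.
    M, N, NFFT, CP_LEN = 2, 4, 64, 16

    # Scrambling: XOR the coded bits with a pseudo-random sequence (the real
    # initialization value would depend on, e.g., the device ID; here a fixed seed).
    codeword = rng.integers(0, 2, size=2 * NFFT * M)        # coded bit sequence
    scramble_seq = rng.integers(0, 2, size=codeword.size)
    scrambled = codeword ^ scramble_seq

    # Modulation: map bit pairs to QPSK symbols (one of the m-PSK/m-QAM options).
    bits = scrambled.reshape(-1, 2)
    symbols = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

    # Layer mapping: distribute consecutive modulation symbols over the M layers.
    y = symbols.reshape(-1, M).T                            # M x NFFT

    # Precoding: z = W @ y with an N x M precoding matrix W (random for illustration).
    W = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
    z = W @ y                                               # N x NFFT

    # Resource mapping + signal generation: per antenna port, IFFT and CP insertion.
    time_signal = np.fft.ifft(z, n=NFFT, axis=1)
    tx = np.concatenate([time_signal[:, -CP_LEN:], time_signal], axis=1)
    print(tx.shape)                                         # (4, 80)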
• the signal processing process for a received signal in the wireless device (eg, 200a or 200b of FIG. 2) may be configured in reverse of the signal processing processes 1210 to 1260 of FIG. 12.
• the received radio signal may be converted into a baseband signal through a signal restorer.
• the signal restorer may include a frequency down-converter, an analog-to-digital converter (ADC), a CP remover, and a fast fourier transform (FFT) module.
  • the baseband signal may be restored to a codeword through a resource de-mapper process, a postcoding process, a demodulation process, and a descrambling process.
  • the codeword may be restored to the original information block through decoding.
  • the signal processing circuit (not shown) for the received signal may include a signal restorer, a resource de-mapper, a post coder, a demodulator, a descrambler, and a decoder.
  • FIG. 13 is a diagram illustrating a structure of a radio frame applicable to the present disclosure.
  • Uplink and downlink transmission based on the NR system may be based on a frame as shown in FIG. 13 .
  • one radio frame has a length of 10 ms and may be defined as two 5 ms half-frames (HF).
  • One half-frame may be defined as 5 1ms subframes (subframe, SF).
  • One subframe is divided into one or more slots, and the number of slots in a subframe may depend on subcarrier spacing (SCS).
  • each slot may include 12 or 14 OFDM(A) symbols according to a cyclic prefix (CP).
• When a normal CP is used, each slot may include 14 symbols, and when an extended CP is used, each slot may include 12 symbols.
  • the symbol may include an OFDM symbol (or a CP-OFDM symbol) and an SC-FDMA symbol (or a DFT-s-OFDM symbol).
• Table 1 shows the number of symbols per slot, the number of slots per frame, and the number of slots per subframe according to the SCS when the normal CP is used, and Table 2 shows the same quantities when the extended CP is used.
• In Tables 1 and 2, N^slot_symb denotes the number of symbols in a slot, N^frame,μ_slot denotes the number of slots in a frame, and N^subframe,μ_slot denotes the number of slots in a subframe, where μ is the numerology (SCS configuration) index.
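• As a numerical illustration of the relationship between the SCS and the slot counts summarized in Tables 1 and 2, the following sketch assumes the normal-CP case (14 symbols per slot) and reproduces the usual NR relationship in which the number of slots per 1 ms subframe is 2^μ; it is an illustrative assumption, not a reproduction of the tables themselves.

    # Number of symbols/slots per subframe and frame as a function of the numerology mu
    # (SCS = 15 kHz * 2**mu).  Normal CP (14 symbols per slot) is assumed.
    def slot_numbers(mu: int, symbols_per_slot: int = 14):
        scs_khz = 15 * (2 ** mu)
        slots_per_subframe = 2 ** mu               # a subframe is 1 ms long
        slots_per_frame = 10 * slots_per_subframe  # a frame is 10 ms long
        return scs_khz, symbols_per_slot, slots_per_subframe, slots_per_frame

    for mu in range(5):
        print(slot_numbers(mu))
    # e.g. mu=1 -> (30 kHz SCS, 14 symbols/slot, 2 slots/subframe, 20 slots/frame)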
• In the NR system, the OFDM(A) numerology (eg, SCS, CP length, etc.) may be configured differently among a plurality of cells aggregated for one terminal. Accordingly, the (absolute time) interval of a time resource (eg, SF, slot, or TTI) composed of the same number of symbols (for convenience, referred to as a time unit (TU)) may be configured differently between the aggregated cells.
• NR may support multiple numerologies (or subcarrier spacings (SCS)) to support various 5G services. For example, when the SCS is 15 kHz, a wide area in traditional cellular bands is supported; when the SCS is 30 kHz/60 kHz, dense urban areas, lower latency, and a wider carrier bandwidth are supported; and when the SCS is 60 kHz or higher, a bandwidth greater than 24.25 GHz can be supported to overcome phase noise.
  • the NR frequency band is defined as a frequency range of two types (FR1, FR2).
  • FR1 and FR2 may be configured as shown in the table below.
  • FR2 may mean a millimeter wave (mmW).
• In addition, the above-described numerology may be configured differently.
  • a terahertz wave (THz) band may be used as a higher frequency band than the above-described FR2.
  • the SCS may be set to be larger than that of the NR system, and the number of slots may be set differently, and it is not limited to the above-described embodiment.
  • the THz band will be described later.
  • FIG. 14 is a diagram illustrating a slot structure applicable to the present disclosure.
  • One slot includes a plurality of symbols in the time domain. For example, in the case of a normal CP, one slot may include 7 symbols, but in the case of an extended CP, one slot may include 6 symbols.
  • a carrier includes a plurality of subcarriers (subcarrier) in the frequency domain.
  • a resource block may be defined as a plurality of (eg, 12) consecutive subcarriers in the frequency domain.
  • a bandwidth part is defined as a plurality of consecutive (P)RBs in the frequency domain, and may correspond to one numerology (eg, SCS, CP length, etc.).
  • a carrier may include a maximum of N (eg, 5) BWPs. Data communication is performed through the activated BWP, and only one BWP can be activated for one terminal.
  • Each element in the resource grid is referred to as a resource element (RE), and one complex symbol may be mapped.
• 6G (wireless) systems aim at (i) a very high data rate per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) very low latency, (v) reduced energy consumption of battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capabilities.
  • the vision of the 6G system may have four aspects such as "intelligent connectivity”, “deep connectivity”, “holographic connectivity”, and “ubiquitous connectivity”, and the 6G system may satisfy the requirements shown in Table 4 below. That is, Table 4 is a table showing the requirements of the 6G system.
• the 6G system may have key factors such as enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC), massive machine type communications (mMTC), AI integrated communication, tactile internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
  • 15 is a diagram illustrating an example of a communication structure that can be provided in a 6G system applicable to the present disclosure.
  • the 6G system is expected to have 50 times higher simultaneous wireless communication connectivity than the 5G wireless communication system.
  • URLLC a key feature of 5G, is expected to become an even more important technology by providing an end-to-end delay of less than 1 ms in 6G communication.
  • the 6G system will have much better volumetric spectral efficiency, unlike the frequently used area spectral efficiency.
  • 6G systems can provide very long battery life and advanced battery technology for energy harvesting, so mobile devices in 6G systems may not need to be charged separately.
  • new network characteristics in 6G may be as follows.
  • 6G is expected to be integrated with satellites to provide a global mobile population.
  • the integration of terrestrial, satellite and public networks into one wireless communication system could be very important for 6G.
  • AI may be applied in each step of a communication procedure (or each procedure of signal processing to be described later).
  • the 6G wireless network will deliver power to charge the batteries of devices such as smartphones and sensors. Therefore, wireless information and energy transfer (WIET) will be integrated.
• The idea of the small cell network was introduced to improve the received signal quality, resulting in improved throughput, energy efficiency, and spectral efficiency in cellular systems. As a result, small cell networks are an essential characteristic of 5G and beyond 5G (5GB) communication systems. Accordingly, the 6G communication system also adopts the characteristics of the small cell network.
  • Ultra-dense heterogeneous networks will be another important characteristic of 6G communication system.
  • a multi-tier network composed of heterogeneous networks improves overall QoS and reduces costs.
  • the backhaul connection is characterized as a high-capacity backhaul network to support high-capacity traffic.
  • High-speed fiber optics and free-space optics (FSO) systems may be possible solutions to this problem.
  • High-precision localization (or location-based service) through communication is one of the functions of the 6G wireless communication system. Therefore, the radar system will be integrated with the 6G network.
• Softwarization and virtualization are two important functions that underlie the design process of 5GB networks to ensure flexibility, reconfigurability, and programmability. In addition, billions of devices can be shared in a shared physical infrastructure.
• The most important and newly introduced technology for 6G systems is AI.
  • AI was not involved in the 4G system.
  • 5G systems will support partial or very limited AI.
  • the 6G system will be AI-enabled for full automation.
  • Advances in machine learning will create more intelligent networks for real-time communication in 6G.
  • Incorporating AI into communications can simplify and enhance real-time data transmission.
  • AI can use numerous analytics to determine how complex target tasks are performed. In other words, AI can increase efficiency and reduce processing delays.
  • AI can also play an important role in M2M, machine-to-human and human-to-machine communication.
• AI can also enable rapid communication in a brain computer interface (BCI).
  • AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
  • AI-based physical layer transmission means applying a signal processing and communication mechanism based on an AI driver rather than a traditional communication framework in a fundamental signal processing and communication mechanism.
• For example, it may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, a deep learning-based multiple input multiple output (MIMO) mechanism, and AI-based resource scheduling and allocation.
  • Machine learning may be used for channel estimation and channel tracking, and may be used for power allocation, interference cancellation, and the like in a physical layer of a downlink (DL). In addition, machine learning may be used for antenna selection, power control, symbol detection, and the like in a MIMO system.
  • Deep learning-based AI algorithms require large amounts of training data to optimize training parameters.
  • a lot of training data is used offline. This is because static training on training data in a specific channel environment may cause a contradiction between dynamic characteristics and diversity of a wireless channel.
  • signals of the physical layer of wireless communication are complex signals.
  • further research on a neural network for detecting a complex domain signal is needed.
  • Machine learning refers to a set of operations that trains a machine to create a machine that can perform tasks that humans can or cannot do.
  • Machine learning requires data and a learning model.
  • data learning methods can be roughly divided into three types: supervised learning, unsupervised learning, and reinforcement learning.
• Neural network learning aims to minimize output error. Neural network learning repeatedly inputs training data into the neural network, calculates the output of the neural network and the target error for the training data, and backpropagates the error from the output layer of the neural network toward the input layer in a direction that reduces the error, thereby updating the weight of each node in the neural network.
• Supervised learning uses training data in which the correct answer is labeled, whereas in unsupervised learning the correct answer may not be labeled in the training data. For example, in the case of supervised learning for data classification, the training data may be data in which a category is labeled for each training sample.
  • the labeled training data is input to the neural network, and an error can be calculated by comparing the output (category) of the neural network with the label of the training data.
  • the calculated error is back propagated in the reverse direction (ie, from the output layer to the input layer) in the neural network, and the connection weight of each node of each layer of the neural network may be updated according to the back propagation.
  • the change amount of the connection weight of each node to be updated may be determined according to a learning rate.
  • the computation of the neural network on the input data and the backpropagation of errors can constitute a learning cycle (epoch).
  • the learning rate may be applied differently depending on the number of repetitions of the learning cycle of the neural network. For example, in the early stage of learning a neural network, a high learning rate can be used to increase the efficiency by allowing the neural network to quickly obtain a certain level of performance, and in the late learning period, a low learning rate can be used to increase the accuracy.
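• The training loop described above (forward computation, error calculation, backpropagation-style weight update scaled by a learning rate that is reduced in later epochs) can be illustrated with the following minimal sketch; the single linear model, the synthetic data, and the two-stage learning-rate schedule are assumptions for the example only.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal((100, 3))          # training inputs
    true_w = np.array([0.5, -1.0, 2.0])
    t = x @ true_w                             # labeled targets (supervised learning)

    w = np.zeros(3)                            # connection weights to be learned
    for epoch in range(50):
        # Higher learning rate in early epochs, lower in later epochs for fine-tuning.
        lr = 0.1 if epoch < 25 else 0.01
        out = x @ w                            # forward computation of the network
        err = out - t                          # output vs. target error
        grad = x.T @ err / len(x)              # backpropagated gradient w.r.t. the weights
        w -= lr * grad                         # weight update scaled by the learning rate
    print(np.round(w, 3))                      # approaches [0.5, -1.0, 2.0]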
• the learning method may vary depending on the characteristics of the data. For example, when the purpose is to accurately predict, at a receiver, data transmitted from a transmitter in a communication system, it is preferable to perform learning using supervised learning rather than unsupervised learning or reinforcement learning.
• the learning model corresponds to the human brain, and the most basic model is a linear model; a machine learning paradigm that uses a neural network structure of high complexity, such as an artificial neural network, as the learning model is called deep learning.
• the neural network core used as a learning method is largely divided into deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN) methods, and such learning models can be applied.
  • THz communication may be applied in the 6G system.
  • the data rate may be increased by increasing the bandwidth. This can be accomplished by using sub-THz communication with a wide bandwidth and applying advanced large-scale MIMO technology.
• a THz wave, also known as sub-millimeter radiation, generally represents a frequency band between 0.1 THz and 10 THz with a corresponding wavelength in the range of 0.03 mm to 3 mm.
  • the 100GHz-300GHz band range (Sub THz band) is considered a major part of the THz band for cellular communication.
• Adding the sub-THz band to the mmWave band increases 6G cellular communication capacity.
  • 300GHz-3THz is in the far-infrared (IR) frequency band.
  • the 300GHz-3THz band is part of the broadband, but at the edge of the wideband, just behind the RF band. Thus, this 300 GHz-3 THz band shows similarities to RF.
• The main characteristics of THz communication include (i) a widely available bandwidth to support very high data rates, and (ii) high path loss occurring at high frequencies (highly directional antennas are indispensable).
  • the narrow beamwidth produced by the highly directional antenna reduces interference.
  • the small wavelength of the THz signal allows a much larger number of antenna elements to be integrated into devices and BSs operating in this band. This allows the use of advanced adaptive nesting techniques that can overcome range limitations.
• Optical wireless communication (OWC) technology is envisioned for 6G communication, in addition to RF-based communication, for all possible device-to-access networks as well as for network-to-backhaul/fronthaul connections.
  • OWC technology has already been used since the 4G communication system, but will be used more widely to meet the needs of the 6G communication system.
  • OWC technologies such as light fidelity, visible light communication, optical camera communication, and free space optical (FSO) communication based on a light band are well known technologies. Communication based on optical radio technology can provide very high data rates, low latency and secure communication.
  • Light detection and ranging (LiDAR) can also be used for ultra-high-resolution 3D mapping in 6G communication based on a wide band.
• The transmitter and receiver characteristics of an FSO system are similar to those of a fiber optic network.
  • data transmission in an FSO system is similar to that of a fiber optic system. Therefore, FSO can be a good technology to provide backhaul connectivity in 6G systems along with fiber optic networks.
  • FSO supports high-capacity backhaul connections for remote and non-remote areas such as sea, space, underwater, and isolated islands.
  • FSO also supports cellular base station connectivity.
• As MIMO technology improves, so does the spectral efficiency. Therefore, large-scale MIMO technology will be important in 6G systems. Since MIMO technology uses multiple paths, multiplexing techniques and beam generation and management techniques suitable for the THz band should also be considered important so that a data signal can be transmitted through one or more paths.
  • Blockchain will become an important technology for managing large amounts of data in future communication systems.
  • Blockchain is a form of distributed ledger technology, which is a database distributed across numerous nodes or computing devices. Each node replicates and stores an identical copy of the ledger.
  • the blockchain is managed as a peer-to-peer (P2P) network. It can exist without being managed by a centralized authority or server. Data on the blockchain is collected together and organized into blocks. Blocks are linked together and protected using encryption.
  • Blockchain in nature perfectly complements IoT at scale with improved interoperability, security, privacy, reliability and scalability. Therefore, blockchain technology provides several features such as interoperability between devices, traceability of large amounts of data, autonomous interaction of different IoT systems, and large-scale connection stability of 6G communication systems.
  • the 6G system integrates terrestrial and public networks to support vertical expansion of user communications.
  • 3D BS will be provided via low orbit satellites and UAVs. Adding a new dimension in terms of elevation and associated degrees of freedom makes 3D connections significantly different from traditional 2D networks.
• Unmanned aerial vehicles (UAVs) or drones will become an important element in 6G wireless communication.
  • a base station entity is installed in the UAV to provide cellular connectivity.
  • UAVs have certain features not found in fixed base station infrastructure, such as easy deployment, strong line-of-sight links, and degrees of freedom with controlled mobility.
  • the deployment of terrestrial communications infrastructure is not economically feasible and sometimes cannot provide services in volatile environments.
  • a UAV can easily handle this situation.
  • UAV will become a new paradigm in the field of wireless communication. This technology facilitates the three basic requirements of wireless networks: eMBB, URLLC and mMTC.
  • UAVs can also serve several purposes, such as improving network connectivity, fire detection, disaster emergency services, security and surveillance, pollution monitoring, parking monitoring, incident monitoring, and more. Therefore, UAV technology is recognized as one of the most important technologies for 6G communication.
  • Tight integration of multiple frequencies and heterogeneous communication technologies is very important in 6G systems. As a result, users can seamlessly move from one network to another without having to make any manual configuration on the device. The best network is automatically selected from the available communication technologies. This will break the limitations of the cell concept in wireless communication. Currently, user movement from one cell to another causes too many handovers in high-density networks, causing handover failures, handover delays, data loss and ping-pong effects. 6G cell-free communication will overcome all of this and provide better QoS. Cell-free communication will be achieved through multi-connectivity and multi-tier hybrid technologies and different heterogeneous radios of devices.
  • WIET uses the same fields and waves as wireless communication systems.
  • the sensor and smartphone will be charged using wireless power transfer during communication.
  • WIET is a promising technology for extending the life of battery-charging wireless systems. Therefore, devices without batteries will be supported in 6G communication.
  • An autonomous wireless network is a function that can continuously detect dynamically changing environmental conditions and exchange information between different nodes.
  • sensing will be tightly integrated with communications to support autonomous systems.
  • each access network is connected by backhaul connections such as fiber optic and FSO networks.
  • Beamforming is a signal processing procedure that adjusts an antenna array to transmit a radio signal in a specific direction.
  • Beamforming technology has several advantages, such as high signal-to-noise ratio, interference prevention and rejection, and high network efficiency.
  • Hologram beamforming (HBF) is a new beamforming method that is significantly different from MIMO systems because it uses a software-defined antenna. HBF will be a very effective approach for efficient and flexible transmission and reception of signals in multi-antenna communication devices in 6G.
  • Big data analytics is a complex process for analyzing various large data sets or big data. This process ensures complete data management by finding information such as hidden data, unknown correlations and customer propensity. Big data is gathered from a variety of sources such as videos, social networks, images and sensors. This technology is widely used to process massive amounts of data in 6G systems.
• A large intelligent surface (LIS) is an artificial surface made of electromagnetic materials and can change the propagation of incoming and outgoing radio waves.
  • LIS can be viewed as an extension of massive MIMO, but has a different array structure and operation mechanism from that of massive MIMO.
• The LIS operates as a reconfigurable reflector with passive elements, that is, it only passively reflects the signal without using an active RF chain, and therefore has the advantage of low power consumption.
• Since each of the passive reflectors of the LIS can independently adjust the phase shift of the incoming signal, the LIS can be advantageous for the wireless communication channel.
  • the reflected signal can be gathered at the target receiver to boost the received signal power.
  • 17 is a diagram illustrating a THz communication method applicable to the present disclosure.
• The THz wave is located between the RF (radio frequency)/millimeter (mm) band and the infrared band; (i) it penetrates non-metallic/non-polar materials better than visible/infrared light, and (ii) it has a shorter wavelength than the RF/millimeter wave, so it has high straightness and beam focusing may be possible.
  • the frequency band expected to be used for THz wireless communication may be a D-band (110 GHz to 170 GHz) or H-band (220 GHz to 325 GHz) band with low propagation loss due to absorption of molecules in the air.
• Standardization of THz wireless communication is being discussed centered on the IEEE 802.15 THz working group (WG) in addition to 3GPP, and the standard documents issued by the task groups (TGs) of IEEE 802.15 (eg, TG3d, TG3e) may specify or supplement the contents described in this specification.
  • THz wireless communication may be applied to wireless recognition, sensing, imaging, wireless communication, THz navigation, and the like.
  • a THz wireless communication scenario may be classified into a macro network, a micro network, and a nanoscale network.
  • THz wireless communication can be applied to a vehicle-to-vehicle (V2V) connection and a backhaul/fronthaul connection.
  • THz wireless communication in micro networks is applied to indoor small cells, fixed point-to-point or multi-point connections such as wireless connections in data centers, and near-field communication such as kiosk downloading.
  • Table 5 below is a table showing an example of a technique that can be used in the THz wave.
  • FIG. 18 is a diagram illustrating a THz wireless communication transceiver applicable to the present disclosure.
  • THz wireless communication may be classified based on a method for generating and receiving THz.
  • the THz generation method can be classified into an optical device or an electronic device-based technology.
• The methods of generating THz using an electronic device include a method using a semiconductor device such as a resonant tunneling diode (RTD), a method using a local oscillator and a multiplier, a monolithic microwave integrated circuit (MMIC) method using a compound semiconductor high electron mobility transistor (HEMT)-based integrated circuit, a method using a Si-CMOS-based integrated circuit, and the like.
• In the case of FIG. 18, a multiplier (doubler, tripler) is applied to increase the frequency, and the signal is radiated by the antenna through a sub-harmonic mixer. Since the THz band uses a high frequency, a multiplier is essential.
  • the multiplier is a circuit that has an output frequency that is N times that of the input, matches the desired harmonic frequency, and filters out all other frequencies.
  • an array antenna or the like may be applied to the antenna of FIG. 18 to implement beamforming.
• Here, IF denotes an intermediate frequency, tripler/multiplier denotes a frequency multiplier, PA denotes a power amplifier, LNA denotes a low noise amplifier, and PLL denotes a phase-locked loop.
  • FIG. 19 is a diagram illustrating a method for generating a THz signal applicable to the present disclosure.
  • FIG. 20 is a diagram illustrating a wireless communication transceiver applicable to the present disclosure.
  • the optical device-based THz wireless communication technology refers to a method of generating and modulating a THz signal using an optical device.
  • the optical element-based THz signal generation technology is a technology that generates a high-speed optical signal using a laser and an optical modulator, and converts it into a THz signal using an ultra-high-speed photodetector. In this technology, it is easier to increase the frequency compared to the technology using only electronic devices, it is possible to generate a high-power signal, and it is possible to obtain a flat response characteristic in a wide frequency band.
  • a laser diode, a broadband optical modulator, and a high-speed photodetector are required to generate an optical device-based THz signal.
  • an optical coupler refers to a semiconductor device that transmits electrical signals using light waves to provide coupling with electrical insulation between circuits or systems
  • UTC-PD is one of the photodetectors, which uses electrons as active carriers and reduces the movement time of electrons by bandgap grading.
  • UTC-PD is capable of photodetection above 150GHz.
  • an erbium-doped fiber amplifier indicates an erbium-doped optical fiber amplifier
  • a photo detector indicates a semiconductor device capable of converting an optical signal into an electrical signal
• the OSA indicates an optical sub-assembly in which various optical communication functions are modularized into one component.
  • FIG. 21 is a diagram illustrating a structure of a transmitter applicable to the present disclosure.
  • FIG. 22 is a diagram illustrating a modulator structure applicable to the present disclosure.
  • a phase of a signal may be changed by passing an optical source of a laser through an optical wave guide.
  • data is loaded by changing electrical characteristics through microwave contact or the like.
  • an optical modulator output is formed as a modulated waveform.
• the photoelectric modulator (O/E converter) can generate THz pulses by, for example, an optical rectification operation by a nonlinear crystal, photoelectric conversion (O/E conversion) by a photoconductive antenna, or emission from a bunch of relativistic electrons in a light beam.
  • a terahertz pulse (THz pulse) generated in the above manner may have a length in units of femtoseconds to picoseconds.
  • An O/E converter performs down conversion by using non-linearity of a device.
• a number of contiguous GHz bands are likely to be used for fixed or mobile service in the terahertz system.
• the available bandwidth may be classified based on an oxygen attenuation of 10^2 dB/km in the spectrum of up to 1 THz. Accordingly, a framework in which the available bandwidth is composed of several band chunks may be considered.
  • the bandwidth (BW) becomes about 20 GHz.
• Effective down-conversion from the infrared band to the THz band depends on how the nonlinearity of the O/E converter is exploited. That is, in order to down-convert to the desired THz band, the design of an O/E converter having the most ideal non-linearity for the transfer to that THz band is required. If an O/E converter that does not fit the target frequency band is used, there is a high possibility that an error will occur in the amplitude and phase of the corresponding pulse.
• In a single-carrier system, a terahertz transmission/reception system may be implemented using one photoelectric converter. Although it depends on the channel environment, as many photoelectric converters as the number of carriers may be required in a multi-carrier system. In particular, in the case of a multi-carrier system using several broad bands according to the above-described spectrum-usage scheme, this phenomenon becomes conspicuous. In this regard, a frame structure for the multi-carrier system may be considered.
  • the down-frequency-converted signal based on the photoelectric converter may be transmitted in a specific resource region (eg, a specific frame).
  • the frequency domain of the specific resource region may include a plurality of chunks. Each chunk may be composed of at least one component carrier (CC).
  • FIG. 23 is a diagram illustrating a structure of a perceptron included in an artificial neural network applicable to the present disclosure. Also, FIG. 24 is a diagram illustrating an artificial neural network structure applicable to the present disclosure.
  • an artificial intelligence system may be applied in the 6G system.
  • the artificial intelligence system may operate based on a learning model corresponding to the human brain, as described above.
  • a paradigm of machine learning that uses a neural network structure with high complexity such as artificial neural networks as a learning model can be called deep learning.
• the neural network core used as a learning method is largely divided into a deep neural network (DNN), a convolutional neural network (CNN), and a recurrent neural network (RNN).
  • the artificial neural network may be composed of several perceptrons.
• If a huge artificial neural network structure extends the simplified perceptron structure shown in FIG. 23, input vectors can be applied to different multidimensional perceptrons. For convenience of description, an input value or an output value is referred to as a node.
• the perceptron structure shown in FIG. 23 may be described as being composed of a total of three layers based on the input values and the output values. An artificial neural network in which H (d+1)-dimensional perceptrons exist between the 1st layer and the 2nd layer, and K (H+1)-dimensional perceptrons exist between the 2nd layer and the 3rd layer, may be expressed as shown in FIG. 24.
  • the layer where the input vector is located is called an input layer
  • the layer where the final output value is located is called the output layer
  • all layers located between the input layer and the output layer are called hidden layers.
  • the artificial neural network illustrated in FIG. 24 can be understood as a total of two layers.
  • the artificial neural network is constructed by connecting the perceptrons of the basic blocks in two dimensions.
  • the aforementioned input layer, hidden layer, and output layer can be jointly applied in various artificial neural network structures such as CNN and RNN to be described later as well as multi-layer perceptron.
  • the artificial neural network becomes deeper, and a machine learning paradigm that uses a sufficiently deep artificial neural network as a learning model can be called deep learning.
  • an artificial neural network used for deep learning may be referred to as a deep neural network (DNN).
  • 25 is a diagram illustrating a deep neural network applicable to the present disclosure.
• the deep neural network may be a multilayer perceptron composed of eight hidden layers plus an output layer.
  • the multilayer perceptron structure may be expressed as a fully-connected neural network.
  • a connection relationship does not exist between nodes located in the same layer, and a connection relationship can exist only between nodes located in adjacent layers.
  • DNN has a fully connected neural network structure and is composed of a combination of a number of hidden layers and activation functions, so it can be usefully applied to figure out the correlation between input and output.
  • the correlation characteristic may mean a joint probability of input/output.
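• The fully connected (multilayer perceptron) structure described above can be illustrated by the following minimal forward-pass sketch, in which every node applies a weighted sum of the nodes of the previous layer followed by an activation function; the layer sizes and the ReLU activation are assumptions for the example.

    import numpy as np

    rng = np.random.default_rng(2)

    def relu(v):
        return np.maximum(v, 0.0)

    # Assumed layer sizes: d input nodes, two hidden layers of H nodes, K output nodes.
    d, H, K = 4, 8, 3
    layers = [
        (rng.standard_normal((H, d)) * 0.1, np.zeros(H)),   # input -> hidden 1
        (rng.standard_normal((H, H)) * 0.1, np.zeros(H)),   # hidden 1 -> hidden 2
        (rng.standard_normal((K, H)) * 0.1, np.zeros(K)),   # hidden 2 -> output
    ]

    def forward(x):
        # Every node of a layer is connected to every node of the adjacent layer;
        # each connection carries a weight, plus a bias per node.
        h = x
        for W, b in layers[:-1]:
            h = relu(W @ h + b)        # weighted sum + activation (perceptron operation)
        W_out, b_out = layers[-1]
        return W_out @ h + b_out       # output layer

    print(forward(rng.standard_normal(d)))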
  • 26 is a diagram illustrating a convolutional neural network applicable to the present disclosure.
  • 27 is a diagram illustrating a filter operation of a convolutional neural network applicable to the present disclosure.
  • various artificial neural network structures different from the above-described DNN may be formed.
• In the DNN, the nodes located inside one layer are arranged in a one-dimensional vertical direction. In contrast, in the convolutional neural network structure of FIG. 26, the nodes are arranged two-dimensionally, with w nodes horizontally and h nodes vertically.
• In this case, since a weight is applied per connection in the connection process from one input node to the hidden layer, a total of h×w weights must be considered for each input node. Since there are h×w nodes in the input layer, a total of h^2·w^2 weights may be required between two adjacent layers.
• Since the number of weights increases exponentially with the number of connections, the convolutional neural network of FIG. 26 assumes that a small-sized filter exists, instead of considering the connections of all nodes between adjacent layers.
  • a weighted sum and activation function operation may be performed on a portion where the filters overlap.
  • one filter has a weight corresponding to the number corresponding to its size, and learning of the weight can be performed so that a specific feature on the image can be extracted and output as a factor.
• For example, when a 3×3 filter is applied to the upper-left 3×3 region of the input layer, an output value obtained by performing the weighted sum and activation function operations for the corresponding node may be stored in z_22.
• The above-described filter performs the weighted sum and activation function calculations while moving horizontally and vertically at regular intervals as it scans the input layer, and the output value is placed at the current filter position. Since this operation method is similar to a convolution operation on an image in the field of computer vision, a deep neural network with such a structure is called a convolutional neural network (CNN), and a hidden layer generated as a result of the convolution operation may be referred to as a convolutional layer. Also, a neural network having a plurality of convolutional layers may be referred to as a deep convolutional neural network (DCNN).
  • the number of weights can be reduced by calculating the weighted sum by including only nodes located in the region covered by the filter in the node where the filter is currently located. Due to this, one filter can be used to focus on features for a local area. Accordingly, CNN can be effectively applied to image data processing in which physical distance on a two-dimensional domain is an important criterion. Meanwhile, in CNN, a plurality of filters may be applied immediately before the convolution layer, and a plurality of output results may be generated through the convolution operation of each filter.
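• The filter operation described above can be illustrated by the following minimal sketch, in which a small filter slides over the two-dimensionally arranged input nodes and a weighted sum plus activation is computed wherever the filter overlaps the input; the 3×3 filter size, the random values, and the ReLU activation are assumptions for the example.

    import numpy as np

    def conv2d_single_filter(inp, filt):
        # inp: h x w input nodes, filt: k x k filter weights shared over all positions.
        h, w = inp.shape
        k = filt.shape[0]
        out = np.zeros((h - k + 1, w - k + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                # Weighted sum over only the nodes covered by the filter,
                # instead of over all h*w nodes of the previous layer.
                out[i, j] = np.sum(inp[i:i + k, j:j + k] * filt)
        return np.maximum(out, 0.0)            # activation function (ReLU assumed)

    rng = np.random.default_rng(7)
    inp = rng.standard_normal((5, 5))          # h x w = 5 x 5 input nodes
    filt = rng.standard_normal((3, 3))         # 3 x 3 filter: only 9 weights to learn
    out = conv2d_single_filter(inp, filt)
    print(out)                                 # out[0, 0] is the value for the upper-left
                                               # 3 x 3 region (z_22 in the example above)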
• A structure in which a method of reusing previous outputs while processing sequential data is applied to an artificial neural network may be called a recurrent neural network structure.
  • 28 is a diagram illustrating a neural network structure in which a cyclic loop applicable to the present disclosure exists.
  • 29 is a diagram illustrating an operation structure of a recurrent neural network applicable to the present disclosure.
• A recurrent neural network has a structure in which, in the process of inputting an element {x_1(t), x_2(t), ..., x_d(t)} of the data sequence at time point t into the fully connected neural network, the hidden vector {z_1(t-1), z_2(t-1), ..., z_H(t-1)} of the immediately preceding time point t-1 is input together, and a weighted sum and an activation function are applied.
  • the reason why the hidden vector is transferred to the next time point in this way is that information in the input vector at previous time points is considered to be accumulated in the hidden vector of the current time point.
  • the recurrent neural network may operate in a predetermined time sequence with respect to an input data sequence.
• That is, the hidden vector {z_1(1), z_2(1), ..., z_H(1)} is determined when the input vector {x_1(1), x_2(1), ..., x_d(1)} at time point 1 is input, and the hidden vector {z_1(2), z_2(2), ..., z_H(2)} of the hidden layer is determined when the input vector {x_1(2), x_2(2), ..., x_d(2)} at time point 2 is input together with {z_1(1), z_2(1), ..., z_H(1)}.
• This process is repeated iteratively for time point 2, time point 3, ..., up to time point T.
• When a plurality of hidden layers are arranged in a recurrent neural network, this is called a deep recurrent neural network (DRNN).
  • the recurrent neural network is designed to be usefully applied to sequence data (eg, natural language processing).
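• The recurrence described above can be illustrated by the following minimal sketch, in which, at each time point t, the input vector x(t) and the hidden vector z(t-1) of the immediately preceding time point are combined by a weighted sum and an activation function; the vector sizes and the tanh activation are assumptions for the example.

    import numpy as np

    rng = np.random.default_rng(3)
    d, H, T = 3, 5, 4                         # input size, hidden size, sequence length

    W_x = rng.standard_normal((H, d)) * 0.1   # weights applied to the current input
    W_z = rng.standard_normal((H, H)) * 0.1   # weights applied to the previous hidden vector

    x_seq = rng.standard_normal((T, d))       # input data sequence x(1), ..., x(T)
    z = np.zeros(H)                           # hidden vector before time point 1

    for t in range(T):
        # z(t) accumulates information from all previous inputs through z(t-1).
        z = np.tanh(W_x @ x_seq[t] + W_z @ z)
    print(np.round(z, 3))                     # hidden vector after time point T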
• The neural network core used as a learning method includes, in addition to DNN, CNN, and RNN, various deep learning techniques such as the restricted Boltzmann machine (RBM), deep belief networks (DBN), and deep Q-network, and can be applied to fields such as computer vision, voice recognition, natural language processing, and voice/signal processing.
  • AI-based physical layer transmission means applying a signal processing and communication mechanism based on an AI driver rather than a traditional communication framework in a fundamental signal processing and communication mechanism. For example, deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, deep learning-based MIMO mechanism, AI-based resource scheduling ( scheduling) and allocation may be included.
• the present disclosure is for performing federated learning in a wireless communication system, and in particular, describes techniques for reducing the bandwidth consumed in providing weights or gradients in a situation where federated learning is performed.
  • Wireless communication systems are developing extensively to provide various types of communication services such as voice and data, and recent attempts to graft AI into communication systems are rapidly increasing.
  • the methods of grafting AI that are being attempted can be broadly divided into 'C4AI (communications for AI)', which develops communication technology to support AI, and 'AI4C (AI for communications)', which uses AI to improve communication performance.
• In the case of AI4C, there is a study to increase design efficiency by replacing a channel encoder/decoder with an end-to-end autoencoder.
• In the case of C4AI, there is federated learning, a distributed learning technique that updates a common prediction model while sharing only the weight or gradient of the model with the server, without sharing the raw data of the device.
  • Federated learning is a type of distributed machine learning that trains machine learning models in a central server using distributed data stored in mobile devices such as smartphones. Unlike other distributed machine learning techniques that assume a wired communication environment, federated learning is a distributed machine learning technique in which a training process is performed on each device using data collected from individual users' mobile devices. Mobile devices generate a lot of data through interaction with users, and most of the data may include sensitive personal information. Data such as photos, user's location, chat history, video, and voice are subject to personal information protection, and the user does not want these data to be leaked to the outside of the device. Even if data is encrypted or anonymity is guaranteed, users do not want the data to be leaked outside. Therefore, there is a limitation in that data cannot be collected to a central server such as a data center, and this limitation makes it difficult to apply the data to distributed machine learning and deep learning techniques.
• FIG. 30 is a diagram illustrating the concept of federated learning applicable to the present disclosure.
  • a first terminal 3010a and a second terminal 3010b participate in federated learning.
• the first terminal 3010a obtains an updated first model 3012a by performing learning on the model provided from the base station 3020, and the second terminal 3010b obtains an updated second model 3012b by performing learning on the model provided from the base station 3020.
  • Each of the first terminal 3010a and the second terminal 3010b transmits the weight w 1 or w 2 of the updated model 3012a or 3012b to the base station 3020 .
  • the terminals 3010a and 3010b transmit information about the weight or gradient changed after learning to the base station 3020 through the uplink of the communication channel. Accordingly, the base station 3020 updates the model 3022 stored in the base station 3020 using the aggregated weights w 1 and w 2 . For example, the base station 3020 may update the weight of the model 3022 as an average value of the fed back weights.
  • a federated averaging algorithm is a technique for reducing the number of communication rounds by using a minibatch.
• Specifically, the local data set of each terminal is trained in mini-batch units, the updated weights/gradients are transmitted to the server, and weight averaging is performed in the server to update the global common prediction model.
  • One of the main challenges is not only to reduce the number of communication rounds, but also to save the bandwidth required to transmit weights/gradients in the uplink.
  • the federated averaging algorithm is expressed as a pseudo code, as shown in Table 6 below. In Table 6, k denotes an index of K terminals or clients, B denotes a miniBatch size, E denotes a number of a local epoch, and ⁇ denotes a learning rate.
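• The federated averaging step summarized in Table 6 can be illustrated by the following minimal sketch: each of K clients runs E local epochs of mini-batch gradient descent (batch size B, learning rate η) on its own data and returns its weights, and the server replaces the global weights with the data-size-weighted average of the returned weights. The simple linear model and the synthetic local data sets are assumptions for the example.

    import numpy as np

    rng = np.random.default_rng(4)
    K, B, E, eta = 4, 8, 3, 0.05            # clients, mini-batch size, local epochs, learning rate

    # Synthetic local data sets (never shared with the server), one per client k.
    true_w = np.array([1.0, -2.0])
    datasets = []
    for _ in range(K):
        xk = rng.standard_normal((32, 2))
        datasets.append((xk, xk @ true_w))

    def client_update(w, data):
        """Local training: E epochs of mini-batch gradient descent, then return the weights."""
        x, t = data
        w = w.copy()
        for _ in range(E):
            idx = rng.permutation(len(x))
            for start in range(0, len(x), B):
                bi = idx[start:start + B]
                grad = x[bi].T @ (x[bi] @ w - t[bi]) / len(bi)
                w -= eta * grad
        return w

    w_global = np.zeros(2)
    for rnd in range(10):                    # communication rounds
        local_ws = [client_update(w_global, d) for d in datasets]
        sizes = np.array([len(d[0]) for d in datasets], dtype=float)
        # Server-side federated averaging: weights are averaged (weighted by data size).
        w_global = np.average(local_ws, axis=0, weights=sizes)
    print(np.round(w_global, 3))             # approaches [1.0, -2.0]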
• FIG. 31 is a diagram showing an example of a protocol of federated learning applicable to the present disclosure.
• In the selection step of round i, terminals 3110a to 3110e that satisfy conditions such as being under charging, being powered on, and being connected to a base station notify the server 3130 that they are ready to register as participants.
• the server 3130 selects an optimal number of terminals, and then transmits, to the selected terminals 3110a to 3110c, the tasks to be performed as participants and data structures such as graph information for the computation. The server 3130 notifies the unselected terminals 3110d and 3110e of when to reconnect next.
• In the configuration step 3104-i of round i, the selected terminals 3110a to 3110c perform training using the global network model received from the server 3130 and the local data stored in the terminal.
• In the reporting step, the terminals 3110a to 3110c transmit information on the weights or gradients updated through training to the server 3130, and the server 3130 reflects the aggregated information in the global network model. The above-described operations constitute one round.
• In round i+1, the selection step 3102-(i+1), the configuration step 3104-(i+1), and the reporting step 3106-(i+1) proceed similarly.
  • the terminals 3110a to 3110c and the server 3130 selected as participants maintain a connected state.
• If the connection with a participant is lost, the server 3130 ignores that participant and proceeds with the round. Therefore, it is desirable that the federated learning protocol be designed so that no failure occurs even when a round proceeds while ignoring a participant whose connection has failed.
  • Learning may include training for connection and training for weight.
  • the concept of learning for both connections and weights is described below with reference to FIGS. 32 and 33 .
  • connection and weight learning includes a train connectivity procedure 3201 , a prune connections procedure 3203 , and a weight training procedure 3205 .
• the connection training procedure 3201 is a procedure for training the network to learn which connections are important, the connection pruning procedure 3203 is a procedure for pruning insignificant connections (eg, connections whose weight is less than a threshold), and the weight training procedure 3205 is a retraining procedure for learning the weights by performing retraining in the pruned, sparse connection state.
  • the network before pruning is a dense network 3310 in which all possible connections are formed. With all connections established, the network is trained and weights are calculated to learn which connection is more important. Then, a connection having a weight lower than the threshold is treated as an insignificant connection, and the insignificant connection is pruned. That is, the network is converted from a dense network 3310 to a sparse network 3320 . Finally, by learning the weights through retraining in the pruned sparse connection state, the weights are fine-tuned. By defining a threshold at a level without loss of accuracy, and repeating the connection pruning procedure 3203 and the weight training procedure 3205 , a minimum number of combinations of connections can be derived.
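• The prune-and-retrain loop described above can be illustrated by the following minimal sketch: after training the dense network, connections whose weight magnitude falls below a threshold are removed (masked), and the remaining weights are fine-tuned by retraining; the threshold value, the small linear model, and the number of iterations are assumptions for the example.

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.standard_normal((200, 6))
    true_w = np.array([2.0, 0.0, -1.5, 0.0, 0.5, 0.0])   # several unimportant connections
    t = x @ true_w

    def train(w, mask, steps=200, lr=0.05):
        """Gradient descent over the surviving (unmasked) connections only."""
        for _ in range(steps):
            grad = x.T @ (x @ (w * mask) - t) / len(x)
            w -= lr * grad * mask
        return w * mask

    w = np.zeros(6)
    mask = np.ones(6)                      # dense network: all connections present
    threshold = 0.3

    for _ in range(2):                     # repeat pruning + retraining
        w = train(w, mask)                 # learn which connections are important
        mask = (np.abs(w) >= threshold).astype(float)   # prune low-weight connections
        w = train(w, mask)                 # retrain (fine-tune) the sparse network

    print(mask, np.round(w, 2))            # only the significant connections survive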
  • Table 7 below shows examples of parameter compression rates for each network model.
  • AlexNet consists of 5 convolution layers and 3 fully-connected layers, and the last fully connected layer is used to classify 1000 categories.
  • FIGS. 34A and 34B are diagrams illustrating pruning sensitivity of an AlexNet network.
  • Fig. 34A shows the pruning sensitivity of the convolutional layer
  • Fig. 34B shows the pruning sensitivity of the fully connected layer.
  • conv1 is more sensitive to pruning than other layers.
  • the limit that can be pruned without compromising accuracy is about 20%. That is, among the connections of conv1, 20% of connections having a low weight do not have a significant effect on performance, so 20% of connections having a low weight may be pruned.
• As shown in FIG. 34B, in the case of fc3, about 53% of the connections can be pruned without degradation of accuracy.
  • the present disclosure describes various embodiments for saving bandwidth when transmitting information related to weights or gradients using a communication link during a federated learning procedure.
  • Federated learning may be applied to various artificial neural networks applicable to a communication system.
  • various embodiments to be described below may be used to learn various network models, such as a network model for an autoencoder performing a function of a channel encoder/decoder, and a network model for channel estimation.
  • 35 is a diagram illustrating an embodiment of a procedure for performing federated learning in a terminal applicable to the present disclosure. 35 illustrates an operation method of a terminal participating in federated learning.
  • the terminal receives information related to the initial network model.
  • the initial network model is a network model stored in the server and is a subject to be updated by this procedure.
  • the initial network model provided in this step may be a basic model that has not been updated at all or a model updated at least once.
  • the information related to the initial network model may include information related to at least one of the number of nodes for each layer of the network model, connections of nodes, and weights of connections.
• In step S3503, the terminal configures the initial network model as a dense network. That is, the terminal configures a dense network by adding all possible connections between nodes.
  • a dense network may be referred to as a fully-connected network.
  • the weight of the newly added connection may be set to a predefined value.
  • the terminal changes the weights through training on the dense network.
  • the UE may newly determine weights of connections by performing training in a state in which all nodes are connected.
  • the terminal may acquire training data and perform training using the acquired training data.
  • the training data may be extracted or generated from data stored in the terminal.
  • the terminal may update at least one weight by performing a backpropagation operation.
  • in step S3507, the terminal transmits information related to the weight of at least one connection. That is, the terminal selects at least one of the connections included in the dense network and transmits information related to the weight of the selected at least one connection.
  • the terminal may select at least one connection based on the amount of change in weight due to training.
  • the weight-related information is for notifying the weight changed through training, and may include, for example, a change amount.
  • according to another embodiment, none of the connections may be selected, in which case step S3507 may be omitted.
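  • The sketch below illustrates one possible terminal-side round corresponding to FIG. 35, assuming the model information is delivered as a dictionary of per-connection weights and using a dummy local training step; the field names, data layout, and reporting threshold are illustrative assumptions rather than disclosed formats.

```python
# A minimal sketch of one terminal-side round of FIG. 35. The model information is assumed
# to arrive as a dictionary of per-connection weights; the local training step is a dummy
# update; the reporting threshold and field names are illustrative, not disclosed formats.
import numpy as np

def terminal_round(initial_model, local_data, report_threshold=1e-4, init_weight=0.0):
    # receive the model and form a dense network by adding every possible connection
    # with a predefined initial weight (step S3503)
    n_connections = initial_model["n_connections"]
    dense = np.full(n_connections, init_weight)
    for idx, w in initial_model["weights"].items():    # weights of connections already present
        dense[idx] = w
    before = dense.copy()

    # local training on the dense network (stand-in: one dummy gradient-like update)
    grad = np.resize(local_data, n_connections) * 0.01
    dense = dense - grad

    # select connections whose weight changed by more than the threshold and report only
    # those changes over the uplink (step S3507; may be empty, in which case it is skipped)
    diff = dense - before
    return {int(i): float(diff[i]) for i in np.flatnonzero(np.abs(diff) > report_threshold)}

model = {"n_connections": 6, "weights": {0: 0.4, 2: -0.7, 5: 0.1}}
print(terminal_round(model, local_data=np.array([0.5, -1.2, 0.0, 2.0])))
```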
  • FIG. 36 is a diagram illustrating an embodiment of a procedure for performing federated learning in a server applicable to the present disclosure.
  • FIG. 36 illustrates an operation method of a server that controls federated learning.
  • the operating subject of the procedure illustrated in FIG. 36 is described as a 'server', where the server may be included in the base station or may be a core network entity.
  • the server transmits information related to the initial network model.
  • the initial network model is a network model stored in the server and is a subject to be updated by this procedure.
  • the initial network model provided in this step may be a basic model that has not been updated at all or a model updated at least once.
  • the information related to the initial network model may include information related to at least one of the number of nodes for each layer of the network model, connections of nodes, and weights of connections.
  • the server receives information related to the weight of at least one connection. That is, the server receives information related to at least one connection weight updated by the training performed in the terminal.
  • the weight-related information is for notifying the weight changed through training performed in the terminal, and may include, for example, a change amount.
  • the server may receive information related to weights from at least one terminal.
  • the server updates the weights of the network model based on the received information.
  • the server may generate a network model corresponding to each terminal by using information related to the received weight, and determine one updated network model based on the weights of the plurality of network models. For example, the server may determine the updated network model by averaging weights of network models corresponding to different terminals for each connection.
  • in step S3607, the server prunes at least one connection based on the updated weights. That is, the server selects at least one connection based on the updated weights and removes the selected at least one connection. For example, the server may select at least one connection having a weight equal to or less than a threshold. However, according to another embodiment, none of the connections may be pruned, in which case step S3607 may be omitted. According to an embodiment, step S3607 may be performed after steps S3601 to S3605 are repeated a plurality of times.
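  • The following sketch outlines the server side described above (per-terminal reconstruction, per-connection averaging, and threshold-based pruning); the dictionary-based report format, the example weights, and the pruning threshold are assumptions for illustration.

```python
# A minimal sketch of the server side described above: apply each terminal's reported weight
# changes to its own copy of the model, average the copies per connection, and prune
# connections whose averaged weight falls below a threshold (step S3607). The report layout,
# example weights, and pruning threshold are assumptions for illustration.
import numpy as np

def server_update(global_weights, reports, prune_threshold=0.05):
    # reconstruct one model per reporting terminal and average per connection
    per_terminal = []
    for report in reports:                       # report: {connection_index: weight change}
        model = global_weights.copy()
        for idx, delta in report.items():
            model[idx] += delta
        per_terminal.append(model)
    updated = np.mean(per_terminal, axis=0)

    # prune connections with small averaged weights (may be skipped in some embodiments)
    updated[np.abs(updated) < prune_threshold] = 0.0
    return updated

w_global = np.array([0.40, 0.00, -0.70, 0.02, 0.10])
reports = [{0: 0.03, 3: 0.01}, {0: -0.01, 2: 0.05, 4: -0.09}]
print(server_update(w_global, reports))
```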
  • the terminal provides information related to the weight determined through training to the server, and the server may update the network model using the information related to the weight.
  • information related to the weight may be aggregated from a plurality of terminals. Through this, it is possible to train a machine learning model using data distributed among a plurality of terminals.
  • an operation of collecting information related to a weight and updating a network model may be repeated a plurality of times.
  • an operation of selecting at least one terminal to participate in training may precede the above, and a control signaling operation instructing the terminal to stop or start training may also be performed.
  • An embodiment of a procedure in consideration of selection of participating terminals, repetitive learning, and control signaling is as follows.
  • FIG. 37 is a diagram illustrating an embodiment of a procedure for performing compressed federated learning in a server applicable to the present disclosure.
  • FIG. 37 illustrates an operation method of a server that controls federated learning.
  • the operating subject of the procedure illustrated in FIG. 37 is described as a 'server', where the server may be included in the base station or may be a core network entity.
  • in step S3701, the server selects at least one terminal to perform training.
  • a report on the resources of each terminal may be performed beforehand.
  • the report may include information related to the version of the global sparse model possessed by each terminal.
  • in step S3703, the server distributes the initial global sparse model. According to another embodiment, if all terminals to be trained already have the same version of the global sparse model, step S3703 may be omitted.
  • N_total_UE_participated means the number of terminals that have participated in federated learning. Alternatively, N_total_UE_participated may be understood as the number of times a report of a training result is received.
  • the server performs the i-th iteration, starting with i equal to 1. In the i-th iteration, the terminal configures a dense network, determines the weights of connections through training, and transmits information related to at least some of the determined weights to the server. The server receives the reported weight-related information and counts the number of received reports.
  • in step S3709, the server updates N_total_UE_participated to N_total_UE_participated + N_UE_participated. That is, the server increases the count of terminals that have participated in federated learning by the number of weight-related reports received. If the same UE performs training and reporting twice during one iteration, N_total_UE_participated may increase by 2.
  • in step S3711, the server compares N_total_UE_participated with a threshold N_total_UE_possible_participating.
  • N_total_UE_possible_participating is a threshold defined for determining the end of the iterations.
  • the server performs a federated aggregation operation.
  • the server may update the network model based on the collected weight related information, and transmit information related to the updated network model to the terminals.
  • the information related to the updated network model may include the updated weights. Accordingly, subsequent training in step S3707 is performed using the network model updated in the i-th iteration.
  • in step S3715, the server transmits a training stop message to the UEs. That is, since sufficient learning has been performed, the server stops the training of the terminals. In response, the terminals stop determining and reporting weights through training.
  • in step S3717, the server performs pruning. In other words, the server drops at least one connection. The at least one connection may be removed based on the weights updated using the information collected from the terminals.
  • in step S3719, the server transmits a training start message to the UEs. Accordingly, the terminals perform learning again.
  • in step S3721, the server transmits an inference change message to the UEs.
  • the terminals perform an inference operation using the network model.
  • the server may return to step S3705 to further update the network model.
  • the repetition of steps S3705 to S3721 may be performed after a predetermined time or may be performed by the occurrence of a defined event.
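  • A minimal control-loop sketch of this flow is shown below; it assumes the aggregation, pruning, and message-sending operations are available as callables, and the step labels in the comments follow the description above rather than any disclosed implementation.

```python
# A minimal control-loop sketch of the flow of FIG. 37. The aggregation, pruning, and
# message-sending operations are assumed to be available as callables; only the counting
# and signaling logic described above is modeled.
def run_compressed_fl(collect_reports, aggregate, prune, send, n_total_ue_possible_participating):
    n_total_ue_participated = 0                  # terminal selection / initial distribution assumed done
    i = 1
    while True:
        reports = collect_reports(i)             # i-th iteration: terminals train and report
        n_total_ue_participated += len(reports)  # step S3709: one count per received report
        if n_total_ue_participated < n_total_ue_possible_participating:   # step S3711
            aggregate(reports)                   # federated aggregation, updated weights redistributed
            i += 1
            continue
        send("training_stop")                    # step S3715: enough reports have been collected
        aggregate(reports)                       # reconstruct weights from the final reports (FIG. 40)
        prune()                                  # step S3717: remove low-weight connections
        send("training_start")                   # step S3719
        send("inference_change")                 # step S3721: terminals switch to the new sparse model
        return i

# toy run: two reports arrive per iteration, with a threshold of five reports in total
stopped_at = run_compressed_fl(
    collect_reports=lambda i: [{"ue": k} for k in range(2)],
    aggregate=lambda reports: None,
    prune=lambda: None,
    send=lambda message: print("send:", message),
    n_total_ue_possible_participating=5,
)
print("stopped after iteration", stopped_at)
```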
  • FIG. 38 is a diagram illustrating an embodiment of a procedure for performing federated aggregation during each iteration step in a server applicable to the present disclosure. FIG. 38 illustrates an operating method of a server. The operating subject of the procedure illustrated in FIG. 38 is described as a 'server', where the server may be a base station or a core network entity.
  • the server reconfigures the sparse global weight based on information related to the weight collected from each terminal.
  • the server stores a plurality of network models corresponding to each of the terminals participating in the learning, and reconfigures the weights of the network models corresponding to each terminal by using information related to the weights for each terminal.
  • the server reconfigures the weight of the first network model based on the information received from the first terminal, and reconfigures the weight of the second network model based on the information received from the second terminal.
  • the information received from the terminal may include only weight-related information for some of the connections.
  • the reconstruction of weights may be performed differently for a connection for which weight-related information is provided from the terminal (hereinafter referred to as a 'received connection') and for a connection for which weight-related information is not provided from the terminal (hereinafter referred to as an 'unreceived connection').
  • for an unreceived connection, the server applies the weight of the corresponding connection included in the existing global sparse weight w_sparse_old as it is.
  • for a received connection, the server modifies the weight of the corresponding connection included in the existing global sparse weight w_sparse_old using the received information and then applies it. If the received information is a pruned weight difference vector diff(w_k) indicating the change in weight, the server reconstructs the weight of the corresponding connection by summing the existing weight in w_sparse_old and the received change.
  • in step S3803, the server determines new weights by averaging the reconstructed weight vectors. Since a plurality of network models having different weights, corresponding to each of the plurality of terminals, are derived through step S3801, the server may update the global network model based on the plurality of network models. To this end, the server may average the reconstructed weights for each connection.
  • in step S3807, the server transmits a new sparse global model to the terminals. In other words, the server transmits weight information w_new_sparse of the sparse global model including the weights updated in step S3805.
  • the server reconfigures a corresponding network model based on information received from the terminal.
  • the server determines the weight to be reconstructed by adding up the existing weight and the received change amount.
  • a scaling factor may be applied to the received change amount. That is, a factor less than 1 may be applied to reduce the influence of the received change on the weight reconstruction, and a factor greater than 1 may be applied to increase the influence.
  • the server updates the global network model by averaging the reconstructed weights.
  • different averaging weights may be applied to the weights. For example, a larger averaging weight may be applied to a network model corresponding to a terminal having high learning accuracy or using the network model relatively frequently compared to other network models.
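  • The sketch below illustrates this aggregation, separating received and unreceived connections and applying optional per-terminal averaging weights; the scaling factor, the example arrays, and the dictionary report format are assumptions.

```python
# A sketch of the per-iteration aggregation of FIG. 38: 'received' connections are rebuilt
# as old weight plus reported change, 'unreceived' connections keep the old weight, and the
# per-terminal models are averaged per connection. The scaling factor on the reported change
# and the per-terminal averaging weights are the optional adjustments mentioned above;
# the report format and example values are assumptions.
import numpy as np

def aggregate(w_sparse_old, diffs, change_scale=1.0, averaging_weights=None):
    """diffs: one {connection_index: reported weight change} dict per reporting terminal."""
    reconstructed = []
    for diff in diffs:                               # step S3801: per-terminal reconstruction
        w_k = w_sparse_old.copy()                    # unreceived connections: keep old weight
        for idx, d in diff.items():                  # received connections: old weight + change
            w_k[idx] += change_scale * d
        reconstructed.append(w_k)
    if averaging_weights is None:
        averaging_weights = np.ones(len(reconstructed))
    averaging_weights = np.asarray(averaging_weights, dtype=float)
    averaging_weights = averaging_weights / averaging_weights.sum()
    # step S3803: per-connection (optionally weighted) average over the reconstructed models
    return np.tensordot(averaging_weights, np.stack(reconstructed), axes=1)

w_old = np.array([0.40, 0.00, -0.70, 0.10])
diffs = [{0: 0.02, 3: -0.05}, {1: 0.09}]
print(aggregate(w_old, diffs, averaging_weights=[2.0, 1.0]))   # first terminal counted more heavily
```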
  • FIG. 39 is a diagram illustrating an embodiment of a procedure for performing federated aggregation during each iteration in a terminal applicable to the present disclosure. FIG. 39 illustrates an operation method of a terminal participating in federated learning.
  • the terminal receives an initial sparse network.
  • the terminal may receive the initial sparse network in the initial registration process.
  • the initial sparse network may refer to a global network that is received at the beginning of federated learning, and a global network that is transmitted in iterations after the initial may be referred to as a sparse network.
  • in step S3903, the terminal configures a dense network and then performs training. That is, after the terminal constructs a dense network by forming all possible connections on top of the sparse network, it trains with local data to learn which connections are important. Accordingly, the weight of at least one connection may be changed.
  • the terminal prunes at least one unimportant connection. For example, whether the connection is important may be determined based on the weight of the connection after training. According to an embodiment, the terminal may check the amount of change in the weight of each connection by training, and determine that the connection in which the change amount is smaller than a threshold is not important. For example, a connection whose weight changed from 0 to 0.10 could be treated as more meaningful, that is, more important, than a connection whose weight changed from 0.90 to 0.92 by training.
  • in step S3907, the terminal generates a change amount vector including the weight change of at least one unpruned connection.
  • the change amount vector includes the change amount of at least one connection whose weight change due to training is greater than a threshold value.
  • since the connections included in the change amount vector are selected based on the weight change caused by training, they may or may not match the connections included in the initial sparse network.
  • in step S3909, the terminal transmits the change amount vector to the server.
  • a weight for at least one connection not included in the change amount vector may be reconfigured by the server. Connections that have been pruned in the terminal, and whose weights have therefore not been reported to the server, are not ignored, but may still be included in the final network model by the server's determination. That is, the pruning operation in the terminal serves to reduce the data size of the weight vector transmitted through the uplink.
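  • A small sketch of this uplink compression follows; the before/after weight arrays and the change threshold are illustrative values chosen to mirror the 0 to 0.10 versus 0.90 to 0.92 example above.

```python
# A small sketch of the uplink compression performed by the terminal: after local training on
# the densified network, keep only the connections whose weight changed by more than a
# threshold and send just those (index, change) pairs. The arrays and threshold are
# illustrative values chosen to mirror the 0 -> 0.10 versus 0.90 -> 0.92 example above.
import numpy as np

def build_change_vector(w_before, w_after, change_threshold=0.05):
    diff = w_after - w_before
    kept = np.flatnonzero(np.abs(diff) > change_threshold)    # drop "unimportant" changes
    return {int(i): float(diff[i]) for i in kept}             # change amount vector (step S3907)

w_before = np.array([0.00, 0.90, 0.30, 0.00])
w_after  = np.array([0.10, 0.92, 0.31, 0.00])    # training moved the first weight from 0 to 0.10
print(build_change_vector(w_before, w_after))    # only the first connection's change is reported
```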
  • in each iteration step, operations as shown in FIGS. 38 and 39 may be performed.
  • the number of iterations is preferably determined so that the performance of the network model can sufficiently converge through federated learning. For example, referring to FIG. 37, whether the number of iterations is sufficient may be determined by whether the number of terminals participating in federated learning reaches a threshold.
  • the server may determine a pruning threshold in a range without performance degradation according to the pruning ratio, and determine the number of fine-tuning epochs. For example, if the pruning ratio is 50% and convergence without performance degradation is reached in 10 fine-tuning epochs, the threshold for the number of terminals participating in federated learning may be determined as in [Equation 1] below:
  • [Equation 1] N_total_UE_possible_participating = N_fine_tuning_epoch × (N_total_init_data / B)
  • N_total_UE_possible_participating is the threshold for the number of terminals participating in federated learning, N_fine_tuning_epoch is the number of fine-tuning epochs, N_total_init_data is the number of initial data used when generating the initial pruning model, and a batch of size B is the unit of data that the terminal learns on during every training.
  • for example, if the number of fine-tuning epochs is 10, the number of initial data used to create the initial pruning model is 65000, and the batch size B is 65, the total number of UEs that can participate across the iterations is 10 × (65000 / 65) = 10000. Each terminal learns according to the batch size B during every training and transmits its weight change vector to the server. This operation is repeated I times, and I may be expressed as in [Equation 2] below.
  • [Equation 2] Σ_{i=1}^{I} N_UE_participating_i ≥ N_total_UE_possible_participating
  • N_total_UE_possible_participating is the threshold for the number of terminals participating in federated learning, N_UE_participating_i is the number of terminals participating in the i-th iteration, and I is the total number of iterations.
  • the server calculates the accumulated number of terminals that have participated in training at each aggregation time, and when [Equation 2] is satisfied, it stops the training operation of the terminals by delivering a training stop message to all terminals. Accordingly, unnecessary waste of each terminal's computing resources is reduced. After that, the server performs pruning.
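  • A worked check of this stopping rule, under the reconstructed forms of [Equation 1] and [Equation 2] above and with illustrative per-iteration participation counts, is sketched below.

```python
# A worked check of the stopping rule, under the reconstructed forms of [Equation 1] and
# [Equation 2] above; the per-iteration participation counts are illustrative.
n_fine_tuning_epoch = 10
n_total_init_data = 65000
batch_size_b = 65

# [Equation 1]: threshold on the accumulated number of participating terminals
n_total_ue_possible_participating = n_fine_tuning_epoch * (n_total_init_data // batch_size_b)
print(n_total_ue_possible_participating)               # 10 * 1000 = 10000

# [Equation 2]: stop once the accumulated participation reaches the threshold
n_ue_participating_per_iteration = [3000, 2500, 2600, 2400]
total = 0
for i, n in enumerate(n_ue_participating_per_iteration, start=1):
    total += n
    if total >= n_total_ue_possible_participating:
        print(f"send training stop after iteration I = {i} (accumulated {total} reports)")
        break
```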
  • FIG. 40 is a diagram illustrating an embodiment of a procedure for performing pruning in a server applicable to the present disclosure. FIG. 40 illustrates an operating method of a server.
  • the operating subject of the procedure illustrated in FIG. 40 is described as a 'server', where the server may be included in the base station or may be a core network entity.
  • the server reconstructs sparse global weights based on information related to weights collected from each terminal.
  • the server stores a plurality of network models corresponding to each of the terminals participating in the learning, and reconfigures the weights of the network models corresponding to each terminal by using information related to the weights for each terminal.
  • the server reconfigures the weight of the first network model based on the information received from the first terminal, and reconfigures the weight of the second network model based on the information received from the second terminal.
  • the server prunes unimportant connections.
  • the server updates N_total_UE_participated by accumulating N_UE_participated in every iteration of federated learning. If N_total_UE_participated is greater than or equal to N_total_UE_possible_participating, the server performs pruning. For example, the server may construct a weight vector w_new_sparse_model of the new global sparse network model by removing at least one connection whose weight is lower than the threshold.
  • the server transmits a new sparse global model to the terminals.
  • the server transmits sparse global model information w_new_sparse_model including the weights updated in step S4007.
  • the sparse global model information includes information related to a structure of nodes in a network model, a structure of connections, and weights of connections.
  • CFL: compressed federated learning
  • FIG. 41 is a diagram illustrating an example of a protocol of the first two iterations in compressed federated learning applicable to the present disclosure. FIG. 41 illustrates a protocol of federated learning in which four terminals 4110a to 4110d and the server 4120 can participate.
  • in the first iteration of CFL, a training terminal selection step 4102-1, an initial global sparse model distribution step 4104-1, and a training result reporting step 4106-1 proceed.
  • the terminals 4110a to 4110d transmit a resource report to the server 4120, and the server 4120 selects a device (e.g., a terminal) to participate in training.
  • the server 4120 transmits a global sparse model distribution (GSMD) activation message indicating the start of distribution of the global sparse model, information about the global sparse model, and a GSMD deactivation message indicating the end of distribution of the global sparse model.
  • each of the terminals 4110a to 4110d transforms the global sparse model into a dense network and performs training.
  • each of the terminals 4110a to 4110d reports a training result including a vector of change in weight determined through training, and the server 4120 collects the training result, Update the weights of the global network model.
  • in the second iteration of CFL, the training terminal selection step 4102-2, the initial global sparse model distribution step 4104-2, and the training result reporting step 4106-2 proceed.
  • three terminals 4110a, 4110b, and 4110d participate in training; transmission of the GSMD activation message, sparse global model distribution, GSMD deactivation message, training start message, and inference start message is omitted, and a sparse weight distribution operation is performed instead.
  • FIG. 42 is a diagram illustrating an example of a protocol of the latter two iterations in compressed federated learning applicable to the present disclosure. FIG. 42 illustrates a protocol of federated learning in which four terminals 4210a to 4210d and the server 4220 can participate.
  • in the (I-1)-th iteration, the training terminal selection step 4202-(I-1), the initial global sparse model distribution step 4204-(I-1), and the training result reporting step 4206-(I-1) proceed.
  • Three terminals (4210a, 4210b, 4210d) participate in the training.
  • in the I-th iteration, a training terminal selection step 4202-I, an initial global sparse model distribution step 4204-I, and a training result reporting step 4206-I are performed.
  • the server 4220 stops the training operation of the terminals 4210b and 4210d by transmitting a training stop message to the selected terminals 4210b and 4210d that have not yet reported the training result. For this reason, computational resources of the terminals 4210b and 4210d are saved, and uplink bandwidth is saved by not transmitting unnecessary training result reports.
  • the server 4220 performs a pruning step to generate a new global sparse model.
  • FIG. 43 is a diagram illustrating an example of a protocol for restarting a federated aggregation operation in compressed federated learning applicable to the present disclosure. FIG. 43 illustrates a protocol of federated learning in which four terminals 4310a to 4310d and the server 4320 can participate.
  • a training terminal selection step 4302, a new global sparse model information distribution step 4304, and a training result reporting step 4306 are performed.
  • the server 4320 transmits a global sparse model change (GSMC) message as a control message to the terminals 4310a to 4310d, thereby announcing that new global sparse model information (new global sparse model info) will be distributed.
  • the server 4320 transmits the new global sparse model information by using the data channel, and sends a training start message and an inference change message.
  • the terminals 4310a, 4310b, and 4310d perform training and inference. Specifically, each of the terminals 4310a, 4310b, and 4310d converts the global sparse model into a dense network, performs training, reports a training result including a vector of change in weight determined through training, and the server 4320 collects the training results and updates the weights of the global network model.
  • FIG. 44 is a diagram illustrating an example of signal exchange in the first half of compressed federated learning applicable to the present disclosure. FIG. 44 illustrates the exchange of control messages and data messages between the first terminal 4410a, the N-th terminal 4410b, and the base station 4420 at the initial global sparse model distribution and in every i-th iteration in a CFL environment.
  • the base station 4420 includes a server that controls federated learning. In the following description, descriptions of operations overlapping those of the first terminal 4410a among the operations of the N-th terminal 4410b will be omitted.
  • the first terminal 4410a transmits a UE resource report message to the base station 4420 .
  • the base station 4420 selects terminals to participate in training.
  • the base station 4420 transmits a GSMD activation message.
  • the base station 4420 distributes a sparse model.
  • the base station 4420 transmits a GSMD deactivation message.
  • the base station 4420 transmits a training start message.
  • the base station 4420 transmits an inference start message.
  • in step S4415, the first terminal 4410a converts the global sparse model into a dense network and performs training.
  • in step S4417, the first terminal 4410a transmits a training result report.
  • the training result report includes a vector of changes in the weights determined through training.
  • in step S4419, the server 4420 performs compressed federated aggregation. That is, the server 4420 collects training results from a plurality of terminals including the first terminal 4410a and the N-th terminal 4410b and updates the weights of the global network model. After that, the next iteration proceeds.
  • in step S4421, the first terminal 4410a transmits a UE resource report message to the base station 4420.
  • the base station 4420 selects terminals to participate in training.
  • in step S4425, the base station 4420 distributes information related to the sparse weights.
  • the information related to the sparse weight includes information about the weights of the network model to be used for learning of the terminals 4410a and 4410b in the corresponding iteration.
  • FIG. 45 is a diagram illustrating an example of signal exchange in the second half of compressed federated learning applicable to the present disclosure.
  • FIG. 45 illustrates the exchange of control messages and data messages between the first terminal 4510a, the N-th terminal 4510b, and the base station 4520 in the I-th iteration in a CFL environment.
  • the base station 4520 includes a server that controls federated learning. In the following description, descriptions of operations overlapping those of the first terminal 4510a among the operations of the N-th terminal 4510b will be omitted.
  • the first terminal 4510a transmits a UE resource report message to the base station 4520.
  • the base station 4520 selects terminals to participate in training. Subsequently, although not shown in FIG. 45 , the base station 4520 may distribute information related to the sparse weight.
  • the first terminal 4510a transmits a training result report.
  • the training result report includes a vector of change in weights determined through training.
  • upon receiving the training result report from the first terminal 4510a, the base station 4520 determines that sufficient training results have been collected. Accordingly, in step S4509, the base station 4520 transmits a training stop message, which is a control message, to the N-th terminal 4510b. In response, the N-th terminal 4510b ends its training.
  • in step S4511, the server 4520 generates a new sparse model by performing a server pruning step.
  • in step S4513, the first terminal 4510a transmits a UE resource report message to the base station 4520.
  • in step S4515, the base station 4520 selects terminals.
  • in step S4517, the server 4520 transmits a global sparse model change (GSMC) message.
  • in step S4519, the server 4520 distributes new sparse model information.
  • in steps S4521 and S4523, the server 4520 controls the terminals 4510a and 4510b to perform training and inference with the new sparse model by transmitting a training start message and an inference change message.
  • the terminal transmits information indicating a training result to a server or a base station.
  • the training result includes at least one value indicating the change amount of the weight.
  • the change amount of the weight may be variously expressed.
  • FIG. 46 is a diagram illustrating an example of a packet format for transmitting information related to weights applicable to the present disclosure. FIG. 46 is an example of a packet structure supporting two formats usable in the uplink.
  • the packet includes a connection information (CI) type 4602 . If the CI type 4602 is a first value (eg, 0), the packet follows the first format 4610 . If the CI type 4602 is a second value (eg, 1), the packet follows the second format 4620 .
  • the first format 4610 is based on a bit mask header scheme, and the second format 4620 is based on an index:variance dictionary scheme.
  • a packet in the first format 4610 includes a CI 4612 and a diff(w_k) 4614.
  • the CI 4612 includes a header for information related to the connections.
  • the CI 4612 indicates, according to a bitmap method, at least one connection for which a weight change amount is provided. For example, when a total of four connections exist and weight information for the first, third, and fourth connections is transmitted, the CI 4612 may be set to [1011].
  • diff(w_k) 4614 includes the weight change of at least one connection designated by the CI 4612. For example, if the CI 4612 is [1011], diff(w_k) 4614 may include three weight change values.
  • for example, suppose the weight changes of the four connections are [0.009, 0.000009, 0.9, 0.5] and the reporting threshold is 0.0001. The change amount of the second connection, 0.000009, is less than the threshold 0.0001, so the weight changes of the remaining connections except the second connection are reported. Accordingly, the CI 4612 is set to [1011] and diff(w_k) 4614 is set to [0.009, 0.9, 0.5].
  • the packet in the second format 4620 includes at least one CI index 4622 or 4626 and at least one diff(w_k) 4624 or 4628.
  • one CI index 4622 or 4626 and one diff(w_k) 4624 or 4628 form a pair.
  • the CI index 4622 or 4626 indicates a reported connection.
  • diff(w_k) 4624 or 4628 includes a value indicating the weight change amount of the connection indicated by the CI index 4622 or 4626.
  • the packet contains as many CI index:diff(w_k) pairs as the number of reported connections.
  • for example, suppose the weight changes of the four connections are [0.009, 0.00009, 0.9, 0.00005] and the reporting threshold is 0.0001. The change amount of the second connection, 0.00009, and the change amount of the fourth connection, 0.00005, are less than the threshold 0.0001, so the weight changes of the connections other than the second and fourth connections are reported.
  • accordingly, the CI index 4622:diff(w_k) 4624 pair is set to [1:0.009],
  • and the CI index 4626:diff(w_k) 4628 pair is set to [3:0.9]. As a result, the uplink bandwidth usage is reduced compared to the case where all weight changes are transmitted as they are.
  • the terminal can selectively use a format having a smaller packet size. For example, when transmitting a training result report, the terminal may generate packets or predict sizes according to two formats, and may transmit a packet in a format having a smaller size.
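  • The sketch below illustrates building both report formats and selecting the smaller one, as described above; the byte costs per field are assumptions, since the description does not specify field widths.

```python
# A sketch of the two uplink report formats of FIG. 46 and of selecting the smaller one.
# Sizes use a toy cost model (1 bit per bitmap entry, fixed bytes per index and per value);
# the actual field widths are not specified in the description and are assumptions here.
import math

def bitmask_format(diffs, n_connections, bytes_per_value=4):
    """First format 4610: CI bitmask header plus the list of weight changes."""
    ci = [1 if i in diffs else 0 for i in range(n_connections)]
    values = [diffs[i] for i in range(n_connections) if i in diffs]
    size = math.ceil(n_connections / 8) + bytes_per_value * len(values)
    return {"ci_type": 0, "ci": ci, "diff": values}, size

def index_dict_format(diffs, bytes_per_index=1, bytes_per_value=4):
    """Second format 4620: one (CI index, diff) pair per reported connection."""
    pairs = sorted(diffs.items())
    size = (bytes_per_index + bytes_per_value) * len(pairs)
    return {"ci_type": 1, "pairs": pairs}, size

# example from the description: changes [0.009, 0.000009, 0.9, 0.5], reporting threshold 0.0001
changes = [0.009, 0.000009, 0.9, 0.5]
diffs = {i: d for i, d in enumerate(changes) if abs(d) >= 0.0001}   # second connection dropped

packet_a, size_a = bitmask_format(diffs, n_connections=len(changes))
packet_b, size_b = index_dict_format(diffs)
chosen = packet_a if size_a <= size_b else packet_b                 # terminal picks the smaller one
print(size_a, size_b, chosen["ci_type"])
```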
  • FIG. 47 is a diagram illustrating another example of a packet format for transmitting information related to weights applicable to the present disclosure. FIG. 47 illustrates a packet format usable in the downlink.
  • the packet includes a CI 4702 and a W_new_sparse 4704.
  • the CI 4702 includes a bit mask header indicating the connection corresponding to at least one weight included in W_new_sparse 4704.
  • W_new_sparse 4704 includes the weight value of at least one connection indicated by the CI 4702. For example, if the weights of the first, third, and fourth connections among four connections are delivered, the CI 4702 may be set to [1011] and W_new_sparse 4704 may be set to [0.009, 0.9, 0.5].
  • the format of the packet illustrated in FIG. 47 may be used by a server or base station to provide information related to new weights to be trained in every iteration.
  • the server or the base station may provide information related to the new weight using a format similar to the second format 4620 of FIG. 46 .
  • diff(w_k) included in the second format 4620 may be replaced with w_k indicating a weight value.
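  • A short sketch of such a downlink packet (CI bitmask plus the weight values of the indicated connections) follows; the dictionary input and the field encoding are assumptions.

```python
# A short sketch of the downlink packet of FIG. 47: a CI bitmask followed by the weight
# values (not the changes) of the indicated connections. The dictionary input and the
# field encoding are assumptions.
def downlink_packet(weights_by_connection, n_connections):
    ci = [1 if i in weights_by_connection else 0 for i in range(n_connections)]
    w_new_sparse = [weights_by_connection[i] for i in range(n_connections) if ci[i]]
    return {"ci": ci, "w_new_sparse": w_new_sparse}

# example from the description: weights of the first, third, and fourth of four connections
print(downlink_packet({0: 0.009, 2: 0.9, 3: 0.5}, n_connections=4))
# -> {'ci': [1, 0, 1, 1], 'w_new_sparse': [0.009, 0.9, 0.5]}
```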
  • since examples of the above-described proposed methods may also be included as one of the implementation methods of the present disclosure, it is clear that they may be regarded as a kind of proposed method.
  • the above-described proposed methods may be implemented independently, but may also be implemented in the form of a combination (or merge) of some of the proposed methods.
  • Rules can be defined so that the base station informs the terminal of whether the proposed methods are applied (or information on the rules of the proposed methods) through a predefined signal (e.g., a physical layer signal or a higher layer signal).
  • Embodiments of the present disclosure may be applied to various wireless access systems.
  • examples of the various radio access systems include the 3rd Generation Partnership Project (3GPP) and 3GPP2 systems.
  • Embodiments of the present disclosure may be applied not only to the various radio access systems, but also to all technical fields to which the various radio access systems are applied. Furthermore, the proposed method can be applied to mmWave and THzWave communication systems using very high frequency bands.
  • embodiments of the present disclosure may be applied to various applications such as self-driving vehicles and drones.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention relates to a method for operating a terminal and a base station in a wireless communication system, and to a device supporting said method. According to an example of the present invention, a method for operating a terminal in a wireless communication system may comprise the steps of: receiving, from a server, information related to an initial network model; constructing a dense network on the basis of the initial network model; changing at least one weight of at least one connection by training the dense network; and transmitting, to the server, information related to a weight change for at least one connection selected on the basis of the change in said weight.
PCT/KR2020/008203 2020-06-23 2020-06-23 Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil WO2021261611A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/008203 WO2021261611A1 (fr) 2020-06-23 2020-06-23 Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/008203 WO2021261611A1 (fr) 2020-06-23 2020-06-23 Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil

Publications (1)

Publication Number Publication Date
WO2021261611A1 true WO2021261611A1 (fr) 2021-12-30

Family

ID=79281390

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/008203 WO2021261611A1 (fr) 2020-06-23 2020-06-23 Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil

Country Status (1)

Country Link
WO (1) WO2021261611A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220060235A1 (en) * 2020-08-18 2022-02-24 Qualcomm Incorporated Federated learning for client-specific neural network parameter generation for wireless communication

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110610242A (zh) * 2019-09-02 2019-12-24 深圳前海微众银行股份有限公司 一种联邦学习中参与者权重的设置方法及装置
US10657461B2 (en) * 2016-09-26 2020-05-19 Google Llc Communication efficient federated learning
WO2020115273A1 (fr) * 2018-12-07 2020-06-11 Telefonaktiebolaget Lm Ericsson (Publ) Prédiction de performances de communication d'un réseau à l'aide d'un apprentissage fédéré

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10657461B2 (en) * 2016-09-26 2020-05-19 Google Llc Communication efficient federated learning
WO2020115273A1 (fr) * 2018-12-07 2020-06-11 Telefonaktiebolaget Lm Ericsson (Publ) Prédiction de performances de communication d'un réseau à l'aide d'un apprentissage fédéré
CN110610242A (zh) * 2019-09-02 2019-12-24 深圳前海微众银行股份有限公司 一种联邦学习中参与者权重的设置方法及装置

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FELIX SATTLER; SIMON WIEDEMANN; KLAUS-ROBERT M\"ULLER; WOJCIECH SAMEK: "Robust and Communication-Efficient Federated Learning from Non-IID Data", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 7 March 2019 (2019-03-07), 201 Olin Library Cornell University Ithaca, NY 14853 , XP081130568 *
YUANG JIANG; SHIQIANG WANG; BONG JUN KO; WEI-HAN LEE; LEANDROS TASSIULAS: "Model Pruning Enables Efficient Federated Learning on Edge Devices", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 26 September 2019 (2019-09-26), 201 Olin Library Cornell University Ithaca, NY 14853 , XP081483900 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220060235A1 (en) * 2020-08-18 2022-02-24 Qualcomm Incorporated Federated learning for client-specific neural network parameter generation for wireless communication
US11909482B2 (en) * 2020-08-18 2024-02-20 Qualcomm Incorporated Federated learning for client-specific neural network parameter generation for wireless communication

Similar Documents

Publication Publication Date Title
WO2021112360A1 (fr) Procédé et dispositif d'estimation de canal dans un système de communication sans fil
WO2022050432A1 (fr) Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2021256584A1 (fr) Procédé d'émission ou de réception de données dans un système de communication sans fil et appareil associé
WO2022075493A1 (fr) Procédé de réalisation d'un apprentissage par renforcement par un dispositif de communication v2x dans un système de conduite autonome
WO2022014732A1 (fr) Procédé et appareil d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2022054981A1 (fr) Procédé et dispositif d'exécution d'apprentissage fédéré par compression
WO2022054985A1 (fr) Procédé et appareil d'émission et de réception de signaux par un équipement utilisateur, et station de base dans un système de communication sans fil
WO2021251523A1 (fr) Procédé et dispositif permettant à un ue et à une station de base d'émettre et de recevoir un signal dans un système de communication sans fil
WO2022025321A1 (fr) Procédé et dispositif de randomisation de signal d'un appareil de communication
WO2022019352A1 (fr) Procédé et appareil de transmission et de réception de signal pour un terminal et une station de base dans un système de communication sans fil
WO2022045399A1 (fr) Procédé d'apprentissage fédéré basé sur une transmission de poids sélective et terminal associé
WO2022014751A1 (fr) Procédé et appareil de génération de mots uniques pour estimation de canal dans le domaine fréquentiel dans un système de communication sans fil
WO2021251511A1 (fr) Procédé d'émission/réception de signal de liaison montante de bande de fréquences haute dans un système de communication sans fil, et dispositif associé
WO2021261611A1 (fr) Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2022050528A1 (fr) Procédé et appareil pour l'exécution d'une resélection de cellule dans un système de communications sans fil
WO2022080530A1 (fr) Procédé et dispositif pour émettre et recevoir des signaux en utilisant de multiples antennes dans un système de communication sans fil
WO2022097774A1 (fr) Procédé et dispositif pour la réalisation d'une rétroaction par un terminal et une station de base dans un système de communication sans fil
WO2022050434A1 (fr) Procédé et appareil pour effectuer un transfert intercellulaire dans système de communication sans fil
WO2022045402A1 (fr) Procédé et dispositif permettant à un terminal et une station de base d'émettre et recevoir un signal dans un système de communication sans fil
WO2022014735A1 (fr) Procédé et dispositif permettant à un terminal et une station de base de transmettre et recevoir des signaux dans un système de communication sans fil
WO2021256585A1 (fr) Procédé et dispositif pour la transmission/la réception d'un signal dans un système de communication sans fil
WO2022054980A1 (fr) Procédé de codage et structure de codeur de réseau neuronal utilisables dans un système de communication sans fil
WO2022092353A1 (fr) Procédé et appareil permettant d'effectuer un codage et un décodage de canal dans un système de communication sans fil
WO2022014731A1 (fr) Procédé et dispositif de planification pour apprentissage fédéré basé sur aircomp
WO2022004927A1 (fr) Procédé d'émission ou de réception de signal avec un codeur automatique dans un système de communication sans fil et appareil associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20942414

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20942414

Country of ref document: EP

Kind code of ref document: A1