WO2022045377A1 - Method by which terminal and base station transmit and receive signals in wireless communication system, and apparatus - Google Patents

Method by which terminal and base station transmit and receive signals in wireless communication system, and apparatus

Info

Publication number
WO2022045377A1
Authority
WO
WIPO (PCT)
Prior art keywords
base station
terminals
mcs
compression
data
Prior art date
Application number
PCT/KR2020/011234
Other languages
English (en)
Korean (ko)
Inventor
오재기
김성진
김일환
박재용
Original Assignee
엘지전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사 (LG Electronics Inc.)
Priority to KR1020227034789A (KR20230056622A)
Priority to PCT/KR2020/011234 (WO2022045377A1)
Publication of WO2022045377A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/20: Arrangements for detecting or preventing errors in the information received using signal quality detector
    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 7/00: Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M 7/30: Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B 17/00: Monitoring; Testing
    • H04B 17/30: Monitoring; Testing of propagation channels
    • H04B 17/309: Measuring or estimating channel quality parameters
    • H04B 17/336: Signal-to-interference ratio [SIR] or carrier-to-interference ratio [CIR]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/0001: Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L 1/0002: Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission rate
    • H04L 1/0003: Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission rate by switching between different modulation schemes

Definitions

  • the following description relates to a wireless communication system, and relates to a method and apparatus for transmitting and receiving signals between a terminal and a base station in a wireless communication system.
  • a wireless access system is a multiple access system that can support communication with multiple users by sharing available system resources (bandwidth, transmission power, etc.).
  • Examples of the multiple access system include a code division multiple access (CDMA) system, a frequency division multiple access (FDMA) system, a time division multiple access (TDMA) system, an orthogonal frequency division multiple access (OFDMA) system, and a single carrier frequency division multiple access (SC-FDMA) system.
  • CDMA code division multiple access
  • FDMA frequency division multiple access
  • TDMA time division multiple access
  • OFDMA orthogonal frequency division multiple access
  • SC-FDMA single carrier frequency division multiple access
  • an enhanced mobile broadband (eMBB) communication technology has been proposed compared to the existing radio access technology (RAT).
  • eMBB enhanced mobile broadband
  • RAT radio access technology
  • MTC Massive Machine Type Communications
  • the present disclosure relates to a method for transmitting and receiving signals between a terminal and a base station in a wireless communication system.
  • the present disclosure relates to a method for a terminal and a base station to determine a compression ratio in order to perform communication based on federated learning in a wireless communication system.
  • the present disclosure may provide a method of operating a base station in a wireless communication system.
  • the base station operating method includes transmitting a first global parameter to a plurality of terminals, receiving a reference signal from each of the plurality of terminals, measuring a signal-to-noise ratio (SNR) based on each received reference signal, determining a compression rate and a modulation coding scheme (MCS) based on the measured SNR, indicating information on the determined compression rate and MCS to each of the plurality of terminals, receiving data based on the determined compression rate and MCS from the plurality of terminals, and updating the first global parameter to a second global parameter based on the data received from each of the plurality of terminals.
  • SNR signal noise ratio
  • MCS modulation coding scheme
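  • The operating method above amounts to a per-round control loop at the base station. The short sketch below only illustrates the order of the claimed steps; the helper functions (measure_snr, select_mcs_and_compression, aggregate) and the simulated values are hypothetical stand-ins, not the implementation of the disclosure.

```python
import random

def measure_snr(reference_signal: float) -> float:
    """Stand-in SNR measurement; a real base station estimates the SNR from the reference signal."""
    return reference_signal

def select_mcs_and_compression(snr_db: float) -> tuple:
    """Stand-in mapping from measured SNR to (MCS index, compression rate)."""
    mcs = min(27, max(0, int(snr_db)))      # coarse placeholder for an MCS table lookup
    compression_rate = 0.5 if snr_db < 10 else 0.8
    return mcs, compression_rate

def aggregate(global_param: float, updates: list) -> float:
    """Stand-in aggregation of the terminals' local updates (simple average)."""
    return sum(updates) / len(updates)

def federated_round(global_param: float, num_terminals: int = 4) -> float:
    # Step 1: transmit the first global parameter to the plurality of terminals (radio part omitted).
    # Steps 2-3: receive a reference signal from each terminal and measure its SNR.
    snrs = [measure_snr(random.uniform(0.0, 30.0)) for _ in range(num_terminals)]
    updates = []
    for snr in snrs:
        # Steps 4-5: determine compression rate and MCS from the SNR and indicate them to the terminal.
        mcs, rate = select_mcs_and_compression(snr)
        # Step 6: receive data compressed/modulated according to the indicated rate and MCS
        # (stand-in: a noisy local update around the current global parameter).
        updates.append(global_param + rate * random.gauss(0.0, 1.0))
    # Step 7: update the first global parameter to the second global parameter.
    return aggregate(global_param, updates)

second_global_param = federated_round(global_param=0.0)
print(second_global_param)
```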
  • a base station operating in a wireless communication system may include at least one transmitter, at least one receiver, and at least one processor.
  • the at least one memory is operably connected to the at least one processor and stores instructions that, when executed, cause the at least one processor to perform a specific operation. The specific operation includes transmitting a first global parameter to a plurality of terminals, receiving a reference signal from each of the plurality of terminals, measuring a signal-to-noise ratio (SNR) based on each received reference signal, and determining a compression rate and a modulation coding scheme (MCS) based on the measured SNR, and the first global parameter may be updated to a second global parameter based on data received from each of the plurality of terminals.
  • SNR signal noise ratio
  • MCS Modulation Coding Scheme
  • the base station may determine the MCS from the measured SNR based on a preset MCS table.
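  • For the table-based alternative, the mapping can be as simple as a threshold lookup. The sketch below uses purely illustrative SNR thresholds and MCS indices (they are not taken from the disclosure or from any 3GPP MCS table) and only shows the lookup structure.

```python
# Hypothetical SNR-to-MCS table: (minimum SNR in dB, MCS index). Values are illustrative only.
MCS_TABLE = [(-5.0, 0), (0.0, 4), (5.0, 9), (10.0, 14), (15.0, 19), (20.0, 24)]

def mcs_from_snr(snr_db: float) -> int:
    """Return the highest MCS index whose SNR threshold the measured SNR satisfies."""
    selected = MCS_TABLE[0][1]
    for threshold, mcs in MCS_TABLE:
        if snr_db >= threshold:
            selected = mcs
    return selected

assert mcs_from_snr(12.3) == 14   # 12.3 dB clears the 10 dB threshold but not the 15 dB one
```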
  • the base station may determine the MCS from the measured SNR based on reinforcement learning. In the reinforcement learning, a reward that takes spectral efficiency into consideration may be used, the measured SNR may be used as an input value, and the MCS may be derived as an output value based on the input value.
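  • One way to realize this reinforcement-learning variant is a simple bandit-style learner whose state is the quantized measured SNR, whose action is the MCS index, and whose reward is the achieved spectral efficiency. The epsilon-greedy sketch below is written under those assumptions and is not the specific learning algorithm of the disclosure.

```python
import random
from collections import defaultdict

class MCSAgent:
    """Epsilon-greedy learner: state = quantized SNR bucket, action = MCS index,
    reward = observed spectral efficiency (assumed to be reported after each transmission)."""

    def __init__(self, num_mcs: int = 28, epsilon: float = 0.1, lr: float = 0.1):
        self.q = defaultdict(lambda: [0.0] * num_mcs)   # Q-values per SNR bucket
        self.num_mcs, self.epsilon, self.lr = num_mcs, epsilon, lr

    def select_mcs(self, snr_db: float) -> int:
        state = int(snr_db)                  # quantize the measured SNR into an integer bucket
        if random.random() < self.epsilon:   # explore
            return random.randrange(self.num_mcs)
        values = self.q[state]               # exploit the best MCS seen so far for this SNR
        return values.index(max(values))

    def update(self, snr_db: float, mcs: int, spectral_efficiency: float) -> None:
        state = int(snr_db)
        # Move the estimate toward the observed reward (spectral efficiency).
        self.q[state][mcs] += self.lr * (spectral_efficiency - self.q[state][mcs])
```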
  • the base station may determine the MCS through the measured SNR, and may determine the compression rate based on at least one of the determined MCS, the original data size (Data Size, DS), the terminal capability, and the base station capability.
  • the base station may determine the compression rate at which the transmission delay is minimized based on a fully connected layer method.
  • the base station may determine the compression rate at which the transmission delay is minimized based on reinforcement learning. In the reinforcement learning, a reward that takes the delay into consideration may be used, the determined MCS, the original data size (Data Size, DS), the terminal capability, and the base station capability may be used as input values, and the compression rate may be determined based on the input values.
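  • The delay-minimizing variant can be sketched the same way: the input (state) combines the determined MCS, the original data size, and the terminal and base station capabilities, and the reward penalizes the resulting delay. The code below is a hedged illustration assuming a discretized state and a fixed candidate set of compression rates; the reward is simply the negative of the measured delay.

```python
import random
from collections import defaultdict

# Candidate compression rates the base station may indicate to a terminal (illustrative values).
RATES = [0.1, 0.25, 0.5, 0.75, 1.0]

class CompressionAgent:
    """State = (MCS, data-size bucket, UE capability, BS capability); action = compression rate;
    reward = negative delay, so minimizing delay maximizes the reward."""

    def __init__(self, epsilon: float = 0.1, lr: float = 0.1):
        self.q = defaultdict(lambda: [0.0] * len(RATES))
        self.epsilon, self.lr = epsilon, lr

    def _state(self, mcs: int, data_size: int, ue_cap: int, bs_cap: int) -> tuple:
        return (mcs, data_size // 1000, ue_cap, bs_cap)   # coarse bucketing of the data size

    def select_rate(self, mcs: int, data_size: int, ue_cap: int, bs_cap: int) -> float:
        if random.random() < self.epsilon:                 # explore
            return random.choice(RATES)
        values = self.q[self._state(mcs, data_size, ue_cap, bs_cap)]
        return RATES[values.index(max(values))]            # exploit

    def update(self, mcs: int, data_size: int, ue_cap: int, bs_cap: int,
               rate: float, delay: float) -> None:
        state, idx = self._state(mcs, data_size, ue_cap, bs_cap), RATES.index(rate)
        # Reward is the negative of the measured delay (compression time + transmission time).
        self.q[state][idx] += self.lr * (-delay - self.q[state][idx])
```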
  • the method may further include transmitting a message requesting terminal capability information to the plurality of terminals and receiving the terminal capability information from each of the plurality of terminals.
  • the base station may determine the MCS through the measured SNR, and determine the compression ratio based on the number of the plurality of terminals.
  • the base station may determine the compression rate at which the transmission capacity is minimized based on reinforcement learning. In the reinforcement learning, a reward that takes a target compression loss rate into consideration may be used, the number of the plurality of terminals may be used as an input value, and the compression rate may be determined based on the input value.
  • the target compression loss ratio may be set differently based on the number of the plurality of terminals.
  • the method may further include, by the base station, confirming the number of the plurality of terminals.
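  • For the capacity-minimizing case, the number of participating terminals is the main input and the target compression loss rate bounds how aggressively the data may be compressed. A simple non-learning sketch of that trade-off is shown below; the per-terminal-count targets and the loss model are hypothetical assumptions used only to show the selection logic.

```python
# Hypothetical target compression loss rates keyed by the number of terminals; the assumption
# here is that more participating terminals tolerate a larger loss so that the total uplink
# capacity stays bounded.
TARGET_LOSS = {2: 0.02, 8: 0.05, 32: 0.10}

def compression_loss(rate: float) -> float:
    """Placeholder loss model: stronger compression (smaller rate) loses more information."""
    return 0.1 * (1.0 - rate)

def select_rate_for_capacity(num_terminals: int,
                             candidates=(0.1, 0.25, 0.5, 0.75, 1.0)) -> float:
    # Pick the target loss configured for the closest terminal count.
    key = min(TARGET_LOSS, key=lambda k: abs(k - num_terminals))
    target = TARGET_LOSS[key]
    # Among the rates whose loss stays within the target, pick the smallest rate,
    # i.e. the one that minimizes the transmitted capacity.
    feasible = [r for r in candidates if compression_loss(r) <= target]
    return min(feasible) if feasible else max(candidates)

print(select_rate_for_capacity(num_terminals=16))   # -> 0.5 with the targets above
```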
  • when the base station determines the transmission method based on pre-stored information on the plurality of terminals, the base station may calculate a threshold number of terminals and determine the transmission method based on the calculated threshold number of terminals.
  • the terminal may transmit a signal in consideration of a federated learning method.
  • the terminal can flexibly set the transmission method in consideration of the wireless environment.
  • the terminal and the base station may determine a modulation coding scheme (MCS) and a compression rate based on federated learning.
  • MCS modulation coding scheme
  • FIG. 1 is a diagram illustrating an example of a communication system applicable to the present disclosure.
  • FIG. 2 is a diagram illustrating an example of a wireless device applicable to the present disclosure.
  • FIG. 3 is a diagram illustrating another example of a wireless device applicable to the present disclosure.
  • FIG. 4 is a diagram illustrating an example of a portable device applicable to the present disclosure.
  • FIG. 5 is a diagram illustrating an example of a vehicle or autonomous driving vehicle applicable to the present disclosure.
  • FIG. 6 is a view showing an example of a movable body applicable to the present disclosure.
  • FIG. 7 is a diagram illustrating an example of an XR device applicable to the present disclosure.
  • FIG. 8 is a view showing an example of a robot applicable to the present disclosure.
  • AI Artificial Intelligence
  • FIG. 10 is a diagram illustrating physical channels applicable to the present disclosure and a signal transmission method using the same.
  • FIG. 11 is a diagram illustrating a control plane and a user plane structure of a radio interface protocol applicable to the present disclosure.
  • FIG. 12 is a diagram illustrating a method of processing a transmission signal applicable to the present disclosure.
  • FIG. 13 is a diagram illustrating a structure of a radio frame applicable to the present disclosure.
  • FIG. 14 is a diagram illustrating a slot structure applicable to the present disclosure.
  • FIG. 15 is a diagram illustrating an example of a communication structure that can be provided in a 6G system applicable to the present disclosure.
  • FIG. 16 is a diagram illustrating an electromagnetic spectrum applicable to the present disclosure.
  • FIG. 17 is a diagram illustrating a THz communication method applicable to the present disclosure.
  • FIG. 18 is a diagram illustrating a THz wireless communication transceiver applicable to the present disclosure.
  • FIG. 19 is a diagram illustrating a method for generating a THz signal applicable to the present disclosure.
  • FIG. 20 is a diagram illustrating a wireless communication transceiver applicable to the present disclosure.
  • FIG. 21 is a diagram illustrating a structure of a transmitter applicable to the present disclosure.
  • FIG. 22 is a diagram illustrating a modulator structure applicable to the present disclosure.
  • FIG. 23 is a diagram illustrating a neural network applicable to the present disclosure.
  • FIG. 24 is a diagram illustrating an activation node in a neural network applicable to the present disclosure.
  • FIG. 25 is a diagram illustrating a method of calculating a gradient using a chain rule applicable to the present disclosure.
  • FIG. 26 is a diagram illustrating a learning model based on RNN applicable to the present disclosure.
  • FIG. 27 is a view showing an autoencoder applicable to the present disclosure.
  • FIG. 28 is a diagram illustrating a federated learning method based on a compression rate applicable to the present disclosure.
  • FIG. 29 is a diagram illustrating a processing time and a transmission time according to a compression rate applicable to the present disclosure.
  • FIG. 30 is a diagram illustrating a method of determining a compression rate and MCS for low-latency federated learning applicable to the present disclosure.
  • FIG. 31 is a flowchart for a method of determining a compression ratio and MCS for low-latency federated learning applicable to the present disclosure.
  • AMC adaptive modulation and coding
  • FIG. 33 is a diagram illustrating a method of predicting a compression rate based on a fully connected layer in order to minimize transmission delay applicable to the present disclosure.
  • FIG. 34 is a diagram illustrating a method of controlling a compression rate to minimize transmission delay applicable to the present disclosure.
  • FIG. 35 is a diagram illustrating a flow for a method of controlling a compression rate and MCS to minimize transmission delay applicable to the present disclosure.
  • FIG. 36 is a diagram illustrating a method of controlling a compression rate to minimize a transmission capacity applicable to the present disclosure.
  • FIG. 37 is a diagram illustrating a flow for a method of controlling a compression rate and MCS in order to minimize a transmission capacity applicable to the present disclosure.
  • FIG. 38 is a diagram illustrating a method of operating a base station applicable to the present disclosure.
  • each component or feature may be considered optional unless explicitly stated otherwise.
  • Each component or feature may be implemented in a form that is not combined with other components or features.
  • some components and/or features may be combined to configure an embodiment of the present disclosure.
  • the order of operations described in embodiments of the present disclosure may be changed. Some configurations or features of one embodiment may be included in other embodiments, or may be replaced with corresponding configurations or features of other embodiments.
  • the base station refers to a terminal node of the network that communicates directly with the mobile station.
  • a specific operation described as being performed by the base station in this document may be performed by an upper node of the base station in some cases.
  • the term 'base station' may be replaced by terms such as a fixed station, a Node B, an eNB (eNode B), a gNB (gNode B), an ng-eNB, an advanced base station (ABS), or an access point.
  • the term 'terminal' may be replaced by terms such as a user equipment (UE), a mobile station (MS), a subscriber station (SS), a mobile subscriber station (MSS), a mobile terminal, or an advanced mobile station (AMS).
  • UE user equipment
  • MS mobile station
  • SS subscriber station
  • MSS mobile subscriber station
  • AMS advanced mobile station
  • a transmitting end refers to a fixed and/or mobile node that provides a data service or a voice service
  • a receiving end refers to a fixed and/or mobile node that receives a data service or a voice service.
  • In the uplink, the mobile station may be the transmitting end, and the base station may be the receiving end.
  • In the downlink, the mobile station may be the receiving end, and the base station may be the transmitting end.
  • Embodiments of the present disclosure may be supported by standard documents disclosed in at least one of the IEEE 802.xx system, the 3rd Generation Partnership Project (3GPP) system, the 3GPP Long Term Evolution (LTE) system, the 3GPP 5G (5th generation) NR (New Radio) system, and the 3GPP2 system, which are wireless access systems. In particular, embodiments of the present disclosure may be supported by the 3GPP TS (technical specification) 38.211, 3GPP TS 38.212, 3GPP TS 38.213, 3GPP TS 38.321, and 3GPP TS 38.331 documents.
  • embodiments of the present disclosure may be applied to other wireless access systems, and are not limited to the above-described system. As an example, it may be applicable to a system applied after the 3GPP 5G NR system, and is not limited to a specific system.
  • LTE refers to technology after 3GPP TS 36.xxx Release 8. LTE technology after 3GPP TS 36.xxx Release 10 may be referred to as LTE-A, and LTE technology after 3GPP TS 36.xxx Release 13 may be referred to as LTE-A pro.
  • 3GPP NR may mean technology after TS 38.xxx Release 15, and 3GPP 6G may mean technology after TS Release 17 and/or Release 18.
  • "xxx" means a standard document detail number. LTE/NR/6G may be collectively referred to as a 3GPP system.
  • a communication system 100 applied to the present disclosure includes a wireless device, a base station, and a network.
  • the wireless device means a device that performs communication using a wireless access technology (eg, 5G NR, LTE), and may be referred to as a communication/wireless/5G device.
  • the wireless device may include a robot 100a, vehicles 100b-1 and 100b-2, an extended reality (XR) device 100c, a hand-held device 100d, a home appliance 100e, an Internet of Things (IoT) device 100f, and an artificial intelligence (AI) device/server 100g.
  • a wireless access technology eg, 5G NR, LTE
  • XR extended reality
  • AI artificial intelligence
  • the vehicle may include a vehicle equipped with a wireless communication function, an autonomous driving vehicle, a vehicle capable of performing inter-vehicle communication, and the like.
  • the vehicles 100b-1 and 100b-2 may include an unmanned aerial vehicle (UAV) (eg, a drone).
  • UAV unmanned aerial vehicle
  • the XR device 100c includes augmented reality (AR)/virtual reality (VR)/mixed reality (MR) devices, and may be implemented in the form of a head-mounted device (HMD), a head-up display (HUD) provided in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a robot, and the like.
  • the portable device 100d may include a smart phone, a smart pad, a wearable device (eg, smart watch, smart glasses), and a computer (eg, a laptop computer).
  • the home appliance 100e may include a TV, a refrigerator, a washing machine, and the like.
  • the IoT device 100f may include a sensor, a smart meter, and the like.
  • the base station 120 and the network 130 may be implemented as a wireless device, and a specific wireless device 120a may operate as a base station/network node to other wireless devices.
  • the wireless devices 100a to 100f may be connected to the network 130 through the base station 120 .
  • AI technology may be applied to the wireless devices 100a to 100f , and the wireless devices 100a to 100f may be connected to the AI server 100g through the network 130 .
  • the network 130 may be configured using a 3G network, a 4G (eg, LTE) network, or a 5G (eg, NR) network.
  • the wireless devices 100a to 100f may communicate with each other through the base station 120/network 130, but may also communicate directly (eg, sidelink communication) without going through the base station 120/network 130.
  • the vehicles 100b-1 and 100b-2 may perform direct communication (eg, vehicle to vehicle (V2V)/vehicle to everything (V2X) communication).
  • the IoT device 100f eg, a sensor
  • Wireless communication/connections 150a, 150b, and 150c may be performed between the wireless devices 100a to 100f and the base station 120, between the wireless devices 100a to 100f, and between the base station 120 and the base station 120.
  • wireless communication/connection includes uplink/downlink communication 150a and sidelink communication 150b (or D2D communication), and communication between base stations 150c (eg, relay, integrated access backhaul (IAB)). This may be achieved through radio access technology (eg, 5G NR).
  • IAB integrated access backhaul
  • the wireless device and the base station/wireless device, and the base station and the base station may transmit/receive wireless signals to each other.
  • the wireless communication/connection 150a , 150b , 150c may transmit/receive signals through various physical channels.
  • For transmission/reception of wireless signals, at least a part of various configuration information setting processes, various signal processing processes (eg, channel encoding/decoding, modulation/demodulation, resource mapping/demapping, etc.), and resource allocation processes may be performed.
  • signal processing processes eg, channel encoding/decoding, modulation/demodulation, resource mapping/demapping, etc.
  • FIG. 2 is a diagram illustrating an example of a wireless device applicable to the present disclosure.
  • a first wireless device 200a and a second wireless device 200b may transmit/receive wireless signals through various wireless access technologies (eg, LTE, NR).
  • {first wireless device 200a, second wireless device 200b} may correspond to {wireless device 100x, base station 120} and/or {wireless device 100x, wireless device 100x} of FIG. 1.
  • the first wireless device 200a includes one or more processors 202a and one or more memories 204a, and may further include one or more transceivers 206a and/or one or more antennas 208a.
  • the processor 202a controls the memory 204a and/or the transceiver 206a and may be configured to implement the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
  • the processor 202a may process information in the memory 204a to generate first information/signal, and then transmit a wireless signal including the first information/signal through the transceiver 206a.
  • the processor 202a may receive the radio signal including the second information/signal through the transceiver 206a, and then store the information obtained from the signal processing of the second information/signal in the memory 204a.
  • the memory 204a may be connected to the processor 202a and may store various information related to the operation of the processor 202a.
  • the memory 204a may store software code including instructions for performing some or all of the processes controlled by the processor 202a, or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
  • the processor 202a and the memory 204a may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • a wireless communication technology eg, LTE, NR
  • the transceiver 206a may be coupled to the processor 202a and may transmit and/or receive wireless signals via one or more antennas 208a.
  • the transceiver 206a may include a transmitter and/or a receiver.
  • the transceiver 206a may be used interchangeably with a radio frequency (RF) unit.
  • RF radio frequency
  • a wireless device may refer to a communication modem/circuit/chip.
  • the second wireless device 200b includes one or more processors 202b, one or more memories 204b, and may further include one or more transceivers 206b and/or one or more antennas 208b.
  • the processor 202b controls the memory 204b and/or the transceiver 206b and may be configured to implement the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
  • the processor 202b may process information in the memory 204b to generate third information/signal, and then transmit a wireless signal including the third information/signal through the transceiver 206b.
  • the processor 202b may receive the radio signal including the fourth information/signal through the transceiver 206b, and then store information obtained from signal processing of the fourth information/signal in the memory 204b.
  • the memory 204b may be connected to the processor 202b and may store various information related to the operation of the processor 202b.
  • the memory 204b may store software code including instructions for performing some or all of the processes controlled by the processor 202b, or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
  • the processor 202b and the memory 204b may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • a wireless communication technology eg, LTE, NR
  • the transceiver 206b may be coupled to the processor 202b and may transmit and/or receive wireless signals via one or more antennas 208b.
  • Transceiver 206b may include a transmitter and/or receiver.
  • Transceiver 206b may be used interchangeably with an RF unit.
  • a wireless device may refer to a communication modem/circuit/chip.
  • one or more protocol layers may be implemented by one or more processors 202a, 202b.
  • one or more processors 202a, 202b may implement one or more layers (eg, functional layers such as PHY (physical), MAC (media access control), RLC (radio link control), PDCP (packet data convergence protocol), RRC (radio resource control), and SDAP (service data adaptation protocol)).
  • layers eg, PHY (physical), MAC (media access control), RLC (radio link control), PDCP (packet data convergence protocol), RRC (radio resource) control
  • SDAP service data adaptation protocol
  • The one or more processors 202a, 202b may generate one or more protocol data units (PDUs) and/or one or more service data units (SDUs) according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein. The one or more processors 202a, 202b may generate messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or flow charts disclosed herein. The one or more processors 202a, 202b may generate a signal (eg, a baseband signal) including a PDU, SDU, message, control information, data, or information according to the functions, procedures, proposals, and/or methods disclosed herein.
  • a signal eg, a baseband signal
  • The one or more processors 202a, 202b may receive signals (eg, baseband signals) from the one or more transceivers 206a, 206b, and may acquire PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
  • One or more processors 202a, 202b may be referred to as a controller, microcontroller, microprocessor, or microcomputer.
  • One or more processors 202a, 202b may be implemented by hardware, firmware, software, or a combination thereof.
  • ASICs application specific integrated circuits
  • DSPs digital signal processors
  • DSPDs digital signal processing devices
  • PLDs programmable logic devices
  • FPGAs field programmable gate arrays
  • the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein may be implemented using firmware or software, and the firmware or software may be implemented to include modules, procedures, functions, and the like.
  • Firmware or software configured to perform the descriptions, functions, procedures, proposals, methods, and/or flow charts disclosed in this document may be included in one or more processors 202a, 202b, or may be stored in one or more memories 204a, 204b and driven by the one or more processors 202a, 202b.
  • the descriptions, functions, procedures, proposals, methods, and/or flowcharts of operations disclosed herein may be implemented using firmware or software in the form of code, instructions, and/or a set of instructions.
  • One or more memories 204a, 204b may be coupled to one or more processors 202a, 202b and may store various types of data, signals, messages, information, programs, codes, instructions, and/or commands.
  • One or more memories 204a, 204b may be composed of read only memory (ROM), random access memory (RAM), erasable programmable read only memory (EPROM), flash memory, hard drives, registers, cache memory, computer-readable storage media, and/or a combination thereof.
  • One or more memories 204a, 204b may be located inside and/or external to one or more processors 202a, 202b. Additionally, one or more memories 204a, 204b may be coupled to one or more processors 202a, 202b through various technologies, such as wired or wireless connections.
  • the one or more transceivers 206a, 206b may transmit user data, control information, radio signals/channels, etc. referred to in the methods and/or operational flowcharts of this document to one or more other devices.
  • the one or more transceivers 206a, 206b may receive user data, control information, radio signals/channels, etc. referred to in the descriptions, functions, procedures, suggestions, methods and/or flow charts, etc. disclosed herein, from one or more other devices. there is.
  • one or more transceivers 206a , 206b may be coupled to one or more processors 202a , 202b and may transmit and receive wireless signals.
  • one or more processors 202a, 202b may control one or more transceivers 206a, 206b to transmit user data, control information, or wireless signals to one or more other devices. Additionally, one or more processors 202a, 202b may control one or more transceivers 206a, 206b to receive user data, control information, or wireless signals from one or more other devices. Further, one or more transceivers 206a, 206b may be coupled with one or more antennas 208a, 208b, and may be configured to transmit and receive user data, control information, radio signals/channels, etc. via the one or more antennas 208a, 208b.
  • one or more antennas may be a plurality of physical antennas or a plurality of logical antennas (eg, antenna ports).
  • the one or more transceivers 206a, 206b may convert received radio signals/channels, etc. from RF band signals into baseband signals in order to process the received user data, control information, radio signals/channels, etc. using the one or more processors 202a, 202b.
  • One or more transceivers 206a, 206b may convert user data, control information, radio signals/channels, etc. processed using one or more processors 202a, 202b from baseband signals to RF band signals.
  • one or more transceivers 206a, 206b may include (analog) oscillators and/or filters.
  • FIG. 3 is a diagram illustrating another example of a wireless device applied to the present disclosure.
  • a wireless device 300 corresponds to the wireless devices 200a and 200b of FIG. 2 and may be composed of various elements, components, units, and/or modules.
  • the wireless device 300 may include a communication unit 310 , a control unit 320 , a memory unit 330 , and an additional element 340 .
  • the communication unit may include communication circuitry 312 and transceiver(s) 314 .
  • communication circuitry 312 may include one or more processors 202a, 202b and/or one or more memories 204a, 204b of FIG. 2 .
  • the transceiver(s) 314 may include one or more transceivers 206a , 206b and/or one or more antennas 208a , 208b of FIG. 2 .
  • the control unit 320 is electrically connected to the communication unit 310 , the memory unit 330 , and the additional element 340 and controls general operations of the wireless device.
  • the controller 320 may control the electrical/mechanical operation of the wireless device based on the program/code/command/information stored in the memory unit 330 .
  • control unit 320 transmits the information stored in the memory unit 330 to the outside (eg, another communication device) through the communication unit 310 through a wireless/wired interface, or externally (eg, through the communication unit 310) Information received through a wireless/wired interface from another communication device) may be stored in the memory unit 330 .
  • the additional element 340 may be configured in various ways according to the type of the wireless device.
  • the additional element 340 may include at least one of a power unit/battery, an input/output unit, a driving unit, and a computing unit.
  • the wireless device 300 may be implemented in the form of a robot (FIG. 1, 100a), vehicles (FIG. 1, 100b-1 and 100b-2), an XR device (FIG. 1, 100c), a mobile device (FIG. 1, 100d), a home appliance (FIG. 1, 100e), an IoT device (FIG. 1, 100f), and the like.
  • the wireless device may be mobile or used in a fixed location depending on the use-example/service.
  • various elements, components, units/units, and/or modules in the wireless device 300 may be all interconnected through a wired interface, or at least some may be wirelessly connected through the communication unit 310 .
  • the control unit 320 and the communication unit 310 are connected by wire, and the control unit 320 and the first unit (eg, 130 , 140 ) are connected wirelessly through the communication unit 310 .
  • each element, component, unit/unit, and/or module within the wireless device 300 may further include one or more elements.
  • the controller 320 may include one or more processor sets.
  • control unit 320 may be configured as a set of a communication control processor, an application processor, an electronic control unit (ECU), a graphic processing processor, a memory control processor, and the like.
  • memory unit 330 may be composed of RAM, dynamic RAM (DRAM), ROM, flash memory, volatile memory, non-volatile memory, and/or a combination thereof.
  • FIG. 4 is a diagram illustrating an example of a mobile device applied to the present disclosure.
  • the portable device may include a smart phone, a smart pad, a wearable device (eg, a smart watch, smart glasses), and a portable computer (eg, a laptop computer).
  • the mobile device may be referred to as a mobile station (MS), a user terminal (UT), a mobile subscriber station (MSS), a subscriber station (SS), an advanced mobile station (AMS), or a wireless terminal (WT).
  • MS mobile station
  • UT user terminal
  • MSS mobile subscriber station
  • SS subscriber station
  • AMS advanced mobile station
  • WT wireless terminal
  • the mobile device 400 includes an antenna unit 408 , a communication unit 410 , a control unit 420 , a memory unit 430 , a power supply unit 440a , an interface unit 440b , and an input/output unit 440c .
  • the antenna unit 408 may be configured as a part of the communication unit 410 .
  • Blocks 410 to 430/440a to 440c respectively correspond to blocks 310 to 330/340 of FIG. 3 .
  • the communication unit 410 may transmit and receive signals (eg, data, control signals, etc.) with other wireless devices and base stations.
  • the controller 420 may control components of the portable device 400 to perform various operations.
  • the controller 420 may include an application processor (AP).
  • the memory unit 430 may store data/parameters/programs/codes/commands necessary for driving the portable device 400 . Also, the memory unit 430 may store input/output data/information.
  • the power supply unit 440a supplies power to the portable device 400 and may include a wired/wireless charging circuit, a battery, and the like.
  • the interface unit 440b may support a connection between the portable device 400 and other external devices.
  • the interface unit 440b may include various ports (eg, an audio input/output port and a video input/output port) for connection with an external device.
  • the input/output unit 440c may receive or output image information/signal, audio information/signal, data, and/or information input from a user.
  • the input/output unit 440c may include a camera, a microphone, a user input unit, a display unit 440d, a speaker, and/or a haptic module.
  • the input/output unit 440c may obtain information/signals (eg, touch, text, voice, image, video) input from the user, and the obtained information/signals may be stored in the memory unit 430.
  • the communication unit 410 may convert the information/signal stored in the memory into a wireless signal, and transmit the converted wireless signal directly to another wireless device or to a base station. Also, after receiving a radio signal from another radio device or base station, the communication unit 410 may restore the received radio signal to original information/signal.
  • the restored information/signal may be stored in the memory unit 430 and output in various forms (eg, text, voice, image, video, haptic) through the input/output unit 440c.
  • FIG. 5 is a diagram illustrating an example of a vehicle or autonomous driving vehicle applied to the present disclosure.
  • the vehicle or autonomous driving vehicle may be implemented as a mobile robot, a vehicle, a train, an aerial vehicle (AV), a ship, and the like, but is not limited to the shape of the vehicle.
  • AV aerial vehicle
  • the vehicle or autonomous driving vehicle 500 may include an antenna unit 508, a communication unit 510, a control unit 520, a driving unit 540a, a power supply unit 540b, a sensor unit 540c, and an autonomous driving unit 540d.
  • the antenna unit 508 may be configured as a part of the communication unit 510.
  • Blocks 510/530/540a to 540d respectively correspond to blocks 410/430/440a to 440c of FIG. 4.
  • the communication unit 510 may transmit/receive signals (eg, data, control signals, etc.) to and from external devices such as other vehicles, base stations (eg, base stations, roadside units, etc.), and servers.
  • the controller 520 may control elements of the vehicle or the autonomous driving vehicle 500 to perform various operations.
  • the controller 520 may include an electronic control unit (ECU).
  • the driving unit 540a may cause the vehicle or the autonomous driving vehicle 500 to run on the ground.
  • the driving unit 540a may include an engine, a motor, a power train, a wheel, a brake, a steering device, and the like.
  • the power supply unit 540b supplies power to the vehicle or the autonomous driving vehicle 500 , and may include a wired/wireless charging circuit, a battery, and the like.
  • the sensor unit 540c may obtain vehicle state, surrounding environment information, user information, and the like.
  • the sensor unit 540c may include an inertial measurement unit (IMU) sensor, a collision sensor, a wheel sensor, a speed sensor, an inclination sensor, a weight sensor, a heading sensor, a position module, a vehicle forward/reverse sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor, a temperature sensor, a humidity sensor, an ultrasonic sensor, an illuminance sensor, a pedal position sensor, and the like.
  • the autonomous driving unit 540d may implement a technology for maintaining a driving lane, a technology for automatically adjusting speed such as adaptive cruise control, a technology for automatically driving along a predetermined route, and a technology for automatically setting a route and driving when a destination is set.
  • the communication unit 510 may receive map data, traffic information data, and the like from an external server.
  • the autonomous driving unit 540d may generate an autonomous driving route and a driving plan based on the acquired data.
  • the controller 520 may control the driving unit 540a to move the vehicle or the autonomous driving vehicle 500 along the autonomous driving path (eg, speed/direction adjustment) according to the driving plan.
  • the communication unit 510 may obtain the latest traffic information data from an external server non/periodically, and may acquire surrounding traffic information data from surrounding vehicles.
  • the sensor unit 540c may acquire vehicle state and surrounding environment information.
  • the autonomous driving unit 540d may update the autonomous driving route and driving plan based on the newly acquired data/information.
  • the communication unit 510 may transmit information about a vehicle location, an autonomous driving route, a driving plan, and the like to an external server.
  • the external server may predict traffic information data in advance using AI technology or the like based on information collected from the vehicle or autonomous vehicles, and may provide the predicted traffic information data to the vehicle or autonomous vehicles.
  • FIG. 6 is a diagram illustrating an example of a movable body applied to the present disclosure.
  • the moving object applied to the present disclosure may be implemented as at least one of a means of transport, a train, an aircraft, and a ship.
  • the movable body applied to the present disclosure may be implemented in other forms, and is not limited to the above-described embodiment.
  • the mobile unit 600 may include a communication unit 610 , a control unit 620 , a memory unit 630 , an input/output unit 640a , and a position measurement unit 640b .
  • blocks 610 to 630/640a to 640b correspond to blocks 310 to 330/340 of FIG. 3 , respectively.
  • the communication unit 610 may transmit/receive signals (eg, data, control signals, etc.) with other mobile devices or external devices such as a base station.
  • the controller 620 may perform various operations by controlling the components of the movable body 600 .
  • the memory unit 630 may store data/parameters/programs/codes/commands supporting various functions of the mobile unit 600 .
  • the input/output unit 640a may output an AR/VR object based on information in the memory unit 630 .
  • the input/output unit 640a may include a HUD.
  • the position measuring unit 640b may acquire position information of the moving object 600 .
  • the location information may include absolute location information of the moving object 600 , location information within a driving line, acceleration information, and location information with a surrounding vehicle.
  • the position measuring unit 640b may include a GPS and various sensors.
  • the communication unit 610 of the mobile unit 600 may receive map information, traffic information, and the like from an external server and store it in the memory unit 630 .
  • the position measurement unit 640b may obtain information about the location of the moving object through GPS and various sensors and store it in the memory unit 630 .
  • the controller 620 may generate a virtual object based on map information, traffic information, and location information of a moving object, and the input/output unit 640a may display the generated virtual object on a window inside the moving object (651, 652). Also, the control unit 620 may determine whether the moving object 600 is normally operating within the driving line based on the moving object location information.
  • When the moving object 600 does not operate normally within the driving line, the control unit 620 may display a warning on the glass window of the moving object through the input/output unit 640a. Also, the control unit 620 may broadcast a warning message regarding the driving abnormality to surrounding moving objects through the communication unit 610. Depending on the situation, the control unit 620 may transmit the location information of the moving object and information on the driving/moving object abnormality to the related organization through the communication unit 610.
  • the XR device may be implemented as an HMD, a head-up display (HUD) provided in a vehicle, a television, a smart phone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a robot, and the like.
  • the XR device 700a may include a communication unit 710 , a control unit 720 , a memory unit 730 , an input/output unit 740a , a sensor unit 740b , and a power supply unit 740c .
  • blocks 710 to 730/740a to 740c may correspond to blocks 310 to 330/340 of FIG. 3 , respectively.
  • the communication unit 710 may transmit/receive signals (eg, media data, control signals, etc.) to/from external devices such as other wireless devices, portable devices, or media servers.
  • Media data may include video, images, and sound.
  • the controller 720 may perform various operations by controlling the components of the XR device 700a.
  • the controller 720 may be configured to control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation and processing.
  • the memory unit 730 may store data/parameters/programs/codes/commands necessary for driving the XR device 700a/creating an XR object.
  • the input/output unit 740a may obtain control information, data, etc. from the outside, and may output the generated XR object.
  • the input/output unit 740a may include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module.
  • the sensor unit 740b may obtain an XR device state, surrounding environment information, user information, and the like.
  • the sensor unit 740b may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, a red green blue (RGB) sensor, an infrared (IR) sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
  • the power supply unit 740c supplies power to the XR device 700a, and may include a wired/wireless charging circuit, a battery, and the like.
  • the memory unit 730 of the XR device 700a may include information (eg, data, etc.) necessary for generating an XR object (eg, AR/VR/MR object).
  • the input/output unit 740a may obtain a command to operate the XR device 700a from the user, and the controller 720 may drive the XR device 700a according to the user's driving command. For example, when the user intends to watch a movie or news through the XR device 700a, the controller 720 may transmit content request information to another device (eg, the mobile device 700b) or a media server through the communication unit 710.
  • another device eg, the mobile device 700b
  • the communication unit 710 may download/stream contents such as movies and news from another device (eg, the portable device 700b) or a media server to the memory unit 730.
  • the controller 720 may control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation/processing for the content, and may generate/output an XR object based on information about the surrounding space or a real object acquired through the input/output unit 740a/sensor unit 740b.
  • the XR device 700a is wirelessly connected to the portable device 700b through the communication unit 710 , and the operation of the XR device 700a may be controlled by the portable device 700b.
  • the portable device 700b may operate as a controller for the XR device 700a.
  • the XR device 700a may obtain 3D location information of the portable device 700b, and then generate and output an XR object corresponding to the portable device 700b.
  • the robot 800 may include a communication unit 810 , a control unit 820 , a memory unit 830 , an input/output unit 840a , a sensor unit 840b , and a driving unit 840c .
  • blocks 810 to 830/840a to 840c may correspond to blocks 310 to 330/340 of FIG. 3 , respectively.
  • the communication unit 810 may transmit and receive signals (eg, driving information, control signals, etc.) with external devices such as other wireless devices, other robots, or control servers.
  • the controller 820 may control components of the robot 800 to perform various operations.
  • the memory unit 830 may store data/parameters/programs/codes/commands supporting various functions of the robot 800 .
  • the input/output unit 840a may obtain information from the outside of the robot 800 and may output information to the outside of the robot 800 .
  • the input/output unit 840a may include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module.
  • the sensor unit 840b may obtain internal information, surrounding environment information, user information, and the like of the robot 800 .
  • the sensor unit 840b may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, a radar, and the like.
  • the driving unit 840c may perform various physical operations, such as moving a robot joint. Also, the driving unit 840c may cause the robot 800 to travel on the ground or to fly in the air.
  • the driving unit 840c may include an actuator, a motor, a wheel, a brake, a propeller, and the like.
  • The AI device may be implemented as a fixed device or a mobile device, such as a TV, a projector, a smartphone, a PC, a laptop, a digital broadcasting terminal, a tablet PC, a wearable device, a set-top box (STB), a radio, a washing machine, a refrigerator, a digital signage, a robot, a vehicle, and the like.
  • the AI device 900 includes a communication unit 910 , a control unit 920 , a memory unit 930 , input/output units 940a/940b , a learning processor unit 940c and a sensor unit 940d.
  • the communication unit 910 may transmit and receive wired/wireless signals (eg, sensor information, user input, learning model, control signal, etc.) with external devices such as other AI devices (eg, FIG. 1, 100x, 120, 140) or an AI server (FIG. 1, 140) using wired/wireless communication technology. To this end, the communication unit 910 may transmit information in the memory unit 930 to an external device or transmit a signal received from the external device to the memory unit 930.
  • AI devices eg, FIGS. 1, 100x, 120, 140
  • an AI server FIGS. 1 and 140
  • wired/wireless signals eg, sensor information, user input, learning model, control signal, etc.
  • the controller 920 may determine at least one executable operation of the AI device 900 based on information determined or generated using a data analysis algorithm or a machine learning algorithm. In addition, the controller 920 may control the components of the AI device 900 to perform the determined operation. For example, the control unit 920 may request, search, receive, or utilize the data of the learning processor unit 940c or the memory unit 930, and may control the components of the AI device 900 to execute a predicted operation or an operation determined to be preferable among the at least one executable operation.
  • the control unit 920 may collect history information including user feedback on the operation contents or operation of the AI device 900 and store it in the memory unit 930 or the learning processor unit 940c, or transmit it to an external device such as the AI server (FIG. 1, 140).
  • the collected historical information may be used to update the learning model.
  • the memory unit 930 may store data supporting various functions of the AI device 900 .
  • the memory unit 930 may store data obtained from the input unit 940a, data obtained from the communication unit 910, output data of the learning processor unit 940c, and data obtained from the sensing unit 940d.
  • the memory unit 930 may store control information and/or software codes necessary for the operation/execution of the control unit 920 .
  • the input unit 940a may acquire various types of data from the outside of the AI device 900 .
  • the input unit 940a may obtain training data for model learning, input data to which the learning model is applied, and the like.
  • the input unit 940a may include a camera, a microphone, and/or a user input unit.
  • the output unit 940b may generate an output related to sight, hearing, or touch.
  • the output unit 940b may include a display unit, a speaker, and/or a haptic module.
  • the sensing unit 940d may obtain at least one of internal information of the AI device 900, surrounding environment information of the AI device 900, and user information by using various sensors.
  • the sensing unit 940d may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
  • the learning processor unit 940c may train a model composed of an artificial neural network by using the training data.
  • the learning processor unit 940c may perform AI processing together with the learning processor unit of the AI server ( FIGS. 1 and 140 ).
  • the learning processor unit 940c may process information received from an external device through the communication unit 910 and/or information stored in the memory unit 930 . Also, the output value of the learning processor unit 940c may be transmitted to an external device through the communication unit 910 and/or stored in the memory unit 930 .
  • a terminal may receive information from a base station through downlink (DL) and transmit information to a base station through uplink (UL).
  • Information transmitted and received between the base station and the terminal includes general data information and various control information, and various physical channels exist according to the type/use of the information they transmit and receive.
  • FIG. 10 is a diagram illustrating physical channels applied to the present disclosure and a signal transmission method using the same.
  • the terminal may receive a primary synchronization channel (P-SCH) and a secondary synchronization channel (S-SCH) from the base station, synchronize with the base station, and obtain information such as a cell ID.
  • P-SCH primary synchronization channel
  • S-SCH secondary synchronization channel
  • the terminal may receive a physical broadcast channel (PBCH) signal from the base station to obtain intra-cell broadcast information.
  • the UE may receive a downlink reference signal (DL RS) in the initial cell search step to check the downlink channel state.
  • DL RS downlink reference signal
  • the UE may receive a physical downlink control channel (PDCCH) and a physical downlink shared channel (PDSCH) according to the physical downlink control channel information in step S1012 to obtain more specific system information.
  • PDCCH physical downlink control channel
  • PDSCH physical downlink shared channel
  • the terminal may perform a random access procedure, such as steps S1013 to S1016, to complete access to the base station.
  • the UE may transmit a preamble through a physical random access channel (PRACH) (S1013), and may receive a random access response (RAR) for the preamble through a physical downlink control channel and a corresponding physical downlink shared channel (S1014).
  • the UE may transmit a physical uplink shared channel (PUSCH) using the scheduling information in the RAR (S1015), and may perform a contention resolution procedure such as reception of a physical downlink control channel signal and a corresponding physical downlink shared channel signal (S1016).
  • PUSCH physical uplink shared channel
• after performing the above procedure, as a general uplink/downlink signal transmission procedure, the UE may receive a physical downlink control channel signal and/or a physical downlink shared channel signal (S1017) and may transmit a physical uplink shared channel (PUSCH) signal and/or a physical uplink control channel (PUCCH) signal (S1018). Control information transmitted from the UE to the base station is collectively referred to as uplink control information (UCI), and the UCI may include HARQ-ACK/NACK, SR, CQI, PMI, RI, BI, and the like.
  • UCI uplink control information
  • HARQ-ACK / NACK hybrid automatic repeat and request acknowledgment / negative-ACK
  • SR scheduling request
  • CQI channel quality indication
  • PMI precoding matrix indication
  • RI rank indication
  • BI beam indication
  • the UCI is generally transmitted periodically through the PUCCH, but may be transmitted through the PUSCH according to an embodiment (eg, when control information and traffic data are to be transmitted at the same time).
  • the UE may aperiodically transmit the UCI through the PUSCH.
  • FIG. 11 is a diagram illustrating a control plane and a user plane structure of a radio interface protocol applied to the present disclosure.
  • entity 1 may be a user equipment (UE).
  • the term "terminal" may be at least one of a wireless device, a portable device, a vehicle, a mobile body, an XR device, a robot, and an AI to which the present disclosure is applied in FIGS. 1 to 9 described above.
  • the terminal refers to a device to which the present disclosure can be applied and may not be limited to a specific device or device.
  • Entity 2 may be a base station.
  • the base station may be at least one of an eNB, a gNB, and an ng-eNB.
  • the base station may refer to an apparatus for transmitting a downlink signal to the terminal, and may not be limited to a specific type or apparatus. That is, the base station may be implemented in various forms or types, and may not be limited to a specific form.
  • Entity 3 may be a network device or a device performing a network function.
  • the network device may be a core network node (eg, a mobility management entity (MME), an access and mobility management function (AMF), etc.) that manages mobility.
• the network function may mean a function implemented to perform a network operation, and entity 3 may be a device to which the function is applied. That is, entity 3 may refer to a function or device that performs a network function, and is not limited to a specific type of device.
  • the control plane may refer to a path through which control messages used by a user equipment (UE) and a network to manage a call are transmitted.
  • the user plane may mean a path through which data generated in the application layer, for example, voice data or Internet packet data, is transmitted.
• the physical layer, which is the first layer, may provide an information transfer service to a higher layer by using a physical channel.
  • the physical layer is connected to the upper medium access control layer through a transport channel.
  • data may be moved between the medium access control layer and the physical layer through the transport channel.
  • Data can be moved between the physical layers of the transmitting side and the receiving side through a physical channel.
  • the physical channel uses time and frequency as radio resources.
  • a medium access control (MAC) layer of the second layer provides a service to a radio link control (RLC) layer, which is an upper layer, through a logical channel.
  • the RLC layer of the second layer may support reliable data transmission.
  • the function of the RLC layer may be implemented as a function block inside the MAC.
  • the packet data convergence protocol (PDCP) layer of the second layer may perform a header compression function that reduces unnecessary control information in order to efficiently transmit IP packets such as IPv4 or IPv6 in a narrow-bandwidth air interface.
  • PDCP packet data convergence protocol
  • a radio resource control (RRC) layer located at the bottom of the third layer is defined only in the control plane.
  • the RRC layer may be in charge of controlling logical channels, transport channels and physical channels in relation to configuration, re-configuration, and release of radio bearers (RBs).
  • RB may mean a service provided by the second layer for data transfer between the terminal and the network.
  • the UE and the RRC layer of the network may exchange RRC messages with each other.
  • a non-access stratum (NAS) layer above the RRC layer may perform functions such as session management and mobility management.
  • One cell constituting the base station may be set to one of various bandwidths to provide downlink or uplink transmission services to multiple terminals. Different cells may be configured to provide different bandwidths.
  • the downlink transmission channel for transmitting data from the network to the terminal includes a broadcast channel (BCH) for transmitting system information, a paging channel (PCH) for transmitting a paging message, and a downlink shared channel (SCH) for transmitting user traffic or control messages.
  • BCH broadcast channel
  • PCH paging channel
  • SCH downlink shared channel
• traffic or a control message of a downlink multicast or broadcast service may be transmitted through a downlink SCH or through a separate downlink multicast channel (MCH).
• the uplink transmission channel for transmitting data from the terminal to the network includes a random access channel (RACH) for transmitting an initial control message and an uplink shared channel (SCH) for transmitting user traffic or control messages.
• logical channels that are located above the transport channels and are mapped to the transport channels include a broadcast control channel (BCCH), a paging control channel (PCCH), a common control channel (CCCH), a multicast control channel (MCCH), a multicast traffic channel (MTCH), and the like.
  • BCCH broadcast control channel
  • PCCH paging control channel
  • CCCH common control channel
  • MCCH multicast control channel
• MTCH multicast traffic channel
  • the transmission signal may be processed by a signal processing circuit.
  • the signal processing circuit 1200 may include a scrambler 1210 , a modulator 1220 , a layer mapper 1230 , a precoder 1240 , a resource mapper 1250 , and a signal generator 1260 .
  • the operation/function of FIG. 12 may be performed by the processors 202a and 202b and/or the transceivers 206a and 206b of FIG. 2 .
• blocks 1210 to 1260 may be implemented in the processors 202a and 202b of FIG. 2 .
  • blocks 1210 to 1250 may be implemented in the processors 202a and 202b of FIG. 2
  • block 1260 may be implemented in the transceivers 206a and 206b of FIG. 2 , and the embodiment is not limited thereto.
  • the codeword may be converted into a wireless signal through the signal processing circuit 1200 of FIG. 12 .
  • the codeword is a coded bit sequence of an information block.
  • the information block may include a transport block (eg, a UL-SCH transport block, a DL-SCH transport block).
  • the radio signal may be transmitted through various physical channels (eg, PUSCH, PDSCH) of FIG. 10 .
  • the codeword may be converted into a scrambled bit sequence by the scrambler 1210 .
  • a scramble sequence used for scrambling is generated based on an initialization value, and the initialization value may include ID information of a wireless device, and the like.
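• the scrambling step can be sketched as follows: a pseudo-random sequence is generated from the initialization value (which may include the ID information of the wireless device) and XORed with the codeword bits, and descrambling repeats the same operation. The snippet below is a minimal sketch in Python; a generic seeded generator stands in for the actual Gold-sequence scrambler, and ue_id and the bit pattern are assumed illustrative values.

```python
import numpy as np

def scramble(codeword_bits: np.ndarray, init_value: int) -> np.ndarray:
    """Illustrative scrambler: XOR the codeword bits with a pseudo-random
    sequence seeded by an initialization value (e.g., derived from the
    wireless device's ID). A seeded PRNG stands in for the real
    length-31 Gold-sequence generator."""
    rng = np.random.default_rng(init_value)
    scramble_seq = rng.integers(0, 2, size=codeword_bits.shape, dtype=np.int8)
    return np.bitwise_xor(codeword_bits, scramble_seq)

# Descrambling reuses the same initialization value.
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.int8)
ue_id = 0x2F3A                       # assumed device ID feeding the init value
scrambled = scramble(bits, ue_id)
assert np.array_equal(scramble(scrambled, ue_id), bits)
```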
  • the scrambled bit sequence may be modulated by a modulator 1220 into a modulation symbol sequence.
  • the modulation method may include pi/2-binary phase shift keying (pi/2-BPSK), m-phase shift keying (m-PSK), m-quadrature amplitude modulation (m-QAM), and the like.
  • the complex modulation symbol sequence may be mapped to one or more transport layers by a layer mapper 1230 .
  • Modulation symbols of each transport layer may be mapped to corresponding antenna port(s) by the precoder 1240 (precoding).
  • the output z of the precoder 1240 may be obtained by multiplying the output y of the layer mapper 1230 by the precoding matrix W of N*M.
  • N is the number of antenna ports
  • M is the number of transport layers.
  • the precoder 1240 may perform precoding after performing transform precoding (eg, discrete fourier transform (DFT) transform) on the complex modulation symbols. Also, the precoder 1240 may perform precoding without performing transform precoding.
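• the precoding operation described above can be illustrated numerically: the precoder output z is obtained by multiplying the layer mapper output y by an N*M precoding matrix W, where N is the number of antenna ports and M is the number of transport layers. The sketch below, in Python with NumPy, uses an arbitrary example matrix rather than a codebook entry from any specification.

```python
import numpy as np

N, M, num_symbols = 4, 2, 6          # antenna ports, transport layers, symbols per layer

# y: M x num_symbols complex modulation symbols from the layer mapper
rng = np.random.default_rng(0)
y = (rng.standard_normal((M, num_symbols)) + 1j * rng.standard_normal((M, num_symbols))) / np.sqrt(2)

# W: N x M precoding matrix (arbitrary example, not a standardized codebook entry)
W = np.array([[1, 1], [1, -1], [1, 1j], [1, -1j]]) / np.sqrt(N)

z = W @ y                            # z: N x num_symbols, one row per antenna port
print(z.shape)                       # (4, 6)
```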
  • the resource mapper 1250 may map modulation symbols of each antenna port to a time-frequency resource.
  • the time-frequency resource may include a plurality of symbols (eg, a CP-OFDMA symbol, a DFT-s-OFDMA symbol) in the time domain and a plurality of subcarriers in the frequency domain.
  • the signal generator 1260 generates a radio signal from the mapped modulation symbols, and the generated radio signal may be transmitted to another device through each antenna.
• the signal generator 1260 may include an inverse fast fourier transform (IFFT) module, a cyclic prefix (CP) inserter, a digital-to-analog converter (DAC), a frequency up-converter, and the like.
  • IFFT inverse fast fourier transform
  • CP cyclic prefix
  • DAC digital-to-analog converter
  • the signal processing process for the received signal in the wireless device may be configured in reverse of the signal processing process 1210 to 1260 of FIG. 12 .
• in the wireless device (eg, 200a or 200b of FIG. 2 ), the received radio signal may be converted into a baseband signal through a signal restorer.
• the signal restorer may include a frequency down-converter, an analog-to-digital converter (ADC), a CP remover, and a fast fourier transform (FFT) module.
  • ADC analog-to-digital converter
  • FFT fast fourier transform
  • the baseband signal may be restored to a codeword through a resource de-mapper process, a postcoding process, a demodulation process, and a descrambling process.
  • the codeword may be restored to the original information block through decoding.
  • the signal processing circuit (not shown) for the received signal may include a signal restorer, a resource de-mapper, a post coder, a demodulator, a descrambler, and a decoder.
  • FIG. 13 is a diagram illustrating a structure of a radio frame applicable to the present disclosure.
  • Uplink and downlink transmission based on the NR system may be based on a frame as shown in FIG. 13 .
  • one radio frame has a length of 10 ms and may be defined as two 5 ms half-frames (HF).
  • One half-frame may be defined as 5 1ms subframes (subframe, SF).
  • One subframe is divided into one or more slots, and the number of slots in a subframe may depend on subcarrier spacing (SCS).
  • SCS subcarrier spacing
  • each slot may include 12 or 14 OFDM(A) symbols according to a cyclic prefix (CP).
  • CP cyclic prefix
• when a normal CP is used, each slot may include 14 symbols.
• when an extended CP is used, each slot may include 12 symbols.
  • the symbol may include an OFDM symbol (or a CP-OFDM symbol) and an SC-FDMA symbol (or a DFT-s-OFDM symbol).
• Table 1 shows the number of symbols per slot, the number of slots per frame, and the number of slots per subframe according to the SCS when the normal CP is used, and Table 2 shows the number of symbols per slot, the number of slots per frame, and the number of slots per subframe according to the SCS when the extended CP is used.
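• the counts summarized in Tables 1 and 2 follow a simple pattern: for SCS = 15·2^mu kHz there are 2^mu slots per 1 ms subframe and 10·2^mu slots per 10 ms frame, with 14 symbols per slot for the normal CP and 12 for the extended CP. A minimal sketch of this calculation (the values follow the common NR numerology; the exact table contents of this disclosure are assumed):

```python
def numerology(scs_khz: int, extended_cp: bool = False) -> dict:
    """Return slot/symbol counts for a given subcarrier spacing (SCS)."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3, 240: 4}[scs_khz]
    slots_per_subframe = 2 ** mu                    # one subframe = 1 ms
    slots_per_frame = 10 * slots_per_subframe       # one frame = 10 ms
    symbols_per_slot = 12 if extended_cp else 14
    return {"symbols_per_slot": symbols_per_slot,
            "slots_per_frame": slots_per_frame,
            "slots_per_subframe": slots_per_subframe}

print(numerology(30))
# {'symbols_per_slot': 14, 'slots_per_frame': 20, 'slots_per_subframe': 2}
```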
• when OFDM(A) numerologies (eg, SCS, CP length, etc.) are set differently between cells, the (absolute time) interval of a time resource (eg, SF, slot, or TTI; collectively referred to as a TU (time unit) for convenience) composed of the same number of symbols may be set differently between the cells.
• NR may support multiple numerologies (or subcarrier spacings (SCS)) to support various 5G services. For example, when the SCS is 15 kHz, a wide area in traditional cellular bands is supported; when the SCS is 30 kHz/60 kHz, dense urban areas, lower latency, and a wider carrier bandwidth are supported; and when the SCS is 60 kHz or higher, a bandwidth greater than 24.25 GHz can be supported to overcome phase noise.
  • SCS subcarrier spacing
  • the NR frequency band is defined as a frequency range of two types (FR1, FR2).
  • FR1 and FR2 may be configured as shown in the table below.
  • FR2 may mean a millimeter wave (mmW).
• 6G (wireless) systems aim at (i) very high data rates per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) very low latency, (v) lower energy consumption of battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capabilities.
  • the vision of the 6G system may have four aspects such as “intelligent connectivity”, “deep connectivity”, “holographic connectivity”, and “ubiquitous connectivity”, and the 6G system can satisfy the requirements shown in Table 4 below. That is, Table 4 is a table showing the requirements of the 6G system.
• the above-described numerology may be set differently.
  • a terahertz wave (THz) band may be used as a higher frequency band than the above-described FR2.
  • the SCS may be set to be larger than that of the NR system, and the number of slots may be set differently, and it is not limited to the above-described embodiment.
  • the THz band will be described later.
  • FIG. 14 is a diagram illustrating a slot structure applicable to the present disclosure.
  • One slot includes a plurality of symbols in the time domain. For example, in the case of a normal CP, one slot may include 7 symbols, but in the case of an extended CP, one slot may include 6 symbols.
  • a carrier includes a plurality of subcarriers (subcarrier) in the frequency domain.
  • a resource block may be defined as a plurality of (eg, 12) consecutive subcarriers in the frequency domain.
  • a bandwidth part is defined as a plurality of consecutive (P)RBs in the frequency domain, and may correspond to one numerology (eg, SCS, CP length, etc.).
  • a carrier may include a maximum of N (eg, 5) BWPs. Data communication is performed through the activated BWP, and only one BWP can be activated for one terminal.
  • Each element in the resource grid is referred to as a resource element (RE), and one complex symbol may be mapped.
  • RE resource element
• the 6G system may have key factors such as enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC), massive machine type communications (mMTC), AI integrated communication, tactile internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
  • eMBB enhanced mobile broadband
  • URLLC ultra-reliable low latency communications
• mMTC massive machine type communications
• FIG. 15 is a diagram illustrating an example of a communication structure that can be provided in a 6G system applicable to the present disclosure.
  • the 6G system is expected to have 50 times higher simultaneous wireless communication connectivity than the 5G wireless communication system.
  • URLLC a key feature of 5G, is expected to become an even more important technology by providing an end-to-end delay of less than 1 ms in 6G communication.
  • the 6G system will have much better volumetric spectral efficiency, unlike the frequently used area spectral efficiency.
  • 6G systems can provide very long battery life and advanced battery technology for energy harvesting, so mobile devices in 6G systems may not need to be charged separately.
  • new network characteristics in 6G may be as follows.
  • 6G is expected to be integrated with satellites to provide a global mobile population.
  • the integration of terrestrial, satellite and public networks into one wireless communication system could be very important for 6G.
  • AI may be applied in each step of a communication procedure (or each procedure of signal processing to be described later).
  • the 6G wireless network will deliver power to charge the batteries of devices such as smartphones and sensors. Therefore, wireless information and energy transfer (WIET) will be integrated.
  • WIET wireless information and energy transfer
• Small cell networks: the idea of small cell networks was introduced to improve received signal quality, resulting in improved throughput, energy efficiency, and spectral efficiency in cellular systems. As a result, small cell networks are an essential characteristic of 5G and beyond 5G (5GB) communication systems. Accordingly, the 6G communication system also adopts the characteristics of the small cell network.
  • Ultra-dense heterogeneous networks will be another important characteristic of 6G communication system.
  • a multi-tier network composed of heterogeneous networks improves overall QoS and reduces costs.
  • the backhaul connection is characterized as a high-capacity backhaul network to support high-capacity traffic.
  • High-speed fiber optics and free-space optics (FSO) systems may be possible solutions to this problem.
  • High-precision localization (or location-based service) through communication is one of the functions of the 6G wireless communication system. Therefore, the radar system will be integrated with the 6G network.
• Softwarization and virtualization are two important functions that underlie the design process in 5GB networks to ensure flexibility, reconfigurability, and programmability. In addition, billions of devices can be shared on a shared physical infrastructure.
• AI: the most important and newly introduced technology for 6G systems is AI.
  • AI was not involved in the 4G system.
  • 5G systems will support partial or very limited AI.
  • the 6G system will be AI-enabled for full automation.
  • Advances in machine learning will create more intelligent networks for real-time communication in 6G.
  • Incorporating AI into communications can simplify and enhance real-time data transmission.
  • AI can use numerous analytics to determine how complex target tasks are performed. In other words, AI can increase efficiency and reduce processing delays.
  • AI can also play an important role in M2M, machine-to-human and human-to-machine communication.
• AI can also enable rapid communication in the BCI (brain computer interface).
  • BCI brain computer interface
  • AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
  • AI-based physical layer transmission means applying a signal processing and communication mechanism based on an AI driver rather than a traditional communication framework in a fundamental signal processing and communication mechanism.
• this may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, a deep learning-based multiple input multiple output (MIMO) mechanism, and AI-based resource scheduling and allocation.
  • Machine learning may be used for channel estimation and channel tracking, and may be used for power allocation, interference cancellation, and the like in a physical layer of a downlink (DL). In addition, machine learning may be used for antenna selection, power control, symbol detection, and the like in a MIMO system.
  • DL downlink
  • Deep learning-based AI algorithms require large amounts of training data to optimize training parameters.
  • a lot of training data is used offline. This is because static training on training data in a specific channel environment may cause a contradiction between dynamic characteristics and diversity of a wireless channel.
  • signals of a physical layer of wireless communication may be expressed as complex signals.
  • further research on a neural network for detecting a complex domain signal is needed.
  • Machine learning refers to a set of operations that trains a machine to create a machine that can perform tasks that humans can or cannot do.
  • Machine learning requires data and a learning model.
  • data learning methods can be roughly divided into three types: supervised learning, unsupervised learning, and reinforcement learning.
• Neural network learning aims to minimize output errors. Neural network learning repeatedly inputs training data into the neural network, calculates the error between the output of the neural network and the target for the training data, and backpropagates the error from the output layer of the neural network toward the input layer in the direction that reduces the error, thereby updating the weight of each node in the neural network.
  • Supervised learning uses training data in which the correct answer is labeled in the training data, and in unsupervised learning, the correct answer may not be labeled in the training data. That is, for example, learning data in the case of supervised learning related to data classification may be data in which categories are labeled for each of the training data.
  • the labeled training data is input to the neural network, and an error can be calculated by comparing the output (category) of the neural network with the label of the training data.
  • the calculated error is back propagated in the reverse direction (ie, from the output layer to the input layer) in the neural network, and the connection weight of each node of each layer of the neural network may be updated according to the back propagation.
  • the change amount of the connection weight of each node to be updated may be determined according to a learning rate.
  • the computation of the neural network on the input data and the backpropagation of errors can constitute a learning cycle (epoch).
  • the learning rate may be applied differently depending on the number of repetitions of the learning cycle of the neural network. For example, in the early stage of learning a neural network, a high learning rate can be used to increase the efficiency by allowing the neural network to quickly obtain a certain level of performance, and in the late learning period, a low learning rate can be used to increase the accuracy.
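• the training procedure described above (repeatedly computing the output, comparing it with the target, backpropagating the error, and lowering the learning rate as training progresses) can be sketched for a single-layer model as follows. This is a generic illustration rather than the learning procedure of this disclosure; the data, model size, and learning-rate schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))                 # training inputs
true_w = np.array([0.5, -1.0, 2.0, 0.3])
y = X @ true_w + 0.1 * rng.standard_normal(200)   # labeled targets (supervised learning)

w = np.zeros(4)                                   # model parameters (edge weights)
for epoch in range(100):
    lr = 0.1 if epoch < 50 else 0.01              # high learning rate early, low late
    y_hat = X @ w                                 # forward computation
    error = y_hat - y                             # output vs. target error
    grad = X.T @ error / len(y)                   # gradient (chain rule)
    w -= lr * grad                                # weight update

print(np.round(w, 2))                             # approaches true_w
```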
  • the learning method may vary depending on the characteristics of the data. For example, when the purpose of accurately predicting data transmitted from a transmitter in a communication system is at a receiver, it is preferable to perform learning using supervised learning rather than unsupervised learning or reinforcement learning.
• the learning model corresponds to the human brain, and the most basic learning model that can be considered is a linear model; a machine learning paradigm that uses a neural network structure of high complexity, such as an artificial neural network, as a learning model is called deep learning.
• the neural network core used as a learning method is largely divided into deep neural networks (DNN), convolutional deep neural networks (CNN), and recurrent neural networks (RNN), and such a learning model can be applied.
  • DNN deep neural networks
  • CNN convolutional deep neural networks
• RNN recurrent neural networks
  • THz communication may be applied in the 6G system.
  • the data rate may be increased by increasing the bandwidth. This can be accomplished by using sub-THz communication with a wide bandwidth and applying advanced large-scale MIMO technology.
  • a THz wave also known as sub-millimeter radiation, generally represents a frequency band between 0.1 THz and 10 THz with a corresponding wavelength in the range of 0.03 mm-3 mm.
  • the 100GHz-300GHz band range (Sub THz band) is considered a major part of the THz band for cellular communication.
• adding the sub-THz band to the mmWave band increases 6G cellular communication capacity.
  • 300GHz-3THz is in the far-infrared (IR) frequency band.
• the 300 GHz-3 THz band is part of the optical band, but at the edge of the optical band, just behind the RF band. Thus, this 300 GHz-3 THz band shows similarities to RF.
• THz communication: the main characteristics of THz communication include (i) a widely available bandwidth to support very high data rates, and (ii) high path loss occurring at high frequencies (highly directional antennas are indispensable).
  • the narrow beamwidth produced by the highly directional antenna reduces interference.
• the small wavelength of the THz signal allows a much larger number of antenna elements to be integrated into devices and BSs operating in this band. This allows the use of advanced adaptive array techniques that can overcome range limitations.
  • Optical wireless communication (OWC) technology is envisaged for 6G communication in addition to RF-based communication for all possible device-to-access networks. These networks connect to network-to-backhaul/fronthaul network connections.
  • OWC technology has already been used since the 4G communication system, but will be used more widely to meet the needs of the 6G communication system.
  • OWC technologies such as light fidelity, visible light communication, optical camera communication, and free space optical (FSO) communication based on a light band are well known technologies. Communication based on optical radio technology can provide very high data rates, low latency and secure communication.
  • Light detection and ranging (LiDAR) can also be used for ultra-high-resolution 3D mapping in 6G communication based on a wide band.
• FSO: the transmitter and receiver characteristics of an FSO system are similar to those of a fiber optic network.
  • data transmission in an FSO system is similar to that of a fiber optic system. Therefore, FSO can be a good technology to provide backhaul connectivity in 6G systems along with fiber optic networks.
  • FSO supports high-capacity backhaul connections for remote and non-remote areas such as sea, space, underwater, and isolated islands.
  • FSO also supports cellular base station connectivity.
• as MIMO technology improves, so does the spectral efficiency. Therefore, large-scale MIMO technology will be important in 6G systems. Since MIMO technology uses multiple paths, a multiplexing technique and a beam generation and operation technique suitable for the THz band should also be considered important so that a data signal can be transmitted through one or more paths.
  • Blockchain will become an important technology for managing large amounts of data in future communication systems.
  • Blockchain is a form of distributed ledger technology, which is a database distributed across numerous nodes or computing devices. Each node replicates and stores an identical copy of the ledger.
  • the blockchain is managed as a peer-to-peer (P2P) network. It can exist without being managed by a centralized authority or server. Data on the blockchain is collected together and organized into blocks. Blocks are linked together and protected using encryption.
  • Blockchain in nature perfectly complements IoT at scale with improved interoperability, security, privacy, reliability and scalability. Therefore, blockchain technology provides several features such as interoperability between devices, traceability of large amounts of data, autonomous interaction of different IoT systems, and large-scale connection stability of 6G communication systems.
  • the 6G system integrates terrestrial and public networks to support vertical expansion of user communications.
  • 3D BS will be provided via low orbit satellites and UAVs. Adding a new dimension in terms of elevation and associated degrees of freedom makes 3D connections significantly different from traditional 2D networks.
  • Unmanned aerial vehicles or drones will become an important element in 6G wireless communications.
  • UAVs Unmanned aerial vehicles
  • a base station entity is installed in the UAV to provide cellular connectivity.
  • UAVs have certain features not found in fixed base station infrastructure, such as easy deployment, strong line-of-sight links, and degrees of freedom with controlled mobility.
  • the deployment of terrestrial communications infrastructure is not economically feasible and sometimes cannot provide services in volatile environments.
  • a UAV can easily handle this situation.
  • UAV will become a new paradigm in the field of wireless communication. This technology facilitates the three basic requirements of wireless networks: eMBB, URLLC and mMTC.
  • UAVs can also serve several purposes, such as improving network connectivity, fire detection, disaster emergency services, security and surveillance, pollution monitoring, parking monitoring, incident monitoring, and more. Therefore, UAV technology is recognized as one of the most important technologies for 6G communication.
  • Tight integration of multiple frequencies and heterogeneous communication technologies is very important in 6G systems. As a result, users can seamlessly move from one network to another without having to make any manual configuration on the device. The best network is automatically selected from the available communication technologies. This will break the limitations of the cell concept in wireless communication. Currently, user movement from one cell to another causes too many handovers in high-density networks, causing handover failures, handover delays, data loss and ping-pong effects. 6G cell-free communication will overcome all of this and provide better QoS. Cell-free communication will be achieved through multi-connectivity and multi-tier hybrid technologies and different heterogeneous radios of devices.
  • WIET Wireless information and energy transfer
  • WIET uses the same fields and waves as wireless communication systems.
  • the sensor and smartphone will be charged using wireless power transfer during communication.
  • WIET is a promising technology for extending the life of battery-charging wireless systems. Therefore, devices without batteries will be supported in 6G communication.
  • Autonomous wireless network is a function that can continuously detect dynamically changing environmental conditions and exchange information between different nodes.
  • sensing will be tightly integrated with communications to support autonomous systems.
  • the density of access networks in 6G will be enormous.
  • Each access network is connected by backhaul connections such as fiber optic and FSO networks.
  • Beamforming is a signal processing procedure that adjusts an antenna array to transmit a radio signal in a specific direction.
  • Beamforming technology has several advantages, such as high signal-to-noise ratio, interference prevention and rejection, and high network efficiency.
• Holographic beamforming (HBF) is a new beamforming method that is significantly different from MIMO systems because it uses a software-defined antenna. HBF will be a very effective approach for efficient and flexible transmission and reception of signals in multi-antenna communication devices in 6G.
  • Big data analytics is a complex process for analyzing various large data sets or big data. This process ensures complete data management by finding information such as hidden data, unknown correlations and customer propensity. Big data is gathered from a variety of sources such as videos, social networks, images and sensors. This technology is widely used to process massive amounts of data in 6G systems.
• the large intelligent surface (LIS) is an artificial surface made of electromagnetic materials, and can change the propagation of incoming and outgoing radio waves.
  • LIS can be viewed as an extension of massive MIMO, but has a different array structure and operation mechanism from that of massive MIMO.
• LIS has the advantage of low power consumption in that it operates as a reconfigurable reflector with passive elements, that is, it only passively reflects the signal without using an active RF chain.
• since each of the passive reflectors of the LIS must independently adjust the phase shift of the incoming signal, it can be advantageous for a wireless communication channel.
  • the reflected signal can be gathered at the target receiver to boost the received signal power.
  • THz Terahertz
• FIG. 17 is a diagram illustrating a THz communication method applicable to the present disclosure.
• the THz wave is located between the RF (Radio Frequency)/millimeter (mm) band and the infrared band; (i) it penetrates non-metallic/non-polar materials better than visible light/infrared light, and (ii) it has a shorter wavelength than the RF/millimeter wave, giving it high straightness, so beam focusing may be possible.
  • the frequency band expected to be used for THz wireless communication may be a D-band (110 GHz to 170 GHz) or H-band (220 GHz to 325 GHz) band with low propagation loss due to absorption of molecules in the air.
• standardization discussion on THz wireless communication is centered on the IEEE 802.15 THz working group (WG) in addition to 3GPP, and standard documents issued by the task groups (TG) of IEEE 802.15 (eg, TG3d, TG3e) may specify or supplement the description in this specification.
  • THz wireless communication may be applied to wireless recognition, sensing, imaging, wireless communication, THz navigation, and the like.
  • a THz wireless communication scenario may be classified into a macro network, a micro network, and a nanoscale network.
  • THz wireless communication can be applied to a vehicle-to-vehicle (V2V) connection and a backhaul/fronthaul connection.
  • V2V vehicle-to-vehicle
  • THz wireless communication in micro networks is applied to indoor small cells, fixed point-to-point or multi-point connections such as wireless connections in data centers, and near-field communication such as kiosk downloading.
  • Table 5 below is a table showing an example of a technique that can be used in the THz wave.
  • FIG. 18 is a diagram illustrating a THz wireless communication transceiver applicable to the present disclosure.
  • THz wireless communication may be classified based on a method for generating and receiving THz.
  • the THz generation method can be classified into an optical device or an electronic device-based technology.
• methods of generating THz using an electronic device include a method using a semiconductor device such as a resonant tunneling diode (RTD), a method using a local oscillator and a multiplier, a monolithic microwave integrated circuit (MMIC) method using a compound semiconductor high electron mobility transistor (HEMT)-based integrated circuit, a method using a Si-CMOS-based integrated circuit, and the like.
  • MMIC monolithic microwave integrated circuit
  • a doubler, tripler, or multiplier is applied to increase the frequency, and it is radiated by the antenna through the sub-harmonic mixer. Since the THz band forms a high frequency, a multiplier is essential.
  • the multiplier is a circuit that has an output frequency that is N times that of the input, matches the desired harmonic frequency, and filters out all other frequencies.
  • an array antenna or the like may be applied to the antenna of FIG. 18 to implement beamforming.
  • IF denotes an intermediate frequency
  • tripler and multiplier denote a multiplier
  • PA denotes a power amplifier
  • LNA denotes a low noise amplifier.
  • PLL represents a phase-locked loop.
  • FIG. 19 is a diagram illustrating a method for generating a THz signal applicable to the present disclosure.
  • FIG. 20 is a diagram illustrating a wireless communication transceiver applicable to the present disclosure.
  • the optical device-based THz wireless communication technology refers to a method of generating and modulating a THz signal using an optical device.
• the optical element-based THz signal generation technology is a technology that generates a high-speed optical signal using a laser and an optical modulator, and converts it into a THz signal using an ultra-high-speed photodetector. With this technology, it is easier to increase the frequency compared to a technology using only electronic devices, it is possible to generate a high-power signal, and it is possible to obtain a flat response characteristic in a wide frequency band. As shown in the figure, a laser diode, a broadband optical modulator, and a high-speed photodetector are required to generate a THz signal based on an optical device.
  • a THz signal corresponding to a difference in wavelength between the lasers is generated by multiplexing the light signals of two lasers having different wavelengths.
  • an optical coupler refers to a semiconductor device that transmits electrical signals using light waves to provide coupling with electrical insulation between circuits or systems
• UTC-PD uni-travelling carrier photodetector
  • UTC-PD is capable of photodetection above 150GHz.
  • an erbium-doped fiber amplifier indicates an erbium-doped optical fiber amplifier
  • a photo detector indicates a semiconductor device capable of converting an optical signal into an electrical signal
• the OSA represents an optical module in which various optical communication functions (eg, photoelectric conversion, electro-optical conversion, etc.) are modularized into one component
  • DSO represents a digital storage oscilloscope.
  • FIG. 21 is a diagram illustrating a structure of a transmitter applicable to the present disclosure.
  • FIG. 22 is a diagram illustrating a modulator structure applicable to the present disclosure.
  • a phase of a signal may be changed by passing an optical source of a laser through an optical wave guide.
  • data is loaded by changing electrical characteristics through microwave contact or the like.
  • an optical modulator output is formed as a modulated waveform.
• the photoelectric converter (O/E converter) can generate THz pulses by, for example, optical rectification by a nonlinear crystal, photoelectric conversion (O/E conversion) by a photoconductive antenna, or emission from a bunch of relativistic electrons in a light beam.
  • a terahertz pulse (THz pulse) generated in the above manner may have a length in units of femtoseconds to picoseconds.
  • An O/E converter performs down conversion by using non-linearity of a device.
• a number of contiguous GHz bands are likely to be used for fixed or mobile service for the terahertz system.
• the available bandwidth may be classified based on oxygen attenuation of 10^2 dB/km in a spectrum up to 1 THz. Accordingly, a framework in which the available bandwidth is composed of several band chunks may be considered.
  • the bandwidth (BW) becomes about 20 GHz.
• effective down conversion from the infrared band to the THz band depends on how the nonlinearity of the O/E converter is exploited. That is, in order to down-convert to a desired terahertz band (THz band), design of an O/E converter having the most ideal non-linearity for transfer to that terahertz band is required. If an O/E converter that does not fit the target frequency band is used, there is a high possibility that an error may occur with respect to the amplitude and phase of the corresponding pulse.
• a terahertz transmission/reception system may be implemented using one photoelectric converter. Although it depends on the channel environment, in a multi-carrier system, as many photoelectric converters as the number of carriers may be required. In particular, in the case of a multi-carrier system using several broad bands according to the above-described spectrum usage-related scheme, this phenomenon will become conspicuous. In this regard, a frame structure for the multi-carrier system may be considered.
  • the down-frequency-converted signal based on the photoelectric converter may be transmitted in a specific resource region (eg, a specific frame).
  • the frequency domain of the specific resource region may include a plurality of chunks. Each chunk may be composed of at least one component carrier (CC).
  • FIG. 23 is a diagram illustrating a neural network applicable to the present disclosure.
  • artificial intelligence technology may be introduced in a new communication system (e.g. 6G system).
  • artificial intelligence can utilize a neural network as a machine learning model modeled after the human brain.
  • the device may process the arithmetic operation consisting of 0 and 1, and may perform operations and communication based on this.
  • the device can process many arithmetic operations in a faster time and using less power than before.
  • humans cannot perform arithmetic operations as fast as devices.
  • the human brain may not be built to process only fast arithmetic operations.
  • humans can perform operations such as recognition and natural language processing.
  • the above-described operation is an operation for processing something more than arithmetic operation, and the current device may not be able to process to a level that a human brain can do. Therefore, it may be considered to make a system so that the device can achieve performance similar to that of a human in areas such as natural language processing and computer vision.
  • the neural network may be a model created based on the idea of mimicking the human brain.
  • the neural network may be a simple mathematical model made with the above-described motivation.
  • the human brain can consist of a huge number of neurons and synapses connecting them.
  • an action may be taken by selecting whether other neurons are also activated.
  • the neural network may define a mathematical model based on the above facts.
  • neurons are nodes, and it may be possible to create a network in which a synapse connecting neurons is an edge.
  • the importance of each synapse may be different. That is, it is necessary to separately define a weight for each edge.
• the neural network may be a directed graph. That is, information propagation may be fixed in one direction. For example, when an undirected edge is provided or the same directed edge is given in both directions, information propagation may occur recursively. Therefore, the result of the neural network can become complicated.
  • the neural network as described above may be a recurrent neural network (RNN). At this time, since RNN has the effect of storing past data, it is recently used a lot when processing sequential data such as voice recognition.
  • the multi-layer perceptron (MLP) structure may be a directed simple graph.
• unless specifically mentioned below, the neural network described herein may be the above-described MLP, but the present invention is not limited thereto.
  • the above-described network may be a feed-forward network, but is not limited thereto.
  • different neurons may be activated in an actual brain, and the result may be transmitted to the next neuron.
  • the result value can be activated by the neuron that makes the final decision, and the information is processed through it.
• when the above-described method is changed to a mathematical model, it may be possible to express an activation condition for input data as a function.
  • the above-described function may be referred to as an activate function.
  • the simplest activation function may be a function that sums all input data and compares it with a threshold value. For example, when the sum of all input data exceeds a specific value, the device may process information as activation. On the other hand, when the sum of all input data does not exceed a specific value, the device may process information as deactivation.
  • the activation function may have various forms.
• Equation 1 may be defined for convenience of description. In this case, in Equation 1, it is necessary to consider not only the weight w but also the bias b, and when this is taken into consideration, it can be expressed as Equation 2 below. However, since the bias (b) and the weight (w) behave almost identically, only the weight is considered and described below. However, the present invention is not limited thereto. For example, if a virtual input whose value is always 1 is added and its weight is regarded as the bias, the weight and the bias can be treated equally, and the present invention is not limited to the above-described embodiment.
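• Equations 1 and 2 themselves are not reproduced in this text; a plausible form consistent with the description (a weighted sum of inputs passed through an activation function, without and then with a bias, plus the always-1 virtual input trick) is, in LaTeX notation:

```latex
% Assumed reconstruction of Equations 1 and 2
a = f\Big(\sum_{i} w_i x_i\Big)                 % Eq. 1: weights only
a = f\Big(\sum_{i} w_i x_i + b\Big)             % Eq. 2: weights and bias
% Virtual input trick: append x_0 = 1 with weight w_0 = b, so that
\sum_{i} w_i x_i + b = \sum_{i \ge 0} w_i x_i ,\qquad x_0 = 1,\; w_0 = b
```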
  • a model based on the above can initially define the shape of a network composed of nodes and edges. After that, the model can define an activation function for each node.
  • the role of the parameter that adjusts the model takes on the weight of the edge, and finding the most appropriate weight may be a training goal of the mathematical model.
  • the following Equations 3 to 6 may be one form of the above-described activation function, and are not limited to a specific form.
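• Equations 3 to 6 are likewise not reproduced here. Since the sigmoid and hyperbolic tangent functions are named later in the description, the following standard activation functions are a reasonable guess at the intended forms and are listed only for illustration:

```latex
f(t) = \frac{1}{1 + e^{-t}}                                   % sigmoid
f(t) = \tanh(t) = \frac{e^{t} - e^{-t}}{e^{t} + e^{-t}}       % hyperbolic tangent
f(t) = \max(0, t)                                             % ReLU
f(t) = \max(\alpha t, t), \quad 0 < \alpha < 1                % leaky ReLU
```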
• the neural network may first determine activation of the next layer with respect to a given input, and may determine activation of the following layer according to the determined activation. Based on the above-described method, the inference may be determined by looking at the result of the last decision layer.
  • FIG. 24 is a diagram illustrating an activation node in a neural network applicable to the present disclosure.
  • as many decision nodes as the number of classes to be classified in the last layer may be created, and then a value may be selected by activating one of them.
  • a case in which activation functions of a neural network are non-linear and the functions are complexly configured while forming layers with each other may be considered.
  • weight optimization of the neural network may be non-convex optimization. Therefore, it may be impossible to find a global optimization of parameters of a neural network.
  • a method of convergence to an appropriate value using a gradient descent method may be used. For example, all optimization problems can be solved only when a target function is defined.
• a method of minimizing a corresponding value by calculating a loss function between the target output actually desired in the final decision layer and the estimated output generated by the current network may be used.
  • the loss function may be as shown in Equations 7 to 9 below, but is not limited thereto.
  • Equations 7 to 9 may be loss functions for optimization.
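• the exact forms of Equations 7 to 9 are not reproduced in this text; common loss functions used for this kind of optimization, given only as illustrative stand-ins, include the mean squared error, the mean absolute error, and the cross-entropy:

```latex
L_{\mathrm{MSE}} = \frac{1}{N}\sum_{n=1}^{N}\big(y_n - \hat{y}_n\big)^{2}
L_{\mathrm{MAE}} = \frac{1}{N}\sum_{n=1}^{N}\big|\,y_n - \hat{y}_n\,\big|
L_{\mathrm{CE}}  = -\frac{1}{N}\sum_{n=1}^{N}\sum_{c} y_{n,c}\,\log \hat{y}_{n,c}
```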
  • the back propagation algorithm may be an algorithm capable of simply performing gradient calculation using a chain rule.
  • parallelization may also be easy.
  • memory can be saved through algorithm design. Therefore, the neural network update may use a backpropagation algorithm.
• in the backpropagation algorithm, a loss is first calculated using the current parameters, and how much each parameter affects the corresponding loss can be calculated through the chain rule. An update may be performed based on the calculated values.
  • the backpropagation algorithm may be divided into two phases.
  • One may be a propagation phase, and the other may be a weight update phase.
• in the propagation phase, an error or a change amount of each neuron may be calculated from a training input pattern.
• in the weight update phase, the weight may be updated using the previously calculated value.
  • specific phases may be as shown in Table 6 below.
  • FIG. 25 is a diagram illustrating a method of calculating a gradient using a chain rule applicable to the present disclosure.
• instead of directly calculating the gradient of the loss with respect to a parameter x, the derivative value already calculated in the y-layer (∂L/∂y) and the local derivative relevant only to x (∂y/∂x) can be used to calculate the desired value, ∂L/∂x = (∂L/∂y)·(∂y/∂x). If a parameter called x' exists under x, ∂L/∂x' can be calculated in the same way using ∂L/∂x and ∂x/∂x'. Therefore, what is required in the backpropagation algorithm may be only two values: the derivative with respect to the variable immediately preceding the parameter to be updated, and the value obtained by differentiating that immediately preceding variable with respect to the current parameter.
• SGD stochastic gradient descent
  • a neural network that processes a complex number may have a number of advantages, such as a neural network description or a parameter expression.
  • a complex value neural network there may be points to be considered compared to a real neural network that processes real numbers.
• as an example, consider the "sigmoid function" of Equation 3: if t is a complex number, there are values of t for which f(t) is not differentiable. Therefore, activation functions generally used in real neural networks cannot be applied to complex neural networks without restrictions.
  • Equation 10 may be derived by “Liouville’s theorem” based on Table 7.
• Equation 11 shows the form of the complex activation function.
  • Activation functions such as “sigmoid function” and “hyperbolic tangent function” used in real neural networks can be used.
  • CNNs Convolution neural networks
  • CNN may be a type of neural network mainly used in speech recognition or image recognition, but is not limited thereto.
  • CNN is configured to process multi-dimensional array data, and is specialized in multi-dimensional array processing such as color images. Therefore, most techniques using deep learning in the image recognition field can be performed based on CNN.
• in a conventional neural network, image data is processed as it is. That is, since the entire image is considered as one piece of data and received as input, the correct performance may not be obtained if the image position is slightly changed or distorted, because the characteristics of the image are not found.
  • CNN can process an image by dividing it into several pieces, not one piece of data. As described above, even if the image is distorted, the CNN can extract the partial characteristics of the image, so that the correct performance can be achieved. CNN may be defined in terms as shown in Table 9 below.
  • RNNs Recurrent neural networks
  • the RNN may be a type of artificial neural network in which hidden nodes are connected by directional edges to form a directed cycle.
  • the RNN may be a model suitable for processing data that appears sequentially, such as voice and text. Since RNN is a network structure that can accept input and output regardless of sequence length, it has the advantage of being able to create various and flexible structures according to needs.
  • a structure proposed to solve the “vanishing gradient” problem may be a long-short term memory (LSTM) and a gated recurrent unit (GRU). That is, the RNN may have a structure in which feedback exists compared to CNN.
  • LSTM long-short term memory
  • GRU gated recurrent unit
  • FIG. 27 is a view showing an autoencoder applicable to the present disclosure.
• FIGS. 28 to 30 are views showing a turbo autoencoder applicable to the present disclosure.
  • various attempts are being made to apply a neural network to a communication system.
  • an attempt to apply a neural network to a physical layer may mainly focus on optimizing a specific function of a receiver.
  • the channel decoder is configured as a neural network, the performance of the channel decoder may be improved.
  • a MIMO detector is implemented as a neural network in a MIMO system having a plurality of transmit/receive antennas, the performance of the MIMO system may be improved.
  • an autoencoder method may be applied.
• the autoencoder configures both the transmitter and the receiver as a neural network, performs optimization from an end-to-end point of view to improve performance, and may be configured as shown in FIG. 27 .
  • FIG. 28 is a diagram illustrating a federated learning method based on a compression method applicable to the present disclosure.
• the model parameters of federated learning may be applied to a new communication system.
  • a method and system in which the model parameters of federated learning are adapted to the communication environment and signals are transmitted will be described.
  • federated learning may be applied to any one of the cases of protecting personal privacy, reducing the load of the base station through distributed processing, and reducing traffic between the base station and the terminal.
• in federated learning, traffic for transmitting local model parameters (e.g. weights and information of the deep neural network) may be large.
  • techniques for reducing traffic through compression of local model parameters or aircomp are being developed.
  • the wireless communication environment in the communication system may be diverse.
  • the number of terminals requiring learning in the communication system may be set in various ways.
  • the communication system may require a flexible operating method and system rather than a fixed specific technology in consideration of the above-described environment. Through this, it is possible to increase the resource efficiency of the communication system.
  • a method of applying the federated learning method in consideration of the above points will be described.
  • a federated learning method through terminal model parameter compression may be applied to the communication system.
  • the federated learning through compression may be a method in which each terminal performs compression on data in consideration of the characteristics of the parameters and transmits the data to the base station. Therefore, when the base station receives a signal based on the federated learning method, the base station needs to perform an operation of decompressing the received signal based on the received signal and summing the collected parameters. Accordingly, the load of the base station may increase. Also, for example, since a communication channel must be allocated for each number of terminals, communication traffic may increase in proportion to the number of terminals used. Therefore, when there are a plurality of terminals, the method through compression may reduce efficiency.
  • compression may cause delay in data processing. That is, the terminal may need time to process the compressed data, and delay may occur based on this.
  • the modulation method may be changed according to the channel environment.
  • the terminal may use a modulation scheme such as 256QAM (Quadrature Amplitude Modulation). That is, since the terminal can transmit more data in the same time period, data throughput may increase. Accordingly, low-delay communication can be performed.
  • 256QAM Quadrature Amplitude Modulation
  • the UE may use a modulation scheme such as Quadrature Phase Shift Keying (QPSK). That is, since the terminal can transmit a small amount of data in the same time period, data throughput is reduced, which may cause delay in transmission and processing of federated learning.
  • QPSK Quadrature Phase Shift Keying
  • the compression-based method and the method of changing the modulation method according to the channel environment may have different delay rates.
  • the compression-based method may have a different delay speed depending on the compression rate. For example, when the compression rate increases, the delay time increases because the processing time for compression increases, and when the compression rate decreases, the delay time may be shortened because the processing time for compression may decrease. Also, for example, a change in a modulation method according to a communication environment may affect a delay rate because data throughput is changed.
  • the compression method can minimize the transmission capacity, but if the compression rate is increased, a loss of model parameters may occur. Accordingly, when the terminal compresses data based on a high compression rate, a problem may occur. Accordingly, the terminal needs to select a compression rate that minimizes the transmission capacity in consideration of the target loss rate.
  • the target loss rate may be applied differently based on the number of users (or the number of terminals). For example, when the number of users is small, the influence of an individual terminal loss rate may be large. On the other hand, when the number of users is large, the influence of the individual terminal loss rate may be reduced. That is, since the number of users may affect the loss ratio, using a fixed compression ratio regardless of the number of users may reduce transmission efficiency compared to the performance improvement ratio.
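• the trade-offs described above (a higher compression rate shortens air time but lengthens processing time and increases parameter loss, the impact of that loss shrinks as the number of participating terminals grows, and the modulation order scales throughput) can be put into a toy model. The sketch below is purely illustrative: the cost terms, constants, and loss-versus-users relation are assumptions, not the selection rule claimed in this disclosure.

```python
import math

def total_delay(payload_bits, keep_ratio, mod_order, symbol_rate=1e6,
                compress_cost=2e-9):
    """Toy latency model: processing delay grows as more is squeezed out,
    while air time shrinks with the compressed size and modulation order."""
    processing = compress_cost * payload_bits / keep_ratio
    bits_per_symbol = math.log2(mod_order)        # QPSK = 2, 256QAM = 8
    airtime = payload_bits * keep_ratio / (bits_per_symbol * symbol_rate)
    return processing + airtime

def pick_keep_ratio(payload_bits, mod_order, num_terminals, base_target_loss=0.01):
    """Pick the smallest kept fraction (strongest compression) whose assumed
    loss stays under a target that relaxes as more terminals average out
    individual losses."""
    target_loss = base_target_loss * math.sqrt(num_terminals)   # assumed relation
    for keep_ratio in (0.1, 0.2, 0.3, 0.5, 0.8, 1.0):
        assumed_loss = 0.05 * (1.0 - keep_ratio)                # assumed loss model
        if assumed_loss <= target_loss:
            return keep_ratio, total_delay(payload_bits, keep_ratio, mod_order)
    return 1.0, total_delay(payload_bits, 1.0, mod_order)

print(pick_keep_ratio(1_000_000, mod_order=256, num_terminals=4))
print(pick_keep_ratio(1_000_000, mod_order=4, num_terminals=100))
```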
  • when a weight signaling method between a terminal and a base station is used in a fixed manner in a federated learning method, the efficiency may differ based on the wireless environment. For example, the efficiency may be high in a specific environment, but in the opposite case, the efficiency may be reduced. In this case, since the wireless environment can change dynamically, it is necessary to recognize the dynamically changing wireless environment and select a technology based on the recognized wireless environment.
  • each terminal may transmit the parameters (e.g. weights and information of the deep neural network) of the model learned based on the federated learning method to the base station.
  • each terminal transmits compressed parameters, and the base station may update the global model based on Equation 12 below, which may be expressed in a form such as g = (1/N) Σ_{i=1..N} d(c(w_i)), where w_i is the local model parameter of terminal i and N is the number of terminals.
  • c(·) may be the information compression and modulation processing.
  • d(·) may be the demodulation and information restoration processing. Thereafter, the base station may transmit the updated global model to each terminal.
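  • as a minimal illustration of the above aggregation (an expository sketch, not the claimed implementation), the following Python example shows a base station restoring and averaging compressed terminal parameters; the compress/decompress pair below is a toy uniform quantizer standing in for c(·) and d(·), and all names and numbers are assumptions.

      import numpy as np

      def compress(w, bits=8):
          # Toy compression c(.): uniform quantization of model parameters (assumption,
          # standing in for the terminal-side compression and modulation processing).
          lo, hi = w.min(), w.max()
          q = np.round((w - lo) / (hi - lo + 1e-12) * (2**bits - 1)).astype(np.uint8)
          return q, lo, hi

      def decompress(q, lo, hi, bits=8):
          # Toy restoration d(.): dequantization at the base station side.
          return q.astype(np.float32) / (2**bits - 1) * (hi - lo) + lo

      def update_global_model(local_params):
          # Base station: decompress each terminal's parameters and average them,
          # in the spirit of Equation 12 (g = (1/N) * sum_i d(c(w_i))).
          restored = [decompress(*p) for p in local_params]
          return np.mean(restored, axis=0)

      # Example round with three terminals
      terminals = [np.random.randn(1000).astype(np.float32) for _ in range(3)]
      uplink = [compress(w) for w in terminals]          # each terminal compresses its parameters
      g_new = update_global_model(uplink)                # base station aggregates the restored parameters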
  • each terminal may perform compression based on a method of minimizing the amount of model parameters.
  • although the respective terminals use the same compression algorithm in FIG. 28, the present disclosure is not limited thereto.
  • compression may be performed based on at least one of weight pruning, quantization, and weight sharing.
  • compression may be performed based on another method, and is not limited to the above-described embodiment.
  • among the weights, the values necessary for actual inference may be resistant to the removal of small values. That is, small weight values may have only a small effect on actual inference.
  • all small weight values may be set to 0. Through this, the neural network can reduce the network model size.
  • quantization may be a method of calculating data by reducing data to a specific number of bits. That is, data can be expressed only as a specific quantized value.
  • weight sharing may be a method of adjusting weight values based on an approximate value (e.g. codebook) and sharing the weight values.
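  • as a sketch of the compression techniques mentioned above (weight pruning, quantization, and weight sharing), the following Python example applies each technique to a toy weight vector; the thresholds, bit widths, and codebook construction are assumptions for illustration only.

      import numpy as np

      def prune(w, threshold=0.01):
          # Weight pruning: set small-magnitude weights to zero to shrink the network model.
          out = w.copy()
          out[np.abs(out) < threshold] = 0.0
          return out

      def quantize(w, bits=4):
          # Quantization: represent weights with only 2**bits distinct levels.
          levels = 2 ** bits
          lo, hi = w.min(), w.max()
          step = (hi - lo) / (levels - 1)
          return np.round((w - lo) / step) * step + lo

      def share_weights(w, codebook_size=16):
          # Weight sharing: snap each weight to the nearest entry of a small codebook
          # (here an evenly spaced approximate codebook; a learned codebook could be used instead).
          codebook = np.linspace(w.min(), w.max(), codebook_size)
          idx = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
          return codebook[idx]

      w = np.random.randn(8).astype(np.float32)
      print(prune(w), quantize(w), share_weights(w), sep="\n")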
  • each terminal may perform data compression, and may transmit compressed information to the base station.
  • the base station may receive the compressed data from each terminal, and the received information can be decompressed to calculate and update the parameters of the global model.
  • each terminal may set local model parameters having individual characteristics. Accordingly, when each terminal performs compression, compression efficiency may be different for each terminal. Also, as an example, each terminal may have different hardware resources. Here, compression efficiency may be affected by hardware resources. Accordingly, the compression efficiency may be different for each terminal.
  • for example, when quantization is performed with 8 bits, a terminal equipped with a 64-bit arithmetic processing function can obtain high compression efficiency.
  • on the other hand, a terminal equipped with only a 16-bit arithmetic processing function may have low compression efficiency.
  • when the terminal has low-spec hardware, the terminal may bear a large compression load. Therefore, it may be advantageous for such a terminal to use a simple compression method. For example, since Internet of Things (IoT) terminals or low-power terminals may have relatively low-spec hardware, a simple compression technique may be used.
  • since a terminal operating based on AI or a terminal processing a large amount of data may have high-spec hardware, it is possible to increase the compression efficiency by using a complex compression technique. That is, different compression methods may be used for each terminal, and it may be necessary to use a compression method suitable for each.
  • each terminal may use a compression method suitable for individual characteristics of local model parameters and hardware resources.
  • the terminals need to transmit information on the compression method to the base station.
  • the base station may restore compressed data and model parameters received from each terminal based on the information received from the terminal.
  • FIG. 29 is a diagram illustrating a processing time and a transmission time according to a compression rate applicable to the present disclosure.
  • the terminal and the base station may minimize the final delivery time of the model parameter in order to reduce the delay.
  • the total delay time of the model parameter may be determined based on Equation 13 below, which may be expressed in the form D_total = (D_c + D_d) + D_t. That is, the total delay time may be expressed as the sum of the delay time based on the compression rate (i.e., compression and decompression) and the transmission delay time.
  • the compression delay (D_c + D_d) may mean the sum of the processing time of the terminal used for compression (D_c) and the processing time of the base station for reconstructing it (D_d).
  • the transmission delay (D_t) may be the transmission time it takes to transmit all of the model parameters. That is, the total delay time (D_total) can be expressed as the sum of the time required for compression and decompression and the transmission time.
  • propagation delay for a wireless environment or other system delay may not be considered. However, this is only one example, and it may be possible to reflect the above-described configuration.
  • the compression delay may change depending on the compression rate, the performance of the terminal/base station, and the size of the original data to be compressed, and may be determined based on Equation 14 below.
  • D_d may be the delay for restoration (decompression delay).
  • D_c may be the delay for compression (compression delay).
  • CR is the compression ratio and may be the size before compression divided by the size after compression.
  • P_BS may be an index indicating the performance of the base station, and P_UE may be an indicator indicating the performance of the terminal.
  • DS may be the data size. That is, based on Equation 14, the delay for compression (D_c) increases when the compression ratio (CR) is high, the terminal performance (P_UE) is low, and DS is large.
  • likewise, the delay for restoration (D_d) increases when the compression ratio (CR) is high, the base station performance (P_BS) is low, and DS is large.
  • the processing time for compression/decompression may increase as the compression rate increases because it takes a lot of load on the terminal and the base station.
  • as described above, the processing time is also affected by the performance of the terminal and the base station.
  • in addition, the larger the size of the original data to be compressed, the more memory read/write processing and computation are required, so the processing time may increase as well.
  • the transmission delay (D_t) may be affected by the Modulation and Coding Scheme (MCS) set based on the radio channel environment and by the compressed data size (i.e., the data size after a specific compression rate is applied).
  • the transmission time may decrease as the MCS increases.
  • the transmission time may decrease as the compression rate increases, which may be expressed by Equation 15 below, for example in a form such as D_t = (DS / CR) / TP(MCS), where TP(MCS) is the number of bits per second achievable with the selected MCS. Here, MCS_min may be the MCS of the minimum transmission rate.
  • Q_m may be the number of bits per modulation symbol when the MCS index is m.
  • R_m may be the coding rate when the MCS index is m.
  • TP(MCS) may be the number of bits per second for the corresponding modulation order and coding rate. That is, the transmission delay (D_t) may increase when the data size DS is large.
  • the transmission delay (D_t) may also increase when the compression ratio is small and the MCS is low.
  • the total delay (D_total) can be determined according to the compression delay (D_c + D_d) and the transmission delay (D_t). For example, referring to FIG. 29, when the compression rate increases, the compression delay increases and the transmission delay decreases. That is, the compression delay and the transmission delay may respond in opposite directions to the compression ratio (CR).
  • therefore, in order for the terminal and the base station to achieve optimal communication in terms of the total delay (D_total), there is a need to determine the compression ratio (CR), and a method for this is described below.
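  • the trade-off above can be illustrated with the following Python sketch, which searches over candidate compression ratios for the one minimizing the total delay; the functional forms of the delay terms, the constants, and the candidate set are assumptions chosen only to reflect the dependencies stated for Equations 13 to 15, not the equations themselves.

      def compression_delay(cr, ds, p_ue, p_bs, k=1e-6):
          # Assumed form consistent with Equation 14's dependencies: the delay grows with the
          # compression ratio CR and data size DS, and shrinks with the terminal performance
          # (compression side) and the base station performance (restoration side).
          return k * cr * ds / p_ue + k * cr * ds / p_bs

      def transmission_delay(cr, ds, bits_per_second):
          # Assumed form consistent with Equation 15: transmit DS/CR bits at the
          # throughput implied by the selected MCS.
          return (ds / cr) / bits_per_second

      def total_delay(cr, ds, p_ue, p_bs, bits_per_second):
          return compression_delay(cr, ds, p_ue, p_bs) + transmission_delay(cr, ds, bits_per_second)

      # Pick the compression ratio minimizing the total delay for one hypothetical setup.
      candidates = [1.0, 2.0, 4.0, 8.0, 16.0]
      best_cr = min(candidates,
                    key=lambda cr: total_delay(cr, ds=8e6, p_ue=50, p_bs=500, bits_per_second=2e7))
      print(best_cr)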
  • FIG. 30 is a diagram illustrating a method of determining a compression rate and a Modulation Coding Scheme (MCS) for low-latency federated learning applicable to the present disclosure.
  • the total delay (D_total) may be the sum of the compression delay (D_c + D_d) and the transmission delay (D_t).
  • when a compression rate and an MCS that minimize the total delay (D_total) are selected, the overall delay can be minimized.
  • the base station may measure a signal-to-noise ratio (SNR) through a reference signal received from the terminal, and may estimate the MCS and compression ratio values based thereon.
  • the base station may include at least one of an Adaptive Modulation and Coding (AMC) agent 3010, an MCS indication generator 3020, a compression rate predictor 3030, and a compression rate indication generator 3040.
  • the above-described configuration included in the base station may be an example, and the above-described name may also be an example.
  • the base station may include other components that perform the same function as the above-described components.
  • the base station may perform a function based on a configuration having a name different from the above-mentioned name, and is not limited to the above-described embodiment.
  • related content will be described based on the above-described configuration for convenience of description, but may not be limited thereto.
  • the AMC agent 3010 may perform MCS prediction based on SNR information measured from a reference signal received from the terminal. Thereafter, the AMC agent 3010 may transmit information on the predicted MCS to the MCS indication generator 3020. In this case, the MCS indication generator 3020 may generate information indicating the MCS and transmit it to the terminal.
  • the compression rate predictor 3030 may predict the compression ratio based on the MCS predicted by the AMC agent 3010, the original size (DS) of the transmission data, the base station performance index (P_BS), and the terminal performance index (P_UE), which will be described later. Thereafter, the compression ratio predictor 3030 may transmit the generated compression ratio value to the compression ratio indication generator 3040.
  • FIG. 31 is a diagram illustrating a method of determining a compression ratio and an MCS based on the aforementioned FIG. 30.
  • the base station may receive a reference signal from the terminal and measure the SNR based on the received signal (S3110). Then, the base station provides the measured SNR information to the AMC agent, and the AMC agent may predict the MCS suitable for the SNR based on the SNR information (S3120). Then, the compression rate predictor of the base station may predict the compression ratio based on the predicted MCS, the transmission data size (DS), the base station performance index (P_BS), and the terminal performance index (P_UE).
  • thereafter, the base station may cause the indication generators to deliver the predicted MCS and compression rate values to the terminal.
  • that is, the predicted MCS and compression rate values may be transmitted by the base station to the terminal.
  • the AMC agent 3210 may use a state (S) as an input.
  • the state (S) may be expressed as a quantized value of the SNR measured on the uplink reference signal transmitted through the radio channel 3220.
  • the action (A) of the AMC agent 3210 may be defined as the selection of an MCS index m.
  • the AMC agent 3210 may select an MCS that maximizes a reward based on the SNR information of the radio channel.
  • the reward (R) of the AMC agent 3210 may be expressed based on the spectral efficiency, and may be expressed as Equation 16 below, for example in a form such as R = (1 - BLER) · Q_m · v.
  • BLER may be the block error rate.
  • v may be the coding rate, and Q_m may be the number of bits per modulation symbol of the selected MCS.
  • the AMC agent 3210 may be trained to determine the MCS that maximizes the frequency efficiency (Spectral Efficiency) in the radio channel environment 3220 .
  • learning for MCS may be implemented through “Q-Learning,” which is a type of reinforcement learning, but is not limited thereto.
  • the action value (Action-Value) learned by the AMC agent 3210 may be as shown in Equation 17 below, which may take the standard Q-learning form Q(S, A) ← Q(S, A) + α·[R + γ·max_{A'} Q(S', A') - Q(S, A)]. Here, α is the learning rate and γ may be a discount factor.
  • the base station does not use the AMC agent 3210, and it may be possible to link a compression rate predictor with a method of selecting an MCS based on a table form like the existing communication system. That is, the base station may select a corresponding index from the MCS table based on the SNR and link it with the compression rate predictor based on this, and is not limited to the above-described embodiment.
  • the base station may derive an MCS value through learning based on the AMC agent 3210 or may derive an MCS value based on an existing table, and is not limited to the above-described embodiment.
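  • a minimal tabular Q-learning sketch of such an AMC agent is shown below in Python; the quantized-SNR state space, the per-MCS (bits per symbol, coding rate) table, and the BLER model are placeholders invented only to make the loop run, and the reward and update follow the spectral-efficiency and Q-learning forms discussed above.

      import random

      N_SNR_STATES, N_MCS = 16, 8
      alpha, gamma, eps = 0.1, 0.9, 0.1
      Q = [[0.0] * N_MCS for _ in range(N_SNR_STATES)]

      # Hypothetical per-MCS (bits per modulation symbol, coding rate) pairs.
      MCS_TABLE = [(2, 0.3), (2, 0.6), (4, 0.4), (4, 0.6), (6, 0.5), (6, 0.7), (8, 0.6), (8, 0.8)]

      def bler(snr_state, mcs):
          # Placeholder channel model: a higher MCS needs a better SNR to keep the BLER low.
          return min(1.0, max(0.0, (mcs * 2 - snr_state) / 10.0))

      def reward(snr_state, mcs):
          # Spectral-efficiency-style reward in the spirit of Equation 16: (1 - BLER) * Q_m * v.
          q_m, v = MCS_TABLE[mcs]
          return (1.0 - bler(snr_state, mcs)) * q_m * v

      for _ in range(20000):
          s = random.randrange(N_SNR_STATES)              # quantized SNR from the uplink reference signal
          if random.random() < eps:                       # epsilon-greedy action: pick an MCS index
              a = random.randrange(N_MCS)
          else:
              a = max(range(N_MCS), key=lambda m: Q[s][m])
          r = reward(s, a)
          s_next = random.randrange(N_SNR_STATES)         # channel state changes between rounds
          Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])   # Equation 17-style update

      print([max(range(N_MCS), key=lambda m: Q[s][m]) for s in range(N_SNR_STATES)])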
  • the compression rate predictor may calculate the optimal compression ratio in advance for all cases (step by step) of the given conditions (e.g., the MCS, the data size, and the performance indices).
  • the compression rate predictor may calculate the compression rate in the form of a look-up table. That is, the compression rate predictor may perform a complex operation in advance based on a corresponding condition and derive a compression rate through a lookup table corresponding thereto, and through this, the operation may be performed quickly.
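  • the lookup-table approach can be sketched as follows in Python: the heavy computation is done once offline for every combination of the conditions, and the run-time prediction is a single dictionary lookup; the delay model, the condition grids, and the MCS-to-rate mapping are assumptions for illustration.

      from itertools import product

      def total_delay(cr, ds, p_ue, p_bs, bps):
          # Same assumed delay model as the earlier sketch: compression/decompression delay
          # grows with CR and DS, transmission delay shrinks with CR and the MCS rate.
          return 1e-6 * cr * ds * (1.0 / p_ue + 1.0 / p_bs) + (ds / cr) / bps

      MCS_RATE = {0: 1e6, 1: 2e6, 2: 5e6, 3: 1e7}          # assumed MCS index -> bits/s mapping
      CR_STEPS = [1.0, 2.0, 4.0, 8.0, 16.0]

      def optimal_cr(mcs, ds, p_bs, p_ue):
          return min(CR_STEPS, key=lambda cr: total_delay(cr, ds, p_ue, p_bs, MCS_RATE[mcs]))

      # Precompute the optimal compression ratio for every step of the given conditions,
      # then answer queries with a lookup.
      CONDS = product(MCS_RATE.keys(), [1e6, 8e6], [100, 500], [10, 50])
      CR_TABLE = {cond: optimal_cr(*cond) for cond in CONDS}
      print(CR_TABLE[(2, 8e6, 500, 50)])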
  • the compression rate predictor may use artificial intelligence (AI) for more accurate and detailed prediction.
  • the compression rate predictor may perform learning based on reinforcement learning.
  • when reinforcement learning is used, it may be possible to update the predictor in real time.
  • through this, the compression rate predictor can adaptively predict delays occurring outside the given (precomputed) conditions.
  • this is only one example and is not limited to the above-described embodiment.
  • the AMC agent may predict the MCS based on the above description, and the base station may instruct the UE, through the MCS indication generator, to use the predicted MCS.
  • the compression rate predictor may predict the compression ratio through at least one of the MCS predicted by the AMC agent, the original size (DS) of the transmitted data, the base station performance index (P_BS), and the terminal performance index (P_UE). Then, the compression ratio predictor may pass the predicted compression ratio value to the compression ratio indication generator. The compression ratio indication generator may instruct the terminal to transmit by applying the corresponding compression ratio based on the received compression ratio value.
  • FIG. 33 is a diagram illustrating a method of predicting a compression rate to minimize transmission delay based on a fully connected layer applicable to the present disclosure.
  • the compression ratio predictor may predict the compression ratio based on a fully connected layer.
  • the compression rate predictor may use, as training data, the original size (DS) of the transmitted data, the base station performance index (P_BS), the terminal performance index (P_UE), the MCS value, and the optimal compression ratio calculated from these values.
  • all layers in the compression rate predictor may be connected, and through this, learning reflecting each element may be performed.
  • the compression ratio predictor may derive a compression ratio providing the shortest delay as an output value based on the above-described input value.
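  • a small fully-connected-network sketch of such a predictor is given below (assuming PyTorch is available); the layer sizes, training data, and target values are synthetic placeholders, with the labels standing in for the precomputed optimal compression ratios described above.

      import torch
      import torch.nn as nn

      # Inputs: [DS, P_BS, P_UE, MCS index] (normalized); output: predicted compression ratio.
      model = nn.Sequential(
          nn.Linear(4, 32), nn.ReLU(),
          nn.Linear(32, 32), nn.ReLU(),
          nn.Linear(32, 1),
      )
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      # Synthetic training data standing in for (condition, optimal CR) pairs computed offline.
      x = torch.rand(1024, 4)
      y = 1.0 + 15.0 * x[:, 0:1] * x[:, 3:4]   # made-up targets, just to exercise the loop

      for epoch in range(200):
          opt.zero_grad()
          loss = loss_fn(model(x), y)
          loss.backward()
          opt.step()

      # Query the trained predictor for one hypothetical condition.
      print(model(torch.tensor([[0.5, 0.8, 0.2, 0.6]])).item())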
  • FIG. 34 is a diagram illustrating a method of controlling a compression rate to minimize transmission delay applicable to the present disclosure.
  • the compression rate predictor may predict the compression rate based on reinforcement learning. More specifically, referring to FIG. 34 , the compression rate predictor may include at least one of a CR Agent 3410 , a delay calculator 3420 , and a compression rate adjuster 3430 for reinforcement learning.
  • the above-described configuration included in the compression rate predictor may be an example, and the above-described name may also be an example.
  • the compression rate predictor may include other components that perform the same function as the above-described components.
  • the compression rate predictor may perform a function based on a configuration having a name different from the above-mentioned name, and is not limited to the above-described embodiment.
  • related content will be described based on the above-described configuration for convenience of description, but may not be limited thereto.
  • the state (S) of the CR agent 3410 may be defined based on the original data size (DS), the MCS, the base station performance indicator (P_BS), and the terminal performance indicator (P_UE). That is, the CR agent 3410 may take the original data size (DS), the MCS, the base station performance indicator, and the terminal performance indicator into consideration.
  • the action of the CR agent 3410 may serve to increase or decrease the unit step of the compression rate.
  • when the number of adjustable compression ratio steps is n, the output value of the compression ratio adjuster 3430 may be within the range of the first step to the n-th step.
  • the compression ratio adjuster 3430 may increase or decrease the current compression ratio value based on the action of the CR agent 3410 .
  • the compression ratio adjuster 3430 may transmit information on the adjustment value to the CR agent 3410 as a next state value.
  • the delay calculator 3420 may calculate a compensation (reward, R) corresponding to the output value of the compression ratio adjuster, and the compensation may be as shown in Equation 18 below, for example in a form such as R = D_prev - D_curr.
  • here, D_prev may be the delay value at the previous compression rate, and D_curr may be the delay value at the current compression rate.
  • the reward may be increased when the delay due to the action is further reduced. Accordingly, the CR agent 3410 may select an action in the direction of reducing the delay.
  • the compression rate predictor using reinforcement learning may repeatedly calculate the delay value based on the compression rate adjuster 3430 and provide it to the CR agent 3410, and the CR agent 3410 may use the compression ratio (CR) that converges based on the repeated values.
  • the base station may measure the delay based on the difference from the compression and restoration completion time.
  • the compression rate predictor may derive a compression rate value by further reflecting the above-described information, and through this, accuracy may be increased, but the present invention is not limited thereto.
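  • the interplay of the CR agent, delay calculator, and compression rate adjuster can be sketched as the following greedy simplification of the reinforcement-learning loop in Python; the delay model and the alternating trial moves are assumptions, and the reward follows the Equation 18 pattern (previous delay minus current delay).

      CR_STEPS = [1.0, 2.0, 4.0, 8.0, 16.0]            # n adjustable compression-ratio steps

      def delay(cr, ds=8e6, p_ue=50, p_bs=500, bps=2e7):
          # Delay calculator: the same assumed delay model as in the earlier sketches.
          return 1e-6 * cr * ds * (1.0 / p_ue + 1.0 / p_bs) + (ds / cr) / bps

      idx = 0                                           # current position of the compression rate adjuster
      prev_delay = delay(CR_STEPS[idx])

      for step in range(40):
          # CR agent action: alternately try moving the adjuster one unit step up or down.
          trial = min(idx + 1, len(CR_STEPS) - 1) if step % 2 == 0 else max(idx - 1, 0)
          cur_delay = delay(CR_STEPS[trial])            # delay calculator evaluates the new setting
          reward = prev_delay - cur_delay               # Equation 18 pattern: less delay, more reward
          if reward > 0:                                # keep only actions that reduce the delay
              idx, prev_delay = trial, cur_delay        # adjuster output becomes the next state

      print("converged compression ratio:", CR_STEPS[idx])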
  • FIG. 35 is a diagram illustrating a flow for a method of controlling a compression rate and MCS to minimize transmission delay applicable to the present disclosure.
  • the terminals 3510, 3520, and 3530 may each transmit their performance indicators (P_UE) to the base station.
  • as an example, the base station 3540 may transmit a message requesting transmission of a terminal performance indicator to each of the terminals 3510, 3520, and 3530.
  • each of the terminals 3510, 3520, and 3530 may transmit its performance indicator (P_UE) to the base station 3540 based on the request message received from the base station.
  • the base station 3540 may transmit a global model to each of the terminals.
  • each of the terminals 3510, 3520, and 3530 may receive the global model from the base station 3540 and perform local learning based on it.
  • each of the terminals 3510 , 3520 , and 3530 may prepare to transmit the model parameter.
  • each of the terminals 3510 , 3520 , and 3530 may transmit each reference signal to the base station.
  • the base station 3540 may measure each SNR based on each reference signal received from each of the terminals 3510 , 3520 , and 3530 .
  • the base station 3540 may determine the MCS based on the SNR based on the above description.
  • the MCS may be determined through learning based on artificial intelligence.
  • the MCS may be determined based on a table value like the existing communication system, and is not limited to the above-described embodiment.
  • the base station 3540 may predict the compression ratio for low delay by using the determined MCS, the base station performance indicator (P_BS), the performance index of each terminal (P_UE), and the original data size (DS).
  • the base station 3540 may predict the compression ratio based on the pre-combination method as shown in FIG. 33 or the reinforcement learning method as shown in FIG. 34 .
  • as a specific example, the base station may derive the compression ratio based on the compression ratio adjuster and the delay calculator as in FIG. 34 described above.
  • the base station 3540 may transmit an operation instruction to each of the terminals 3510 , 3520 , and 3530 .
  • each operation instruction may include a compression rate and MCS information for the corresponding terminal.
  • each of the terminals 3510 , 3520 , and 3530 may perform compression on data based on the received information.
  • thereafter, each of the terminals 3510, 3520, and 3530 may transmit its compressed data to the base station 3540.
  • the base station 3540 may generate and update global model parameters based on the received data. Thereafter, the base station 3540 may transmit the updated global model parameter to the respective terminals 3510 , 3520 , and 3530 .
  • the terminal performance index (P_UE) and the base station performance indicator (P_BS) may be indices indicating performance related to the compression capability of the terminal and the base station.
  • the performance index of the terminal and the base station may be defined as a standardized score according to a benchmarking tool or a product specification (CPU, GPU, Memory), but is not limited thereto.
  • the base station may deliver a benchmarking tool to the terminal.
  • the terminal may perform measurement based on the received benchmarking tool, and transmit the measured score to the base station.
  • the base station may check the terminal performance indicator based on the measured score, but this is only an example and is not limited to the above-described embodiment.
  • FIGS. 36 and 37 may be diagrams illustrating a method of controlling a compression rate to minimize transmission capacity.
  • the above-described compression rate predictor may be designed to minimize transmission capacity.
  • that is, when the compression rate is increased, the transmission capacity can be minimized.
  • however, when the compression rate for data is high, data loss may increase. Therefore, a method for maximally increasing the compression ratio within a target compression loss value may be required, and the compression rate predictor may be implemented based on this.
  • the compression rate predictor may include at least one of a CR agent 3610, a comparator 3620, and a compression ratio adjuster 3630.
  • the above-described configuration included in the compression rate predictor may be an example, and the above-described name may also be an example.
  • the compression rate predictor may include other components that perform the same function as the above-described components.
  • the compression rate predictor may perform a function based on a configuration having a name different from the above-mentioned name, and is not limited to the above-described embodiment.
  • related content will be described based on the above-described configuration for convenience of description, but may not be limited thereto.
  • the state (S) of the CR agent 3610 may be defined based on the compression ratio (CR) and the number of users (n). That is, the CR agent 3610 may perform learning in consideration of the compression rate and the number of users (the number of terminals), which will be described later. Also, as an example, the action of the CR agent 3610 may serve to increase or decrease the unit step of the compression rate.
  • the output value of the compression ratio adjuster 3630 may be within the range of the adjustable compression ratio steps. The compression ratio adjuster 3630 may increase or decrease the current compression ratio value based on the action of the CR agent 3610. Thereafter, the compression rate adjuster 3630 may transmit information on the adjustment value to the CR agent 3610 as a next state value.
  • the comparator 3620 may calculate a compensation (reward, R) corresponding to the output value of the compression ratio adjuster, and the compensation may be as shown in Equation 19 below, for example in a form that increases with the current compression ratio and with the margin between the target compression loss ratio (L_target) and the current compression loss ratio (L_curr).
  • here, CR may be the current compression ratio (i.e., an output value of the compression ratio adjuster).
  • L_target may be the target compression loss ratio, and L_curr may be the current compression loss ratio.
  • the compensation may be increased when the compression loss rate due to the action is further reduced. Accordingly, the CR agent 3610 may select an action in the direction of reducing the compression loss rate.
  • the compression rate predictor using reinforcement learning may repeatedly calculate the compression loss rate based on the compression rate adjuster 3630 and provide it to the CR agent 3610, and the CR agent 3610 may use the compression ratio (CR) that converges based on the repeated values.
  • a loss rate caused by data compression may decrease as the number of users increases. That is, the smaller the number of users, the greater the effect on the compression loss rate may be.
  • accordingly, the state of the CR agent 3610 may be set by considering the compression ratio (CR) and the number of users (n).
  • when the number of users is small, the compression loss rate may greatly affect each user.
  • on the other hand, when the number of users is large, the compression loss ratio may not have a large effect on each user.
  • for example, a different part of the data may be compressed (and lost) for each user.
  • therefore, when the number of users (n) is small, each user may have a large influence on the overall compression loss ratio.
  • conversely, when the number of users (n) is large, each user may have a small influence on the overall compression loss ratio.
  • the state of the CR agent 3610 may consider the compression ratio (CR) and the number of users (n).
  • in Equation 19 above, when the number of users increases, the target compression loss ratio (L_target) can be set high. Through this, the compensation may be designed so that the current compression ratio (i.e., the output value of the compression ratio adjuster) increases.
  • that is, the base station may check the number of users (or the number of terminals) and determine the target compression loss rate (L_target) based thereon.
  • as an example, the target compression loss ratio corresponding to each number of users may be preset in advance. The base station may check the number of users and use the corresponding target compression loss ratio (L_target) value, and is not limited to the above-described embodiment. That is, the compression rate predictor may learn the compression rate value in consideration of the number of users.
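  • the capacity-minimizing control described above can be sketched in Python as follows; the per-step compression loss model, the way the target loss grows with the number of users, and the exact reward expression are assumptions that only mirror the Equation 19 idea (favor a larger compression ratio while the loss stays under the target), so the converged ratio grows with the user count.

      CR_STEPS = [1.0, 2.0, 4.0, 8.0, 16.0]          # adjustable compression-ratio steps

      def compression_loss(cr):
          # Assumed loss model: a higher compression ratio causes a higher parameter loss rate.
          return 0.01 * (cr - 1.0)

      def target_loss(n_users):
          # Target compression loss ratio, set higher as the number of users n grows.
          return 0.02 + 0.01 * n_users

      def reward(cr, n_users):
          # Equation 19 idea (assumed form): favor a large CR while the current
          # compression loss stays under the target loss for this number of users.
          return cr * (target_loss(n_users) - compression_loss(cr))

      def converge_cr(n_users, steps=20):
          idx = 0                                      # adjuster starts at the lowest step
          for _ in range(steps):
              candidates = {max(idx - 1, 0), idx, min(idx + 1, len(CR_STEPS) - 1)}
              # Greedy stand-in for the CR agent: move one unit step (or stay) toward larger reward.
              idx = max(candidates, key=lambda i: reward(CR_STEPS[i], n_users))
          return CR_STEPS[idx]

      # More users -> a higher target loss is tolerated -> a larger converged compression ratio.
      print(converge_cr(n_users=2), converge_cr(n_users=20))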
  • FIG. 37 is a diagram illustrating a flow for a method of controlling a compression rate and MCS to minimize a transmission capacity applicable to the present disclosure.
  • the base station 3740 may transmit the global model g_pre to the respective terminals 3710 , 3720 , and 3730 .
  • each of the terminals 3710, 3720, and 3730 may receive the global model (g_pre) and perform local learning based on it.
  • each of the terminals 3710, 3720, and 3730 may prepare to transmit the model parameter.
  • each of the terminals 3710 , 3720 , and 3730 may transmit each reference signal to the base station.
  • the base station 3740 may measure each SNR based on each reference signal received from each of the terminals 3710 , 3720 , and 3730 .
  • the base station 3740 may determine the MCS based on the SNR based on the above description.
  • the base station 3740 may further confirm the number of terminals (or the number of users). As a specific example, the base station 3740 may check the number of terminals connected to the base station 3740 and use it as the above-described number of users (n). As another example, when the base station 3740 receives a reference signal from each of the terminals 3710, 3720, and 3730, the base station 3740 identifies the terminal from which the reference signal is received, and based on this, the number of users ( n) the value can be determined. For example, when the base station 3740 does not receive the reference signal transmitted by a specific terminal, the base station 3740 may exclude the corresponding terminal when counting the number of users n described above. That is, the base station 3740 may receive a reference signal from each of the terminals 3710 , 3720 , and 3730 to measure the SNR as well as determine the number of users n, and is not limited to the above-described embodiment.
  • the MCS may be determined through learning based on artificial intelligence.
  • the MCS may be determined based on a table value like the existing communication system, and is not limited to the above-described embodiment.
  • the base station 3740 may predict a compression rate that minimizes the transmission capacity by using at least one of the determined MCS and the number of users.
  • the base station 3740 may predict the compression ratio based on the reinforcement learning method shown in FIG. 36 described above.
  • as a specific example, the base station may derive the compression ratio based on the compression ratio adjuster and the comparator, and is not limited to the above-described embodiment.
  • thereafter, the base station 3740 may transmit an operation instruction to each of the terminals 3710, 3720, and 3730, and each operation instruction may include a compression rate and MCS information for the corresponding terminal.
  • each of the terminals 3710, 3720, and 3730 may perform compression on data based on the received information. After that, each of the terminals 3710, 3720, and 3730 may transmit its compressed data to the base station 3740.
  • the base station 3740 may generate and update global model parameters based on the received data. Thereafter, the base station 3740 may transmit the updated global model parameter to the respective terminals 3710 , 3720 , and 3730 .
  • in this case, the request and response operations for the terminal performance index (P_UE) may not be included, and unnecessary signal exchange may be omitted through this.
  • in addition, since the compression rate can be determined based on the number of users (or the number of terminals), the prediction operation of the base station can be performed simply. For example, when the number of users (or the number of terminals) increases, the influence of the per-terminal compression loss ratio can be reduced. Accordingly, as the number of users increases, more compression can be performed, and a compression rate that minimizes the transmission capacity can be determined based on this, as described above.
  • FIG. 38 is a diagram illustrating a method of operating a base station applicable to the present disclosure.
  • the base station may transmit the first global parameter to a plurality of terminals.
  • more specifically, the base station may transmit a global parameter in consideration of federated learning to a plurality of terminals, as described above.
  • the base station may receive each reference signal from a plurality of terminals.
  • the base station may measure a signal-to-noise ratio (SNR) based on each received reference signal.
  • the base station may determine the compression ratio and MCS based on the measured SNR.
  • the base station may determine the MCS from the measured SNR based on a preset MCS table.
  • the base station may determine the MCS from the measured SNR based on reinforcement learning.
  • reinforcement learning may use a reward in consideration of spectral efficiency and the measured SNR as input values, and through this, the MCS may be derived as an output value. That is, the base station may determine the MCS through reinforcement learning.
  • the base station may determine a compression rate based on the determined MCS.
  • the base station may determine a compression rate that minimizes transmission delay.
  • more specifically, the base station may determine the compression rate based on at least one of the determined MCS, the original data size (Data Size, DS), the terminal performance (P_UE), and the base station performance (P_BS), and through this, the transmission delay can be minimized.
  • as an example, the base station may determine a compression rate at which the transmission delay is minimized based on a fully connected layer scheme, which may be as shown in FIG. 33.
  • the base station may determine a compression rate at which transmission delay is minimized based on reinforcement learning.
  • the reinforcement learning may use a reward in consideration of the delay, and may use the determined MCS, the original data size (Data Size, DS), the terminal performance (P_UE), and the base station performance (P_BS) as input values.
  • the base station can determine a compression rate that minimizes the transmission delay, which may be as shown in FIG. 34 .
  • in order to predict the compression rate as described above, the base station may send a message requesting terminal capability (P_UE) information to the plurality of terminals. Thereafter, the base station may receive the respective terminal capability (P_UE) information. Through this, the base station can determine the compression ratio as described above.
  • the base station may determine a compression rate that minimizes the transmission capacity.
  • the base station may determine the MCS through the measured SNR, as described above. Thereafter, the base station may determine the compression rate in consideration of the number of the plurality of terminals.
  • for example, when the number of the plurality of terminals is small, the compression loss rate may have a large effect.
  • on the other hand, when the number of the plurality of terminals is large, the influence of the compression loss rate may be small, as described above.
  • the base station may determine a compression rate at which the transmission capacity is minimized based on reinforcement learning.
  • the compensation may be determined in consideration of the target compression loss rate, which may be as shown in FIG. 36 .
  • reinforcement learning may use a compensation in consideration of a target compression loss rate and the number of the plurality of terminals as input values.
  • the base station may determine the compression rate based on the input value.
  • the target compression loss ratio may be set differently based on the number of a plurality of terminals. For example, the target compression loss rate may be set higher as the number of the plurality of terminals increases.
  • as described above, the base station may perform the operation of checking the number of terminals in order to determine the compression rate that minimizes the transmission capacity.
  • the base station may indicate to each of the plurality of terminals information on the compression ratio and MCS determined based on the above description.
  • thereafter, the base station may receive data from the plurality of terminals based on the determined MCS and compression ratio.
  • the base station may update the first global parameter to the second global parameter based on data received from each of the plurality of terminals.
  • the base station may transmit the updated global parameter to a plurality of terminals.
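  • the overall base station operation of FIG. 38 can be summarized by the following schematic Python sketch of one federated round; the SNR-to-MCS mapping, the delay model inside the compression rate choice, the helper names, and the omission of actual compression/decompression are all simplifying assumptions.

      import numpy as np

      def snr_to_mcs(snr_db):
          # Assumed table-style mapping from the measured SNR to an MCS index.
          thresholds = [0, 5, 10, 15, 20]
          return sum(snr_db >= t for t in thresholds) - 1

      def choose_cr(mcs, ds, p_ue, p_bs):
          # Stand-in for the compression rate predictor (delay-minimizing variant).
          rates = {0: 1e6, 1: 2e6, 2: 5e6, 3: 1e7, 4: 2e7}
          def total_delay(cr):
              return 1e-6 * cr * ds * (1.0 / p_ue + 1.0 / p_bs) + (ds / cr) / rates[mcs]
          return min([1.0, 2.0, 4.0, 8.0, 16.0], key=total_delay)

      def federated_round(global_param, terminals):
          # 1) send the first global parameter; 2) receive reference signals and measure SNR;
          # 3) determine MCS and compression ratio; 4) indicate them; 5) receive data; 6) update.
          received = []
          for t in terminals:
              local = t["train"](global_param)                  # local learning at the terminal
              mcs = snr_to_mcs(t["snr_db"])                     # from the terminal's reference signal
              cr = choose_cr(mcs, ds=local.size * 32, p_ue=t["p_ue"], p_bs=500)
              received.append(local)                            # compression/decompression omitted here
          return np.mean(received, axis=0)                      # second (updated) global parameter

      terminals = [{"snr_db": s, "p_ue": p, "train": lambda g: g + np.random.randn(*g.shape) * 0.1}
                   for s, p in [(3, 10), (12, 50), (22, 100)]]
      print(federated_round(np.zeros(8), terminals).shape)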
  • since examples of the above-described proposed methods may also be included as one of the implementation methods of the present disclosure, it is clear that they may be regarded as a kind of proposed method.
  • the above-described proposed methods may be implemented independently, or may be implemented in the form of a combination (or merge) of some of the proposed methods.
  • Rules can be defined so that the base station informs the terminal of whether the proposed methods are applied (or information about the rules of the proposed methods) through a predefined signal (e.g., a physical layer signal or a higher layer signal).
  • Embodiments of the present disclosure may be applied to various wireless access systems.
  • examples of the various radio access systems include a 3rd Generation Partnership Project (3GPP) system and a 3GPP2 system.
  • Embodiments of the present disclosure may be applied not only to the various radio access systems, but also to all technical fields to which the various radio access systems are applied. Furthermore, the proposed method can be applied to mmWave and THz communication systems using very high frequency bands.
  • in addition, embodiments of the present disclosure may be applied to various applications such as autonomous vehicles and drones.

Abstract

Disclosed are a method by which a terminal and a base station transmit/receive signals in a wireless communication system, and an apparatus supporting the same. According to an embodiment applicable to the present invention, a method by which a base station transmits a signal may comprise the steps of: transmitting a first global parameter to a plurality of terminals; receiving respective reference signals from the plurality of terminals; measuring signal-to-noise ratios (SNRs) on the basis of the received respective reference signals and determining a compression rate and a modulation coding scheme (MCS) on the basis of the measured SNRs; indicating information about the determined compression rate and MCS to each of the plurality of terminals; receiving data from the plurality of terminals on the basis of the determined compression rate and MCS; and updating the first global parameter into a second global parameter on the basis of the data received from the plurality of terminals.
PCT/KR2020/011234 2020-08-24 2020-08-24 Procédé par lequel un terminal et une station de base émettent/reçoivent des signaux dans un système de communication sans fil, et appareil WO2022045377A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020227034789A KR20230056622A (ko) 2020-08-24 2020-08-24 무선 통신 시스템에서 단말 및 기지국의 신호 송수신 방법 및 장치
PCT/KR2020/011234 WO2022045377A1 (fr) 2020-08-24 2020-08-24 Procédé par lequel un terminal et une station de base émettent/reçoivent des signaux dans un système de communication sans fil, et appareil

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/011234 WO2022045377A1 (fr) 2020-08-24 2020-08-24 Procédé par lequel un terminal et une station de base émettent/reçoivent des signaux dans un système de communication sans fil, et appareil

Publications (1)

Publication Number Publication Date
WO2022045377A1 true WO2022045377A1 (fr) 2022-03-03

Family

ID=80353489

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/011234 WO2022045377A1 (fr) 2020-08-24 2020-08-24 Procédé par lequel un terminal et une station de base émettent/reçoivent des signaux dans un système de communication sans fil, et appareil

Country Status (2)

Country Link
KR (1) KR20230056622A (fr)
WO (1) WO2022045377A1 (fr)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101127498B1 (ko) * 2004-10-20 2012-03-23 코닌클리케 필립스 일렉트로닉스 엔.브이. 비코닝 프로토콜을 사용하여 데이터 속도와 송신 전력을 동적으로 적응시키는 시스템 및 방법
KR101314611B1 (ko) * 2007-01-30 2013-10-07 엘지전자 주식회사 주파수 선택성에 따른 mcs 인덱스 선택 방법, 장치, 및이를 위한 통신 시스템
KR101428921B1 (ko) * 2013-04-12 2014-09-25 한국과학기술원 다중 라디오 환경에서 기계학습을 이용한 적응형 전송 방법 및 장치
WO2017153891A1 (fr) * 2016-03-07 2017-09-14 Neptune Computer Inc. Procédé et système pour un ordinateur d'interface avec des dispositifs périphériques connectés sans fil
KR20180034558A (ko) * 2015-07-27 2018-04-04 후아웨이 테크놀러지 컴퍼니 리미티드 비승인 다중 액세스 시스템에서의 링크 적응


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200293896A1 (en) * 2019-03-12 2020-09-17 Samsung Electronics Co., Ltd. Multiple-input multiple-output (mimo) detector selection using neural network
US11625610B2 (en) * 2019-03-12 2023-04-11 Samsung Electronics Co., Ltd Multiple-input multiple-output (MIMO) detector selection using neural network

Also Published As

Publication number Publication date
KR20230056622A (ko) 2023-04-27

Similar Documents

Publication Publication Date Title
WO2021112360A1 (fr) Procédé et dispositif d'estimation de canal dans un système de communication sans fil
WO2022050432A1 (fr) Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2021256584A1 (fr) Procédé d'émission ou de réception de données dans un système de communication sans fil et appareil associé
WO2022045399A1 (fr) Procédé d'apprentissage fédéré basé sur une transmission de poids sélective et terminal associé
WO2022019352A1 (fr) Procédé et appareil de transmission et de réception de signal pour un terminal et une station de base dans un système de communication sans fil
WO2022014751A1 (fr) Procédé et appareil de génération de mots uniques pour estimation de canal dans le domaine fréquentiel dans un système de communication sans fil
WO2022014732A1 (fr) Procédé et appareil d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2021251523A1 (fr) Procédé et dispositif permettant à un ue et à une station de base d'émettre et de recevoir un signal dans un système de communication sans fil
WO2021251511A1 (fr) Procédé d'émission/réception de signal de liaison montante de bande de fréquences haute dans un système de communication sans fil, et dispositif associé
WO2022045377A1 (fr) Procédé par lequel un terminal et une station de base émettent/reçoivent des signaux dans un système de communication sans fil, et appareil
WO2022014735A1 (fr) Procédé et dispositif permettant à un terminal et une station de base de transmettre et recevoir des signaux dans un système de communication sans fil
WO2022004914A1 (fr) Procédé et appareil d'emission et de réception de signaux d'un équipement utilisateur et station de base dans un système de communication sans fil
WO2022097774A1 (fr) Procédé et dispositif pour la réalisation d'une rétroaction par un terminal et une station de base dans un système de communication sans fil
WO2022014728A1 (fr) Procédé et appareil pour effectuer un codage de canal par un équipement utilisateur et une station de base dans un système de communication sans fil
WO2022054980A1 (fr) Procédé de codage et structure de codeur de réseau neuronal utilisables dans un système de communication sans fil
WO2022045402A1 (fr) Procédé et dispositif permettant à un terminal et une station de base d'émettre et recevoir un signal dans un système de communication sans fil
WO2022004927A1 (fr) Procédé d'émission ou de réception de signal avec un codeur automatique dans un système de communication sans fil et appareil associé
WO2022119021A1 (fr) Procédé et dispositif d'adaptation d'un système basé sur une classe d'apprentissage à la technologie ai mimo
WO2022039287A1 (fr) Procédé permettant à un équipement utilisateur et à une station de base de transmettre/recevoir des signaux dans un système de communication sans fil, et appareil
WO2022050434A1 (fr) Procédé et appareil pour effectuer un transfert intercellulaire dans système de communication sans fil
WO2021261611A1 (fr) Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2022080530A1 (fr) Procédé et dispositif pour émettre et recevoir des signaux en utilisant de multiples antennes dans un système de communication sans fil
WO2021256585A1 (fr) Procédé et dispositif pour la transmission/la réception d'un signal dans un système de communication sans fil
WO2022054981A1 (fr) Procédé et dispositif d'exécution d'apprentissage fédéré par compression
WO2022045390A1 (fr) Procédé et appareil pour la mise en œuvre d'un codage de canal au moyen d'un terminal et d'une station de base dans un système de communication sans fil

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20951611

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20951611

Country of ref document: EP

Kind code of ref document: A1