WO2022030664A1 - Communication method based on the similarity of spatial information between frequency bands for a channel in a wireless communication system, and apparatus therefor - Google Patents


Info

Publication number
WO2022030664A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
base station
similarity
channel
communication
Application number
PCT/KR2020/010323
Other languages
English (en)
Korean (ko)
Inventor
김재환
박성호
김수남
홍성룡
Original Assignee
LG Electronics Inc. (엘지전자 주식회사)
Application filed by LG Electronics Inc. (엘지전자 주식회사)
Priority to PCT/KR2020/010323
Priority to KR1020237007016A (KR20230049664A)
Publication of WO2022030664A1

Classifications

    • H04B 7/0404: Diversity systems; multi-antenna systems using two or more spaced independent antennas, the mobile station comprising multiple antennas, e.g. to provide uplink diversity
    • H04B 7/0408: Diversity systems; multi-antenna systems using two or more spaced independent antennas, using two or more beams, i.e. beam diversity
    • H04B 7/0413: MIMO systems
    • H04B 7/0617: Multi-antenna transmission at the transmitting station using simultaneous transmission of weighted versions of the same signal for beam forming
    • H04B 7/0695: Hybrid systems, i.e. switching and simultaneous transmission, using beam selection
    • H04B 17/12: Monitoring; testing of transmitters for calibration of transmit antennas, e.g. of the amplitude or phase
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • H04L 25/0254: Baseband systems; channel estimation using neural network algorithms

Definitions

  • The present specification relates to a communication method based on the similarity of channel spatial information between frequency bands in a wireless communication system, and an apparatus therefor, and more particularly, to a communication method based on the similarity of channel spatial information between frequency bands that can reduce the overhead caused by channel estimation and beam training, and an apparatus therefor.
  • a wireless communication system is a multiple access system that can support communication with multiple users by sharing available system resources (bandwidth, transmission power, etc.).
  • Examples of the multiple access system include a Code Division Multiple Access (CDMA) system, a Frequency Division Multiple Access (FDMA) system, a Time Division Multiple Access (TDMA) system, a Space Division Multiple Access (SDMA), or an Orthogonal Frequency Division Multiple Access (OFDMA) system.
  • CDMA Code Division Multiple Access
  • FDMA Frequency Division Multiple Access
  • TDMA Time Division Multiple Access
  • SDMA Space Division Multiple Access
  • OFDMA Orthogonal Frequency Division Multiple Access
  • SC-FDMA Single Carrier Frequency Division Multiple Access
  • IDMA Interleave Division Multiple Access
  • The present specification aims to provide a communication method based on the similarity of channel spatial information between frequency bands in a wireless communication system, and an apparatus therefor, that reduce the amount of computation and the beam training overhead by obtaining signal information in a low band served by a small number of antennas and reusing it in a high-band system.
  • According to one aspect of the present specification, a method by which a terminal holding first control information communicates, in a wireless communication system, based on the similarity of channel spatial information between frequency bands includes: receiving second control information from a base station; if the computational capability of the base station related to the similarity of the channel spatial information is confirmed based on at least one of the first control information and the second control information, requesting the base station to calculate the similarity; and, upon receiving a response to the request, performing at least one of beam training and channel estimation with an algorithm selected based on the response.
  • the first control information may be information related to the antenna arrangement of the terminal.
  • the information related to the antenna arrangement of the terminal may include one or more of first terminal information on whether the antenna arrays can be aligned, second terminal information on RF calibration, or third terminal information that is a reference value for determining similarity.
  • the second control information may be information related to an antenna arrangement of the base station.
  • The information related to the antenna arrangement of the base station may include one or more of (i) first base station information on the computational capability for spatial information similarity, (ii) second base station information on whether the antenna arrays can be aligned, (iii) third base station information on RF calibration, and (iv) fourth base station information that is a reference value for determining the similarity.
  • The operation of requesting the calculation of the similarity is performed based on one or more preset conditions on the computational capability of the base station. The one or more preset conditions may include one or more of (i) a first condition requiring that the operation supported by the base station, indicated by the information on the computational capability for spatial information similarity, coincide with the operation expected by the terminal, (ii) a second condition requiring that the base station be able to align its antenna arrays, based on the information on whether the antenna arrays can be aligned, and (iii) a third condition requiring that the base station be RF-calibrated, based on the information on RF calibration.
  • The operation may include a CMD operation based on a covariance matrix or an estimation operation based on AoA (angle of arrival).
  • the response to the request may include a parameter for the similarity of the channel spatial information as a result of the operation.
  • the parameter for the similarity may be implemented based on 1 bit.
  • The response includes a result of determining the similarity of the channel spatial information, and the beam training is performed in a high frequency band based on information on the direction of a beam used in a low frequency band if the channel spatial information is similar.
  • The performing of the beam training may include selecting an adjacent beam based on the beam used in the low frequency band if the channel spatial information is similar, and performing beam training of the high frequency band by giving the adjacent beam priority in the beam search, wherein the adjacent beam may be a beam, among a plurality of preset directional beams, that lies within a preset range adjacent to the beam used in the low frequency band.
  • The response includes a result of determining the similarity of the channel spatial information, and the performing of the beam training may include performing beam training in the high frequency band using all of a plurality of preset beam directions when the channel spatial information is dissimilar. A short sketch of both cases is given below.
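  • The following Python sketch illustrates the two beam-training behaviors described above. It is a minimal illustration rather than the claimed procedure: the beam codebook, the angular spacing, and the helper names (beam_search_order, max_offset_deg) are assumptions introduced for the example.

```python
import numpy as np

def beam_search_order(codebook_deg, low_band_beam_deg, similar, max_offset_deg=15.0):
    """Return the order in which high-band beams are probed.

    If the low-band and high-band channel spatial information is similar,
    beams adjacent to the low-band beam direction (within max_offset_deg) are
    given priority; otherwise all preset beam directions are searched.
    """
    codebook_deg = np.asarray(codebook_deg, dtype=float)
    if not similar:
        # dissimilar: exhaustive search over all preset beam directions
        return list(range(len(codebook_deg)))
    offsets = np.abs(codebook_deg - low_band_beam_deg)
    near = np.where(offsets <= max_offset_deg)[0]
    far = np.where(offsets > max_offset_deg)[0]
    # adjacent beams first (closest first); remaining beams kept as a fallback
    return list(near[np.argsort(offsets[near])]) + list(far[np.argsort(offsets[far])])

# toy usage: 16-beam codebook spanning -60..60 degrees, low-band beam at 30 degrees
codebook = np.linspace(-60, 60, 16)
print(beam_search_order(codebook, low_band_beam_deg=30.0, similar=True))
```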
  • The performing of the channel estimation may include performing channel estimation on the signal of the high frequency band based on the channel spatial information measured in the low frequency band.
  • According to another aspect, a method by which a terminal holding first control information communicates, in a wireless communication system, based on the similarity of channel spatial information between frequency bands includes: transmitting, to a base station, a first message including the first control information; receiving a first response to the first message; when the computational capability of the base station related to the similarity of the channel spatial information is confirmed based on the first response, transmitting a second message requesting the calculation of the similarity to the base station; and, upon receiving a second response to the second message, performing at least one of beam training and channel estimation with an algorithm selected based on the second response, wherein the first message may be a message requesting confirmation of the computational capability of the base station related to the similarity of the channel spatial information, based on at least one of the first control information and second control information of the base station.
  • According to another aspect, a terminal includes one or more transceivers, one or more processors, and one or more memories connected to the one or more processors and storing first control information and instructions, wherein the instructions, when executed by the one or more processors, cause the one or more processors to support operations for intelligent beam prediction, the operations including: receiving second control information from a base station; if the computational capability of the base station related to the similarity of channel spatial information is confirmed based on at least one of the first control information and the second control information, requesting the base station to calculate the similarity; and, upon receiving a response to the request, performing at least one of beam training and channel estimation with an algorithm selected based on the response.
  • According to the present specification, the amount of computation and the beam training overhead can be reduced by reusing, in a high-band system, signal information of a low band served by a small number of antennas.
  • the time required for beam search can be reduced by using the direction information of the directional beam used in the low band.
  • FIG. 1 illustrates physical channels and general signal transmission used in a 3GPP system.
  • FIG. 2 is a diagram illustrating an example of a communication structure that can be provided in a 6G system.
  • FIG. 5 illustrates a deep neural network structure.
  • FIG. 6 illustrates a convolutional neural network structure.
  • FIG. 7 illustrates a filter operation in a convolutional neural network.
  • FIG. 8 illustrates a neural network structure in which a cyclic loop exists.
  • FIG. 9 illustrates the operational structure of a recurrent neural network.
  • FIG. 11 shows an example of a THz communication application.
  • FIG. 12 illustrates a sequence of a communication method based on the similarity of channel spatial information between frequency bands according to various embodiments of the present specification.
  • FIG. 13 illustrates an antenna array for providing low-band and high-band communication services applied to various embodiments of the present specification.
  • FIG. 14 is an exemplary diagram for estimation of a spatial correlation matrix according to some embodiments of the present specification.
  • FIG. 15 illustrates a sequence of a communication method based on the similarity of channel spatial information between frequency bands according to some embodiments of the present specification.
  • FIG. 16 illustrates a sequence of a communication method based on the similarity of channel spatial information between frequency bands according to some other embodiments of the present specification.
  • FIG. 17 illustrates a communication system applied to the present invention.
  • FIG. 19 illustrates a signal processing circuit for a transmission signal.
  • FIG. 20 shows another example of a wireless device to which the present invention is applied.
  • FIG. 21 illustrates a portable device to which the present invention is applied.
  • FIG. 22 illustrates a vehicle or an autonomous driving vehicle to which the present invention is applied.
  • FIG. 25 illustrates a robot applied to the present invention.
  • FIG. 26 illustrates an AI device applied to the present invention.
  • CDMA may be implemented with a radio technology such as Universal Terrestrial Radio Access (UTRA) or CDMA2000.
  • TDMA may be implemented with a radio technology such as Global System for Mobile communications (GSM)/General Packet Radio Service (GPRS)/Enhanced Data Rates for GSM Evolution (EDGE).
  • GSM Global System for Mobile communications
  • GPRS General Packet Radio Service
  • EDGE Enhanced Data Rates for GSM Evolution
  • OFDMA may be implemented with a radio technology such as IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Evolved UTRA (E-UTRA), and the like.
  • UTRA is part of the Universal Mobile Telecommunications System (UMTS).
  • 3GPP 3rd Generation Partnership Project
  • Long Term Evolution (LTE) is a part of Evolved UMTS (E-UMTS) using E-UTRA, and LTE-A (Advanced)/LTE-A Pro is an evolved version of 3GPP LTE.
  • 3GPP NR New Radio or New Radio Access Technology
  • 3GPP 6G may be an evolved version of 3GPP NR.
  • LTE refers to technology after 3GPP TS 36.xxx Release 8.
  • LTE technology after 3GPP TS 36.xxx Release 10 is referred to as LTE-A
  • LTE technology after 3GPP TS 36.xxx Release 13 is referred to as LTE-A pro
  • 3GPP NR refers to technology after 3GPP TS 38.xxx.
  • 3GPP 6G may refer to technology after TS Release 17 and/or Release 18.
  • xxx stands for standard document detail number.
  • LTE/NR/6G may be collectively referred to as a 3GPP system.
  • For terms, abbreviations, and the like used in the description of the present invention, reference may be made to matters described in standard documents published before the present invention. For example, the following documents may be referred to:
  • RRC Radio Resource Control
  • a terminal receives information through a downlink (DL) from a base station, and the terminal transmits information through an uplink (UL) to the base station.
  • Information transmitted and received between the base station and the terminal includes data and various control information, and various physical channels exist according to the type/use of the information they transmit and receive.
  • When the terminal is powered on or newly enters a cell, the terminal performs an initial cell search operation such as synchronizing with the base station (S11). To this end, the terminal receives a primary synchronization signal (PSS) and a secondary synchronization signal (SSS) from the base station, synchronizes with the base station, and obtains information such as a cell ID. Thereafter, the terminal may receive a physical broadcast channel (PBCH) from the base station to obtain broadcast information within the cell. Meanwhile, the terminal may receive a downlink reference signal (DL RS) in the initial cell search step to check the downlink channel state.
  • PSS primary synchronization signal
  • SSS secondary synchronization signal
  • PBCH physical broadcast channel
  • DL RS downlink reference signal
  • After the initial cell search, the UE may receive a Physical Downlink Control Channel (PDCCH) and a Physical Downlink Shared Channel (PDSCH) according to information carried on the PDCCH to obtain more specific system information (S12).
  • PDCCH Physical Downlink Control Channel
  • PDSCH Physical Downlink Shared Channel
  • the terminal may perform a random access procedure (RACH) for the base station (S13 to S16).
  • RACH Random Access procedure
  • In the random access procedure, the UE transmits a specific sequence as a preamble through a Physical Random Access Channel (PRACH) (S13 and S15), and receives a Random Access Response (RAR) message to the preamble through the PDCCH and the corresponding PDSCH.
  • PRACH Physical Random Access Channel
  • RAR Random Access Response
  • a contention resolution procedure may be additionally performed (S16).
  • After performing the above procedure, the UE may perform PDCCH/PDSCH reception (S17) and Physical Uplink Shared Channel (PUSCH)/Physical Uplink Control Channel (PUCCH) transmission (S18) as a general uplink/downlink signal transmission procedure.
  • the UE may receive downlink control information (DCI) through the PDCCH.
  • DCI downlink control information
  • the DCI includes control information such as resource allocation information for the terminal, and different formats may be applied according to the purpose of use.
  • Control information that the terminal transmits to the base station through the uplink, or that the terminal receives from the base station, includes a downlink/uplink ACK/NACK signal, a channel quality indicator (CQI), a precoding matrix index (PMI), a rank indicator (RI), and the like.
  • the UE may transmit the above-described control information such as CQI/PMI/RI through PUSCH and/or PUCCH.
  • the base station transmits a related signal to the terminal through a downlink channel to be described later, and the terminal receives the related signal from the base station through a downlink channel to be described later.
  • PDSCH Physical Downlink Shared Channel
  • PDSCH carries downlink data (e.g., DL-shared channel transport block, DL-SCH TB), and modulation schemes such as Quadrature Phase Shift Keying (QPSK), 16 Quadrature Amplitude Modulation (QAM), 64 QAM, and 256 QAM are applied.
  • QPSK Quadrature Phase Shift Keying
  • QAM Quadrature Amplitude Modulation
  • a codeword is generated by encoding the TB.
  • a PDSCH can carry multiple codewords. Scrambling and modulation mapping are performed for each codeword, and modulation symbols generated from each codeword are mapped to one or more layers (Layer mapping). Each layer is mapped to a resource together with a demodulation reference signal (DMRS), is generated as an OFDM symbol signal, and is transmitted through a corresponding antenna port.
  • DMRS demodulation reference signal
  • the PDCCH carries downlink control information (DCI) and a QPSK modulation method is applied.
  • DCI downlink control information
  • One PDCCH is composed of 1, 2, 4, 8, 16 CCEs (Control Channel Elements) according to an Aggregation Level (AL).
  • One CCE consists of six REGs (Resource Element Groups).
  • One REG is defined as one OFDM symbol and one (P)RB.
  • the UE obtains DCI transmitted through the PDCCH by performing decoding (aka, blind decoding) on the set of PDCCH candidates.
  • a set of PDCCH candidates decoded by the UE is defined as a PDCCH search space set.
  • the search space set may be a common search space or a UE-specific search space.
  • the UE may acquire DCI by monitoring PDCCH candidates in one or more search space sets configured by MIB or higher layer signaling.
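  • As a rough illustration of the blind decoding described above, the following Python sketch tries to decode every PDCCH candidate of a search space set at each aggregation level and keeps those that pass a (placeholder) CRC check. The candidate representation and the try_decode helper are hypothetical and are not the 3GPP procedure itself.

```python
# Aggregation levels in CCEs per PDCCH candidate, as listed above.
AGGREGATION_LEVELS = (1, 2, 4, 8, 16)

def blind_decode(search_space, try_decode):
    """search_space: dict mapping aggregation level -> list of candidates (placeholders).
    try_decode: callable(candidate) -> DCI payload or None (stands in for decoding
    plus the CRC check against the UE's RNTI)."""
    found = []
    for al in AGGREGATION_LEVELS:
        for candidate in search_space.get(al, []):
            dci = try_decode(candidate)
            if dci is not None:
                found.append((al, dci))
    return found

# toy usage with a dummy decoder that only "succeeds" on one marked candidate
dummy_space = {4: ["cand0", "DCI:ul_grant"], 8: ["cand1"]}
print(blind_decode(dummy_space, lambda c: c if c.startswith("DCI") else None))
```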
  • the terminal transmits a related signal to the base station through an uplink channel to be described later, and the base station receives the related signal from the terminal through an uplink channel to be described later.
  • PUSCH Physical Uplink Shared Channel
  • PUSCH carries uplink data (e.g., UL-shared channel transport block, UL-SCH TB) and/or uplink control information (UCI), and is transmitted based on a CP-OFDM (Cyclic Prefix - Orthogonal Frequency Division Multiplexing) waveform or a DFT-s-OFDM (Discrete Fourier Transform - spread - Orthogonal Frequency Division Multiplexing) waveform.
  • When the PUSCH is transmitted based on the DFT-s-OFDM waveform, the UE transmits the PUSCH by applying transform precoding.
  • For example, when transform precoding is not possible (e.g., transform precoding is disabled), the UE transmits the PUSCH based on the CP-OFDM waveform, and when transform precoding is possible (e.g., transform precoding is enabled), the UE may transmit the PUSCH based on either the CP-OFDM waveform or the DFT-s-OFDM waveform.
  • PUSCH transmission is dynamically scheduled by a UL grant in DCI, or may be semi-statically scheduled based on higher layer (e.g., RRC) signaling (and/or Layer 1 (L1) signaling, e.g., PDCCH) (configured grant).
  • PUSCH transmission may be performed on a codebook-based or non-codebook-based basis.
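  • The waveform choice described above can be summarized with a small sketch. This is only an illustration of the rule stated in the preceding items, not the exact specification logic; the function name and flags are assumptions.

```python
def pusch_waveform(transform_precoding_enabled: bool, use_dft_s_ofdm: bool = True) -> str:
    """With transform precoding disabled the UE transmits PUSCH with CP-OFDM;
    with transform precoding enabled the UE may use either DFT-s-OFDM or CP-OFDM,
    here selected by the illustrative use_dft_s_ofdm flag."""
    if not transform_precoding_enabled:
        return "CP-OFDM"
    return "DFT-s-OFDM" if use_dft_s_ofdm else "CP-OFDM"

print(pusch_waveform(transform_precoding_enabled=True))   # DFT-s-OFDM
print(pusch_waveform(transform_precoding_enabled=False))  # CP-OFDM
```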
  • the PUCCH carries uplink control information, HARQ-ACK and/or a scheduling request (SR), and may be divided into a plurality of PUCCHs according to the PUCCH transmission length.
  • SR scheduling request
  • 6G (wireless) systems aim at (i) very high data rates per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) very low latency, (v) reduced energy consumption of battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capabilities.
  • The vision of the 6G system can be summarized in four aspects: intelligent connectivity, deep connectivity, holographic connectivity, and ubiquitous connectivity, and the 6G system can satisfy the requirements shown in Table 1 below. That is, Table 1 shows an example of the requirements of the 6G system.
  • FIG. 2 is a diagram showing an example of a communication structure that can be provided in a 6G system.
  • eMBB Enhanced mobile broadband
  • URLLC Ultra-reliable low latency communications
  • mMTC massive machine-type communication
  • 6G may have key factors such as AI-integrated communication, tactile internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
  • 6G systems are expected to have 50 times higher simultaneous wireless connectivity than 5G wireless communication systems.
  • URLLC, a key feature of 5G, will become an even more important technology in 6G communication by providing an end-to-end delay of less than 1 ms.
  • 6G systems will have much better volumetric spectral efficiencies as opposed to frequently used areal spectral efficiencies.
  • the 6G system can provide very long battery life and advanced battery technology for energy harvesting, so mobile devices will not need to be charged separately in the 6G system.
  • New network characteristics in 6G may be as follows.
  • 6G is expected to be integrated with satellites to provide a global mobile population.
  • the integration of terrestrial, satellite and public networks into one wireless communication system is very important for 6G.
  • AI may be applied in each step of a communication procedure (or each procedure of signal processing to be described later).
  • the 6G wireless network will deliver power to charge the batteries of devices such as smartphones and sensors. Therefore, wireless information and energy transfer (WIET) will be integrated.
  • WIET wireless information and energy transfer
  • The idea of small cell networks was introduced to improve the received signal quality and, as a result, the throughput, energy efficiency, and spectral efficiency of cellular systems. Small cell networks are therefore an essential characteristic of 5G and Beyond 5G (5GB) communication systems, and the 6G communication system also adopts the characteristics of the small cell network.
  • Ultra-dense heterogeneous networks will be another important characteristic of 6G communication systems.
  • a multi-tier network composed of heterogeneous networks improves overall QoS and reduces costs.
  • a backhaul connection is characterized as a high-capacity backhaul network to support high-capacity traffic.
  • High-speed fiber optics and free-space optics (FSO) systems may be possible solutions to this problem.
  • High-precision localization (or location-based service) through communication is one of the functions of the 6G wireless communication system. Therefore, the radar system will be integrated with the 6G network.
  • Softwarization and virtualization are two important features that underlie the design process of 5GB networks to ensure flexibility, reconfigurability, and programmability. In addition, billions of devices can share a common physical infrastructure.
  • The most important and newly introduced technology for 6G systems is AI.
  • AI was not involved in the 4G system.
  • 5G systems will support partial or very limited AI.
  • the 6G system will be AI-enabled for full automation.
  • Advances in machine learning will create more intelligent networks for real-time communication in 6G.
  • Incorporating AI into communications can simplify and enhance real-time data transmission.
  • AI can use numerous analytics to determine how complex target tasks are performed. In other words, AI can increase efficiency and reduce processing delays.
  • AI can also play an important role in M2M, machine-to-human and human-to-machine communication.
  • AI can also support rapid communication in BCI (Brain Computer Interface).
  • BCI Brain Computer Interface
  • AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
  • AI-based physical layer transmission means applying a signal processing and communication mechanism based on an AI driver, rather than a traditional communication framework, to fundamental signal processing and communication.
  • For example, it may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, a deep learning-based MIMO mechanism, AI-based resource scheduling and allocation, and the like.
  • Machine learning may be used for channel estimation and channel tracking, and may be used for power allocation, interference cancellation, and the like in a physical layer of a downlink (DL). In addition, machine learning may be used for antenna selection, power control, symbol detection, and the like in a MIMO system.
  • DL downlink
  • Deep learning-based AI algorithms require large amounts of training data to optimize training parameters.
  • However, a lot of training data is used offline. Static training on training data for a specific channel environment may cause a contradiction with the dynamic characteristics and diversity of the wireless channel.
  • signals of the physical layer of wireless communication are complex signals.
  • further research on a neural network for detecting a complex domain signal is needed.
  • Machine learning refers to a set of actions that trains a machine to create a machine that can perform tasks that humans can or cannot do.
  • Machine learning requires data and a learning model.
  • data learning methods can be roughly divided into three types: supervised learning, unsupervised learning, and reinforcement learning.
  • The purpose of neural network learning is to minimize the output error. Neural network learning repeatedly inputs training data into the neural network, calculates the error between the output of the neural network and the target for the training data, and backpropagates the error from the output layer of the neural network toward the input layer in the direction that reduces the error, thereby updating the weight of each node in the neural network.
  • Supervised learning uses training data in which the correct answer is labeled, whereas in unsupervised learning the correct answer may not be labeled in the training data. For example, in the case of supervised learning for data classification, the training data may be data in which a category is labeled for each training example.
  • the labeled training data is input to the neural network, and an error can be calculated by comparing the output (category) of the neural network with the label of the training data.
  • the calculated error is back propagated in the reverse direction (ie, from the output layer to the input layer) in the neural network, and the connection weight of each node of each layer of the neural network may be updated according to the back propagation.
  • a change amount of the connection weight of each node to be updated may be determined according to a learning rate.
  • the computation of the neural network on the input data and the backpropagation of errors can constitute a learning cycle (epoch).
  • the learning rate may be applied differently depending on the number of repetitions of the learning cycle of the neural network. For example, in the early stage of learning a neural network, a high learning rate can be used to increase the efficiency by allowing the neural network to quickly obtain a certain level of performance, and in the late learning period, a low learning rate can be used to increase the accuracy.
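  • The training loop described above (repeated forward passes, error backpropagation from the output layer toward the input layer, and a learning rate that starts high and is later lowered) can be sketched as follows. The two-layer network, the squared-error-style update, and the simple learning-rate schedule are illustrative assumptions, not the model used in the specification.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                   # training inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # labeled targets (supervised learning)

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

for epoch in range(200):                       # learning cycles (epochs)
    lr = 0.5 if epoch < 100 else 0.05          # high rate early, low rate late
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y                              # output error vs. the labels
    # backpropagation: from the output layer toward the input layer
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # connection weights updated by an amount scaled by the learning rate
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0)
```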
  • The learning method may vary depending on the characteristics of the data. For example, when the purpose is to accurately predict, at a receiver, the data transmitted from a transmitter in a communication system, it is preferable to perform learning using supervised learning rather than unsupervised learning or reinforcement learning.
  • The learning model corresponds to the human brain, and the most basic linear model can be considered; a machine learning paradigm that uses a neural network structure of high complexity, such as an artificial neural network, as a learning model is called deep learning.
  • The neural network core used as a learning method is largely divided into deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN) methods.
  • DNN deep neural networks
  • CNN convolutional neural networks
  • RNN recurrent neural networks
  • An artificial neural network is an example of connecting several perceptrons.
  • the huge artificial neural network structure may extend the simplified perceptron structure shown in FIG. 3 to apply input vectors to different multidimensional perceptrons.
  • an input value or an output value is referred to as a node.
  • the perceptron structure shown in FIG. 3 can be described as being composed of a total of three layers based on an input value and an output value.
  • An artificial neural network in which H (d+1)-dimensional perceptrons exist between the 1st layer and the 2nd layer and K (H+1)-dimensional perceptrons exist between the 2nd layer and the 3rd layer can be expressed as shown in FIG. 4 .
  • the layer where the input vector is located is called the input layer
  • the layer where the final output value is located is called the output layer
  • all the layers located between the input layer and the output layer are called hidden layers.
  • Three layers are shown, but when counting the actual number of layers of an artificial neural network, the input layer is excluded from the count, so the network can be viewed as having a total of two layers.
  • the artificial neural network is constructed by connecting the perceptrons of the basic blocks in two dimensions.
  • the aforementioned input layer, hidden layer, and output layer can be jointly applied in various artificial neural network structures such as CNN and RNN to be described later as well as multi-layer perceptron.
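  • As a concrete illustration of stacking (d+1)-dimensional perceptrons into layers as described above, the following sketch builds an input layer, one hidden layer of H perceptrons, and an output layer of K perceptrons; the dimensions, the tanh activation, and the random weights are assumptions made for the example.

```python
import numpy as np

def perceptron_layer(x, W, activation=np.tanh):
    """One layer of (len(x)+1)-dimensional perceptrons: each row of W holds the
    weights of one perceptron, and the column multiplying the appended 1 acts
    as the bias term."""
    x_aug = np.append(x, 1.0)              # d-dimensional input -> (d+1)-dimensional
    return activation(W @ x_aug)

d, H, K = 4, 8, 3                          # input dim, hidden perceptrons, output perceptrons
rng = np.random.default_rng(1)
W1 = rng.normal(size=(H, d + 1))           # H perceptrons between layer 1 and layer 2
W2 = rng.normal(size=(K, H + 1))           # K perceptrons between layer 2 and layer 3

x = rng.normal(size=d)                     # input layer (input vector)
hidden = perceptron_layer(x, W1)           # hidden layer
output = perceptron_layer(hidden, W2)      # output layer
```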
  • the artificial neural network becomes deeper, and a machine learning paradigm that uses a sufficiently deep artificial neural network as a learning model is called deep learning.
  • an artificial neural network used for deep learning is called a deep neural network (DNN).
  • DNN deep neural network
  • the deep neural network shown in FIG. 5 is a multilayer perceptron composed of eight hidden layers + output layers.
  • the multi-layered perceptron structure is referred to as a fully-connected neural network.
  • a connection relationship does not exist between nodes located in the same layer, and a connection relationship exists only between nodes located in adjacent layers.
  • DNN has a fully connected neural network structure and is composed of a combination of a number of hidden layers and activation functions, so it can be usefully applied to figure out the correlation between input and output.
  • the correlation characteristic may mean a joint probability of input/output.
  • nodes located inside one layer are arranged in a one-dimensional vertical direction.
  • In the convolutional neural network structure of FIG. 6, the nodes are arranged two-dimensionally, with w nodes horizontally and h nodes vertically.
  • Since a weight is attached to each connection from one input node to the hidden layer, a total of h × w weights must be considered per input node. Since there are h × w nodes in the input layer, a total of h²w² weights are needed between two adjacent layers.
  • The structure of FIG. 6 therefore has the problem that the number of weights increases exponentially with the number of connections, so instead of considering connections between all nodes of adjacent layers, it is assumed that a filter of small size exists and, as in FIG. 7, weighted-sum and activation-function calculations are performed on the region where the filter overlaps the input.
  • One filter has as many weights as its size, and weight learning can be performed so that a specific feature on an image can be extracted and output as a factor.
  • a filter with a size of 3 ⁇ 3 is applied to the upper left 3 ⁇ 3 region of the input layer, and an output value obtained by performing weighted sum and activation function operations on the corresponding node is stored in z22.
  • the filter performs weight sum and activation function calculations while moving horizontally and vertically at regular intervals while scanning the input layer, and places the output value at the current filter position.
  • This calculation method is similar to a convolution operation on an image in the field of computer vision, so a deep neural network with such a structure is called a convolutional neural network (CNN), and a hidden layer generated as a result of the convolution operation is called a convolutional layer.
  • a neural network having a plurality of convolutional layers is called a deep convolutional neural network (DCNN).
  • the number of weights can be reduced by calculating the weighted sum by including only nodes located in the region covered by the filter in the node where the filter is currently located. Due to this, one filter can be used to focus on features for a local area. Accordingly, CNN can be effectively applied to image data processing in which physical distance in a two-dimensional domain is an important criterion. Meanwhile, in CNN, a plurality of filters may be applied immediately before the convolution layer, and a plurality of output results may be generated through the convolution operation of each filter.
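  • The filter scan described above can be written compactly as follows; the 3x3 filter, stride 1, absence of padding, and ReLU activation are assumptions made for the example rather than details taken from the figures.

```python
import numpy as np

def conv2d_valid(image, kernel, activation=lambda v: np.maximum(v, 0.0)):
    """Slide the filter over the input, compute the weighted sum of the region it
    overlaps at each position, apply the activation function, and place the
    result at the current filter position (no padding, stride 1)."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            region = image[i:i + kh, j:j + kw]
            out[i, j] = activation(np.sum(region * kernel))
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 input layer
kernel = np.array([[1.0, 0.0, -1.0]] * 3)          # 3x3 filter (9 weights)
feature_map = conv2d_valid(image, kernel)          # convolutional layer output (3x3)
```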
  • a structure in which this method is applied to an artificial neural network is called a recurrent neural network structure.
  • A recurrent neural network has a structure in which the elements (x1(t), x2(t), ..., xd(t)) of a certain time point t in a data sequence are input to a fully connected neural network together with the hidden vector (z1(t-1), z2(t-1), ..., zH(t-1)) of the previous time point t-1, and a weighted sum and an activation function are applied.
  • the reason why the hidden vector is transferred to the next time point in this way is that information in the input vector at previous time points is considered to be accumulated in the hidden vector of the current time point.
  • the recurrent neural network operates in a predetermined time sequence with respect to an input data sequence.
  • The hidden vector (z1(1), z2(1), ..., zH(1)) determined at time point 1 is input together with the input vector (x1(2), x2(2), ..., xd(2)) of time point 2, and the hidden-layer vector (z1(2), z2(2), ..., zH(2)) is determined through the weighted sum and the activation function. This process is repeated for time point 2, time point 3, ..., up to time point T.
  • When a plurality of hidden layers are arranged in a recurrent neural network, this is called a deep recurrent neural network (DRNN).
  • the recurrent neural network is designed to be usefully applied to sequence data (eg, natural language processing).
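  • A minimal sketch of the recurrence described above: at each time point the previous hidden vector is fed back together with the current input into a weighted sum and an activation function. The dimensions, the tanh activation, and the random weights are assumptions for the example.

```python
import numpy as np

def rnn_forward(x_seq, Wx, Wz, b, activation=np.tanh):
    """Process an input data sequence x(1)..x(T); the hidden vector of time
    point t-1 is input again at time point t together with x(t)."""
    z = np.zeros(Wz.shape[0])                 # hidden vector before time point 1
    hidden_states = []
    for x_t in x_seq:                         # time point 1, 2, ..., T
        z = activation(Wx @ x_t + Wz @ z + b)
        hidden_states.append(z)
    return np.stack(hidden_states)

d, H, T = 3, 5, 7
rng = np.random.default_rng(2)
x_seq = rng.normal(size=(T, d))               # input data sequence
z_seq = rnn_forward(x_seq, rng.normal(size=(H, d)), rng.normal(size=(H, H)), np.zeros(H))
```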
  • The neural network core used as a learning method includes, in addition to DNN, CNN, and RNN, various deep learning techniques such as Restricted Boltzmann Machine (RBM), deep belief networks (DBN), and Deep Q-Network, and can be applied to fields such as computer vision, speech recognition, natural language processing, and speech/signal processing.
  • RBM Restricted Boltzmann Machine
  • DBN deep belief networks
  • the data rate can be increased by increasing the bandwidth. This can be accomplished by using sub-THz communication with a wide bandwidth and applying advanced large-scale MIMO technology.
  • THz waves also known as sub-millimeter radiation, typically exhibit a frequency band between 0.1 THz and 10 THz with corresponding wavelengths in the range of 0.03 mm-3 mm.
  • the 100GHz-300GHz band range (Sub THz band) is considered a major part of the THz band for cellular communication.
  • Sub-THz band Addition to mmWave band increases 6G cellular communication capacity.
  • 300GHz-3THz is in the far-infrared (IR) frequency band.
  • The 300 GHz-3 THz band is part of the optical band, but it lies at the border of the optical band, just behind the RF band. Thus, this 300 GHz-3 THz band shows similarities to RF. FIG. 10 shows an example of the electromagnetic spectrum.
  • The THz wave is located between the RF (Radio Frequency)/millimeter (mm) band and the infrared band; (i) it penetrates non-metallic/non-polar materials better than visible/infrared light, and (ii) it has a shorter wavelength than the RF/millimeter wave and therefore high straightness, so beam focusing may be possible. In addition, since the photon energy of the THz wave is only a few meV, it is harmless to the human body.
  • the frequency band expected to be used for THz wireless communication may be a D-band (110 GHz to 170 GHz) or H-band (220 GHz to 325 GHz) band with low propagation loss due to absorption of molecules in the air.
  • THz wireless communication may be applied to wireless recognition, sensing, imaging, wireless communication, THz navigation, and the like.
  • the main characteristics of THz communication include (i) widely available bandwidth to support very high data rates, and (ii) high path loss occurring at high frequencies (high directional antennas are indispensable).
  • the narrow beamwidth produced by the highly directional antenna reduces interference.
  • The small wavelength of the THz signal allows a much larger number of antenna elements to be integrated into devices and BSs operating in this band. This allows the use of advanced adaptive array techniques that can overcome range limitations.
  • FIG. 11 shows an example of a THz communication application.
  • a THz wireless communication scenario may be classified into a macro network, a micro network, and a nanoscale network.
  • THz wireless communication can be applied to vehicle-to-vehicle connection and backhaul/fronthaul connection.
  • THz wireless communication in micro networks is applied to indoor small cells, fixed point-to-point or multi-point connections such as wireless connections in data centers, and near-field communication such as kiosk downloading.
  • Table 2 below is a table showing an example of a technique that can be used in the THz wave.
  • OWC (optical wireless communication) technology is envisioned for 6G communication, in addition to RF-based communication, for all possible device-to-access networks. These networks also connect to network-to-backhaul/fronthaul network connectivity.
  • OWC technology has already been used since the 4G communication system, but will be used more widely to meet the needs of the 6G communication system.
  • OWC technologies such as light fidelity, visible light communication, optical camera communication, and FSO communication based on a light band are well known technologies.
  • Communication based on optical radio technology can provide very high data rates, low latency and secure communication.
  • LiDAR can also be used for ultra-high-resolution 3D mapping in 6G communication based on wide bands.
  • The transmitter and receiver characteristics of an FSO system are similar to those of a fiber optic network.
  • data transmission in an FSO system is similar to that of a fiber optic system. Therefore, FSO can be a good technology to provide backhaul connectivity in 6G systems along with fiber optic networks.
  • FSO supports high-capacity backhaul connections for remote and non-remote areas such as sea, space, underwater, and isolated islands.
  • FSO also supports cellular BS connectivity.
  • As MIMO technology improves, so does the spectral efficiency. Therefore, large-scale MIMO technology will be important in 6G systems. Since MIMO technology uses multiple paths, multiplexing techniques and beam generation and operation techniques suitable for the THz band should also be considered important so that a data signal can be transmitted through one or more paths.
  • Blockchain will become an important technology for managing large amounts of data in future communication systems.
  • Blockchain is a form of distributed ledger technology, which is a database distributed across numerous nodes or computing devices. Each node replicates and stores an identical copy of the ledger.
  • the blockchain is managed as a peer-to-peer network. It can exist without being managed by a centralized authority or server. Data on the blockchain is collected together and organized into blocks. Blocks are linked together and protected using encryption.
  • Blockchain in nature perfectly complements IoT at scale with improved interoperability, security, privacy, reliability and scalability. Therefore, blockchain technology provides several features such as interoperability between devices, traceability of large amounts of data, autonomous interaction of different IoT systems, and large-scale connection stability of 6G communication systems.
  • the 6G system integrates terrestrial and public networks to support vertical expansion of user communications.
  • 3D BS will be provided via low orbit satellites and UAVs. Adding a new dimension in terms of elevation and associated degrees of freedom makes 3D connections significantly different from traditional 2D networks.
  • UAVs Unmanned Aerial Vehicles
  • a BS entity is installed in the UAV to provide cellular connectivity.
  • UAVs have certain features not found in fixed BS infrastructure, such as easy deployment, strong line-of-sight links, and degrees of freedom with controlled mobility.
  • eMBB enhanced Mobile Broadband
  • URLLC Ultra-Reliable Low Latency Communication
  • mMTC massive Machine Type Communication
  • Tight integration of multiple frequencies and heterogeneous communication technologies is very important in 6G systems. As a result, users can seamlessly move from one network to another without having to make any manual configuration on the device. The best network is automatically selected from the available communication technologies. This will break the limitations of the cell concept in wireless communication. Currently, user movement from one cell to another causes too many handovers in high-density networks, causing handover failures, handover delays, data loss and ping-pong effects. 6G cell-free communication will overcome all of this and provide better QoS. Cell-free communication will be achieved through multi-connectivity and multi-tier hybrid technologies and different heterogeneous radios of devices.
  • WIET uses the same fields and waves as wireless communication systems.
  • the sensor and smartphone will be charged using wireless power transfer during communication.
  • WIET is a promising technology for extending the life of battery-charging wireless systems. Therefore, devices without batteries will be supported in 6G communication.
  • An autonomous wireless network is a function that can continuously detect dynamically changing environmental conditions and exchange information between different nodes.
  • sensing will be tightly integrated with communications to support autonomous systems.
  • the density of access networks in 6G will be enormous.
  • Each access network is connected by backhaul connections such as fiber optic and FSO networks.
  • Beamforming is a signal processing procedure that adjusts an antenna array to transmit a radio signal in a specific direction.
  • Beamforming technology has several advantages, such as a high signal-to-noise ratio, interference prevention and rejection, and high network efficiency.
  • Hologram beamforming (HBF) is a new beamforming method that is significantly different from MIMO systems because it uses a software-defined antenna. HBF will be a very effective approach for efficient and flexible transmission and reception of signals in multi-antenna communication devices in 6G.
  • Big data analytics is a complex process for analyzing various large data sets or big data. This process ensures complete data management by finding information such as hidden data, unknown correlations and customer propensity. Big data is gathered from a variety of sources such as videos, social networks, images and sensors. This technology is widely used to process massive amounts of data in 6G systems.
  • Since the straightness of the signal is strong, there may be many shaded areas due to obstructions, and in such shaded areas the LIS (large intelligent surface) technology, which expands the communication area, strengthens communication stability, and enables additional services, becomes important.
  • the LIS is an artificial surface made of electromagnetic materials, and can change the propagation of incoming and outgoing radio waves.
  • LIS can be seen as an extension of massive MIMO, but the array structure and operation mechanism are different from those of massive MIMO.
  • LIS has low power consumption in that it operates as a reconfigurable reflector with passive elements, that is, only passively reflects the signal without using an active RF chain.
  • Since each of the passive reflectors of the LIS must independently adjust the phase shift of the incoming signal, it can be advantageous for the wireless communication channel.
  • the reflected signal can be gathered at the target receiver to boost the received signal power.
  • The 6G communication technologies examined above may be applied in combination with the methods proposed in the present specification, which are described later, or may be supplemented to specify or clarify the technical characteristics of the methods proposed in the present specification.
  • the communication service proposed in the present specification may be applied in combination with a communication service by 3G, 4G and/or 5G communication technology as well as the 6G communication technology described above.
  • Such overhead can be reduced if low-band and high-band communication systems coexist. If channels in the low-band and high-band are configured with similar paths, spatial information of the high-band channel can be estimated from the spatial information of the low-band channel.
  • a user equipment may reduce overhead by performing channel estimation or beam search on a high-band signal using a low-band signal having a relatively small number of antennas.
  • a base station receives information about the antenna arrangement of the terminal, and periodically transmits the similarity of channel spatial information between the low and high bands to the terminal through a control signal. Upon receiving such information, the terminal selectively performs an operation such as channel estimation or beam search based on the similarity.
  • the terminal and the base station must be able to support low-band and high-band communication systems.
  • the communication method may be performed by a base station or a UE applied to the communication systems exemplified herein.
  • the communication method may be used based on TDD, but is not limited thereto.
  • the base station transmits base station information to the UE (S1210).
  • the base station information includes information related to an antenna arrangement of the base station.
  • the information related to the antenna arrangement of the base station includes one or more of the following information.
  • the information on the computational capability of the spatial information similarity refers to the capability to perform an operation to be performed later in S1250.
  • Corresponding information may be represented as an example in Table 3.
  • the values in Table 3 are examples, and the technical ideas according to various embodiments of the present specification are not limited thereto.
  • The information on whether the antenna arrays are alignable includes information on the alignment state of the antenna arrays. The alignment state includes an aligned state and a non-aligned state. In the aligned state, whether alignment is possible does not matter. Even in a non-aligned state, if a predetermined condition is satisfied, the antenna arrays may be aligned. A detailed description of this information is given with reference to FIG. 13.
  • The information on RF calibration is information on the RF calibration that corrects differences in the RF characteristics of the RF elements, and indicates that, as a result of the calibration, the signal phase and the signal magnitude at the end of each antenna constituting the antenna arrays of the low frequency band and the high frequency band are within a certain range.
  • the reference value for determining the similarity refers to a reference value for determining the similarity of the channel spatial information in S1250.
  • the UE transmits terminal information to the base station (S1220).
  • the terminal information includes information related to an antenna arrangement of the UE.
  • the information related to the antenna arrangement of the UE includes one or more of the following information.
  • the information related to the antenna arrangement of the UE does not include information on the computational capability of the spatial information similarity, unlike the information related to the antenna arrangement of the base station. This is because, since the corresponding operation is performed in the base station, there is no need to receive separate information from the UE.
  • the UE checks the calculation capability of the base station related to the similarity of the channel space information based on the received base station information and the preset terminal information (S1230).
  • the UE confirms the arithmetic capability of the base station related to the similarity of the channel space information based on the base station information received from the base station.
  • the information on which the confirmation is based includes (i) information on the computational capability of the spatial information similarity, (ii) information on whether the antenna arrays are alignable, and/or (iii) information on the RF calibration.
  • when all predetermined conditions based on (i) the information on the computational capability of the spatial information similarity, (ii) the information on whether the antenna arrays are alignable, and/or (iii) the information on the RF calibration are satisfied, the UE determines that the base station has the computational capability.
  • the UE may check whether the operation supported by the base station matches the operation expected by the UE based on the information on the computational capability of the spatial information similarity. As another example, the UE may confirm that the base station can align the antenna arrays based on the information on whether the antenna arrays are alignable. As another example, the UE may confirm that the base station is RF-calibrated based on the information on the RF calibration.
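  • As a rough illustration of this capability check, the following Python sketch combines the three pieces of base station information into a single decision; the field names (supported_operations, arrays_alignable, rf_calibrated) are hypothetical placeholders, since the specification does not fix a concrete encoding of the base station information.

```python
from dataclasses import dataclass

@dataclass
class BsAntennaInfo:
    # Hypothetical encoding of the base station information of S1210.
    supported_operations: set   # e.g. {"CMD", "AoA"}
    arrays_alignable: bool      # antenna arrays aligned or alignable
    rf_calibrated: bool         # phase/magnitude at each antenna end within range

def bs_has_similarity_capability(info: BsAntennaInfo, expected_op: str) -> bool:
    """UE-side check of S1230: all conditions must be satisfied."""
    return (expected_op in info.supported_operations
            and info.arrays_alignable
            and info.rf_calibrated)

# Example: the UE expects a covariance-matrix-based CMD operation.
bs_info = BsAntennaInfo({"CMD"}, arrays_alignable=True, rf_calibrated=True)
assert bs_has_similarity_capability(bs_info, "CMD")
```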
  • When it is confirmed that the base station has the computational capability, the UE requests the base station to provide the similarity of the channel spatial information (S1240).
  • the UE requests the base station for which the computational capability is confirmed to provide the similarity of the channel spatial information.
  • the request includes a request for calculating the degree of similarity with respect to the channel spatial information and a request for transmitting the calculated degree of similarity.
  • the base station may transmit the calculated similarity to the UE dynamically or semi-persistently according to the UE's request.
  • the similarity calculation request may include information on an operation expected by the UE.
  • the operation expected by the UE includes a CMD operation based on a covariance matrix or an estimation operation based on AoA.
  • the base station determines the similarity of the channel space information in response to receiving the request from the UE (S1250).
  • the base station may perform a covariance matrix-based CMD operation or an AoA-based estimation operation to determine the similarity.
  • the base station uses the correlation matrix distance (CMD) between spatial correlation matrices as the reference.
  • the base station measures the low-band correlation matrix received in the low-frequency band.
  • the base station estimates a first high-band correlation matrix based on the low-band correlation matrix.
  • the first high-band correlation matrix is estimated based on the low-band correlation matrix, and is distinguished from the actually measured correlation matrix.
  • a method of estimating the first high-band correlation matrix will be described with reference to FIG. 14 .
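  • FIG. 14 itself is not reproduced in this text. Purely as an assumption-laden sketch (not the method of FIG. 14), one common way to extrapolate a low-band spatial covariance to a high-band array is to estimate an angular power spectrum from the low-band covariance and re-synthesize a covariance with the high-band array's steering vectors; the uniform-linear-array model and half-wavelength spacing below are illustrative assumptions.

```python
import numpy as np

def steering_vector(theta: float, n_ant: int, spacing_wl: float) -> np.ndarray:
    """Uniform-linear-array steering vector for angle theta (radians)."""
    n = np.arange(n_ant)
    return np.exp(1j * 2 * np.pi * spacing_wl * n * np.sin(theta))

def estimate_highband_covariance(r_low: np.ndarray, n_high: int,
                                 d_low: float = 0.5, d_high: float = 0.5,
                                 grid: int = 181) -> np.ndarray:
    """Estimate the first high-band correlation matrix from the measured
    low-band correlation matrix via a Bartlett angular spectrum."""
    n_low = r_low.shape[0]
    angles = np.linspace(-np.pi / 2, np.pi / 2, grid)
    r_high = np.zeros((n_high, n_high), dtype=complex)
    for theta in angles:
        a_low = steering_vector(theta, n_low, d_low)
        power = np.real(a_low.conj() @ r_low @ a_low) / n_low**2  # Bartlett estimate
        a_high = steering_vector(theta, n_high, d_high)
        r_high += max(power, 0.0) * np.outer(a_high, a_high.conj())
    return r_high / np.trace(r_high).real * n_high  # normalize trace to n_high
```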
  • the base station measures the second high-band correlation matrix based on the signal received in the high-frequency band. Unlike the first high-band correlation matrix, the second high-band correlation matrix is not estimated, but is based on an actual received signal.
  • the transmission beam used for measuring the second high-band correlation matrix may be the same as the transmission beam used for measuring the low-band correlation matrix.
  • the base station may determine the similarity between the two using the first high-band correlation matrix and the second high-band correlation matrix. In this case, the degree of similarity can be confirmed by CMD.
  • CMD may be calculated by Equation 1 below.
  • [Equation 1] CMD = 1 − tr(R_H,1 · R_H,2) / (‖R_H,1‖_F · ‖R_H,2‖_F)
  • The symbols of Equation 1 above are defined as follows: R_H,1 is the first high-band correlation matrix (estimated from the low-band correlation matrix), R_H,2 is the second high-band correlation matrix (measured from the received high-band signal), tr(·) is the matrix trace, and ‖·‖_F is the Frobenius norm. The CMD is 0 when the two matrices are identical up to a scaling factor and approaches 1 as they become maximally different.
  • the base station may determine similarity or dissimilarity by comparing the calculated CMD value with a threshold value. For example, when the threshold value is 0.1, if the calculated CMD is less than 0.1, the base station determines that it is similar. In contrast, if the calculated CMD is 0.1 or more, the base station determines that it is dissimilar.
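  • A minimal sketch of the CMD comparison of S1250, assuming the correlation-matrix-distance form of Equation 1 above and the example threshold of 0.1:

```python
import numpy as np

def cmd(r1: np.ndarray, r2: np.ndarray) -> float:
    """Correlation matrix distance between two spatial correlation matrices:
    0 when the matrices are identical up to scaling, approaching 1 when maximally different."""
    num = np.abs(np.trace(r1 @ r2))
    den = np.linalg.norm(r1, "fro") * np.linalg.norm(r2, "fro")
    return 1.0 - num / den

def similarity_bit_cmd(r_estimated: np.ndarray, r_measured: np.ndarray,
                       threshold: float = 0.1) -> int:
    """1-bit similarity parameter of S1250/S1260: 1 = similar, 0 = dissimilar."""
    return 1 if cmd(r_estimated, r_measured) < threshold else 0
```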
  • the technical ideas according to various embodiments of the present specification are not limited to these examples.
  • the threshold may be determined by the base station and transmitted to the UE in S1210 or may be determined by the UE and transmitted to the base station in S1220. Also, the threshold value may be a preset value.
  • the base station uses the AoA of the dominant path as a reference.
  • the dominant path refers to the main propagation path as characterized by its AoA.
  • the determination by the base station is performed based on AoA values measured for signals transmitted from the antenna arrays of the UE.
  • the antenna array of the UE includes a first antenna array for a low-band frequency and a second antenna array for a high-band frequency.
  • the base station measures a first angle of arrival that is AoA associated with the first antenna array, and measures a second angle of arrival that is AoA associated with the second antenna array.
  • the transmission beam of the terminal used for measuring the first angle of arrival may be the same as the transmission beam used for measuring the second angle of arrival.
  • the base station may determine the degree of similarity between the two using the first angle of arrival and the second angle of arrival. In this case, the similarity can be confirmed by the magnitude of the difference between the two angles.
  • the base station may determine similarity or dissimilarity by comparing the calculated magnitude of the difference with a threshold value.
  • the threshold value corresponds to a reference value included in the base station information or the terminal information. For example, when the threshold value is 10 degrees, if the magnitude of the difference between the angles is 0 degrees or more and less than 10 degrees, the base station determines that they are similar. In contrast, if the magnitude of the difference is 10 degrees or more and 360 degrees or less, the base station determines that they are dissimilar.
  • the technical ideas according to various embodiments of the present specification are not limited to these examples.
  • the threshold may be determined by the base station and transmitted to the UE in S1210 or may be determined by the UE and transmitted to the base station in S1220. Also, the threshold value may be a preset value.
  • the base station sets a parameter related to similarity. For example, when it is determined that the channel spatial information is similar, the parameter is set to 1, and when it is determined that the channel spatial information is dissimilar, the parameter is set to 0.
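  • A minimal sketch of the AoA-based determination and the resulting 1-bit parameter, following the 10-degree threshold example in the text (the handling of the angle difference is an illustrative choice):

```python
def similarity_bit_aoa(aoa_low_deg: float, aoa_high_deg: float,
                       threshold_deg: float = 10.0) -> int:
    """1-bit similarity parameter based on the AoA of the dominant path:
    1 (similar) if the magnitude of the difference is below the threshold, else 0."""
    diff = abs(aoa_low_deg - aoa_high_deg) % 360.0
    return 1 if diff < threshold_deg else 0

assert similarity_bit_aoa(32.0, 35.5) == 1   # 3.5 degrees apart -> similar
assert similarity_bit_aoa(32.0, 57.0) == 0   # 25 degrees apart -> dissimilar
```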
  • the base station transmits a parameter related to the determined similarity to the UE (S1260).
  • the parameter related to the similarity refers to a parameter whose value is set in S1250. That is, the similarity value can be expressed using 1 bit. For example, if it is determined that the channel spatial information is similar, the parameter will be 1, and if it is determined that the channel spatial information is dissimilar, the parameter will be 0.
  • the UE performs beam training or channel estimation with an algorithm selected based on a parameter related to the similarity of channel spatial information received from the base station (S1270).
  • a subsequent operation (e.g., beam training, channel estimation, etc.) is performed based on the parameter.
  • When the channel spatial information is similar based on the parameter, the UE performs beam training in the high frequency band based on information on the direction of the beam used in the low frequency band.
  • the beam used in the low frequency band may be defined as a reference beam.
  • the UE may select an adjacent beam based on the reference beam.
  • the adjacent beam means a beam adjacent to the reference beam among a plurality of preset beams.
  • the UE may preferentially search for an adjacent beam during beam training of a high frequency band.
  • the UE may perform channel estimation for the signal of the high frequency band based on the channel spatial information measured in the low frequency band.
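  • A hedged sketch of the prioritized search of S1270: beams adjacent to the low-band reference beam are measured first, and the remaining high-band codebook is searched only if needed. The codebook size, the search width of two beams, the RSRP-based measurement callback, and the -90 dBm early-stop level are illustrative assumptions, not values from the specification.

```python
from typing import Callable

def prioritized_beam_search(reference_beam: int, n_beams: int,
                            measure_rsrp: Callable[[int], float],
                            search_width: int = 2,
                            early_stop_dbm: float = -90.0) -> int:
    """Search adjacent beams around the low-band reference beam first,
    then fall back to the remaining beams of the high-band codebook."""
    adjacent = [(reference_beam + off) % n_beams
                for off in range(-search_width, search_width + 1)]
    remaining = [b for b in range(n_beams) if b not in adjacent]
    best_beam, best_rsrp = adjacent[0], float("-inf")
    for beam in adjacent + remaining:
        rsrp = measure_rsrp(beam)
        if rsrp > best_rsrp:
            best_beam, best_rsrp = beam, rsrp
        if beam in adjacent and best_rsrp > early_stop_dbm:
            break  # a good adjacent beam was found; the full sweep is skipped
    return best_beam
```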
  • When the channel spatial information is dissimilar based on the parameter, the UE operates the communication system for the high frequency band independently.
  • When the high-band system is operated independently, the time required for beam search increases; when the low-band spatial information can be reused, this time can be reduced.
  • FIG. 13 illustrates an antenna array for providing low-band and high-band communication services applied to various embodiments of the present specification.
  • a next-generation communication system uses a high frequency resource to satisfy a higher data rate than a conventional communication system.
  • In such high-band communication systems (e.g., mmWave, THz), the UE or the base station may include both an antenna array for the high-band frequency and an antenna array for the low-band frequency.
  • the necessary information includes information about the antenna arrangement of the UE and the base station.
  • the information on the antenna arrangement includes an arrangement state of a plurality of antenna arrays predetermined in the UE or the base station.
  • the plurality of antenna arrays includes a Sub-6G antenna array and a mmWave antenna array.
  • the Sub-6G antenna array means a low-frequency antenna array
  • the mmWave antenna array means a high-frequency antenna array.
  • the technical idea according to various embodiments of the present specification is not limited to the Sub-6G antenna array and the mmWave antenna array, and can be applied without exception to any communication system having both a high-band antenna array and a low-band antenna array.
  • the arrangement state of the antenna array includes an alignment state and a non-alignment state.
  • the alignment state means that the antenna array of the low frequency band and the antenna array of the high frequency band are physically aligned.
  • the misalignment state means that the antenna array of the low frequency band and the antenna array of the high frequency band are physically unaligned.
  • when the antenna arrays are in the aligned state, the UE or the base station determines the arrangement state to be aligned. However, even when the antenna arrays are in the unaligned state, the arrangement state may be determined to be aligned if a predetermined condition is satisfied.
  • the predetermined condition includes at least some of the following first and second conditions.
  • when the covariance matrix is calculated, an additional phase difference per antenna element arises between the low-band and high-band arrays in the unaligned state compared to the aligned state; this additional phase term is given by Equation 2.
  • In Equation 2, n is the antenna index (e.g., 0, ..., M−1), θ is the angle between the two antenna arrays, and M is the number of antenna elements.
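  • Since Equation 2 itself is not reproduced in this text, the sketch below only illustrates the idea of compensating a per-element phase offset that depends on the angle θ between the two arrays before the covariance matrix is formed; the linear-in-n phase model and the half-wavelength spacing are assumptions, not Equation 2.

```python
import numpy as np

def compensate_array_rotation(x: np.ndarray, theta: float,
                              spacing_wl: float = 0.5) -> np.ndarray:
    """Apply a hypothetical per-element phase correction for an angle theta
    (radians) between the low-band and high-band arrays.
    x is an M x T matrix of received snapshots (M antennas, T snapshots)."""
    n = np.arange(x.shape[0])                       # antenna index 0, ..., M-1
    phase = 2 * np.pi * spacing_wl * n * np.sin(theta)
    return x * np.exp(-1j * phase)[:, None]

# The covariance is then computed from the compensated snapshots:
# R = x_comp @ x_comp.conj().T / x_comp.shape[1]
```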
  • the communication method may be performed by a base station or a UE applied to the communication systems exemplified herein.
  • the communication method may be used based on TDD, but is not limited thereto.
  • the communication method may be simplified by changing one or more operations as necessary.
  • the UE transmits terminal information to the base station (S1510).
  • the terminal information includes information related to an antenna arrangement of the UE.
  • the information related to the antenna arrangement of the UE includes one or more of the following information.
  • the information on whether the antenna arrays can be aligned refers to information about the state of alignment of the antenna arrays.
  • the alignment state includes an aligned state and an unaligned state. In the aligned state, whether alignment is possible is irrelevant. Even in the unaligned state, the antenna arrays may be regarded as aligned if a predetermined condition is satisfied. A detailed description of this information is the same as described above with reference to FIG. 13.
  • the information on the RF calibration is information on the RF calibration performed to correct differences in the RF characteristics of the RF elements; it indicates that, as a result of the calibration, the signal phase and signal magnitude at the end of each antenna constituting the low-band and high-band antenna arrays are within a certain range.
  • the reference value for determining the similarity refers to a reference value for determining the similarity of the channel spatial information in S1550.
  • the reference value is set by the UE, unlike in FIG. 12 , and the base station does not set the reference value.
  • the communication method according to some embodiments of FIG. 15 does not transmit base station information to the terminal when a predetermined condition is satisfied.
  • the predetermined condition includes (i) a first condition requiring that the computational capability check operation (S1230 in FIG. 12) be performed by the base station, and (ii) a second condition requiring that the reference value for determining the similarity of the channel spatial information be determined by the UE.
  • the base station checks the calculation capability of the base station related to the similarity of the channel space information based on the terminal information and the preset base station information (S1520).
  • the information on which the confirmation is based includes (i) information on whether the antenna arrays can be aligned and/or (ii) information on RF Calibration. Since the information on the computational capability of the spatial information similarity is pre-stored in the base station, it is not a basis for confirmation. A detailed description of the corresponding information is the same as that described above with reference to FIG. 12, and thus will be omitted.
  • the base station determines that it has the computational capability when all predetermined conditions based on (i) the information on whether the antenna arrays are alignable and/or (ii) the information on the RF calibration are satisfied.
  • the base station may confirm that the UE can align the antenna arrays based on the information on whether the antenna arrays are alignable.
  • the base station may confirm that the UE is RF-calibrated based on the information on the RF calibration.
  • the base station transmits a report including the check result to the terminal (S1530).
  • Upon receiving the report, the UE requests the base station to provide the similarity of the channel spatial information (S1540).
  • When the UE receives the report, the UE requests the base station, whose computational capability has been confirmed, to provide the similarity of the channel spatial information.
  • the request includes a request for calculating the degree of similarity with respect to the channel spatial information and a request for transmitting the calculated degree of similarity.
  • the base station may transmit the calculated similarity to the UE dynamically or semi-persistently according to the UE's request.
  • the similarity calculation request may include information on an operation expected by the UE.
  • the operation expected by the UE includes a CMD operation based on a covariance matrix or an estimation operation based on AoA.
  • the base station determines the similarity of the channel space information in response to receiving the request from the UE (S1550).
  • the base station may perform a covariance matrix-based CMD operation or an AoA-based estimation operation to determine the similarity. A detailed description thereof is omitted because it largely overlaps with the description of S1250 of FIG. 12; the differences are mainly described below.
  • a threshold value for determining similarity or dissimilarity corresponds to a reference value included in terminal information.
  • the threshold value may be a reference value received through terminal information or a value preset in the base station. That is, it is different from the embodiment of FIG. 12 in that it is not set to a value determined by the base station.
  • the base station transmits a parameter related to the determined similarity to the UE (S1560).
  • the UE performs beam training or channel estimation based on a parameter related to the similarity of the channel spatial information received from the base station (S1570).
  • the communication method may be performed by a base station or a UE applied to the communication systems exemplified herein.
  • the communication method may be used based on TDD, but is not limited thereto.
  • the communication method may be simplified by changing one or more operations as necessary.
  • the base station transmits base station information to the UE (S1610).
  • the base station information includes information related to an antenna arrangement of the base station.
  • the information related to the antenna arrangement of the base station includes one or more of the following information.
  • the communication method according to some embodiments of FIG. 16 does not transmit terminal information to the base station when a predetermined condition is satisfied.
  • the predetermined condition includes (i) a first condition requiring that the computational capability check operation (S1230 in FIG. 12) be performed by the terminal, and (ii) a second condition requiring that the reference value for determining the similarity of the channel spatial information be determined by the base station.
  • the UE checks the computing capability of the base station related to the similarity of the channel space information based on the base station information and the preset terminal information (S1620).
  • the UE checks the arithmetic capability of the base station related to the similarity of the channel space information based on the base station information received from the base station and the preset terminal information.
  • the information on which the confirmation is based includes (i) information on the computational capability of the spatial information similarity, (ii) information on whether the antenna arrays are alignable, and/or (iii) information on the RF calibration.
  • when all predetermined conditions based on (i) the information on the computational capability of the spatial information similarity, (ii) the information on whether the antenna arrays are alignable, and/or (iii) the information on the RF calibration are satisfied, the UE determines that the base station has the computational capability.
  • the UE may check whether the operation supported by the base station matches the operation expected by the UE based on the information on the computational capability of the spatial information similarity. As another example, the UE may confirm that the base station can align the antenna arrays based on the information on whether the antenna arrays are alignable. As another example, the UE may confirm that the base station is RF-calibrated based on the information on the RF calibration.
  • When it is confirmed that the base station has the computational capability, the UE requests the base station to provide the similarity of the channel spatial information (S1630).
  • the base station determines the similarity of channel spatial information in response to receiving the request from the UE (S1640).
  • the base station may perform a covariance matrix-based CMD operation or an AoA-based estimation operation to determine the similarity.
  • a threshold value for determining similarity or dissimilarity corresponds to a reference value included in base station information.
  • the threshold value may be a reference value received through base station information or a value preset in the base station. That is, it is different from the embodiments of FIG. 12 in that it is not set to a value determined by the UE.
  • the base station transmits a parameter related to the determined similarity to the UE (S1650).
  • the UE performs beam training or channel estimation based on a parameter related to the similarity of the channel spatial information received from the base station (S1660).
  • FIG. 17 illustrates a communication system applied to the present invention.
  • a communication system 1 applied to the present invention includes a wireless device, a base station, and a network.
  • the wireless device refers to a device that performs communication using a radio access technology (eg, 5G NR (New RAT), LTE (Long Term Evolution)), and may be referred to as a communication/wireless/5G device.
  • the wireless device may include a robot 100a, vehicles 100b-1 and 100b-2, an eXtended Reality (XR) device 100c, a hand-held device 100d, a home appliance 100e, an Internet of Things (IoT) device 100f, and an AI device/server 400.
  • the vehicle may include a vehicle equipped with a wireless communication function, an autonomous driving vehicle, a vehicle capable of performing inter-vehicle communication, and the like.
  • the vehicle may include an Unmanned Aerial Vehicle (UAV) (eg, a drone).
  • XR devices include AR (Augmented Reality)/VR (Virtual Reality)/MR (Mixed Reality) devices, and may be implemented in the form of a Head-Mounted Device (HMD), a Head-Up Display (HUD) provided in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a robot, and the like.
  • the portable device may include a smart phone, a smart pad, a wearable device (eg, a smart watch, smart glasses), a computer (eg, a laptop computer), and the like.
  • Home appliances may include a TV, a refrigerator, a washing machine, and the like.
  • the IoT device may include a sensor, a smart meter, and the like.
  • the base station and the network may be implemented as a wireless device, and the specific wireless device 200a may operate as a base station/network node to other wireless devices.
  • the wireless devices 100a to 100f may be connected to the network 300 through the base station 200 .
  • the network 300 may be configured using a 3G network, a 4G (eg, LTE) network, or a 5G (eg, NR) network.
  • the wireless devices 100a to 100f may communicate with each other through the base station 200/network 300, but may also communicate directly (e.g. sidelink communication) without passing through the base station/network.
  • the vehicles 100b-1 and 100b-2 may perform direct communication (e.g. Vehicle to Vehicle (V2V)/Vehicle to everything (V2X) communication).
  • the IoT device may communicate directly with other IoT devices (eg, sensor) or other wireless devices 100a to 100f.
  • Wireless communication/connections 150a and 150b may be performed between the wireless devices 100a to 100f and the base station 200, and between the wireless devices 100a to 100f themselves.
  • the wireless communication/connection may be performed through various wireless access technologies (eg, 5G NR) for uplink/downlink communication 150a and sidelink communication 150b (or D2D communication).
  • the wireless device and the base station/wireless device may transmit/receive wireless signals to each other.
  • the wireless communication/connection 150a and 150b may transmit/receive signals through various physical channels based on all/part of the process of FIG. A1 .
  • For the wireless communication/connection, at least a part of various configuration information setting processes for wireless signal transmission/reception, various signal processing processes (e.g., channel encoding/decoding, modulation/demodulation, resource mapping/demapping, etc.), and resource allocation processes may be performed.
  • the first wireless device 100 and the second wireless device 200 may transmit/receive wireless signals through various wireless access technologies (eg, LTE, NR).
  • {the first wireless device 100, the second wireless device 200} may correspond to {the wireless device 100x, the base station 200} and/or {the wireless device 100x, the wireless device 100x} of FIG. 17.
  • the first wireless device 100 includes one or more processors 102 and one or more memories 104 , and may further include one or more transceivers 106 and/or one or more antennas 108 .
  • the processor 102 controls the memory 104 and/or the transceiver 106 and may be configured to implement the functions, procedures and/or methods described/suggested above. For example, the processor 102 may process information in the memory 104 to generate first information/signal, and then transmit a wireless signal including the first information/signal through the transceiver 106 . In addition, the processor 102 may receive the radio signal including the second information/signal through the transceiver 106 , and then store information obtained from signal processing of the second information/signal in the memory 104 .
  • the memory 104 may be connected to the processor 102 and may store various information related to the operation of the processor 102 .
  • the memory 104 may store software code including instructions for performing some or all of the processes controlled by the processor 102 , or for performing the procedures and/or methods described/suggested above.
  • the processor 102 and the memory 104 may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • the transceiver 106 may be coupled to the processor 102 , and may transmit and/or receive wireless signals via one or more antennas 108 .
  • the transceiver 106 may include a transmitter and/or a receiver.
  • the transceiver 106 may be used interchangeably with a radio frequency (RF) unit.
  • a wireless device may refer to a communication modem/circuit/chip.
  • the second wireless device 200 includes one or more processors 202 , one or more memories 204 , and may further include one or more transceivers 206 and/or one or more antennas 208 .
  • the processor 202 controls the memory 204 and/or the transceiver 206 and may be configured to implement the functions, procedures, and/or methods described/suggested above. For example, the processor 202 may process the information in the memory 204 to generate third information/signal, and then transmit a wireless signal including the third information/signal through the transceiver 206 . In addition, the processor 202 may receive the radio signal including the fourth information/signal through the transceiver 206 , and then store information obtained from signal processing of the fourth information/signal in the memory 204 .
  • the memory 204 may be connected to the processor 202 and may store various information related to the operation of the processor 202 .
  • the memory 204 may store software code including instructions for performing some or all of the processes controlled by the processor 202 , or for performing the procedures and/or methods described/suggested above.
  • the processor 202 and the memory 204 may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • the transceiver 206 may be coupled to the processor 202 and may transmit and/or receive wireless signals via one or more antennas 208 .
  • the transceiver 206 may include a transmitter and/or a receiver.
  • the transceiver 206 may be used interchangeably with an RF unit.
  • a wireless device may refer to a communication modem/circuit/chip.
  • one or more protocol layers may be implemented by one or more processors 102 , 202 .
  • one or more processors 102 , 202 may implement one or more layers (eg, functional layers such as PHY, MAC, RLC, PDCP, RRC, SDAP).
  • the one or more processors 102 and 202 may generate one or more Protocol Data Units (PDUs) and/or one or more Service Data Units (SDUs) according to the functions, procedures, proposals and/or methods disclosed herein.
  • One or more processors 102 , 202 may generate messages, control information, data, or information according to the functions, procedures, proposals and/or methods disclosed herein.
  • the one or more processors 102 and 202 may generate a signal (e.g., a baseband signal) including PDUs, SDUs, messages, control information, data, or information according to the functions, procedures, proposals, and/or methods disclosed herein, and provide the signal to one or more transceivers 106 and 206.
  • One or more processors 102, 202 may receive signals (e.g., baseband signals) from one or more transceivers 106, 206 and obtain PDUs, SDUs, messages, control information, data, or information according to the functions, procedures, proposals, and/or methods disclosed herein.
  • One or more processors 102, 202 may be referred to as a controller, microcontroller, microprocessor, or microcomputer.
  • One or more processors 102 , 202 may be implemented by hardware, firmware, software, or a combination thereof.
  • For example, the one or more processors 102, 202 may be implemented using one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), and/or FPGAs (Field Programmable Gate Arrays).
  • the functions, procedures, proposals and/or methods disclosed in this document may be implemented using firmware or software, and the firmware or software may be implemented to include modules, procedures, functions, and the like.
  • Firmware or software configured to perform the functions, procedures, proposals, and/or methods disclosed herein may be included in one or more processors 102, 202, or may be stored in one or more memories 104, 204 and driven by the one or more processors 102, 202.
  • the functions, procedures, proposals and/or methods disclosed in this document may be implemented using firmware or software in the form of code, instructions, and/or a set of instructions.
  • One or more memories 104 , 204 may be coupled with one or more processors 102 , 202 , and may store various forms of data, signals, messages, information, programs, code, instructions, and/or instructions.
  • the one or more memories 104 and 204 may be comprised of ROM, RAM, EPROM, flash memory, hard drives, registers, cache memory, computer readable storage media, and/or combinations thereof.
  • One or more memories 104 , 204 may be located inside and/or external to one or more processors 102 , 202 . Additionally, one or more memories 104 , 204 may be coupled to one or more processors 102 , 202 through various technologies, such as wired or wireless connections.
  • One or more transceivers 106 , 206 may transmit user data, control information, radio signals/channels, etc. referred to in the methods and/or operational flowcharts of this document to one or more other devices.
  • the one or more transceivers 106 and 206 may receive user data, control information, radio signals/channels, etc. referred to in the functions, procedures, proposals, methods, and/or flowcharts of operations disclosed herein, and the like, from one or more other devices.
  • one or more transceivers 106 , 206 may be coupled to one or more processors 102 , 202 and may transmit and receive wireless signals.
  • one or more processors 102 , 202 may control one or more transceivers 106 , 206 to transmit user data, control information, or wireless signals to one or more other devices.
  • one or more processors 102 , 202 may control one or more transceivers 106 , 206 to receive user data, control information, or wireless signals from one or more other devices.
  • one or more transceivers 106, 206 may be connected to one or more antennas 108, 208, and may be configured to transmit and receive, through the one or more antennas 108, 208, the user data, control information, radio signals/channels, etc. referred to in the functions, procedures, proposals, and methods disclosed herein.
  • one or more antennas may be a plurality of physical antennas or a plurality of logical antennas (eg, antenna ports).
  • in order to process the received user data, control information, radio signals/channels, etc. using the one or more processors 102, 202, the one or more transceivers 106, 206 may convert the received radio signals/channels from RF band signals into baseband signals.
  • One or more transceivers 106 , 206 may convert user data, control information, radio signals/channels, etc. processed using one or more processors 102 , 202 from baseband signals to RF band signals.
  • one or more transceivers 106 , 206 may include (analog) oscillators and/or filters.
  • FIG. 19 illustrates a signal processing circuit for a transmission signal.
  • the signal processing circuit 1000 may include a scrambler 1010 , a modulator 1020 , a layer mapper 1030 , a precoder 1040 , a resource mapper 1050 , and a signal generator 1060 .
  • the operations/functions of FIG. 19 may be performed by the processors 102 and 202 and/or the transceivers 106 and 206 of FIG. 18 .
  • the hardware elements of FIG. 19 may be implemented in processors 102 , 202 and/or transceivers 106 , 206 of FIG. 18 .
  • blocks 1010 to 1060 may be implemented in the processors 102 and 202 of FIG. 18 .
  • blocks 1010 to 1050 may be implemented in the processors 102 and 202 of FIG. 18
  • block 1060 may be implemented in the transceivers 106 and 206 of FIG. 18 .
  • the codeword may be converted into a wireless signal through the signal processing circuit 1000 of FIG. 19 .
  • the codeword is a coded bit sequence of an information block.
  • the information block may include a transport block (eg, a UL-SCH transport block, a DL-SCH transport block).
  • the radio signal may be transmitted through various physical channels (eg, PUSCH, PDSCH) of FIG. A1 .
  • the codeword may be converted into a scrambled bit sequence by the scrambler 1010 .
  • a scramble sequence used for scrambling is generated based on an initialization value, and the initialization value may include ID information of a wireless device, and the like.
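  • As an illustrative sketch only (a deployed system would use a standardized scrambling sequence generator rather than a generic pseudo-random generator), scrambling can be viewed as an XOR of the codeword bits with a sequence seeded by the initialization value:

```python
import numpy as np

def scramble(bits: np.ndarray, init_value: int) -> np.ndarray:
    """XOR the codeword bits with a pseudo-random sequence derived from an
    initialization value (e.g. based on the ID information of the wireless device)."""
    rng = np.random.default_rng(init_value)
    c = rng.integers(0, 2, size=bits.size)
    return np.bitwise_xor(bits, c)

scrambled = scramble(np.array([1, 0, 1, 1, 0, 0, 1, 0]), init_value=0x1F2A)
```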
  • the scrambled bit sequence may be modulated by a modulator 1020 into a modulation symbol sequence.
  • the modulation method may include pi/2-Binary Phase Shift Keying (pi/2-BPSK), m-Phase Shift Keying (m-PSK), m-Quadrature Amplitude Modulation (m-QAM), and the like.
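  • As a simple example of the modulation step (QPSK, i.e. m-PSK with m = 4; the exact bit-to-symbol mapping used by a given standard may differ), two scrambled bits are mapped to one unit-energy complex modulation symbol:

```python
import numpy as np

def qpsk_modulate(bits: np.ndarray) -> np.ndarray:
    """Map a scrambled bit sequence onto QPSK symbols (two bits per symbol)."""
    pairs = bits.reshape(-1, 2)
    i = 1 - 2 * pairs[:, 0]          # in-phase component
    q = 1 - 2 * pairs[:, 1]          # quadrature component
    return (i + 1j * q) / np.sqrt(2)

symbols = qpsk_modulate(np.array([0, 0, 0, 1, 1, 0, 1, 1]))  # four QPSK symbols
```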
  • the complex modulation symbol sequence may be mapped to one or more transport layers by the layer mapper 1030 .
  • Modulation symbols of each transport layer may be mapped to corresponding antenna port(s) by the precoder 1040 (precoding).
  • the output z of the precoder 1040 may be obtained by multiplying the output y of the layer mapper 1030 by the precoding matrix W of N*M.
  • N is the number of antenna ports
  • M is the number of transport layers.
  • the precoder 1040 may perform precoding after performing transform precoding (eg, DFT transform) on the complex modulation symbols. Also, the precoder 1040 may perform precoding without performing transform precoding.
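  • A minimal sketch of the precoding step described above: the M layer symbols y are multiplied by an N x M precoding matrix W to obtain the N antenna-port symbols z (the example matrix W is hypothetical):

```python
import numpy as np

def precode(y: np.ndarray, w: np.ndarray) -> np.ndarray:
    """z = W @ y, where W is the N x M precoding matrix (N antenna ports, M layers)."""
    n_ports, n_layers = w.shape
    assert y.shape[0] == n_layers
    return w @ y

y = np.array([[1 + 1j], [1 - 1j]]) / np.sqrt(2)                  # M = 2 layer symbols
w = np.array([[1, 0], [0, 1], [1, 0], [0, 1]]) / np.sqrt(2)      # N x M = 4 x 2 precoder
z = precode(y, w)                                                # 4 antenna-port symbols
```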
  • the resource mapper 1050 may map modulation symbols of each antenna port to a time-frequency resource.
  • the time-frequency resource may include a plurality of symbols (eg, a CP-OFDMA symbol, a DFT-s-OFDMA symbol) in the time domain and a plurality of subcarriers in the frequency domain.
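  • As a hedged sketch of the resource mapping, the modulation symbols of one antenna port are placed onto a subcarrier-by-symbol grid, filling the frequency dimension first (the grid dimensions and the frequency-first ordering are illustrative choices):

```python
import numpy as np

def map_to_grid(symbols: np.ndarray, n_subcarriers: int, n_ofdm_symbols: int) -> np.ndarray:
    """Place modulation symbols onto a time-frequency grid
    (rows = subcarriers, columns = OFDM symbols), frequency first."""
    flat = np.zeros(n_subcarriers * n_ofdm_symbols, dtype=complex)
    flat[: symbols.size] = symbols
    return flat.reshape(n_ofdm_symbols, n_subcarriers).T

grid = map_to_grid(np.ones(6, dtype=complex), n_subcarriers=12, n_ofdm_symbols=2)
```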
  • Here, CP denotes Cyclic Prefix; to produce the transmitted radio signal, the signal generator 1060 may include, for example, a CP inserter and a DAC (Digital-to-Analog Converter).
  • in the wireless device (e.g., 100 and 200 in FIG. 18), a signal processing process for a received signal may be configured as the reverse of the signal processing processes 1010 to 1060 of FIG. 19.
  • the received radio signal may be converted into a baseband signal through a signal restorer.
  • the signal restorer may include a frequency down-converter, an analog-to-digital converter (ADC), a CP remover, and a Fast Fourier Transform (FFT) module.
  • the baseband signal may be restored to a codeword through a resource de-mapper process, a postcoding process, a demodulation process, and a descrambling process.
  • the codeword may be restored to the original information block through decoding.
  • the signal processing circuit (not shown) for the received signal may include a signal restorer, a resource de-mapper, a post coder, a demodulator, a descrambler, and a decoder.
  • FIG. 20 shows another example of a wireless device to which the present invention is applied.
  • the wireless device may be implemented in various forms according to use-examples/services (refer to FIGS. 17 and 21 to 26 ).
  • the wireless devices 100 and 200 correspond to the wireless devices 100 and 200 of FIG. 18 and may be composed of various elements, components, units, and/or modules.
  • the wireless devices 100 and 200 may include a communication unit 110 , a control unit 120 , a memory unit 130 , and an additional element 140 .
  • the communication unit may include communication circuitry 112 and transceiver(s) 114 .
  • communication circuitry 112 may include one or more processors 102 , 202 and/or one or more memories 104 , 204 of FIG. 18 .
  • transceiver(s) 114 may include one or more transceivers 106 , 206 and/or one or more antennas 108 , 208 of FIG. 18 .
  • the control unit 120 is electrically connected to the communication unit 110 , the memory unit 130 , and the additional element 140 , and controls general operations of the wireless device.
  • the controller 120 may control the electrical/mechanical operation of the wireless device based on the program/code/command/information stored in the memory unit 130 .
  • the control unit 120 may transmit information stored in the memory unit 130 to the outside (e.g., another communication device) through the communication unit 110 via a wireless/wired interface, or may store, in the memory unit 130, information received from the outside (e.g., another communication device) through the communication unit 110 via a wireless/wired interface.
  • the additional element 140 may be configured in various ways according to the type of the wireless device.
  • the additional element 140 may include at least one of a power unit/battery, an input/output unit (I/O unit), a driving unit, and a computing unit.
  • For example, the wireless device may be implemented in the form of a robot (FIGS. 17 and 100a), a vehicle (FIGS. 17, 100b-1, 100b-2), an XR device (FIGS. 17 and 100c), a mobile device (FIGS. 17 and 100d), a home appliance (FIGS. 17 and 100e), an IoT device (FIGS. 17 and 100f), a digital broadcasting terminal, a hologram device, a public safety device, an MTC device, a medical device, a fintech device (or financial device), a security device, a climate/environment device, an AI server/device (FIGS. 17 and 400), a base station (FIGS. 17 and 200), a network node, and the like.
  • the wireless device may be mobile or used in a fixed location depending on the use-example/service.
  • various elements, components, units/units, and/or modules in the wireless devices 100 and 200 may be all interconnected through a wired interface, or at least some of them may be wirelessly connected through the communication unit 110 .
  • the control unit 120 and the communication unit 110 may be connected by wire, and the control unit 120 and the first units (e.g., 130, 140) may be connected wirelessly through the communication unit 110.
  • each element, component, unit/unit, and/or module within the wireless device 100 , 200 may further include one or more elements.
  • the controller 120 may be configured with one or more processor sets.
  • control unit 120 may be configured as a set of a communication control processor, an application processor, an electronic control unit (ECU), a graphic processing processor, a memory control processor, and the like.
  • the memory unit 130 may be composed of random access memory (RAM), dynamic RAM (DRAM), read only memory (ROM), flash memory, volatile memory, non-volatile memory, and/or a combination thereof.
  • FIG. 20 will be described in more detail with reference to the drawings.
  • the portable device may include a smart phone, a smart pad, a wearable device (eg, a smart watch, smart glasses), and a portable computer (eg, a laptop computer).
  • a mobile device may be referred to as a mobile station (MS), a user terminal (UT), a mobile subscriber station (MSS), a subscriber station (SS), an advanced mobile station (AMS), or a wireless terminal (WT).
  • the portable device 100 may include an antenna unit 108, a communication unit 110, a control unit 120, a memory unit 130, a power supply unit 140a, an interface unit 140b, and an input/output unit 140c.
  • the antenna unit 108 may be configured as a part of the communication unit 110 .
  • Blocks 110 to 130/140a to 140c respectively correspond to blocks 110 to 130/140 of FIG. 20 .
  • the communication unit 110 may transmit and receive signals (eg, data, control signals, etc.) with other wireless devices and base stations.
  • the controller 120 may perform various operations by controlling the components of the portable device 100 .
  • the controller 120 may include an application processor (AP).
  • the memory unit 130 may store data/parameters/programs/codes/commands necessary for driving the portable device 100 . Also, the memory unit 130 may store input/output data/information.
  • the power supply unit 140a supplies power to the portable device 100 and may include a wired/wireless charging circuit, a battery, and the like.
  • the interface unit 140b may support a connection between the portable device 100 and other external devices.
  • the interface unit 140b may include various ports (eg, an audio input/output port and a video input/output port) for connection with an external device.
  • the input/output unit 140c may receive or output image information/signal, audio information/signal, data, and/or information input from a user.
  • the input/output unit 140c may include a camera, a microphone, a user input unit, a display unit 140d, a speaker, and/or a haptic module.
  • the input/output unit 140c may obtain information/signals (e.g., touch, text, voice, image, video) input from the user, and the obtained information/signals may be stored in the memory unit 130.
  • the communication unit 110 may convert the information/signal stored in the memory into a wireless signal, and transmit the converted wireless signal directly to another wireless device or to a base station. Also, after receiving a radio signal from another radio device or base station, the communication unit 110 may restore the received radio signal to original information/signal. After the restored information/signal is stored in the memory unit 130 , it may be output in various forms (eg, text, voice, image, video, haptic) through the input/output unit 140c.
  • the vehicle or autonomous driving vehicle may be implemented as a mobile robot, a vehicle, a train, an aerial vehicle (AV), a ship, and the like.
  • the vehicle or autonomous driving vehicle 100 may include an antenna unit 108, a communication unit 110, a control unit 120, a driving unit 140a, a power supply unit 140b, a sensor unit 140c, and an autonomous driving unit 140d.
  • the antenna unit 108 may be configured as a part of the communication unit 110 .
  • Blocks 110/130/140a-140d correspond to blocks 110/130/140 of FIG. 20, respectively.
  • the communication unit 110 may transmit/receive signals (eg, data, control signals, etc.) to and from external devices such as other vehicles, base stations (e.g., base stations, roadside units, etc.), servers, and the like.
  • the controller 120 may control elements of the vehicle or the autonomous driving vehicle 100 to perform various operations.
  • the controller 120 may include an Electronic Control Unit (ECU).
  • the driving unit 140a may cause the vehicle or the autonomous driving vehicle 100 to run on the ground.
  • the driving unit 140a may include an engine, a motor, a power train, a wheel, a brake, a steering device, and the like.
  • the power supply unit 140b supplies power to the vehicle or the autonomous driving vehicle 100 , and may include a wired/wireless charging circuit, a battery, and the like.
  • the sensor unit 140c may obtain vehicle status, surrounding environment information, user information, and the like.
  • the sensor unit 140c may include an IMU (inertial measurement unit) sensor, a collision sensor, a wheel sensor, a speed sensor, an inclination sensor, a weight sensor, a heading sensor, a position module, a vehicle forward/reverse sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor, a temperature sensor, a humidity sensor, an ultrasonic sensor, an illuminance sensor, a pedal position sensor, and the like.
  • the autonomous driving unit 140d may implement a technology for maintaining a driving lane, a technology for automatically adjusting speed such as adaptive cruise control, a technology for automatically driving along a predetermined route, and a technology for automatically setting a route when a destination is set.
  • the communication unit 110 may receive map data, traffic information data, and the like from an external server.
  • the autonomous driving unit 140d may generate an autonomous driving route and a driving plan based on the acquired data.
  • the controller 120 may control the driving unit 140a to move the vehicle or the autonomous driving vehicle 100 along the autonomous driving path (eg, speed/direction adjustment) according to the driving plan.
  • the communication unit 110 may obtain the latest traffic information data from an external server non/periodically, and may acquire surrounding traffic information data from surrounding vehicles.
  • the sensor unit 140c may acquire vehicle state and surrounding environment information.
  • the autonomous driving unit 140d may update the autonomous driving route and driving plan based on the newly acquired data/information.
  • the communication unit 110 may transmit information about a vehicle location, an autonomous driving route, a driving plan, and the like to an external server.
  • the external server may predict traffic information data in advance using AI technology or the like based on information collected from the vehicle or autonomous driving vehicles, and may provide the predicted traffic information data to the vehicle or autonomous driving vehicles.
  • FIG. 23 illustrates a vehicle to which the present invention is applied.
  • the vehicle may also be implemented as a means of transportation, a train, an air vehicle, a ship, and the like.
  • the vehicle 100 may include a communication unit 110 , a control unit 120 , a memory unit 130 , an input/output unit 140a , and a position measurement unit 140b .
  • blocks 110 to 130/140a to 140b correspond to blocks 110 to 130/140 of FIG. 20 , respectively.
  • the communication unit 110 may transmit and receive signals (eg, data, control signals, etc.) with other vehicles or external devices such as a base station.
  • the controller 120 may control components of the vehicle 100 to perform various operations.
  • the memory unit 130 may store data/parameters/programs/codes/commands supporting various functions of the vehicle 100 .
  • the input/output unit 140a may output an AR/VR object based on information in the memory unit 130 .
  • the input/output unit 140a may include a HUD.
  • the position measuring unit 140b may acquire position information of the vehicle 100 .
  • the location information may include absolute location information of the vehicle 100 , location information within a driving line, acceleration information, location information with a surrounding vehicle, and the like.
  • the position measuring unit 140b may include a GPS and various sensors.
  • the communication unit 110 of the vehicle 100 may receive map information, traffic information, and the like from an external server and store it in the memory unit 130 .
  • the position measuring unit 140b may acquire vehicle position information through GPS and various sensors and store it in the memory unit 130 .
  • the controller 120 may generate a virtual object based on map information, traffic information, vehicle location information, and the like, and the input/output unit 140a may display the created virtual object on a window inside the vehicle ( 1410 and 1420 ).
  • the controller 120 may determine whether the vehicle 100 is normally operating within the driving line based on the vehicle location information. When the vehicle 100 deviates from the driving line abnormally, the controller 120 may display a warning on the windshield of the vehicle through the input/output unit 140a.
  • control unit 120 may broadcast a warning message regarding driving abnormality to surrounding vehicles through the communication unit 110 .
  • control unit 120 may transmit the location information of the vehicle and information on driving/vehicle abnormality to a related organization through the communication unit 110 .
  • the XR device may be implemented as an HMD, a head-up display (HUD) provided in a vehicle, a television, a smart phone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a robot, and the like.
  • the XR device 100a may include a communication unit 110 , a control unit 120 , a memory unit 130 , an input/output unit 140a , a sensor unit 140b and a power supply unit 140c .
  • blocks 110 to 130/140a to 140c correspond to blocks 110 to 130/140 of FIG. 20 , respectively.
  • the communication unit 110 may transmit/receive signals (eg, media data, control signals, etc.) to/from external devices such as other wireless devices, portable devices, or media servers.
  • Media data may include video, images, and sound.
  • the controller 120 may perform various operations by controlling the components of the XR device 100a.
  • the controller 120 may be configured to control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation and processing.
  • the memory unit 130 may store data/parameters/programs/codes/commands necessary for driving the XR device 100a/creating an XR object.
  • the input/output unit 140a may obtain control information, data, and the like from the outside, and may output the generated XR object.
  • the input/output unit 140a may include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module.
  • the sensor unit 140b may obtain an XR device state, surrounding environment information, user information, and the like.
  • the sensor unit 140b may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
  • the power supply unit 140c supplies power to the XR device 100a, and may include a wired/wireless charging circuit, a battery, and the like.
  • the memory unit 130 of the XR device 100a may include information (eg, data, etc.) necessary for generating an XR object (eg, AR/VR/MR object).
  • the input/output unit 140a may obtain a command to operate the XR device 100a from the user, and the controller 120 may drive the XR device 100a according to the user's driving command. For example, when the user intends to watch a movie or news through the XR device 100a, the controller 120 may transmit the content request information to another device (e.g., the portable device 100b) or to a media server through the communication unit 110.
  • the communication unit 110 may download/stream contents such as movies and news from another device (e.g., the portable device 100b) or a media server to the memory unit 130.
  • the controller 120 may control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation/processing for the content, and may generate/output an XR object based on information about the surrounding space or a real object acquired through the input/output unit 140a/sensor unit 140b.
  • the XR device 100a is wirelessly connected to the portable device 100b through the communication unit 110 , and the operation of the XR device 100a may be controlled by the portable device 100b.
  • the portable device 100b may operate as a controller for the XR device 100a.
  • the XR device 100a may obtain 3D location information of the portable device 100b, and then generate and output an XR object corresponding to the portable device 100b.
  • Robots can be classified into industrial, medical, home, military, etc. depending on the purpose or field of use.
  • the robot 100 may include a communication unit 110 , a control unit 120 , a memory unit 130 , an input/output unit 140a , a sensor unit 140b , and a driving unit 140c .
  • blocks 110 to 130/140a to 140c correspond to blocks 110 to 130/140 of FIG. 20 , respectively.
  • the communication unit 110 may transmit/receive signals (eg, driving information, control signals, etc.) with external devices such as other wireless devices, other robots, or control servers.
  • the controller 120 may perform various operations by controlling the components of the robot 100 .
  • the memory unit 130 may store data/parameters/programs/codes/commands supporting various functions of the robot 100 .
  • the input/output unit 140a may obtain information from the outside of the robot 100 and may output information to the outside of the robot 100 .
  • the input/output unit 140a may include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module.
  • the sensor unit 140b may obtain internal information, surrounding environment information, user information, and the like of the robot 100 .
  • the sensor unit 140b may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, a radar, and the like.
  • the driving unit 140c may perform various physical operations, such as moving a robot joint. In addition, the driving unit 140c may make the robot 100 travel on the ground or fly in the air.
  • the driving unit 140c may include an actuator, a motor, a wheel, a brake, a propeller, and the like.
  • AI devices may be implemented as any fixed or mobile device, such as a TV, a projector, a smartphone, a PC, a laptop, a digital broadcasting terminal, a tablet PC, a wearable device, a set-top box (STB), a radio, a washing machine, a refrigerator, a digital signage, a robot, or a vehicle.
  • the AI device 100 may include a communication unit 110, a control unit 120, a memory unit 130, input/output units 140a/140b, a learning processor unit 140c, and a sensor unit 140d. Blocks 110 to 130/140a to 140d correspond to blocks 110 to 130/140 of FIG. 20, respectively.
  • the communication unit 110 may transmit and receive wired/wireless signals (e.g., sensor information, user input, learning models, control signals, etc.) to and from external devices such as other AI devices (e.g., 100x, 200, 400 in FIG. 17) or an AI server using wired/wireless communication technology. To this end, the communication unit 110 may transmit information in the memory unit 130 to an external device or transmit a signal received from the external device to the memory unit 130.
  • the controller 120 may determine at least one executable operation of the AI device 100 based on information determined or generated using a data analysis algorithm or a machine learning algorithm, and may control the components of the AI device 100 to perform the determined operation. For example, the control unit 120 may request, search, receive, or utilize data of the learning processor unit 140c or the memory unit 130, and may control the components of the AI device 100 to execute the operation that is predicted or determined to be desirable among the at least one executable operation. In addition, the control unit 120 may collect history information, including user feedback on the operation or operation contents of the AI device 100, and may store it in the memory unit 130 or the learning processor unit 140c, or transmit it to an external device such as the AI server (FIGS. 17 and 400). The collected history information may be used to update the learning model.
  • the memory unit 130 may store data supporting various functions of the AI device 100 .
  • the memory unit 130 may store data obtained from the input unit 140a , data obtained from the communication unit 110 , output data of the learning processor unit 140c , and data obtained from the sensing unit 140 .
  • the memory unit 130 may store control information and/or software codes necessary for the operation/execution of the control unit 120 .
  • the input unit 140a may acquire various types of data from the outside of the AI device 100 .
  • the input unit 140a may obtain training data for model learning, input data to which a learning model is to be applied, and the like.
  • the input unit 140a may include a camera, a microphone, and/or a user input unit.
  • the output unit 140b may generate an output related to sight, hearing, or touch.
  • the output unit 140b may include a display unit, a speaker, and/or a haptic module.
  • the sensing unit 140 may obtain at least one of internal information of the AI device 100 , surrounding environment information of the AI device 100 , and user information by using various sensors.
  • the sensing unit 140 may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
  • the learning processor unit 140c may train a model composed of an artificial neural network by using the training data.
  • the learning processor unit 140c may perform AI processing together with the learning processor unit of the AI server ( FIGS. 17 and 400 ).
  • the learning processor unit 140c may process information received from an external device through the communication unit 110 and/or information stored in the memory unit 130 .
  • the output value of the learning processor unit 140c may be transmitted to an external device through the communication unit 110 and/or stored in the memory unit 130 .
  • the above-described specification can be implemented as computer-readable code on a medium in which a program is recorded.
  • the computer-readable medium includes all types of recording devices in which data readable by a computer system is stored. Examples of computer-readable media include a Hard Disk Drive (HDD), a Solid State Disk (SSD), a Silicon Disk Drive (SDD), a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, and the medium may also be implemented in the form of a carrier wave (eg, transmission over the Internet).
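
The robot 100 and the AI device 100 described above share a common block structure (a communication unit 110, a control unit 120, a memory unit 130, and peripheral units 140a to 140d). The following Python sketch is purely illustrative and is not part of the disclosed apparatus: every class, method, and value in it is an assumption added here for readability, and the training step only indicates, in the simplest possible form, how a learning processor unit 140c might fit a model from training data.

# Illustrative sketch of the common block structure (110, 120, 130, 140a-140d)
# described above for the robot 100 and the AI device 100. Every name and value
# here is an assumption added for readability; nothing in this code is taken
# from, or limits, the disclosure itself.


class CommunicationUnit:              # block 110
    def transmit(self, signal):
        # eg, driving information, control signals, sensor information
        print("tx:", signal)

    def receive(self):
        return {"sensor_info": None, "user_input": None}


class MemoryUnit:                     # block 130
    def __init__(self):
        # data / parameters / programs / codes / commands supporting the device
        self.storage = {}

    def store(self, key, value):
        self.storage[key] = value


class SensorUnit:                     # block 140b (robot) / 140d (AI device)
    def read(self):
        # placeholder readings for, eg, a proximity and an illuminance sensor
        return {"proximity": 0.5, "illuminance": 300}


class LearningProcessorUnit:          # block 140c (AI device)
    def train(self, training_data):
        # Placeholder for training an artificial-neural-network model from
        # training data; a real unit could offload this to the AI server.
        return {"weights": [0.0 for _ in training_data]}


class AIDevice:                       # device 100
    def __init__(self):
        self.communication_unit = CommunicationUnit()      # 110
        self.memory_unit = MemoryUnit()                     # 130
        self.sensor_unit = SensorUnit()                     # 140d
        self.learning_processor = LearningProcessorUnit()   # 140c

    def determine_operation(self):
        # Controller (120): decide an executable operation, here simply based
        # on whether a learned model is available in the memory unit.
        model = self.memory_unit.storage.get("model")
        return "execute_predicted_operation" if model else "collect_training_data"


if __name__ == "__main__":
    device = AIDevice()
    model = device.learning_processor.train(training_data=[1, 2, 3])
    device.memory_unit.store("model", model)
    device.communication_unit.transmit({"history": device.sensor_unit.read()})
    print(device.determine_operation())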

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Power Engineering (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Electromagnetism (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Disclosed is a method by which a terminal having first information performs communication on the basis of the similarity of inter-frequency-band spatial information for a channel in a wireless communication system. A method according to an embodiment of the present specification comprises the steps of: receiving second information from a base station; when the computation capability of the base station related to the similarity of spatial information for the channel is identified on the basis of at least one of the first information and the second information, requesting computation of the similarity from the base station; and, upon receiving a response to the request, performing at least one of beamforming and channel estimation on the basis of the response. A terminal according to the present specification may be linked to an artificial intelligence module, a drone (unmanned aerial vehicle (UAV)), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a device related to a 5G service, and the like.
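
The flow summarised in this abstract (receive second information, identify the base station's computation capability for the spatial-information similarity, request the computation, then perform beamforming and/or channel estimation from the response) can be pictured with the following hypothetical Python sketch. It is not the claimed method: the classes, method names, and message fields below (UE, BaseStation, compute_similarity, etc.) are assumptions introduced only to make the order of the steps concrete.

# Hypothetical sketch of the signalling flow summarised in the abstract.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SecondInformation:
    # "Second information" received from the base station, assumed here to be
    # a simple capability report.
    supports_similarity_computation: bool
    supported_bands: tuple


class BaseStation:
    def send_second_information(self) -> SecondInformation:
        return SecondInformation(True, ("sub-6GHz", "mmWave"))

    def compute_similarity(self, first_information: dict) -> dict:
        # Placeholder: a real base station would compare spatial channel
        # information (eg, angular statistics) across the two frequency bands.
        return {"similarity": 0.9, "beam": 3}


class UE:
    def __init__(self, first_information: dict):
        # "First information" held by the terminal.
        self.first_information = first_information

    def run(self, bs: BaseStation) -> None:
        # Step 1: receive second information from the base station.
        second_info = bs.send_second_information()

        # Step 2: identify, from the first and/or second information, whether
        # the base station can compute the spatial-information similarity.
        if second_info.supports_similarity_computation:
            # Step 3: request the similarity computation from the base station.
            response: Optional[dict] = bs.compute_similarity(self.first_information)

            # Step 4: on receiving the response, perform beamforming and/or
            # channel estimation based on it.
            if response is not None:
                print("beamforming with beam index:", response["beam"])
                print("channel estimation aided by similarity:", response["similarity"])


if __name__ == "__main__":
    UE({"bands": ("sub-6GHz", "mmWave")}).run(BaseStation())
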
PCT/KR2020/010323 2020-08-05 2020-08-05 Procédé de communication basé sur la similarité d'informations spatiales de bande inter-fréquence pour canal dans un système de communication sans fil et appareil associé WO2022030664A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/KR2020/010323 WO2022030664A1 (fr) 2020-08-05 2020-08-05 Procédé de communication basé sur la similarité d'informations spatiales de bande inter-fréquence pour canal dans un système de communication sans fil et appareil associé
KR1020237007016A KR20230049664A (ko) 2020-08-05 2020-08-05 무선 통신 시스템에서 주파수 대역 간 채널 공간 정보의 유사성에 기반한 통신 방법 및 이를 위한 장치

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/010323 WO2022030664A1 (fr) 2020-08-05 2020-08-05 Procédé de communication basé sur la similarité d'informations spatiales de bande inter-fréquence pour canal dans un système de communication sans fil et appareil associé

Publications (1)

Publication Number Publication Date
WO2022030664A1 true WO2022030664A1 (fr) 2022-02-10

Family

ID=80118204

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/010323 WO2022030664A1 (fr) 2020-08-05 2020-08-05 Procédé de communication basé sur la similarité d'informations spatiales de bande inter-fréquence pour canal dans un système de communication sans fil et appareil associé

Country Status (2)

Country Link
KR (1) KR20230049664A (fr)
WO (1) WO2022030664A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160294462A1 (en) * 2013-12-27 2016-10-06 Samsung Electronics Co., Ltd. Method and device for operating beam mode in wireless communication system
KR20180009644A (ko) * 2016-07-19 2018-01-29 한국전자통신연구원 이동무선백홀 네트워크에서의 고속 이동체 단말 및 그의 통신 방법
CN108650200A (zh) * 2018-04-23 2018-10-12 电子科技大学 高低频混合组网系统的低频辅助信道估计方法
US20200022000A1 (en) * 2018-07-16 2020-01-16 Qualcomm Incorporated Beam identification for multi-tci transmission

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ALI ANUM; GONZALEZ-PRELCIC NURIA; HEATH ROBERT W.: "Estimating millimeter wave channels using out-of-band measurements", 2016 INFORMATION THEORY AND APPLICATIONS WORKSHOP (ITA), IEEE, 31 January 2016 (2016-01-31), pages 1 - 6, XP033082393, DOI: 10.1109/ITA.2016.7888146 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115175132A (zh) * 2022-06-30 2022-10-11 重庆邮电大学 一种支持无人机通信感知一体化的预编码及功率分配方法

Also Published As

Publication number Publication date
KR20230049664A (ko) 2023-04-13

Similar Documents

Publication Publication Date Title
WO2021112360A1 (fr) Procédé et dispositif d'estimation de canal dans un système de communication sans fil
WO2022050432A1 (fr) Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2021256584A1 (fr) Procédé d'émission ou de réception de données dans un système de communication sans fil et appareil associé
WO2022050468A1 (fr) Procédé pour réaliser un apprentissage fédéré dans un système de communication sans fil et appareil associé
WO2022045399A1 (fr) Procédé d'apprentissage fédéré basé sur une transmission de poids sélective et terminal associé
WO2022039295A1 (fr) Procédé de prétraitement d'une liaison descendante dans un système de communication sans fil et appareil associé
WO2021251511A1 (fr) Procédé d'émission/réception de signal de liaison montante de bande de fréquences haute dans un système de communication sans fil, et dispositif associé
WO2022025321A1 (fr) Procédé et dispositif de randomisation de signal d'un appareil de communication
WO2022014732A1 (fr) Procédé et appareil d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2022054985A1 (fr) Procédé et appareil d'émission et de réception de signaux par un équipement utilisateur, et station de base dans un système de communication sans fil
WO2022019352A1 (fr) Procédé et appareil de transmission et de réception de signal pour un terminal et une station de base dans un système de communication sans fil
WO2021251523A1 (fr) Procédé et dispositif permettant à un ue et à une station de base d'émettre et de recevoir un signal dans un système de communication sans fil
WO2022030664A1 (fr) Procédé de communication basé sur la similarité d'informations spatiales de bande inter-fréquence pour canal dans un système de communication sans fil et appareil associé
WO2022039302A1 (fr) Procédé destiné au contrôle de calculs de réseau neuronal profond dans un système de communication sans fil, et appareil associé
WO2022050444A1 (fr) Procédé de communication pour un apprentissage fédéré et dispositif pour son exécution
WO2022004927A1 (fr) Procédé d'émission ou de réception de signal avec un codeur automatique dans un système de communication sans fil et appareil associé
WO2022265141A1 (fr) Procédé de réalisation d'une gestion de faisceaux dans un système de communication sans fil et dispositif associé
WO2022119021A1 (fr) Procédé et dispositif d'adaptation d'un système basé sur une classe d'apprentissage à la technologie ai mimo
WO2022014735A1 (fr) Procédé et dispositif permettant à un terminal et une station de base de transmettre et recevoir des signaux dans un système de communication sans fil
WO2022045377A1 (fr) Procédé par lequel un terminal et une station de base émettent/reçoivent des signaux dans un système de communication sans fil, et appareil
WO2022050434A1 (fr) Procédé et appareil pour effectuer un transfert intercellulaire dans système de communication sans fil
WO2022097774A1 (fr) Procédé et dispositif pour la réalisation d'une rétroaction par un terminal et une station de base dans un système de communication sans fil
WO2022080530A1 (fr) Procédé et dispositif pour émettre et recevoir des signaux en utilisant de multiples antennes dans un système de communication sans fil
WO2021256585A1 (fr) Procédé et dispositif pour la transmission/la réception d'un signal dans un système de communication sans fil
WO2022014728A1 (fr) Procédé et appareil pour effectuer un codage de canal par un équipement utilisateur et une station de base dans un système de communication sans fil

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20948242

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20237007016

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20948242

Country of ref document: EP

Kind code of ref document: A1