WO2022151900A1 - Neural network-based channel estimation method and communication apparatus - Google Patents
Neural network-based channel estimation method and communication apparatus
- Publication number
- WO2022151900A1 (PCT/CN2021/138374)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- channel information
- information
- dimension
- layer
- Prior art date
Classifications

- H04L25/0254 — Channel estimation algorithms using neural network algorithms
- G06N3/045 — Combinations of networks
- G06N3/0455 — Auto-encoder networks; encoder-decoder networks
- G06N3/048 — Activation functions
- G06N3/063 — Physical realisation of neural networks using electronic means
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06N20/10 — Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06N3/09 — Supervised learning
Definitions
- the embodiments of the present application relate to the field of communication technologies, and in particular, to a channel estimation method and communication device based on a neural network.
- The application of neural networks (NN) is becoming increasingly widespread.
- Introducing neural networks into the network optimization of wireless communication systems can give the communication system the ability to intelligently perceive and learn the wireless environment.
- Channel information is indispensable for ensuring communication quality, so how to apply neural networks to obtain channel information more efficiently and accurately is a problem that needs to be solved.
- Embodiments of the present application provide a neural network-based channel estimation method and a communication apparatus, so as to obtain channel information based on a neural network and thereby improve the efficiency and accuracy of obtaining channel information.
- a method for channel estimation based on a neural network is provided, and the method can be executed by a communication device or a component of the communication device (for example, a processor, a chip, or a chip system, etc.).
- the communication device may be a terminal device or a network device.
- the method can be implemented by the following steps: acquiring first location information of a first communication apparatus; processing the first location information by using a neural network to obtain first channel information, where the first channel information is information of the wireless channel between the first communication apparatus and a second communication apparatus; and communicating according to the first channel information.
- the channel information is estimated from the location information through a neural network. In this way, channel estimation based on pilot signals can be avoided, the pilot signal overhead can be saved, and the efficiency and accuracy of channel estimation can be improved.
- the parameters of the neural network are obtained by training based on historical data
- the historical data includes one or more mappings of historical location information and historical channel information
- the historical location information is the location information of a training apparatus during its communication with the second communication apparatus
- the historical channel information is the channel information of the training apparatus during its communication with the second communication apparatus. Because the communication process is affected by fixed scatterers in the environment, the historical communication data contains rich environmental information. By making full use of the historical data, the training accuracy of the neural network can be improved, and thus the accuracy of estimating channel information with the neural network can be improved.
- using a neural network to process the first location information to obtain the first channel information may be implemented in the following manner: processing the first location information to obtain second channel information, where the dimension of the second channel information is lower than the dimension of the first channel information; and processing the second channel information to obtain the first channel information.
- the dimension of the second channel information can be closer to the dimension of the first location information, and the second channel information obtained by processing the first location information can more accurately reflect the channel quality.
- the first channel information has the same dimension as conventional channel information, so it can be better used for communication.
- first training is performed on the neural network according to the historical channel information
- the first training process includes: changing the dimension of the historical channel information from a first dimension to a second dimension and then from the second dimension back to the first dimension; second training is then performed on the neural network according to the historical location information corresponding to the historical channel information and the historical channel information of the second dimension.
- changing the dimension of the historical channel information from the first dimension to the second dimension and back to the first dimension can be understood as compressing and then decompressing the historical channel information.
- the first training does not require location information and can be performed independently on historical channel information, which is easier to obtain.
- the activation function of the neural network is a periodic function.
- the channel information is an implicit periodic function of the location information; e.g., the phase of the channel information is an implicit periodic function of the location information.
- using a periodic activation function allows the neural network to better match these characteristics of location information and channel information.
- the neural network includes or conforms to the following formula: $\Phi(x) = W_n(\phi_{n-1} \circ \phi_{n-2} \circ \cdots \circ \phi_0)(x) + b_n$, where $\phi_i(x_i) = \sin(W_i x_i + b_i)$
- the input of the i-th layer of the neural network is $x_i$
- $W_i$ is the weight of the neural network, $b_i$ is the bias of the neural network, and the sine function sin is used as the nonlinear activation function of the neural network; $x$ is the input of the neural network, and $\Phi(x)$ is the output of the neural network.
- the neural network has significantly better performance in fitting implicit periodic functions, and is therefore well suited to representing the periodic functions required by this embodiment.
- using a neural network to process the first position information to obtain the first channel information may also be implemented in the following manner: processing the first position information to obtain one or more pieces of second position information, where the second position information is a function of the mirror points of the first position information; and processing the one or more pieces of second position information to obtain the first channel information.
- the physical process and mathematical form of electromagnetic propagation can be introduced into the design of the neural network structure, while the robustness of the network is preserved by combining it with commonly used neural network structures.
- because the neural network takes into account the transmission characteristics of electromagnetic waves and uses prior information, it requires less data and can still predict the channel accurately even with few training samples.
- the second location information has the same dimension as the first location information.
- the neural network includes an intermediate layer, and the number of neurons in the intermediate layer is an integer multiple of the dimension of the first position information.
- the neural network further includes a radial basis function RBF layer, and the RBF layer is used to process the output of the intermediate layer.
- the activation function of the RBF layer is a periodic kernel function.
- the neural network has the ability to track the phase change of the channel response.
- the activation function of the RBF layer conforms to a formula in which x is the input of the RBF layer, φ(x) is the output of the RBF layer, and a, b, c, w, and σ are the parameters to be trained.
- the first channel information includes uplink channel information and/or downlink channel information.
- a communication apparatus is provided, which may be a communication device, a component located in the communication device (e.g., a chip, a chip system, or a circuit), or an apparatus that can be used in cooperation with the communication device.
- the communication device may be a terminal device or a network device.
- the apparatus has the function of implementing the method described in the first aspect and any possible design of the first aspect above.
- the functions can be implemented by hardware, and can also be implemented by hardware executing corresponding software.
- the hardware or software includes one or more modules corresponding to the above functions.
- the apparatus may include a communication module and a processing module. Exemplarily:
- a processing module, configured to acquire first location information of the first communication apparatus, and to process the first location information by using a neural network to obtain first channel information, where the first channel information is information of the wireless channel between the first communication apparatus and the second communication apparatus; and a communication module, configured to communicate according to the first channel information.
- the parameters of the neural network are obtained by training based on historical data
- the historical data includes one or more mappings of historical location information and historical channel information
- the historical location information is the location information of a training apparatus during its communication with the second communication apparatus
- the historical channel information is the channel information of the training apparatus during the period of communication with the second communication apparatus.
- the processing module, when using a neural network to process the first location information to obtain the first channel information, is specifically configured to: process the first location information to obtain second channel information, where the dimension of the second channel information is lower than the dimension of the first channel information; and process the second channel information to obtain the first channel information.
- the processing module is further configured to: perform first training on the neural network according to the historical channel information, where the first training process includes changing the dimension of the historical channel information from the first dimension to the second dimension and then from the second dimension back to the first dimension; and perform second training on the neural network according to the historical location information corresponding to the historical channel information and the historical channel information of the second dimension.
- the activation function of the neural network is a periodic function.
- the neural network conforms to the following formula: $\Phi(x) = W_n(\phi_{n-1} \circ \phi_{n-2} \circ \cdots \circ \phi_0)(x) + b_n$, where $\phi_i(x_i) = \sin(W_i x_i + b_i)$
- the input of the i-th layer of the neural network is $x_i$
- $W_i$ is the weight of the neural network, $b_i$ is the bias of the neural network, and the sine function sin is used as the nonlinear activation function of the neural network; $x$ is the input of the neural network, and $\Phi(x)$ is the output of the neural network.
- the processing module, when using the neural network to process the first position information to obtain the first channel information, is configured to: process the first position information to obtain one or more pieces of second position information, where the second position information is a function of the mirror points of the first position information; and process the one or more pieces of second position information to obtain the first channel information.
- the second location information has the same dimension as the first location information.
- the neural network includes an intermediate layer, and the number of neurons in the intermediate layer is an integer multiple of the dimension of the first position information.
- the neural network further includes a radial basis function RBF layer, the RBF layer is used to process the output of the intermediate layer.
- the activation function of the RBF layer is a periodic kernel function.
- the activation function of the RBF layer conforms to a formula in which x is the input of the RBF layer, φ(x) is the output of the RBF layer, and a, b, c, w, and σ are the parameters to be trained.
- the first channel information includes uplink channel information and/or downlink channel information.
- an embodiment of the present application provides a communication apparatus, the apparatus includes a communication interface and a processor, and the communication interface is used for the apparatus to communicate with other devices, such as data or signal transmission and reception.
- the communication interface may be a transceiver, circuit, bus, module or other type of communication interface, and other devices may be other communication devices.
- the processor is configured to invoke a set of programs, instructions or data to execute the methods described in the first aspect and each possible design of the first aspect.
- the apparatus may also include a memory for storing programs, instructions or data invoked by the processor. The memory is coupled to the processor, and when the processor executes the instructions or data stored in the memory, the method described in the first aspect or each possible design of the first aspect can be implemented.
- the embodiments of the present application further provide a computer-readable storage medium, where computer-readable instructions are stored in the computer-readable storage medium; when the computer-readable instructions are run on a computer, the methods described in the first aspect or each possible design of the first aspect are performed.
- an embodiment of the present application provides a chip system, where the chip system includes a processor, and may further include a memory, for implementing the method described in the first aspect or each possible design of the first aspect.
- the chip system can be composed of chips, and can also include chips and other discrete devices.
- a computer program product comprising instructions which, when run on a computer, cause the method as described in the first aspect or various possible designs of the first aspect above to be performed.
- FIG. 1 is a schematic structural diagram of a communication system in an embodiment of the application
- FIG. 2a is a schematic diagram of the principle of a fully connected neural network in an embodiment of the present application
- FIG. 2b is a schematic diagram of a fully connected neural network in an embodiment of the present application.
- FIG. 3 is a schematic diagram of optimization of a loss function in an embodiment of the present application.
- FIG. 4 is a schematic diagram of gradient back propagation in an embodiment of the present application.
- FIG. 5 is a schematic flowchart of a specific process of a channel estimation method based on a neural network in an embodiment of the present application
- FIG. 6a is one of the schematic diagrams of estimating channel information through a neural network in an embodiment of the present application.
- FIG. 6b is the second schematic diagram of estimating channel information through a neural network in an embodiment of the present application.
- FIG. 7 is a schematic flowchart of channel estimation performed by the neural network of Example 1 in an embodiment of the present application.
- FIG. 8 is a network schematic diagram of an autoencoder in an embodiment of the present application.
- FIG. 9 is a schematic diagram of the neural network of Example 1 in the embodiment of the present application.
- FIG. 10 is a schematic diagram of a path through which a transmitted signal is reflected by a reflective surface and reaches a receiving end in an embodiment of the present application;
- FIG. 11 is a schematic diagram of the neural network of Example 3 in the embodiment of the present application.
- FIG. 12a is a schematic diagram of a real scene of a cell in an embodiment of the present application.
- FIG. 12b is a schematic diagram of a 3D model of a cell in an embodiment of the present application.
- FIG. 13 is a schematic diagram of a simulation effect of a neural network estimating a channel in an embodiment of the present application
- FIG. 14 is a schematic structural diagram of a communication device according to an embodiment of the present application.
- FIG. 15 is a schematic structural diagram of another communication device according to an embodiment of the present application.
- Embodiments of the present application provide a channel estimation method and communication device based on a neural network.
- the method and the apparatus are based on the same or a similar technical concept. Since they solve the problem on similar principles, the implementations of the apparatus and the method may refer to each other, and repeated descriptions are omitted.
- the neural network-based channel estimation method provided in the embodiments of the present application can be applied to a 5G communication system, such as a 5G new radio (NR) system, and can also be applied to various future communication systems, such as a sixth-generation (6G) communication system or an integrated space-sea communication system.
- FIG. 1 shows the architecture of a communication system to which the embodiments of the present application are applied.
- the communication system 100 includes a network device 101 and a terminal device 102 .
- the possible implementation forms and functions of the network device 101 and the terminal device 102 are introduced as examples.
- the network device 101 provides services for the terminal devices 102 within the coverage. For example, as shown in FIG. 1 , the network device 101 provides wireless access to one or more terminal devices 102 within the coverage of the network device 101 .
- the network device 101 is a node in a radio access network (RAN), and may also be referred to as a base station or a RAN node (or device).
- examples of the network device 101 are: next generation NodeB (gNB), next generation evolved NodeB (Ng-eNB), transmission reception point (TRP), evolved NodeB (eNB), radio network controller (RNC), NodeB (NB), base station controller (BSC), base transceiver station (BTS), home base station (for example, home evolved NodeB or home NodeB, HNB), baseband unit (BBU), or wireless fidelity (WiFi) access point (AP), etc.
- the network device 101 may also be a satellite.
- the network device 101 can also be another device with network device functions; for example, it can be a device serving as a network device in device-to-device (D2D) communication, Internet of Vehicles, or machine-to-machine (M2M) communication, or any possible network device in a future communication system.
- Terminal equipment 102, also known as user equipment (UE), mobile station (MS), or mobile terminal (MT), is a device that provides voice and/or data connectivity to users.
- the terminal device 102 includes a handheld device with a wireless connection function, a vehicle-mounted device, and the like.
- the terminal device 102 may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a mobile internet device (MID), a wearable device (such as a smart watch, a smart bracelet, or a pedometer), an in-vehicle device (for example, in cars, bicycles, electric vehicles, airplanes, ships, trains, or high-speed rail), a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a smart home device (for example, a refrigerator, a TV, an air conditioner, or an electricity meter), an intelligent robot, workshop equipment, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or flying equipment (such as an intelligent robot, a hot air balloon, a drone, or an airplane), etc.
- the terminal device 102 may also be other devices with terminal device functions.
- the terminal device 102 may also be a device serving as a terminal device function in D2D communication, Internet of Vehicles, or M2M communication.
- a network device that functions as a terminal device can also be regarded as a terminal device.
- Channel estimation plays a very important role in the wireless communication process.
- channel estimation is usually implemented by detecting pilot signals.
- the transmitter sends a pilot signal to the receiver.
- the pilot signal is known to the transmitter and receiver.
- the receiver performs channel estimation by detecting the pilot signal, and feeds back the channel estimation result to the transmitter.
- This method requires that the pilot signals of different terminal devices or different antennas are orthogonal, so the overhead is relatively high.
- the parameters estimated according to the pilot signal are limited, so the accuracy of the channel estimation is also limited.
- the neural network-based channel estimation method provided in the embodiments of the present application is expected to improve the accuracy and efficiency of channel estimation through the learning and use of the neural network, and can help to build an intelligent communication system.
- the neural network is first described.
- A neural network is a network structure that imitates the behavioral characteristics of animal neural networks to process information.
- a neural network can be composed of neural units; a neural unit can be an operation unit that takes inputs $x_s$ and an intercept 1, and the output of the operation unit can be shown in formula (1): $h_{W,b}(x) = f\left(\sum_s W_s x_s + b\right)$
- $W_s$ is the weight of $x_s$
- $b$ is the bias of the neural unit.
- $f$ is the activation function of the neural unit, which is used to introduce nonlinear characteristics into the neural network to convert the input signal of the neural unit into an output signal.
- the output signal of the activation function can be used as the input of the next convolutional layer, and the activation function can be a sigmoid function, a ReLU function, a tanh function, etc.
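As an illustration, the following is a minimal numpy sketch of the neural unit in formula (1); the input values, weights, and the choice of tanh as the activation function are arbitrary examples, not taken from the patent.

```python
import numpy as np

def neural_unit(x, W, b, f=np.tanh):
    """Single neural unit: output = f(sum_s W_s * x_s + b), as in formula (1)."""
    return f(np.dot(W, x) + b)

# Example: a unit with 3 inputs and a tanh activation.
x = np.array([0.5, -1.0, 2.0])   # inputs x_s
W = np.array([0.1, 0.4, -0.2])   # weights W_s
b = 0.3                          # bias of the neural unit
print(neural_unit(x, W, b))      # scalar output of the unit
```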
- a neural network is a network formed by connecting a plurality of the above single neural units together, that is, the output of one neural unit can be the input of another neural unit.
- the input of each neural unit can be connected with the local receptive field of the previous layer to extract the features of the local receptive field, and the local receptive field can be an area composed of several neural units.
- the neural network has N processing layers, where N ≥ 3 and N is a natural number. The first layer of the neural network is the input layer, which is responsible for receiving the input signal $x_i$; the last layer of the neural network is the output layer, which outputs the processing result $h_i$.
- the layers other than the first and the last are intermediate layers, which together form the hidden layer; each intermediate layer in the hidden layer can both receive and output signals.
- the hidden layer is responsible for processing the input signal.
- each layer represents a logical level of signal processing; through multiple layers, a data signal can be processed by multiple levels of logic.
- FIG. 2b exemplarily shows a schematic diagram of a fully connected neural network.
- Neurons in two adjacent layers are connected in pairs.
- the output h of a neuron in the next layer is the weighted sum of all the neurons in the previous layer connected to it, passed through the activation function.
- in matrix form, this can be expressed as formula (2): $h = f(wx + b)$, where $w$ is the weight matrix and $b$ is the bias vector.
- a neural network can be understood as a mapping relationship from an input data set to an output data set.
- the neural network is randomly initialized, and the process of obtaining this mapping relationship from random w and b with existing data is called neural network training.
- the specific method of training is to use a loss function to evaluate the output of the neural network, and to backpropagate the error.
- w and b can be iteratively optimized until the loss function reaches the minimum value.
- Figure 3 illustrates a schematic diagram of loss function optimization.
- the process of gradient descent can be expressed as equation (4): $\theta \leftarrow \theta - \eta \frac{\partial L}{\partial \theta}$
- $\theta$ is the parameter to be optimized (such as w and b)
- $L$ is the loss function
- $\eta$ is the learning rate, which controls the step size of gradient descent.
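A minimal sketch of the update in equation (4) on a toy quadratic loss follows; the target values and learning rate are arbitrary examples, not taken from the patent.

```python
import numpy as np

# Gradient descent on a toy quadratic loss L(theta) = ||theta - target||^2,
# illustrating equation (4): theta <- theta - eta * dL/dtheta.
target = np.array([1.0, -2.0])
theta = np.zeros(2)   # parameters to be optimized (e.g. w and b)
eta = 0.1             # learning rate, controlling the step size

for step in range(100):
    grad = 2.0 * (theta - target)   # dL/dtheta for the quadratic loss
    theta -= eta * grad             # the gradient-descent update of equation (4)
print(theta)  # converges towards [1.0, -2.0], where the loss is minimal
```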
- the neural network can use the error back propagation (BP) algorithm to adjust the parameters of the initial neural network model during training, so that the reconstruction error loss of the neural network model becomes smaller and smaller.
- specifically, the input signal is passed forward until the output produces an error loss, and the parameters of the initial neural network model are updated by back-propagating the error loss information, so that the error loss converges.
- the back-propagation algorithm is a back-propagation movement dominated by error loss, aiming to obtain the parameters of the optimal neural network model, such as the weight matrix.
- the process of backpropagation uses the chain rule of partial derivatives, that is, the gradient of the parameters of the previous layer can be calculated recursively by the gradient of the parameters of the latter layer, as shown in Figure 4, which is a schematic diagram of gradient backpropagation.
- the process of backpropagation can be expressed as formula (5): $\frac{\partial L}{\partial w_{ij}} = \frac{\partial L}{\partial s_i} \cdot \frac{\partial s_i}{\partial w_{ij}}$
- $w_{ij}$ is the weight connecting node j to node i
- $s_i$ is the weighted sum of the inputs on node i.
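As an illustration, the chain rule of formula (5) can be applied by hand to a tiny two-layer network; the weights, inputs, and tanh activation below are arbitrary examples, not taken from the patent.

```python
import numpy as np

# Manual backpropagation through a tiny 2-layer network, illustrating
# formula (5): the gradient of an earlier layer's parameters is computed
# recursively from the gradient at the later layer.
x = np.array([1.0, 0.5])
W1, b1 = np.array([[0.2, -0.3], [0.4, 0.1]]), np.zeros(2)
W2, b2 = np.array([[0.5, -0.2]]), np.zeros(1)
y_true = np.array([1.0])

# Forward pass: s1 and s2 are the weighted sums at each node.
s1 = W1 @ x + b1
h1 = np.tanh(s1)
s2 = W2 @ h1 + b2
loss = 0.5 * np.sum((s2 - y_true) ** 2)

# Backward pass: propagate dL/ds from the output back to layer 1.
dL_ds2 = s2 - y_true                      # gradient at the output node
dL_dW2 = np.outer(dL_ds2, h1)             # dL/dw_ij = dL/ds_i * ds_i/dw_ij
dL_dh1 = W2.T @ dL_ds2                    # chain rule into the hidden layer
dL_ds1 = dL_dh1 * (1 - np.tanh(s1) ** 2)  # through the tanh activation
dL_dW1 = np.outer(dL_ds1, x)              # gradients for the first layer
print(loss, dL_dW2, dL_dW1)
```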
- the specific process of the neural network-based channel estimation method provided by the embodiment of the present application is as follows.
- the method can be applied to a communication device, and the communication device can be a terminal device or a network device.
- the first communication apparatus may be a terminal device.
- the terminal device can be any terminal device served by the network device.
- the first channel information is information of a wireless channel between the first communication device and the second communication device.
- the information of the wireless channel between the first communication device and the second communication device may be the information of the wireless channel between the terminal device and the network device.
- the terminal device may be a network device with a terminal function, then the information of the wireless channel between the first communication device and the second communication device may also be the information of the wireless channel between the network device and the network device.
- the network device may also be a terminal device with a network device function, then the information of the wireless channel between the first communication device and the second communication device may also be the information of the wireless channel between the terminal device and the terminal device.
- the method in the embodiment of FIG. 5 may be executed by the first communication apparatus, or may be executed by the second communication apparatus.
- the first communication apparatus may be a terminal device, or may be a component in the terminal device, such as a processor, a chip or a chip system.
- the second communication apparatus may be a network device, or may be a component in the network device, such as a processor, a chip, or a chip system.
- in the following description, the first communication apparatus is exemplified as a terminal device and the second communication apparatus as a network device.
- the terminal device acquires its own location information, which is recorded as the first location information.
- the terminal device uses the neural network to process the first location information to obtain the first channel information.
- the terminal device communicates with the network device according to the first channel information.
- the terminal device can also communicate with other devices according to the first channel information.
- the first channel information can be used for cell handover.
- the terminal device switches to other network devices after cell handover and communicates with other network devices.
- the network device acquires the first location information of the terminal device, and the network device can receive the first location information from the terminal device.
- the network device uses the neural network to process the first location information, obtains the first channel information, and communicates with the terminal device according to the first channel information.
- the network device can also use the first channel information to communicate with other devices, where the other devices can be other network devices or other terminal devices.
- a neural network can be used to process the location information of the terminal device to obtain the information of the wireless channel between the terminal device and the network device.
- consider a wireless communication system in a fixed service area: it includes a network device and one or more terminal devices. The location of the network device is fixed, and the locations of the terminal devices are randomly distributed in the service area.
- the channel information between a terminal device and a network device is determined by the location information of the terminal device.
- in general, the channel information is determined by the location information of the sending device and the receiving device; when the location of the network device is fixed, the channel information is determined by the location information of the terminal device.
- for a satellite serving a specific area, based on the satellite's own ephemeris, the channel information for terminals within its coverage is likewise determined by the location information of the terminal.
- the communication signal will be affected by fixed reflectors in the communication environment. For example, during the propagation of the signal from the transmitter antenna to the receiver, if there are reflectors along the path, the receiver will receive reflections from their surfaces.
- the reflector may be, for example, an obstacle such as a building or a tree.
- the communication environment (which may also be called a communication scene) can be learned through the neural network, so that the channel information can be expressed as a function of the position information and estimated from the position information. In this way, the overhead of the pilot signal can be saved, and the estimation efficiency and accuracy of the channel information can be improved.
- the first channel information may be uplink channel information, downlink channel information, channel information involved in D2D or Internet-of-Vehicles communication, channel information for communication between terminal devices, or channel information for communication between network devices.
- the training process of the neural network is described below.
- the neural network is trained according to the historical data generated during the communication between the terminal device and the network device.
- when the network device communicates with the terminal device, the terminal device needs to measure the channel and report the measured channel information to the network device; together with the location of the terminal device at that time, this forms a piece of historical data. Multiple terminal devices may report channel information to the network device. In this way, after a period of communication, the network device will store a large amount of historical communication data, and this large amount of historical data can form a training data set.
- the training data set includes a plurality of mappings of historical location information and historical channel information. For example, the format of each piece of historical data in the training data set is (historical location information, historical channel information).
- the historical location information is the location information during the communication between the training device and the network device.
- the training device may also be called a training terminal device, which is a terminal device in the historical communication process with the network device.
- a training terminal device communicates with the network device for a period of time, measures the channel information, and may also report the channel information to the network device.
- the first terminal device in the embodiment of FIG. 5 may also be a training terminal device.
- the training terminal device may also be a terminal device other than the first terminal device: any terminal device that stays in the service area of the network device, or that has been served by the network device and can report measured channel information to it, can be a training terminal device.
- the historical channel information is the channel information measured by the training terminal device during communication with the network device.
- the above takes the network device obtaining the training data set as an example.
- the terminal device can also obtain the training data set and train the neural network.
- the terminal device obtains the channel information when measuring the channel and knows its own location information; the process of training the neural network by the terminal device is similar to that of the network device, and details are not repeated here.
- the channel information may be any parameter that reflects the channel, such as the channel impulse response (CIR).
- the location information can be any information that characterizes the location, such as a three-dimensional vector of location coordinates, or latitude and longitude.
- the format of the historical data can be expressed as (pos, CIR), where pos represents position information, and CIR can be a sampling of the channel impulse response from the terminal device to the network device in the time domain. Taking the position information represented by pos and the channel information represented by CIR as an example, after a period of communication history data collection, a training data set can be obtained. The format of each piece of historical data in the training dataset is (pos, CIR).
- the parameters of the neural network can be obtained by training the neural network according to the historical data set.
- the parameters of the neural network can be, for example, at least one of w and b, where w is a weight or a weight matrix, and b is a bias or a bias vector.
- the training process can refer to the description of neural network training above.
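As an illustration of the (pos, CIR) data format described above, the following is a hypothetical sketch of how such a historical data set might be laid out; the record count, CIR length, and random placeholder values are assumptions, not taken from the patent.

```python
import numpy as np

# Hypothetical layout of the training data set: each record maps a 3-D
# position vector `pos` to a sampled channel impulse response `CIR`.
rng = np.random.default_rng(0)
num_records, cir_len = 1000, 64   # illustrative sizes only

positions = rng.uniform(-100.0, 100.0, size=(num_records, 3))  # pos vectors
cirs = (rng.normal(size=(num_records, cir_len))
        + 1j * rng.normal(size=(num_records, cir_len)))        # placeholder CIRs

dataset = list(zip(positions, cirs))   # each entry is one (pos, CIR) pair
pos0, cir0 = dataset[0]
print(pos0.shape, cir0.shape)          # (3,) (64,)
```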
- the historical data set is collected and generated in a certain communication environment, and the parameters of the neural network reflect the information of the communication environment.
- the channel information is also related to the radio parameters, so the channel information in the historical data set also reflects the radio parameters to a certain extent. That is to say, the parameters of the neural network not only reflect the information of the communication environment, but also reflect the information of the radio parameters.
- the parameters of the neural network are determined by the communication environment and radio parameters.
- the radio parameter can be, for example, the frequency of the electromagnetic wave or the bandwidth of the carrier wave.
- the trained neural network can reflect the communication environment and radio parameters.
- the obtained first channel information can be closer to the real target value under the communication environment and radio parameters.
- a schematic diagram of estimating channel information through a neural network is shown in FIG. 6a.
- the first position information is input into the neural network, and the first channel information is obtained through the operation of the neural network.
- the static channel information between the terminal device and the network device is determined by the location of the terminal device, and the specific channel information is determined by the environment and electromagnetic wave parameters.
- the embodiment of the present application proposes a learning method that takes the communication scene/environment as the implicit information to be learned, adopts a new network framework to effectively express the implicit characteristics of the channel, and establishes the mapping relationship between the location information and the channel information of the terminal device in a fixed scene.
- the channel information can be expressed as a function of the position information, and the parameters of the function are determined by the scene and electromagnetic wave parameters.
- the mapping of location information to channel information is learned through the dataset of [Location Information-Channel Information], and the location information of the terminal device is used to predict the channel information to construct an intelligent communication system.
- the network device needs to measure the channel in the process of communicating with the terminal device, and at the same time record the specific location of the current terminal device to form a piece of historical data. After a period of communication, the network device stores a large amount of communication history data to form a training data set.
- the stored data format is (pos, CIR), where pos is the three-dimensional vector of the position coordinates of the terminal device, and CIR is the sampling of the channel impulse response from the terminal device to the network device in the time domain.
- the embodiment of the present application proposes a learning method, which intelligently uses the communication scene/environment as the implicit information to be learned to effectively express the implicit characteristics of the channel, and establishes the mapping relationship between the position of the terminal device and the channel response in a fixed scene.
- the channel can be expressed as a function of geographic location, the parameters of the function are determined by the scene and electromagnetic wave parameters, and the parameters are obtained through training on the data set.
- the channel overhead caused by the pilot sequence can be avoided, and at the same time, the prediction accuracy can be achieved close to that which can be achieved by the ray tracing model.
- the parameters of the neural network are obtained by supervised training on the geographic location-channel impulse response dataset {(pos, CIR)}, where each pos and CIR in the dataset are a pair of labeled data collected from a specific environment under specific radio parameters. After training, the neural network parameters reflect the relevant information of the specific radio parameters and the specific environment, that is, the relationship between the geographic location and the channel impulse response in that environment.
- the activation function of the neural network may be a periodic function.
- the output signal of the neural network is the channel information, and the input signal of the neural network is the position information.
- the channel information is an implicit periodic function of the location information; e.g., the phase of the channel information is an implicit periodic function of the location information.
- using a periodic activation function allows the neural network to better match these characteristics of location information and channel information.
- the following takes several neural networks as examples to further describe the channel estimation method based on the neural network in this embodiment of the present application.
- Example 1 of a neural network
- the dimension of location information is often different from the dimension of channel information.
- the dimension here refers to the number of elements contained in a vector. For example, if the location information is a vector of three-dimensional coordinates, the dimension of the location information is 3.
- the channel information is a CIR vector, and the length of the CIR vector is usually higher than 3, that is, the dimension of the channel information is higher than that of the location information.
- the neural network is used to process the first location information to obtain the first channel information, which may be implemented through the following process based on the consideration of dimensions.
- the neural network consists of two parts, denoted as the first part and the second part.
- the first part of the neural network processes the first position information to obtain second channel information, and the dimension of the second channel information is lower than the dimension of the first channel information.
- the second channel information is processed by the second part of the neural network to obtain the first channel information.
- the first part of the neural network is used to process the first location information to obtain 3-dimensional second channel information.
- the 3-dimensional second channel information is processed by the second part of the neural network to obtain high-dimensional first channel information.
- Training the neural network can combine the training of the first part with the training of the second part.
- the specific process includes the following steps: perform first training on the first part of the neural network by changing the dimension of the historical channel information from the first dimension to the second dimension and back to the first dimension; then perform second training on the second part of the neural network according to the historical location information corresponding to the historical channel information and the historical channel information of the second dimension.
- the neural network is trained by using the historical data set.
- the historical data set includes one or more pieces of historical data, and the historical data includes the mapping of historical location information and historical channel information.
- the first part is trained using the historical channel information as the input and output of the first part of the training.
- the dimension of the historical channel information is compressed to obtain low-dimensional channel information, and the low-dimensional channel information is then decompressed to restore the original high-dimensional historical channel information.
- the compression process can be implemented by an encoder, and the decompression process can be implemented by a decoder.
- the process of compression can also be considered as a process of dimensionality reduction, and the process of decompression can also be considered as a process of dimensionality improvement.
- the first part can be seen as an autoencoder network, and the first part of the training is to train the autoencoder network.
- the structure of the autoencoder network is shown in FIG. 8.
- the input layer on the left is the historical channel information.
- the first hidden layer processes the historical channel information and reaches the bottleneck layer.
- the output of the bottleneck layer is processed by the second hidden layer to reach the output layer, and the output signal is obtained.
- the signal is high-dimensional historical channel information.
- the processing process of the first hidden layer corresponds to the compression step in FIG. 7
- the output of the bottleneck layer corresponds to the low-dimensional channel information in FIG. 7 .
- the processing procedure of the second hidden layer corresponds to the decompression step in FIG. 7 .
- the dimension of the encoder output may be predetermined, that is, the dimension to which the original historical channel vector is reduced, for example, the reduced dimension is the second dimension.
- the second dimension can be set according to the dimension of the historical location information, and the number of neurons in the bottleneck layer is the value of the second dimension.
- for example, if the dimension of the historical location information is 3, the second dimension can be set to 3, that is, the number of neurons in the bottleneck layer is set to 3.
- the number of neurons in the bottleneck layer can be determined first.
- other network parameters in the autoencoder network can be obtained by continuous tuning, such as the number of neurons in the network layers other than the bottleneck layer, the number of encoder layers, and the number of decoder layers.
- the goal of autoencoder network training is to minimize the mean square error (MSE) of the network on the test dataset.
- the historical data can be divided into a training data set and a test data set, the training data set is used for training the parameters of the neural network, and the test data set is used to test the performance of the neural network obtained by training.
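The following is a minimal PyTorch sketch of the stage-1 autoencoder training, assuming a 64-sample CIR vector and the 3-neuron bottleneck described above; the hidden widths, learning rate, and placeholder data are illustrative assumptions, not the patent's tuned values.

```python
import torch
import torch.nn as nn

# Stage 1 sketch: compress the CIR vector to a 3-D bottleneck and decompress
# it back. Hidden widths are illustrative; the patent tunes them empirically.
CIR_DIM, BOTTLENECK = 64, 3

encoder = nn.Sequential(nn.Linear(CIR_DIM, 32), nn.ReLU(), nn.Linear(32, BOTTLENECK))
decoder = nn.Sequential(nn.Linear(BOTTLENECK, 32), nn.ReLU(), nn.Linear(32, CIR_DIM))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()               # training goal: minimize reconstruction MSE

cir_batch = torch.randn(128, CIR_DIM)     # placeholder historical CIR data
for _ in range(200):
    z = encoder(cir_batch)                # low-dimensional channel information
    recon = decoder(z)                    # decompressed, original-dimension CIR
    loss = loss_fn(recon, cir_batch)      # the input doubles as the target
    opt.zero_grad(); loss.backward(); opt.step()
```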
- the second part of the neural network can be seen as a learning machine, denoted as the location information-channel information learning machine; the training of this network can be realized through supervised learning.
- the second training of the neural network uses the low-dimensional channel information output by the first training; the data pairs formed by this low-dimensional channel information and its corresponding historical position information serve as the supervised learning data.
- the historical channel information used in the training of the first part is the data in the historical data set, and a piece of historical data includes the mapping relationship between the historical location information and the historical channel information.
- the low-dimensional channel information obtained in the first part of the training is obtained from a historical channel information, then this historical channel information corresponds to a historical location information, that is, the low-dimensional channel information obtained in the first part of the training corresponds to a historical location information.
- the low-dimensional channel information obtained in the first training and its corresponding historical position information serve as one piece of training data for the second training; multiple pieces of low-dimensional channel information and the corresponding historical position information form the training data set used to train the second part.
- multiple pieces of low-dimensional channel information and the corresponding historical location information are input into the location information-channel information learning machine, and the second part of the network is trained through the back-propagation algorithm until convergence.
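Continuing the stage-1 sketch above (reusing encoder, cir_batch, loss_fn, and BOTTLENECK), a hypothetical stage-2 training of the location information-channel information learning machine could look as follows; the network sizes and placeholder positions are assumptions.

```python
# Stage 2 sketch: a fully connected learning machine that maps pos (3-D) to
# the bottleneck code z produced by the trained stage-1 encoder.
learner = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, BOTTLENECK))
opt2 = torch.optim.Adam(learner.parameters(), lr=1e-3)

pos_batch = torch.randn(128, 3)        # placeholder historical position data
with torch.no_grad():
    z_target = encoder(cir_batch)      # (pos, z) pairs supervise this stage

for _ in range(200):
    loss = loss_fn(learner(pos_batch), z_target)   # back-propagate to convergence
    opt2.zero_grad(); loss.backward(); opt2.step()
```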
- the neural network is used to process the first position information to obtain the first channel information, which may be implemented through the following process, specifically the process shown by the bold solid arrow in FIG. 7 .
- the first location information is input into the location information-channel information learning machine to obtain second channel information, whose dimension is lower than that of the first channel information; the second channel information is then input into the decoder for decompression to obtain the first channel information.
- the example 1 of the neural network will be further described in detail below in conjunction with specific application scenarios.
- the training process of the neural network includes two stages: first, the CIR vector in the original stored historical data set is extracted and used as the input and output of the auto-encoder network, and the auto-encoder network is trained. It is assumed that the channel information depends on the location information of the terminal device in the scenario to which the embodiments of the present application are applied, so the minimum feature space dimension of the CIR vector is 3 dimensions. Therefore, the number of neurons in the bottleneck layer in the network can be set to 3 dimensions. Other network parameters, such as the number of neurons in other network layers, the number of network layers in the encoder and decoder, etc., need to be optimized through continuous network parameter tuning. The goal of training is to minimize the average MSE of the network on the test dataset.
- the channel CIR vector of each piece of data in the dataset is used as the input of the autoencoder network to obtain the output z of the bottleneck layer, which together with the position coordinates forms a new data pair (pos, z).
- a fully connected deep neural network is constructed as a learning machine for location information-channel information. The network takes the user's location pos as the network input, and the mapped low-dimensional representation z of the impulse response as the network target output. The network is trained by the back-propagation algorithm until convergence.
- for channel estimation or prediction, the user position pos whose channel information is to be predicted is input into the deep neural network trained in the second stage, i.e., the location information-channel information learning machine, which outputs the predicted compressed channel z'.
- z' is then fed into the decoder of the trained autoencoder, whose output is the final predicted channel impulse response vector CIR'. The complete inference path is sketched below.
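Continuing the sketch above (and reusing its imports, encoder, and decoder), the second-stage learning machine and the full inference path pos → z' → CIR' might look as follows. The layer sizes and the helper name predict_cir are illustrative assumptions, not part of the patent.

```python
# Stage 2: a fully connected "position -> compressed channel" learner (illustrative sizes).
learner = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),   # targets are the bottleneck outputs z
)

with torch.no_grad():
    z_targets = encoder(cir_batch)   # (pos, z) pairs for supervised training
# ... train `learner` on (pos, z_targets) with MSE and back-propagation, as above ...

def predict_cir(pos: torch.Tensor) -> torch.Tensor:
    """Predict the CIR for a user position: pos -> z' -> decoder -> CIR'."""
    with torch.no_grad():
        z_pred = learner(pos)    # predicted compressed channel z'
        return decoder(z_pred)   # final predicted channel impulse response CIR'
```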
- Example 2 of a neural network
- the neural network can be a sinusoidal representation network (SIREN), which is a fully-connected neural network activated by a periodic function.
- the neural network conforms to the following formula: Φ(x) = W_n(φ_{n−1} ∘ φ_{n−2} ∘ … ∘ φ_0)(x) + b_n
- the input of the i-th layer of the neural network is x_i, with dimension M_i (M_i > 0), i = 0, 1, …, n−1
- the output of the i-th layer of the neural network is φ_i(x_i) = sin(W_i x_i + b_i), with dimension N_i (N_i > 0)
- W_i and b_i are the weights and biases of the neural network; the sine function (sin) is used as the nonlinear activation function; x is the input of the neural network and Φ(x) is its output.
- (φ_{n−1} ∘ φ_{n−2} ∘ … ∘ φ_0) denotes function composition: the output of the i-th layer's φ function is the input of the (i+1)-th layer's φ function.
- φ_0 operates on the input of layer 0, φ_1 operates on the output of φ_0, and φ_{n−1} operates on the output of φ_{n−2}.
- W_n is the weight from the (n−1)-th layer to the n-th layer.
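A minimal NumPy sketch of this forward pass is shown below. The layer widths in dims, the random parameter values, and the output length are illustrative assumptions; only the sin activations on the hidden layers and the affine final layer follow the formula above.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_forward(x, weights, biases):
    """Phi(x) = W_n (phi_{n-1} o ... o phi_0)(x) + b_n, with phi_i(x) = sin(W_i x + b_i)."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.sin(W @ x + b)                # periodic activation on every hidden layer
    return weights[-1] @ x + biases[-1]      # final layer is affine, with no activation

dims = [3, 16, 16, 8]                        # input pos (x, y, z) -> ... -> CIR-like output
weights = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [rng.normal(size=dims[i + 1]) for i in range(len(dims) - 1)]

pos = np.array([1.0, 2.0, 0.5])              # first position information
cir_estimate = siren_forward(pos, weights, biases)
```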
- the first position information is three-dimensional coordinates
- the first channel information is CIR
- a schematic structural diagram of the neural network is shown in FIG. 9 .
- the first position information is represented by pos.
- the first position information is the input signal, which is used as the input layer of the neural network.
- (x, y, z) is the three-dimensional coordinate value of the first position information, i.e., the values of the three neurons in the input layer. Each subsequent layer takes input x_i; except for the last layer, each layer outputs φ_i(x_i) = sin(W_i x_i + b_i), and the last layer (the output layer) outputs W_n x_n + b_n, where x_n = (φ_{n−1} ∘ … ∘ φ_0)(x) and W_i and b_i are the trainable parameters of each layer. The output layer yields the first channel information CIR.
- the activation functions of other layers of the neural network except the last layer are periodic functions, such as sin functions.
- the output signal of the neural network is the channel information
- the input signal of the neural network is the position information.
- the channel information is an implicit periodic function of the location information; for example, the phase of the channel information is an implicit periodic function of the location information.
- the periodic sin activation function is therefore well matched to the characteristics of location information and channel information.
- the channel information is an implicit periodic function with respect to the location information, as shown in Equations (6) and (7).
- as can be seen from Equation (6), the phase of the k-th path in the channel impulse response is an implicit periodic function of the path distance d_k (a periodic function of the function G(d_k)); as shown in Equation (7), the distance is determined by the user's position. Therefore, the channel impulse response is an implicit periodic function of the user's location information.
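Equations (6) and (7) themselves are not reproduced in this text. A standard far-field multipath form consistent with the stated properties (the phase of path k is periodic in its distance d_k, and d_k is fixed by the user position) is, as an assumed illustration:

$$h(\tau)=\sum_{k} a_k\, e^{-j 2\pi d_k/\lambda}\, \delta\!\left(\tau-\tfrac{d_k}{c}\right),\qquad d_k=\lVert \mathrm{pos}-p_k \rVert,$$

where a_k is the gain of path k, p_k its mirror point, λ the carrier wavelength, and c the speed of light; the phase term e^{−j2πd_k/λ} repeats whenever d_k changes by one wavelength.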
- compared with neural networks based on ordinary activation functions, the SIREN network performs significantly better at fitting implicit periodic functions, which makes it well suited to representing the periodic functions required by this embodiment.
- because the SIREN network adopts a special periodic activation function, a special initialization of the network parameters is required for the network to converge well.
- this embodiment of the present application provides an optional initialization scheme. For the first layer of the network, the weights are initialized as W = w_i × w_0, where w_i is drawn from a uniform distribution whose bounds are set by c and n (e.g., U(−√(c/n), √(c/n)), consistent with common SIREN initialization); n is the input dimension of the first layer, and since the input of the first layer is the location information, n can be set to 3.
- c and w_0 are constants; for example, c may take the value 6 and w_0 the value 30. Note that these constants are not necessarily optimal; to further improve network performance, more initialization parameters can be tried.
- the network can then be trained on the historical dataset using the back-propagation algorithm, as sketched below.
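A sketch of this first-layer initialization follows. The uniform bounds ±√(c/n) are an assumption consistent with common SIREN initialization (the text fixes only the product form W = w_i × w_0 and the constants c = 6, w_0 = 30, n = 3), and fan_out is an illustrative layer width.

```python
import numpy as np

def siren_first_layer_init(n=3, c=6.0, w0=30.0, fan_out=16, seed=0):
    """First-layer weights W = w_i * w0, with w_i ~ U(-sqrt(c/n), sqrt(c/n)).

    The uniform bounds are an assumption consistent with common SIREN
    initialization; only c, w0, n and the product form come from the text.
    """
    rng = np.random.default_rng(seed)
    bound = np.sqrt(c / n)
    w_i = rng.uniform(-bound, bound, size=(fan_out, n))
    return w_i * w0
```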
- with the neural network of Example 3, the first channel information may be obtained from the first position information in the following manner.
- the neural network processes the first position information to obtain one or more pieces of second position information, where the second position information is a mirror point of the first position information; a mirror point is determined by the propagation path of the electromagnetic wave sent from the position indicated by the first position information.
- the one or more pieces of second location information are then processed to obtain the first channel information.
- a signal transmitted from a given position is reflected off the surfaces of one or more reflectors before reaching the receiver.
- FIG. 10 shows a schematic diagram of the path along which a transmitted signal reaches the receiving end after reflection off a reflective surface.
- Tx is the transmitting end, which sends a signal from a fixed position; the signal is reflected by the reflective surface and arrives at the receiving end Rx.
- once the positions of the transmitting end and the receiving end are fixed, the propagation path of the signal via the reflecting surface is determined.
- the meaning of the mirror point is that the Tx-to-surface-to-Rx signal propagation path is equivalent to straight-line propagation from the mirror point to the receiver.
- a signal transmitted from one position may reach the receiver via one or more reflective surfaces; therefore, for a fixed position there may be one or more mirror points. In particular, the first position information determines a fixed position, which has one or more mirror points.
- the neural network of Example 3 can process the first position information to obtain one or more second position information, where the second position information is a mirror point of the first position information.
- the second position information processed by the neural network is not necessarily a true mirror image of the position determined by the first position information, and the second position information output by the neural network may be different from the target value to a certain extent.
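For intuition, when a reflecting plane is known, the geometric mirror point the text refers to can be computed in closed form; the network of Example 3 learns to approximate such points without being told the planes. The helper below is an illustration, not part of the patent; the plane is described by a point on it and its unit normal.

```python
import numpy as np

def mirror_point(tx, plane_point, plane_normal):
    """Reflect the transmitter position tx across a planar reflector.

    Straight-line propagation from the mirror point to the receiver has the
    same path length as the reflected Tx -> surface -> Rx path.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    return tx - 2.0 * np.dot(tx - plane_point, n) * n

tx = np.array([1.0, 2.0, 1.5])
wall_point = np.array([0.0, 5.0, 0.0])   # a point on the reflecting wall
wall_normal = np.array([0.0, 1.0, 0.0])  # the wall's unit normal
print(mirror_point(tx, wall_point, wall_normal))  # -> [1. 8. 1.5]
```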
- the neural network of Example 3 can also be regarded as a combination of two partial networks, denoted as the third part and the fourth part.
- the third part of the neural network processes the first position information to obtain one or more second position information; the fourth part of the neural network processes the second position information to obtain the first channel information.
- Training the neural network can combine the training of the third part with the training of the fourth part.
- the input layer of the third part is the historical position information.
- the output of the third part is multiple mirror points.
- the dimension of each mirror point is the same as the dimension of the historical location information. For example, if the dimension of the historical location information is 3, then the dimension of each mirror point is 3, and the output of the third part is multiple sets of 3-dimensional mirror points. Multiple mirror points are used as the input of the fourth part, and after the processing of the fourth part of the network layer, the output signal is obtained.
- the output signal is historical channel information.
- the fourth part can be a radial basis function (RBF) layer network, and the activation function of the RBF layer is a periodic kernel (Kernel) function.
- the activation function of the RBF layer conforms to formula (8), a Gaussian kernel function augmented with a periodic term, where:
- x is the input of the RBF layer
- φ(x) is the output of the RBF layer
- a, b, c, w, and σ are the parameters to be trained.
- the Gaussian kernel function commonly used in RBF layers is replaced with a kernel function containing periodic terms; adding the periodic terms gives the neural network the ability to track the phase change of the channel response.
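Formula (8) itself is not reproduced in this text; only its trainable parameters (a, b, c, w, σ) and its character as a Gaussian kernel with a periodic term are given. The kernel below is therefore one plausible hypothetical form, not the patented one: a is read here as a distance offset (cancelling the image-point coordinate bias discussed later) and b as a phase offset (selecting the real or imaginary part).

```python
import numpy as np

def periodic_rbf(x, a, b, c, w, sigma):
    """Hypothetical periodic RBF unit: Gaussian envelope times a cosine.

    Formula (8) is not reproduced in the source text; only the parameter
    list (a, b, c, w, sigma) and the "Gaussian kernel plus periodic term"
    character are specified, so this exact form is an assumption.
    """
    r = np.linalg.norm(x - c) + a                # a can cancel a coordinate bias
    envelope = np.exp(-r**2 / (2.0 * sigma**2))  # Gaussian part fits the amplitude
    return envelope * np.cos(w * r + b)          # periodic part tracks the phase
```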
- the third part of the neural network is used to process the first position information to obtain one or more second position information.
- each second position information can be regarded as a mirror point of the first position information.
- the dimension of the second location information is the same as the dimension of the first location information; that is, the output dimension of the third part of the network is an integer multiple of the dimension of the first position information.
- the output of the third part of the network can be seen as an intermediate layer of the entire neural network. The output of this intermediate layer is sent to the RBF layer network for processing.
- the number of neurons in the middle layer is an integer multiple of the dimension of the first position information.
- the third part is the deep network
- the fourth part is the RBF layer network.
- the application process of the neural network is as follows. Compared with ordinary neural networks, which treat the problem as a black box, this neural network is highly interpretable.
- in a fixed electromagnetic propagation environment, the front-end deep network learns the mapping from the input user position coordinates Tx to the positions of the reflection image points of the paths that make up the channel impulse response CIR; its output is a series of three-dimensional coordinate values.
- because the deep network is trained jointly with the back-end RBF layer, the actual output of the deep layers is not the reflection image point coordinates of the real physical propagation process, but the image point coordinates plus a random offset term.
- the back-end RBF layer acts as a function fitter: it takes the mapped image point coordinates as input and produces the channel response of the corresponding path.
- This scheme replaces the Gaussian kernel function commonly used in RBF networks with a kernel function with periodic terms, as shown in formula (8).
- by adding the periodic term, the network gains the ability to track the phase change of the channel response, while the simpler amplitude fit is achieved by a weighted sum of Gaussian kernel functions. Note that the front-end and back-end networks are trained jointly.
- the RBF layer bias parameter a is designed to adaptively cancel the reflection image point coordinate bias term produced by the deep layers.
- the RBF layer bias parameter b is used to adaptively fit the real or imaginary part of the response. Because a specific electromagnetic propagation physical process and its mathematical form are built into the neural network structure, while commonly used neural network structures are retained to guarantee robustness, this embodiment can still predict the channel accurately with few training samples. A sketch of the overall forward pass follows.
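Putting the pieces together, one forward pass of Example 3 can be sketched as follows, reusing periodic_rbf from the previous sketch. The number of mirror points K, the hidden width, the tanh activation of the front end, and the random parameter values are all illustrative assumptions; a full CIR would use one such fitter per output tap.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4  # assumed number of mirror points (propagation paths)

# Front-end deep network stand-in: maps pos (3,) to K mirror points (K, 3).
W1, b1 = rng.normal(size=(32, 3)), rng.normal(size=32)
W2, b2 = rng.normal(size=(3 * K, 32)), rng.normal(size=3 * K)

def channel_response(pos, rbf_params):
    h = np.tanh(W1 @ pos + b1)             # hidden layer (activation is assumed)
    mirrors = (W2 @ h + b2).reshape(K, 3)  # intermediate layer: K x 3 mirror points
    # Back-end: one periodic RBF unit per mirror point, summed into the response.
    return sum(periodic_rbf(m, *p) for m, p in zip(mirrors, rbf_params))

# One (a, b, c, w, sigma) parameter set per path; the values are placeholders.
rbf_params = [(0.0, 0.0, rng.normal(size=3), 1.0, 1.0) for _ in range(K)]
print(channel_response(np.array([1.0, 2.0, 0.5]), rbf_params))
```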
- the position information is represented by pos.
- the position information is the input signal, which is used as the input layer of the neural network.
- (x, y, z) is the three-dimensional coordinate value of the position information, which is the value of the three neurons in the input layer.
- after the position information is processed by the hidden layers, multiple mirror points are obtained at the intermediate layer.
- the dimension of each mirror point is three-dimensional.
- the middle layer includes multiple groups of neurons, each group has three neurons, corresponding to a three-dimensional mirror point.
- the output of the intermediate layer is processed by the RBF layer and reaches the output layer to obtain the output signal.
- the output signal is channel information, which is illustrated by taking the CIR as an example in FIG. 11 .
- Each three-dimensional mirror point is processed by a processing unit in the RBF layer.
- during the training of each of these neural networks, the input signal is the historical location information and the output signal is the historical channel information.
- when the neural network is used to process the first position information, the input signal is the first position information and the output signal is the first channel information.
- the first location information is location information of any channel to be predicted.
- the neural networks of the above three examples are merely illustrative; other types of neural networks may also be used in practical applications.
- the following describes the performance of channel estimation using the method provided by the embodiment of the present application according to a specific application scenario.
- the method provided by the embodiment of the present application is applied to the channel prediction of the cell as shown in FIG. 12a and FIG. 12b.
- Fig. 12a is a real scene of the cell
- Fig. 12b is a 3D model of the cell.
- the simulation data is generated by a ray tracing algorithm.
- the specific parameters are as follows:
- the outer dimensions of buildings 1 and 2 are 55 × 16 × 18 (m);
- the outer dimensions of buildings 3 and 4 are 55 × 10 × 18 (m);
- base station location: (45 m, 48 m, 37 m);
- electromagnetic wave frequency: 3 GHz;
- relative permittivity of the buildings: 5.31;
- diffuse reflection energy ratio: 0.5;
- users are distributed in the rectangular region [x = 20–120; y = 15–30];
- different sampling densities are set and user positions are Poisson-sampled in the scene; the average number of data samples per unit area simulates the dataset scale available in practical applications.
- the performance of the three proposed neural networks, Example 1, Example 2 and Example 3, is compared under three data scales.
- the neural network of Example 1 is correspondingly shown as an autoencoder
- the neural network of Example 2 is correspondingly shown as SIREN
- the neural network of Example 3 is correspondingly shown as a Gauss-sine radial basis function. As can be seen from FIG. 13, the innovative physics-inspired network structure greatly improves the convergence speed of the network while achieving more accurate prediction performance with fewer neurons.
- compared with conventional channel acquisition, the channel overhead caused by pilot sequences is avoided, while the prediction accuracy approaches that achievable with a ray-tracing model.
- the training of the autoencoder in Example 1 does not require location information and can be performed independently from historical channel information alone, so the training data is easier to obtain.
- the neural network of Example 2 performs better and has stronger expressive power when a large amount of data is available.
- the neural network in Example 3 takes into account the transmission characteristics of electromagnetic waves, and uses prior information, which requires less data and can still accurately predict the channel even with fewer training samples.
- the communication device may include a hardware structure and/or software modules, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
- an embodiment of the present application further provides a communication apparatus 1400.
- the communication apparatus 1400 may be a communication device, an apparatus within a communication device, or an apparatus that can be used in combination with a communication device.
- the communication apparatus 1400 may be a terminal device or a network device.
- the communication apparatus 1400 may include modules in one-to-one correspondence with the methods/operations/steps/actions performed by the terminal device or the network device in the above method embodiments; each module may be implemented as a hardware circuit, as software, or as a hardware circuit combined with software.
- the apparatus may include a processing module 1401 and a communication module 1402 .
- the processing module 1401 is used to call the communication module 1402 to perform the function of receiving and/or sending.
- the processing module 1401 is configured to obtain first location information of a first communication device; and is configured to process the first location information by using a neural network to obtain first channel information.
- the first channel information is information of a wireless channel between the first communication device and the second communication device.
- the communication module 1402 is further configured to communicate according to the first channel information.
- the communication module 1402 is further configured to perform operations related to receiving or sending signals performed by the communication device in the above method embodiments, and the processing module 1401 is further configured to perform other operations performed by the communication device in the above method embodiments except for sending and receiving signals. This will not be repeated one by one.
- the communication device may be a terminal device or a network device.
- the division of modules in the embodiments of the present application is schematic and is only a division by logical function; other division methods are possible in actual implementation.
- the functional modules in the various embodiments of the present application may be integrated into one processing module, may exist alone physically, or two or more modules may be integrated into one module.
- the above integrated modules may be implemented in the form of hardware or in the form of software function modules.
- a communication apparatus 1500 provided by an embodiment of the present application is used to implement the functions of the communication apparatus in the foregoing method.
- the communication apparatus may be a terminal device or a network device.
- when implementing the functions of a network device, the apparatus may be a network device, an apparatus within a network device, or an apparatus that can be used in combination with a network device.
- when implementing the functions of a terminal device, the apparatus may be a terminal device, an apparatus within a terminal device, or an apparatus that can be used in combination with a terminal device.
- the communication apparatus may also be a chip system; in this embodiment of the present application, a chip system may consist of a chip, or may include a chip and other discrete devices.
- the communication apparatus 1500 includes at least one processor 1520, which is configured to implement the function of the terminal device or the network device in the method provided in the embodiment of the present application.
- Communication device 1500 may also include communication interface 1510 .
- the communication interface may be a transceiver, a circuit, a bus, a module, or other types of communication interfaces, which are used to communicate with other devices through a transmission medium.
- the communication interface 1510 is used by the communication apparatus 1500 to communicate with other devices.
- for example, when the communication apparatus 1500 is a terminal device, the other device may be a network device or another terminal device; when the communication apparatus 1500 is a network device, the other device may be a terminal device or another network device; and when the communication apparatus 1500 is a chip, the other device may be another chip or component within the communication device.
- the processor 1520 uses the communication interface 1510 to send and receive data, and is used to implement the methods described in the above method embodiments.
- the processor 1520 is configured to acquire first location information of the first communication apparatus, and to process the first location information by using a neural network to obtain first channel information, where the first channel information is information of a wireless channel between the first communication apparatus and the second communication apparatus; the communication interface 1510 is further configured to communicate according to the first channel information.
- optionally, the parameters of the neural network are obtained by training on historical data, where the historical data includes one or more mappings between historical location information and historical channel information; the historical location information is location information of a training apparatus during communication with the second communication apparatus, and the historical channel information is channel information of the training apparatus during communication with the second communication apparatus.
- when processing the first location information by using the neural network to obtain the first channel information, the processor 1520 is specifically configured to: process the first location information to obtain second channel information, where a dimension of the second channel information is lower than a dimension of the first channel information; and process the second channel information to obtain the first channel information.
- processor 1520 is further configured to:
- the first training is performed on the neural network according to the historical channel information, and the process of the first training includes: changing the dimension of the historical channel information from the first dimension to the second dimension and from the second dimension to the first dimension;
- the second training is performed on the neural network according to the historical location information corresponding to the historical channel information and the historical channel information of the second dimension.
- the activation function of the neural network is a periodic function.
- the neural network conforms to the formula Φ(x) = W_n(φ_{n−1} ∘ φ_{n−2} ∘ … ∘ φ_0)(x) + b_n, where the input of the i-th layer of the neural network is x_i with dimension M_i (M_i > 0), i = 0, 1, …, n−1; the output of the i-th layer is φ_i(x_i) = sin(W_i x_i + b_i), with dimension N_i (N_i > 0); W_i and b_i are the weights and biases of the neural network; the sine function sin is used as the nonlinear activation function; x is the input of the neural network, and Φ(x) is its output.
- optionally, when processing the first location information by using the neural network to obtain the first channel information, the processor 1520 is configured to: process the first location information to obtain one or more pieces of second location information, where the second location information is a function of a mirror point of the first location information; and process the one or more pieces of second location information to obtain the first channel information.
- the dimension of the second location information is the same as that of the first location information.
- the neural network includes an intermediate layer, and the number of neurons in the intermediate layer is an integer multiple of the dimension of the first position information.
- the neural network further includes a radial basis function RBF layer, and the RBF layer is used to process the output of the intermediate layer.
- the activation function of the RBF layer is a periodic kernel function.
- the activation function of the RBF layer conforms to formula (8), a Gaussian kernel function augmented with a periodic term, where x is the input of the RBF layer, φ(x) is the output of the RBF layer, and a, b, c, w, and σ are the parameters to be trained.
- the first channel information includes uplink channel information and/or downlink channel information.
- the processor 1520 and the communication interface 1510 may also be used to perform other corresponding steps or operations performed by the terminal device or the network device in the above method embodiments, which will not be repeated here.
- Communication apparatus 1500 may also include at least one memory 1530 for storing program instructions and/or data.
- Memory 1530 and processor 1520 are coupled.
- the coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information exchange between devices, units or modules.
- Processor 1520 may cooperate with memory 1530.
- Processor 1520 may execute program instructions stored in memory 1530 . At least one of the at least one memory may be integrated with the processor.
- the specific connection medium between the communication interface 1510, the processor 1520, and the memory 1530 is not limited in the embodiments of the present application.
- in FIG. 15, the memory 1530, the processor 1520, and the communication interface 1510 are connected through a bus 1540, which is represented by a thick line; the connections between other components are merely schematic and are not limiting.
- the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in FIG. 15, but this does not mean that there is only one bus or one type of bus.
- when the communication apparatus 1400 or 1500 is specifically a chip or a chip system, the communication module 1402 and the communication interface 1510 may output or receive baseband signals.
- when the communication apparatus 1400 or 1500 is specifically a device, the communication module 1402 and the communication interface 1510 may output or receive radio frequency signals.
- the processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of this application.
- a general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be embodied directly as being executed by a hardware processor, or executed by a combination of hardware and software modules within the processor.
- the memory 1530 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as a random-access memory (RAM).
- the memory may also be, without limitation, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- the memory in this embodiment of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
- Some or all of the operations and functions performed by the communication device described in the above method embodiments of the present application may be performed by a chip or an integrated circuit.
- An embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program includes instructions for executing the foregoing method embodiments.
- the embodiments of the present application provide a computer program product containing instructions, which, when executed on a computer, cause the computer to execute the above method embodiments.
- the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
- these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Power Engineering (AREA)
- Neurology (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
A neural network-based channel estimation method and a communication apparatus are disclosed. The method may be implemented by the following steps: acquiring first location information of a first communication apparatus; processing the first location information by using a neural network to obtain first channel information, the first channel information being information of a wireless channel between the first communication apparatus and a second communication apparatus; and communicating according to the first channel information. Channel information is estimated from location information by means of the neural network. This avoids using pilot signals for channel estimation, saves pilot signal overhead, and improves the efficiency and accuracy of channel information estimation.
Description
相关申请的交叉引用
本申请要求在2021年01月15日提交中国专利局、申请号为202110057615.8、申请名称为“一种基于神经网络的信道估计方法及通信装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
本申请实施例涉及通信技术领域,尤其涉及一种基于神经网络的信道估计方法及通信装置。
随着网络的快速发展,未来的无线通信网络将会承载越来越多的连接设备和信息流量。为了提高整个通信系统的资源利用效率,未来的通信网络需要进行大量的网络优化。
神经网络(neural network,NN)的应用越来越广泛,在无线通信系统网络优化中引入神经网络,能够使通信系统具有智能无线环境感知学习的能力。为了使无线通信系统网络优化能够更加精细,同时对具体环境具有较强的自适应能力,还需要更进一步加强通信系统的智能无线环境感知学习的能力。
在无线通信中,信道信息是保证通信质量不可或缺的因素。如何应用神经网络以更高效更准确的获取信道信息是需要解决的问题。
发明内容
本申请实施例提供一种基于神经网络的信道估计方法及通信装置,以期能够基于神经网络获取信道信息,以提高获取信道信息的效率和准确度。
第一方面,提供一种基于神经网络的信道估计方法,该方法可以由通信装置执行,也可以由通信装置的部件(例如处理器、芯片、或芯片系统等)执行。通信装置可以是终端设备也可以是网络设备。该方法可以通过以下步骤实现:获取第一通信装置的第一位置信息;利用神经网络处理第一位置信息,获得第一信道信息,第一信道信息为第一通信装置与第二通信装置之间无线信道的信息;根据第一信道信息进行通信。通过神经网络利用位置信息来估计信道信息。这样可以避免使用导频信号进行信道估计,节省导频信号的开销,提高信道信息的估计效率和准确度。
在一个可能的设计中,所述神经网络的参数根据历史数据训练得到,所述历史数据包括一条或多条历史位置信息和历史信道信息的映射,所述历史位置信息为训练装置与所述第二通信装置进行通信期间的位置信息,所述历史信道信息为所述训练装置在与所述第二通信装置进行通信期间的信道信息。由于通信过程收到环境中固定的一些散射体影响,历史通信数据隐含了丰富的环境信息,通过充分利用历史数据,能够提高神经网络的训练精度,提高利用神经网络估计信道信息的准确度。
在一个可能的设计中,利用神经网络处理所述第一位置信息,获得第一信道信息,具 体可以通过以下方式实现:对所述第一位置信息进行处理,得到第二信道信息,所述第二信道信息的维度低于所述第一信道信息的维度;对所述第二信道信息进行处理,得到所述第一信道信息。这样第二信道信息的维度可以更接近第一位置信息的维度,第一位置信息处理得到第二信道信息能够更加准确反映信道质量,通过将第二信道信息处理得到第一信道信息,得到的第一信道信息为与常规信道信息相同维度的信道信息,能够更好的使用第一信道信息进行通信。
在一个可能的设计中,根据所述历史信道信息对所述神经网络进行第一训练,所述第一训练的过程包括:将所述历史信道信息的维度由第一维度改变为第二维度并由所述第二维度改变为所述第一维度;根据所述历史信道信息对应的历史位置信息以及所述第二维度的历史信道信息,对所述神经网络进行第二训练。将所述历史信道信息的维度由第一维度改变为第二维度并由所述第二维度改变为所述第一维度,可以是压缩历史信道信息的维度。第一训练的过程不需要位置信息,可以根据历史信道信息独立进行,数据更容易获取。
在一个可能的设计中,所述神经网络的激活函数为周期函数。信道信息是关于位置信息的隐周期函数,例如信道信息的相位是关于位置信息的隐周期函数。神经网络采用周期性函数能够更好的适应位置信息和信道信息的特征。
在一个可能的设计中，所述神经网络符合下述公式：
Φ(x)=W_n(φ_{n-1}∘φ_{n-2}∘…∘φ_0)(x)+b_n
其中，所述神经网络的第i层输入为x_i，所述神经网络的第i层输入的维度为M_i（M_i>0），i=0,1,…,n-1；所述神经网络的第i层输出为φ_i(x_i)，φ_i(x_i)=sin(W_i·x_i+b_i)，所述神经网络的第i层输出的维度为N_i（N_i>0）；W_i为所述神经网络的权重，b_i为所述神经网络的偏置；正弦函数sin作为所述神经网络的非线性激活函数；x为所述神经网络的输入，Φ(x)为神经网络的输出。该神经网络相比于其他基于普通激活函数的神经网络在拟合隐周期函数上具有显著优异的性能，因此非常适合用于表示本实施例所需的具有周期特性的函数。
在一个可能的设计中,利用神经网络处理所述第一位置信息,获得第一信道信息,还可以通过以下方式实现:对所述第一位置信息进行处理,得到一个或多个第二位置信息,所述第二位置信息为所述第一位置信息的镜像点的函数;对所述一个或多个第二位置信息进行处理,得到所述第一信道信息。这样,可以将特殊的电磁传播物理过程和数学形式引进到神经网络结构设计中,同时又结合目前常用的神经网络结构保证网络的鲁棒性。该神经网络考虑了电磁波传输特性,利用先验的信息,对数据量的需求更小,能实现在较少训练样本下依然能对信道进行准确预测。
在一个可能的设计中,所述第二位置信息与所述第一位置信息的维度相同。
在一个可能的设计中,所述神经网络包括中间层,所述中间层神经元的个数是所述第一位置信息的维度的整数倍。
在一个可能的设计中,所述神经网络还包括径向基函数RBF层,所述RBF层用于对所述中间层输出进行处理。
在一个可能的设计中,所述RBF层的激活函数为周期性核函数。通过加入周期项,使得神经网络具有跟踪信道响应相位变化的能力。
在一个可能的设计中，所述RBF层的激活函数符合公式(8)，其中x为所述RBF层的输入，φ(x)为所述RBF层的输出，a、b、c、w、σ为待训练的参数。
在一个可能的设计中,第一信道信息包括上行信道信息和/或下行信道信息。
第二方面,提供一种通信装置,该装置可以是通信装置,也可以是位于通信装置中的装置(例如,芯片,或者芯片系统,或者电路),或者是能够和通信装置匹配使用的装置。通信装置可以是终端设备也可以是网络设备。该装置具有实现上述第一方面和第一方面的任一种可能的设计中所述的方法的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。一种设计中,该装置可以包括通信模块和处理模块。示例性地:
处理模块,用于获取第一通信装置的第一位置信息;以及用于利用神经网络处理所述第一位置信息,获得第一信道信息,所述第一信道信息为所述第一通信装置与第二通信装置之间无线信道的信息;通信模块,还用于根据所述第一信道信息进行通信。
在一个可能的设计中,所述神经网络的参数根据历史数据训练得到,所述历史数据包括一条或多条历史位置信息和历史信道信息的映射,所述历史位置信息为训练装置与所述第二通信装置进行通信期间的位置信息,所述历史信道信息为所述训练装置在与所述第二通信装置进行通信期间的信道信息。
在一个可能的设计中,在利用神经网络处理所述第一位置信息,获得第一信道信息,所述处理模块具体用于:对所述第一位置信息进行处理,得到第二信道信息,所述第二信道信息的维度低于所述第一信道信息的维度;对所述第二信道信息进行处理,得到所述第一信道信息。
在一个可能的设计中,所述处理模块还用于:根据所述历史信道信息对所述神经网络进行第一训练,所述第一训练的过程包括:将所述历史信道信息的维度由第一维度改变为第二维度并由所述第二维度改变为所述第一维度;根据所述历史信道信息对应的历史位置信息以及所述第二维度的历史信道信息,对所述神经网络进行第二训练。
在一个可能的设计中,所述神经网络的激活函数为周期函数。
在一个可能的设计中，所述神经网络符合下述公式：
Φ(x)=W_n(φ_{n-1}∘φ_{n-2}∘…∘φ_0)(x)+b_n
其中，所述神经网络的第i层输入为x_i，所述神经网络的第i层输入的维度为M_i（M_i>0），i=0,1,…,n-1；所述神经网络的第i层输出为φ_i(x_i)，φ_i(x_i)=sin(W_i·x_i+b_i)，所述神经网络的第i层输出的维度为N_i（N_i>0）；W_i为所述神经网络的权重，b_i为所述神经网络的偏置；正弦函数sin作为所述神经网络的非线性激活函数；x为所述神经网络的输入，Φ(x)为神经网络的输出。
在一个可能的设计中,在利用神经网络处理所述第一位置信息,获得第一信道信息时,处理模块用于:对所述第一位置信息进行处理,得到一个或多个第二位置信息,所述第二位置信息为所述第一位置信息的镜像点的函数;对所述一个或多个第二位置信息进行处理,得到所述第一信道信息。
在一个可能的设计中,所述第二位置信息与所述第一位置信息的维度相同。
在一个可能的设计中,所述神经网络包括中间层,所述中间层神经元的个数是所述第一位置信息的维度的整数倍。
在一个可能的设计中,所述神经网络还包括径向基函数RBF层,所述RBF层用于对 所述中间层输出进行处理。
在一个可能的设计中,所述RBF层的激活函数为周期性核函数。
在一个可能的设计中，所述RBF层的激活函数符合公式(8)所示的周期性核函数。
在一个可能的设计中,第一信道信息包括上行信道信息和/或下行信道信息。
第二方面以及各个可能的设计的有益效果可以参考第一方面对应部分的描述,在此不再赘述。
第三方面,本申请实施例提供一种通信装置,该装置包括通信接口和处理器,所述通信接口用于该装置与其它设备进行通信,例如数据或信号的收发。示例性的,通信接口可以是收发器、电路、总线、模块或其它类型的通信接口,其它设备可以为其它通信装置。处理器用于调用一组程序、指令或数据,执行上述第一方面、第一方面各个可能的设计所描述的方法。所述装置还可以包括存储器,用于存储处理器调用的程序、指令或数据。所述存储器与所述处理器耦合,所述处理器执行所述存储器中存储的、指令或数据时,可以实现上述第一方面或第一方面各个可能的设计描述的方法。
第四方面,本申请实施例中还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机可读指令,当所述计算机可读指令在计算机上运行时,使得如第一方面或第一方面各个可能的设计中所述的方法被执行。
第五方面,本申请实施例提供了一种芯片系统,该芯片系统包括处理器,还可以包括存储器,用于实现上述第一方面或第一方面各个可能的设计中所述的方法。该芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
第六方面,提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得如上述第一方面或第一方面各个可能的设计中所述的方法被执行。
图1为本申请实施例中通信系统的架构示意图;
图2a为本申请实施例中全连接神经网络的原理示意图;
图2b为本申请实施例中全连接神经网络的示意图;
图3为本申请实施例中损失函数优化示意图;
图4为本申请实施例中梯度反向传播的示意图;
图5为本申请实施例中基于神经网络的信道估计方法的具体流程示意图;
图6a为本申请实施例中通过神经网络估计信道信息的示意图之一;
图6b为本申请实施例中通过神经网络估计信道信息的示意图之二;
图7为本申请实施例中通过示例一的神经网络进行信道估计的流程示意图;
图8为本申请实施例中自编码器的网络示意图;
图9为本申请实施例中示例一的神经网络示意图;
图10为本申请实施例中发射信号经过反射面进行反射到达接收端的路径示意图;
图11为本申请实施例中示例三的神经网络示意图;
图12a为本申请实施例中小区的真实场景示意图;
图12b为本申请实施例中小区的3D模型示意图;
图13为本申请实施例中神经网络估计信道的仿真效果示意图;
图14为本申请实施例中一种通信装置结构示意图;
图15为本申请实施例中另一种通信装置结构示意图。
本申请实施例提供一种基于神经网络的信道估计方法及通信装置。其中,方法和装置是基于同一技术构思或者基于相似的技术构思的,由于方法及装置解决问题的原理相似,因此装置与方法的实施可以相互参见,重复之处不再赘述。
本申请实施例的描述中,“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。本申请中所涉及的多个是指两个或两个以上。另外,需要理解的是,在本申请的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
下面将结合附图,对本申请实施例进行详细描述。
本申请实施例提供的基于神经网络的信道估计方法可以应用于5G通信系统,例如5G新空口(new radio,NR)系统,也可以应用于未来演进的各种通信系统,例如第六代(6th generation,6G)通信系统、或者空天海地一体化通信系统。
图1示出了本申请实施例适用的一种通信系统的架构。参阅图1所示,通信系统100中包括网络设备101和终端设备102。
首先对网络设备101和终端设备102的可能实现形式和功能进行举例介绍。
网络设备101为覆盖范围内的终端设备102提供服务。例如,参见图1所示,网络设备101为网络设备101覆盖范围内的一个或多个终端设备102提供无线接入。
网络设备101为无线接入网(radio access network,RAN)中的节点,又可以称为基站,还可以称为RAN节点(或设备)。目前,一些网络设备101的举例为:下一代基站(next generation nodeB,gNB)、下一代演进的基站(next generation evolved nodeB,Ng-eNB)、传输接收点(transmission reception point,TRP)、演进型节点B(evolved Node B,eNB)、无线网络控制器(radio network controller,RNC)、节点B(Node B,NB)、基站控制器(base station controller,BSC)、基站收发台(base transceiver station,BTS)、家庭基站(例如,home evolved NodeB,或home Node B,HNB)、基带单元(base band unit,BBU),或无线保真(wireless fidelity,Wifi)接入点(access point,AP),网络设备101还可以是卫星,卫星还可以称为高空平台、高空飞行器、或卫星基站。网络设备101还可以是其他具有网络设备功能的设备,例如,网络设备101还可以是设备到设备(device to device,D2D)通信、车联网或机器到机器(machine to machine,M2M)通信中担任网络设备功能的设备。网络设备101还可以是未来通信系统中任何可能的网络设备。
终端设备102,又称之为用户设备(user equipment,UE)、移动台(mobile station,MS)、移动终端(mobile terminal,MT)等,是一种向用户提供语音和/或数据连通性的设备。例如,终端设备102包括具有无线连接功能的手持式设备、车载设备等。目前,终端设备102可以是:手机(mobile phone)、平板电脑、笔记本电脑、掌上电脑、移动互联网设备(mobile internet device,MID)、可穿戴设备(例如智能手表、智能手环、计步器等), 车载设备(例如,汽车、自行车、电动车、飞机、船舶、火车、高铁等)、虚拟现实(virtual reality,VR)设备、增强现实(augmented reality,AR)设备、工业控制(industrial control)中的无线终端、智能家居设备(例如,冰箱、电视、空调、电表等)、智能机器人、车间设备、无人驾驶(self driving)中的无线终端、远程手术(remote medical surgery)中的无线终端、智能电网(smart grid)中的无线终端、运输安全(transportation safety)中的无线终端、智慧城市(smart city)中的无线终端,或智慧家庭(smart home)中的无线终端、飞行设备(例如,智能机器人、热气球、无人机、飞机)等。终端设备102还可以是其他具有终端设备功能的设备,例如,终端设备102还可以是D2D通信、车联网或M2M通信中担任终端设备功能的设备。特别地,在网络设备间进行通信的时候,担任终端设备功能的网络设备也可以看作是终端设备。
信道估计在无线通信过程中具有非常重要的作用。通常可以采用对导频信号的检测来实现信道估计。例如,发送端向接收端发送导频信号,导频信号是发送端和接收端已知的,接收端通过检测导频信号进行信道估计,并反馈信道估计的结果给发送端。这种方法需要不同的终端设备或者不同的天线的导频信号是正交的,因此开销比较大。根据导频信号估计的参数是有限的,因此对信道估计的准确程度也是有限的。
本申请实施例提供的基于神经网络的信道估计方法,以期能够通过神经网络的学习和使用,提高信道估计的准确度和效率,并能够有助于构建智能通信系统。
为了更好的理解本申请实施例提供的方法,首先对神经网络进行说明。
(1)神经网络的概念
神经网络是一种模仿动物神经网络行为特征进行信息处理的网络结构。神经网络可以是由神经单元组成的，神经单元可以是指以x_s和截距1为输入的运算单元，该运算单元的输出可以如公式(1)所示：
h_{W,b}(x)=f(W^T·x)=f(∑_{s=1}^{n} W_s·x_s + b)　　(1)
其中，s=1、2、……n，n为大于1的自然数，W_s为x_s的权重，b为神经单元的偏置。f为神经单元的激活函数（activation functions），用于将非线性特性引入神经网络中，来将神经单元中的输入信号转换为输出信号。该激活函数的输出信号可以作为下一层卷积层的输入，激活函数可以是sigmoid函数、ReLU函数、tanh函数等。神经网络是将多个上述单一的神经单元联结在一起形成的网络，即一个神经单元的输出可以是另一个神经单元的输入。每个神经单元的输入可以与前一层的局部接受域相连，来提取局部接受域的特征，局部接受域可以是由若干个神经单元组成的区域。
以全连接神经网络为例,如图2a所示,为全连接神经网络的原理示意图。
该神经网络具有N个处理层，N≥3且N为自然数，该神经网络的第一层为输入层，负责接收输入信号x_i，该神经网络的最后一层为输出层，输出神经网络的处理结果h_i。除去第一层和最后一层的其他层为中间层，这些中间层共同组成隐藏层，隐藏层中的每一层中间层既可以接收输入信号，也可以输出信号，隐藏层负责输入信号的处理过程。每一层代表了信号处理的一个逻辑级别，通过多个层，数据信号可经过多级逻辑的处理。
如图2b所示,示例性的展示了一种全连接神经网络的示意图。相邻两层的神经元间两两相连。相邻两层的神经元,下一层的神经元的输出h为所有与之相连的上一层神经元x的加权和并经过激活函数。用矩阵可以表示为公式(2)。
h=f(wx+b) (2)
其中w为权重矩阵,b为偏置向量,f为激活函数。则神经网络的输出可以递归表达为公式(3)。
y=f_n(w_n·f_{n-1}(…)+b_n)　　(3)
(2)神经网络的训练
简单的说,可以将神经网络理解为一个从输入数据集合到输出数据集合的映射关系。而通常神经网络都是随机初始化的,用已有数据从随机的w和b得到这个映射关系的过程被称为神经网络的训练。
训练的具体方式为采用损失函数(loss function)对神经网络的输出结果进行评价,并将误差反向传播,通过梯度下降的方法即能迭代优化w和b直到损失函数达到最小值。图3实例了损失函数优化示意图。梯度下降的过程可以表示为公式(4)。
θ ← θ − η·(∂L/∂θ)　　(4)
其中θ为待优化参数（如w和b），L为损失函数，η为学习率，控制梯度下降的步长。
在训练神经网络的过程中,由于期望神经网络的输出尽可能的接近真正想要预测的值,所以可以通过比较当前网络的预测值和真正想要的目标值,再根据两者之间的差异情况来更新每一层神经网络的权重向量。当然,在第一次更新之前通常会有初始化的过程,即为深度神经网络中的各层预先配置参数。如果网络的预测值高了,就调整权重向量让预测低一些,不断地调整,直到神经网络能够预测出真正想要的目标值或与真正想要的目标值非常接近的值。因此,就需要预先定义“如何比较预测值和目标值之间的差异”,这便是损失函数或目标函数(objective function),它们是用于衡量预测值和目标值的差异的重要方程。其中,以损失函数举例,损失函数的输出值(loss)越高表示差异越大,那么神经网络的训练就变成了尽可能缩小这个loss的过程。
神经网络可以采用误差反向传播(back propagation,BP)算法在训练过程中修正初始的神经网络模型中参数的大小,使得神经网络模型的重建误差损失越来越小。具体地,前向传递输入信号直至输出会产生误差损失,通过反向传播误差损失信息来更新初始的神经网络模型中参数,从而使误差损失收敛。反向传播算法是以误差损失为主导的反向传播运动,旨在得到最优的神经网络模型的参数,例如权重矩阵。
反向传播的过程利用到求偏导的链式法则,即前一层参数的梯度可以由后一层参数的梯度递推计算得到,如图4所示为梯度反向传播的示意图。反向传播的过程可以表达为公式(5)
其中w_ij为节点j连接节点i的权重，s_i为节点i上的输入加权和。
结合上述描述,下面对本申请实施例提供的基于神经网络的信道估计方法进行详细说明。
如图5所示,本申请实施例提供的基于神经网络的信道估计方法的具体流程如下所述。该方法可以应用于通信装置,该通信装置可以是终端设备,也可以是网络设备。
S501、获取第一通信装置的第一位置信息。
第一通信装置可以是终端设备。终端设备可以是网络设备服务的任意一个终端设备。
S502、利用神经网络处理第一位置信息,获得第一信道信息。
其中,第一信道信息为第一通信装置与第二通信装置之间无线信道的信息。第一通信装置与第二通信装置之间无线信道的信息,可以是终端设备与网络设备之间无线信道的信息。当然,终端设备可以是具有终端功能的网络设备,那么第一通信装置与第二通信装置之间无线信道的信息也可以是网络设备与网络设备之间无线信道的信息。网络设备也可以是具有网络设备功能的终端设备,那么第一通信装置与第二通信装置之间无线信道的信息也可以是终端设备与终端设备之间无线信道的信息。
S503、根据第一信道信息进行通信。
图5实施例的方法可以是由第一通信装置执行的,也可以是由第二通信装置执行的。第一通信装置可以是终端设备,也可以是终端设备中的部件,例如处理器、芯片或芯片系统。第二通信装置可以是网络设备,也可以是网络设备中的部件,例如处理器、芯片或芯片系统。
以下描述中,将以第一通信装置为终端设备为例,且第二通信装置为网络设备为例进行描述。
当该方法的执行主体为终端设备时,终端设备获取自身的位置信息,记为第一位置信息。该终端设备利用神经网络处理第一位置信息,获得第一信道信息。终端设备根据第一信道信息与网络设备进行通信。终端设备也可以根据第一信道信息与其他设备进行通信,例如,该第一信道信息可以用于小区切换,终端设备在小区切换后切换到其他网络设备,并与其他网络设备进行通信。
当该方法的执行主体为网络设备时,网络设备获取终端设备的第一位置信息,网络设备可以接收来自终端设备的第一位置信息。网络设备利用神经网络处理第一位置信息,获得第一信道信息,根据第一信道信息与终端设备进行通信,当然网络设备也可以利用该第一信道信息与其他设备进行通信,其他设备可以是其他网络设备也可以是其他终端设备。
图5实施例,可以使用神经网络对终端设备的位置信息进行处理,得到终端设备与网络设备之间无线信道的信息。对于一个固定服务区域的无线通信系统,包括网络设备,还包括一个或多个终端设备。网络设备的位置是固定的,终端设备的位置是随机分布在该服务区域的。根据电磁波传播理论,一个终端设备与网络设备之间的信道信息决定于该终端设备的位置信息。对于固定的服务区域,信道信息决定于发送设备和接收设备的位置信息。特别地,对于固定位置的网络设备,信道信息决定于终端设备的位置信息。对于卫星来说,当卫星服务于特定区域时,基于卫星自身的星历,对其覆盖范围内的终端而言,信道信息决定于终端的位置信息。
在终端设备与网络设备之间的通信过程中,通信信号会受到通信环境中固定的一些反射体的影响,例如发射机天线位置发出的信号到接收机的传播过程中,如果之间存在反射体,则会收到反射体表面的反射。反射体例如可以是建筑物或树木等障碍物。本申请实施例中,通过神经网络可以学习通信环境(通信环境也可以称为通信场景),从而信道信息可以表达成位置信息的函数,利用位置信息来估计信道信息。这样可以节省导频信号的开销,提高信道信息的估计效率和准确度。
下面对图5实施例的一些可选的实现方式做进一步说明。
第一信道信息可以是上行信道信息,也可以是下行信道信息,也可以是D2D通信或车 联网通信中涉及的信道信息,还可以是终端设备到终端设备之间通信的信道信息,还可以是网络设备与网络设备之间通信的信道信息。
以下对神经网络的训练过程进行说明。
根据终端设备与网络设备在通信过程中产生的历史数据,对神经网络进行训练。网络设备在与终端设备进行通信的过程中,终端设备需要对信道进行测量,并向网络设备上报测量的信道信息,网络设备会记录上报信道信息的终端设备的位置信息,从而位置信息与信道信息形成一条历史数据。可能会有多个终端设备向网络设备上报信道信息,这样,在经过一段时间的通信后,网络设备会存储大量的通信的历史数据,大量的历史数据可以形成训练数据集。训练数据集包括多条历史位置信息和历史信道信息的映射。例如,训练数据集中每一条历史数据的格式为(历史位置信息,历史信道信息)。其中,历史位置信息是训练装置与网络设备进行通信期间的位置信息,训练装置也可以称为训练终端设备,是与网络设备在历史通信过程中的终端设备。训练终端设备在一段时间内与网络设备进行通信,并测量信道信息,还可以将信道信息上报给网络设备。图5实施例中的第一终端设备也可以是训练终端设备。训练终端设备可以是第一终端设备之外的其它终端设备,只要是在网络设备服务区域停留或者只要是在网络设备服务过的终端设备,都有可能向网络设备上报测量的信道信息,这些终端设备都可以是训练终端设备。历史信道信息即训练终端设备在与网络设备进行通信期间所测量得到的信道信息。
可以理解的是,上段内容以网络设备获取训练数据集进行描述的,终端设备也可以获取训练数据集并对神经网络进行训练,终端设备在对信道进行测量时获取信道信息,并可知当时自身的位置信息,终端设备对神经网络进行训练的过程与网络设备是类似地,在此不再赘述。
本申请实施例中,信道信息可以是任意反映信道的参数,例如信道脉冲响应(channel impulse response)CIR。位置信息可以是任意表征位置的信息,例如位置坐标的三维向量,又例如经纬度。历史数据的格式可以表示为(pos,CIR),其中,pos表示位置信息,CIR可以是终端设备到网络设备的信道脉冲响应在时域上的采样。以位置信息用pos表示、信道信息用CIR表示为例,经过一段时间的通信历史数据的采集,可以获得训练数据集。训练数据集中每一个条历史数据的格式为(pos,CIR)。
根据历史数据集对神经网络进行训练,可以得到神经网络的参数。神经网络的参数例如可以是w和b中的至少一项,w为权重或权重矩阵,b为偏置或偏置向量。训练过程可以参考上文中对神经网络训练的描述。历史数据集是在一定的通信环境中采集并生成的,神经网络的参数反映了该通信环境的信息。通常情况下,在通信环境一定的情况下,信道信息还与无线电参数有关,因此历史数据集中的信道信息还在一定程度上反映了无线电参数。也就是说神经网络的参数不仅反映了通信环境的信息,还反映了无线电参数的信息。或者说,神经网络的参数由通信环境和无线电参数决定。无线电参数例如可以是电磁波的频率,也可以是载波带宽。
训练后的神经网络能够反映通信环境和无线电参数,当利用神经网络处理第一位置信息时,获得的第一信道信息能够更加接近在通信环境和无线电参数下真实的目标值。通过神经网络估计信道信息的示意图如图6a所示,将第一位置信息输入神经网络,经过神经网络的运算,得到第一信道信息。
下面结合具体的应用场景对本申请实施例的方法做进一步详细说明。
为了使未来无线通信系统对具体环境具有较强的自适应能力,需要有更强的智能无线环境感知学习的能力,设计一种能够准确且高效地获取信道信息的方案在未来的通信系统中具有非常重要的意义。目前的通信系统设计并没有充分利用历史通信数据。事实上,由于通信过程收到环境中固定的一些反射体影响,历史通信数据隐含了丰富的环境信息。因此,本申请实施例考虑如何利用过去在场景中发生过的信道测量值,预测场景中处于任一位置的终端设备与网络设备之间的信道信息。
对于一个固定服务区域的无线通信系统,包括其中的网络设备和终端设备,如固定位置的网络设备和可能随机分布在该区域中任意位置的终端设备。根据电磁传播理论,终端设备到网络设备之间的静态信道信息决定于终端设备的位置,具体的信道信息由环境和电磁波参数决定。
本申请实施例提出采用学习的方法,智能将通信场景/环境作为待学习的隐含信息,采用新的网络框架来有效表达信道隐含特征,建立在固定场景下终端设备的位置信息和信道信息的映射关系。具体来说,信道信息可以表达成位置信息的函数,函数的参数由场景和电磁波参数决定。通过[位置信息-信道信息]的数据集学习位置信息到信道信息的映射,利用终端设备的位置信息预测信道信息,构建智能通信系统。
网络设备在与终端设备进行通信的过程中需要对信道进行测量,同时记录当前终端设备的具体位置,形成一条历史数据。在一段时间的通信后,网络设备存储了大量的通信历史数据,形成训练数据集。存储的数据格式为(pos,CIR),其中pos为终端设备位置坐标的三维向量,CIR为终端设备到网络设备处信道冲激响应在时域上的采样。本申请实施例提出采用学习的方法,智能将通信场景/环境作为待学习的隐含信息,来有效表达信道隐含特征,建立在固定场景下终端设备位置和信道响应的映射关系。具体来说,信道可以表达成地理位置的函数,函数的参数由场景和电磁波参数决定,参数由数据集通过训练获得。相比于传统的信道获取方式,一方面能避免导频序列带来的信道开销,同时达到接近射线追踪模型所能达到的预测准确性。
如图6b所示,信道冲击响应CIR由终端设备的地理位置pos经过神经网络预测,即CIR=f(pos;θ),其中神经网络的参数为θ。神经网络的参数由地理位置-信道冲击响应数据集{(pos,CIR)}通过监督训练得到,其中数据集中pos和CIR作为一对带标签的数据,是在特定无线电参数下从特定环境中采集的。训练完成后的神经网络参数反映了特定无线电参数和特定环境的相关信息,也即反映了在特定环境下,地理位置和信道冲击响应的关系。
本申请实施例中,神经网络的激活函数可以为周期性函数。神经网络的输出信号为信道信息,神经网络的输入信号为位置信息。信道信息是关于位置信息的隐周期函数,例如信道信息的相位是关于位置信息的隐周期函数。神经网络采用周期性函数能够更好的适应位置信息和信道信息的特征。
以下以几种神经网络为例,对本申请实施例基于神经网络的信道估计方法做进一步详细说明。
神经网络的示例一:
由于神经网络的输入为位置信息,输出为信道信息,而往往位置信息的维度与信道信息的维度是不同的。维度可以是指向量中包含元素的个数。例如,位置信息为三维坐标的向量,则位置信息的维度为3。信道信息为CIR向量,CIR向量的长度通常要高于3,即 信道信息的维度高于位置信息的维度。
基于此,本申请实施例中,利用神经网络处理第一位置信息,获得第一信道信息,可以基于维度的考量,通过以下过程实现。对第一位置信息进行处理,得到第二信道信息,第二信道信息的维度低于第一信道信息的维度,例如第二信道信息的维度与第一位置信息的维度相同;对第二信道信息进行处理,得到第一信道信息。
可以认为神经网络包括两个部分,记为第一部分和第二部分。神经网络的第一部分对第一位置信息进行处理,得到第二信道信息,第二信道信息的维度低于第一信道信息的维度。利用神经网络的第二部分对第二信道信息进行处理,得到第一信道信息。
以第一位置信息的维度为3,第一信道信息的维度高于3为例,利用神经网络的第一部分处理第一位置信息,得到3维的第二信道信息。利用神经网络的第二部分对3维的第二信道信息进行处理,得到高维度的第一信道信息。
对神经网络进行训练可以将对第一部分的训练和对第二部分的训练结合起来。具体过程包括以下步骤。将历史信道信息的维度由第一维度改变为第二维度并由第二维度改变为第一维度;根据历史信道信息对应的历史位置信息以及第二维度的历史信道信息,对神经网络的第二部分进行第二训练。
采用历史数据集对神经网络的训练,历史数据集的获取过程可以参考上文中对神经网络的训练过程部分的描述。历史数据集包括一条或多条历史数据,历史数据中包括历史位置信息和历史信道信息的映射。将历史信道信息作为第一部分训练的输入和输出,对第一部分进行训练。
如图7所示,将历史信道信息的维度进行压缩,得到低维度的信道信息,将低维度的信道信息的维度进行解压缩或解压,得到原始的高维度的历史信道信息。其中,压缩的过程可以通过编码器实现,解压缩的过程可以通过解码器实现。压缩的过程也可以认为是维度降低的过程,解压缩的过程也可以认为是维度提高的过程。
第一部分可以看成是一个自编码器的网络,第一部分的训练即训练自编码器网络。自编码器网络的结果可以如图8所示。左侧的输入层为历史信道信息,第一隐藏层对历史信道信息进行处理,到达瓶颈层(bottleneck layer),瓶颈层的输出经过第二隐藏层的处理,达到输出层,获得输出信号,输出信号即高维度的历史信道信息。第一隐藏层的处理过程对应于图7中的压缩的步骤,瓶颈层的输出对应图7中低维度信道信息。第二隐藏层的处理过程对应于图7中的解压的步骤。
可以预先确定编码器输出的维度,即将原始的历史信道向量降低的维度,例如降低后的维度为第二维度。可以根据历史位置信息的维度设置第二维度,瓶颈层的神经元的个数为第二维度的值。例如历史信道信息的维度为3,可以将第二维度设置为3,即瓶颈层(bottleneck layer)的神经元个数设立为3。
在对自编码器的网络进行训练时,可以先确定瓶颈层的神经元的个数。自编码器的网络中其它网络参数可以通过不断地网络参数调优得到,其中,其它网络参数例如除瓶颈层之外的其它网络层的神经元的个数、编码器的网络层个数和解码器的网络层个数。自编码器的网络训练的目标为使网络在测试数据集上的均方误差(mean square error,MSE)最小。其中,历史数据可以分为训练数据集和测试数据集,训练数据集用于神经网络参数的训练,测试数据集用于测试训练得到的神经网络的性能。
如图7所示,神经网络的第二部分可以看成是一个学习机。例如记为位置信息-信道信 息的学习机,网络的训练可以通过监督学习实现。对神经网络的第二部分训练需要第一部分训练中低维度信道信息的输出作为输入,第一部分的训练所得到的低维度信道信息与其对应的历史位置信息构成的数据对作为监督学习的数据。对第一部分的训练所采用的历史信道信息为历史数据集中的数据,一条历史数据中包括历史位置信息与历史信道信息的映射关系。第一部分的训练所得到的低维度信道信息是由一个历史信道信息得来的,那么这个历史信道信息就会对应一个历史位置信息,即第一部分的训练所得到的低维度信道信息对应一个历史位置信息。将第一部分的训练所得到的低维度信道信息与其对应的历史位置信息作为第二部分训练的一条训练数据,多个低维度信道信息与多个历史位置信息构成了第二部分训练的训练数据集合,对第二部分进行训练。第二部分的网络的训练可以通过学习机实现。可以将多个低维度信道信息与多个历史位置信息输入位置信息-信道信息的学习机进行训练,通过反向传播算法训练第二部分的网络,直至收敛。
继续如图7所示,在对第一部分和第二部分训练完成之后,即完成对神经网络的训练。S502中利用神经网络处理第一位置信息获得第一信道信息,可以通过以下过程实现,具体如图7中加粗实线箭头所示的流程。将第一位置信息输入位置信息-信道信息的学习机,获得第二信道信息,第二信道信息的维度低于第一信道信息的维度,将第二信道信息输入解码器进行解压缩,得到第一信道信息。
下面结合具体的应用场景对神经网络的示例一作进一步详细说明。
神经网络的训练过程包括2个阶段:首先将原始存储的历史数据集中的CIR向量提取出用作自编码器网络的输入和输出,训练自编码器网络。假设在本申请实施例所应用的场景中信道信息取决于终端设备的位置信息,因此CIR向量的最小特征空间维度为3维。因此可以将网络中的bottleneck层神经元个数设立为3维。其它网络参数例如其他网络层的神经元个数、编码器和解码器的网络层个数等需要通过不断地网络调参找到最优。训练的目标为使网络在测试数据集上的平均MSE最小。
自编码器网络训练完成后，对于数据集中每一条数据的信道CIR向量，将其作为自编码器网络的输入，并获得bottleneck层的输出z，与位置坐标构成一条新的数据对(pos,z)。所有数据对构成训练集D={(pos_1,z_1),(pos_2,z_2),…,(pos_k,z_k)}。构建一个全连接深度神经网络作为位置信息-信道信息的学习机，该网络以用户位置pos作为网络输入，冲激响应经过映射后的低维表示z作为网络目标输出。通过反向传播算法训练该网络直至收敛。
进行信道估计或信道预测时,首先将需要预测信道信息的用户位置pos输入第二部分训练的深度神经网络,即位置信息-信道信息的学习机中,获得预测的压缩信道z’。将z’作为训练完成的自编码器网络的解码器的输入,那么自编码器的输出结果就是最终预测的信道冲激响应向量CIR’。
神经网络的示例二:
神经网络可以是正弦表征网络(sinusoidal representation networks,SIREN),SIREN是一种以周期函数激活的全连接结构的神经网络。
该神经网络符合下述数学公式:
Φ(x)=W_n(φ_{n-1}∘φ_{n-2}∘…∘φ_0)(x)+b_n
其中，所述神经网络的第i层输入为x_i，所述神经网络的第i层输入的维度为M_i（M_i>0），i=0,1,…,n-1，n为正整数；所述神经网络的第i层输出为φ_i(x_i)，φ_i(x_i)=sin(W_i·x_i+b_i)，所述神经网络的第i层输出的维度为N_i（N_i>0）；W_i为所述神经网络的权重，b_i为所述神经网络的偏置；正弦函数（sin函数）作为所述神经网络的非线性激活函数；x为所述神经网络的输入，Φ(x)为神经网络的输出。(φ_{n-1}∘φ_{n-2}∘…∘φ_0)表示各层φ函数的复合运算，即第i层φ函数运算的输出结果作为第i+1层φ函数的输入。第i层φ函数运算采用公式φ_i(x_i)=sin(W_i·x_i+b_i)。例如，φ_0用于对第0层的输入进行运算，φ_1用于对φ_0的输出进行运算，φ_{n-1}用于对φ_{n-2}的输出进行运算。W_n为第(n-1)层到第n层的权重。
假设第一位置信息为三维的坐标，第一信道信息为CIR，该神经网络的结构示意图如图9所示。第一位置信息用pos表示，第一位置信息为输入信号，作为神经网络的输入层，(x,y,z)为第一位置信息的三维坐标值，为输入层三个神经元的值。输入层经过多层处理，每层的输入为x_i；除最后一层外，其他层对应的输出为φ_i(x_i)=sin(W_i·x_i+b_i)；最后一层（输出层）的输出为W_n·x_n+b_n，其中x_n=(φ_{n-1}∘φ_{n-2}∘…∘φ_0)(x)，W_i和b_i为各层的待训练参数。输出层得到的即为第一信道信息CIR。
该神经网络除最后一层外其他层的激活函数为周期性函数,如sin函数。
神经网络的输出信号为信道信息,神经网络的输入信号为位置信息。信道信息是关于位置信息的隐周期函数,例如信道信息的相位是关于位置信息的隐周期函数。采用周期性激活函数sin函数能够更好的适应位置信息和信道信息的特征。
信道信息是关于位置信息的隐周期函数，如公式(6)和公式(7)所示。由公式(6)可以看出，信道冲击响应中第k条路径的相位是该条路径的距离d_k的隐周期函数（函数G(d_k)的周期函数）；而距离如公式(7)所示，由用户的位置决定；因此，信道冲击响应是用户位置信息的隐周期函数。SIREN网络相比于其他基于普通激活函数的神经网络在拟合隐周期函数上具有显著优异的性能，因此SIREN网络非常适合用于表示本实施例所需的具有周期特性的函数。
由于SIREN网络采用特殊的周期性激活函数，为使网络具有较好的收敛性，需要采用特殊的网络参数初始化方法。本申请实施例给出一种可选的初始化方案。对于神经网络的第一层，权重初始化为W=w_i×w_0，其中w_i表示第i个神经元的初始权重，服从−√(c/n)与√(c/n)之间的均匀分布（该取值区间按常见的SIREN初始化方式重构），n为网络第一层的输入维度，例如第一层的输入为位置信息，n的值可以设置为3。c与w_0为常数，例如，c可以取值为6，w_0可以取值为30。需要说明的是，其中常数的设定并不一定是最优值，如果需要对网络性能有进一步改善，可以考虑尝试更多的网络初始化参数。可以使用历史数据集训练该网络，训练方法为反向传播算法。
神经网络的示例三:
通过示例三的神经网络处理第一位置信息获得第一信道信息,可以通过以下方式实现。利用神经网络对第一位置信息进行处理,得到一个或多个第二位置信息,第二位置信息为第一位置信息的镜像点,镜像点是根据第一位置信息发出的电磁波传播路径确定的。对该一个或多个第二位置信息进行处理,得到第一信道信息。
在固定的传播环境和电磁波参数下,在一个确定的位置发出的发射信号,经过一个或 多个反射体的表面进行反射,至接收机。如图10所示,示意了发射信号经过反射面进行反射到达接收端的路径示意图。Tx为发送端,发送端在一个确定的位置发出信号,经过反射面反射,到达接收端Rx。其中,在发送端的位置和接收端的位置确定的基础上,信号经过反射面的传播路径是确定的。并且可以确定出发送端的位置相对于反射面的镜像点,镜像点的意义即:发送端Tx经过反射面到达接收端Rx的信号传播路径,可以相当于从镜像点直线传播到接收端的传播路径。
可以理解的是,发送端的发射信号在一个位置,传播到接收端,可能经过一个或多个反射面。因此,在一个位置固定的情况下,该位置可能存在一个或多个镜像点。例如,第一位置信息能够确定一个固定的位置,第一位置信息有一个或多个镜像点。
示例三的神经网络,能够对第一位置信息进行处理,得到一个或多个第二位置信息,第二位置信息为第一位置信息的镜像点。当然,经过神经网络处理得到的第二位置信息,不一定是第一位置信息确定的位置的真实镜像点,神经网络输出的第二位置信息可能会与目标值有一定的差别。
示例三的神经网络也可以看作两个部分网络的结合,记为第三部分和第四部分。神经网络的第三部分对第一位置信息进行处理,得到一个或多个第二位置信息;利用神经网络的第四部分对第二位置信息进行处理,得到第一信道信息。
对神经网络进行训练可以将对第三部分的训练和对第四部分的训练结合起来。第三部分的输入层为历史位置信息,经过隐藏层的处理,第三部分的输出为多个镜像点。每一个镜像点的维度都与历史位置信息的维度相同。例如,历史位置信息的维度为3,那么每一个镜像点的维度都是3,第三部分的输出为多组3维的镜像点。多个镜像点作为第四部分的输入,经过第四部分网络层的处理后,得到输出信号。输出信号为历史信道信息。第四部分可以为径向基函数(radial basis function,RBF)层网络,RBF层的激活函数为周期性核(Kernel)函数。
RBF层的激活函数符合公式(8)，其中x为所述RBF层的输入，φ(x)为所述RBF层的输出，a、b、c、w、σ为待训练的参数。
RBF层常用的高斯核函数替换为带有周期项的核函数,通过加入周期项,使得神经网络具有跟踪信道响应相位变化的能力。
在应用神经网络对第一位置信息进行处理得到第一信道信息的过程中,采用神经网络的第三部分处理第一位置信息,得到一个或多个第二位置信息。其中,每一个第二位置信息可以看作第一位置信息的镜像点,当然可以不是严格意义上的镜像点,可能会存在一定的偏置项,所以每一个第二位置信息可以看作第一位置信息的镜像点的函数。第二位置信息的维度与第一位置信息的维度相同。即第三部分的网络的输出为第一位置信息的维度的整数倍。第三部分的网络的输出可以看作是整个神经网络的一个中间层。该中间层的输出至RBF层网络进行处理。中间层神经元的个数是第一位置信息的维度的整数倍。
举例来说,第三部分为深度网络,第四部分为RBF层网络。在一个具体应用场景中,该神经网络的应用过程如下所述。该神经网络相比于普通的神经网络将问题作为黑盒问题处理而言,具有非常强的可解释性。前端的深度网络通过学习,寻找在固定的电磁传播环境下,给定输入的用户位置坐标Tx后,用户位置坐标对于所有的信道冲激响应CIR组成 路径的反射像点位置的映射,深度网络的输出为一系列三维坐标值。需要说明的是,由于深度网络需要和后端的RBF层联合训练,因此实际深度层的输出并不等同于真实物理传播过程的反射像点位置坐标,而是像点坐标加上一个随机的偏置项。后端的RBF层用作函数拟合器,将所获得的像点映射位置坐标作为输入获取对应路径的信道响应。
本方案将RBF网络中常用的高斯核函数替换为带有周期项的核函数,如公式(8)所示。
通过加入周期项,使得网络具有跟踪信道响应相位变化的能力。而更为简单的幅值拟合则由一系列高斯核函数加权和实现。需要说明的是,前后网络为联合训练。RBF层偏置参数a的设计用于:自适应地与深度层得到的反射像点坐标偏置项相抵消。RBF层偏置参数b用于自适应拟合响应的实部或虚部。由于将特殊的电磁传播物理过程和数学形式引进到神经网络结构设计中,同时又结合目前常用的神经网络结构保证网络的鲁棒性。本实施例能实现在较少训练样本下依然能对信道进行准确预测。
假设位置信息为三维的坐标,该神经网络的结构示意图如图11所示。位置信息用pos表示,位置信息为输入信号,作为神经网络的输入层,(x,y,z)为位置信息的三维坐标值,为输入层三个神经元的值。位置信息经过隐藏层的处理后,获得中间层的多个镜像点。每个镜像点的维度为三维。中间层包括多组神经元,每组有三个神经元,对应一个三维的镜像点。中间层的输出经过RBF层的处理,到达输出层,得到输出信号。输出信号为信道信息,图11中以CIR为例进行示意。每一个三维的镜像点经过RBF层的一个处理单元进行处理。
本申请实施例中,各个神经网络在训练时,输入信号为历史位置信息,输出信号为历史信道信息。在使用神经网络对第一位置信息进行处理时,输入信号为第一位置信息,输出信号为第一信道信息。第一位置信息为任意一个待预测信道的位置信息。
上述三种示例的神经网络仅仅是作为举例,实际应用中还可以使用其他类型的神经网络。
下面根据具体的应用场景介绍使用本申请实施例提供的方法进行信道估计的性能。将本申请实施例提供的方法应用于如图12a和图12b所示的小区的信道预测中。图12a为小区的真实场景,图12b为小区的3D模型,仿真数据通过射线追踪算法产生。具体参数如下:
建筑物1、2外形尺寸为55×16×18(m);
建筑物3、4外形尺寸为55×10×18(m);
基站位置:(45m,48m,37m);
电磁波频率:3Ghz;
建筑的相对介电常数:5.31;
漫反射能量比例:0.5;
用户分布在[x=20~120;y=15~30]的矩形区域中;
设定不同的采样密度,在场景中泊松随机采样。以单位区域内平均采样的数据样本个数仿真实际应用中所拥有的数据集规模。仿真了三种数据规模下,所提出的示例一、示例二和示例三这三种神经网络的性能对比。如图13所示,示例一的神经网络对应示意为自编码器,示例二的神经网络对应示意为SIREN,示例三的神经网络对应示意为高斯-正弦径 向基函数。从图13可以发现,通过创新性地设计物理启发网络结构,可以大大提高网络地收敛速度,同时在需要更少神经元个数地情况下达到更准确地网络预测性能。相比于传统的信道获取方式,一方面能避免导频序列带来的信道开销,同时达到接近射线追踪模型所能达到的预测准确性。示例一的神经网络自编码器的训练的过程不需要位置信息,可以根据历史信道信息独立进行,数据更容易获取。示例二的神经网络在数据量多的时候性能更好,表达能力更强。示例三的神经网络考虑了电磁波传输特性,利用先验的信息,对数据量的需求更小,能实现在较少训练样本下依然能对信道进行准确预测。
需要说明的是,本申请中的各个应用场景中的举例仅仅表现了一些可能的实现方式,是为了对本申请的方法更好的理解和说明。本领域技术人员可以根据申请提供的参考信号的指示方法,得到一些演变形式的举例。
上述对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,通信装置可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
如图14所示,基于同一技术构思,本申请实施例还提供了一种通信装置1400,该通信装置1400可以是通信装置,也可以是通信装置中的装置,或者是能够和通信装置匹配使用的装置。通信装置1400可以是终端设备或网络设备,。一种设计中,该通信装置1400可以包括执行上述方法实施例中终端设备或网络设备执行的方法/操作/步骤/动作所一一对应的模块,该模块可以是硬件电路,也可是软件,也可以是硬件电路结合软件实现。一种设计中,该装置可以包括处理模块1401和通信模块1402。处理模块1401用于调用通信模块1402执行接收和/或发送的功能。
处理模块1401,用于获取第一通信装置的第一位置信息;以及用于利用神经网络处理所述第一位置信息,获得第一信道信息。
第一信道信息为第一通信装置与第二通信装置之间无线信道的信息。
通信模块1402,还用于根据第一信道信息进行通信。
通信模块1402还用于执行上述方法实施例中通信装置执行的接收或发送信号相关的操作,处理模块1401还用于执行上述方法实施例中通信装置执行的除收发信号之外的其它操作,在此不再一一赘述。通信装置可以是终端设备,也可以是网络设备。
本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,另外,在本申请各个实施例中的各功能模块可以集成在一个处理器中,也可以是单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
如图15所示为本申请实施例提供的通信装置1500,用于实现上述方法中通信装置的功能。通信装置可以是终端设备,也可以是网络设备。当实现网络设备的功能时,该装置可以是网络设备,也可以是网络设备中的装置,或者是能够和网络设备匹配使用的装置。当实现终端设备的功能时,该装置可以是终端设备,也可以是终端设备中的装置,或者是能够和终端设备匹配使用的装置。其中,该通信装置可以为芯片系统。本申请实施例中,芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。通信装置1500包括至少一个处理器1520,用于实现本申请实施例提供的方法中终端设备或网络设备的功能。通信装置1500还可以包括通信接口1510。在本申请实施例中,通信接口可以是收发器、电路、 总线、模块或其它类型的通信接口,用于通过传输介质和其它装置进行通信。例如,通信接口1510用于通信装置1500中的装置可以和其它装置进行通信,例如,通信装置1500是终端设备时,其它装置可以是网络设备也可以是其它终端设备;又例如,通信装置1500是网络设备时,其它装置可以是终端设备也可以是其它网络设备;又例如,通信装置1500是芯片时,与通信设备中其他芯片或器件。处理器1520利用通信接口1510收发数据,并用于实现上述方法实施例所述的方法。示例性地,处理器1520,用于获取第一通信装置的第一位置信息;以及用于利用神经网络处理所述第一位置信息,获得第一信道信息,第一信道信息为第一通信装置与第二通信装置之间无线信道的信息;通信接口1510,还用于根据第一信道信息进行通信。
可选的,神经网络的参数根据历史数据训练得到,历史数据包括一条或多条历史位置信息和历史信道信息的映射,历史位置信息为训练装置与第二通信装置进行通信期间的位置信息,历史信道信息为训练装置在与第二通信装置进行通信期间的信道信息。
可选的,在利用神经网络处理第一位置信息,获得第一信道信息,处理器1520具体用于:
对第一位置信息进行处理,得到第二信道信息,第二信道信息的维度低于第一信道信息的维度;
对第二信道信息进行处理,得到第一信道信息。
可选的,处理器1520还用于:
根据历史信道信息对神经网络进行第一训练,第一训练的过程包括:将历史信道信息的维度由第一维度改变为第二维度并由第二维度改变为第一维度;
根据历史信道信息对应的历史位置信息以及第二维度的历史信道信息,对神经网络进行第二训练。
可选的,神经网络的激活函数为周期函数。
可选的,神经网络包括:
Φ(x)=W_n(φ_{n-1}∘φ_{n-2}∘…∘φ_0)(x)+b_n
其中，神经网络的第i层输入为x_i，神经网络的第i层输入的维度为M_i（M_i>0），i=0,1,…,n-1；神经网络的第i层输出为φ_i(x_i)，φ_i(x_i)=sin(W_i·x_i+b_i)，神经网络的第i层输出的维度为N_i（N_i>0）；W_i为神经网络的权重，b_i为神经网络的偏置；正弦函数sin作为神经网络的非线性激活函数；x为神经网络的输入，Φ(x)为神经网络的输出。
可选的,在利用神经网络处理第一位置信息,获得第一信道信息,处理器1520用于:
对第一位置信息进行处理,得到一个或多个第二位置信息,第二位置信息为第一位置信息的镜像点的函数;
对一个或多个第二位置信息进行处理,得到第一信道信息。
可选的,第二位置信息与第一位置信息的维度相同。
可选的,神经网络包括中间层,中间层神经元的个数是第一位置信息的维度的整数倍。
可选的,神经网络还包括径向基函数RBF层,RBF层用于对中间层输出进行处理。
可选的,RBF层的激活函数为周期性核函数。
可选的，RBF层的激活函数符合公式(8)：x为RBF层的输入，φ(x)为RBF层的输出，a、b、c、w、σ为待训练的参数。
可选的,第一信道信息包括上行信道信息和/或下行信道信息。
处理器1520和通信接口1510还可以用于执行上述方法实施例终端设备或网络设备执行的其它对应的步骤或操作,在此不再一一赘述。
通信装置1500还可以包括至少一个存储器1530,用于存储程序指令和/或数据。存储器1530和处理器1520耦合。本申请实施例中的耦合是装置、单元或模块之间的间接耦合或通信连接,可以是电性,机械或其它的形式,用于装置、单元或模块之间的信息交互。处理器1520可能和存储器1530协同操作。处理器1520可能执行存储器1530中存储的程序指令。所述至少一个存储器中的至少一个可以与处理器集成在一起。
本申请实施例中不限定上述通信接口1510、处理器1520以及存储器1530之间的具体连接介质。本申请实施例在图15中以存储器1530、处理器1520以及通信接口1510之间通过总线1540连接,总线在图15中以粗线表示,其它部件之间的连接方式,仅是进行示意性说明,并不引以为限。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图15中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
通信装置1400和通信装置1500具体是芯片或者芯片系统时,通信模块1402和通信接口1510所输出或接收的可以是基带信号。通信装置1400和通信装置1500具体是设备时,通信模块1402和通信接口1510所输出或接收的可以是射频信号。在本申请实施例中,处理器可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
在本申请实施例中,存储器1530可以是非易失性存储器,比如硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD)等,还可以是易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM)。存储器是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。本申请实施例中的存储器还可以是电路或者其它任意能够实现存储功能的装置,用于存储程序指令和/或数据。
本申请上述方法实施例描述的通信装置所执行的操作和功能中的部分或全部,可以用芯片或集成电路来完成。
本申请实施例提供了一种计算机可读存储介质,存储有计算机程序,该计算机程序包括用于执行上述方法实施例的指令。
本申请实施例提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述方法实施例。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图 和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本申请的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例作出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请范围的所有变更和修改。
显然,本领域的技术人员可以对本申请实施例进行各种改动和变型而不脱离本申请实施例的范围。这样,倘若本申请实施例的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。
Claims (29)
- A neural network-based channel estimation method, characterized in that the method comprises: acquiring first location information of a first communication apparatus; processing the first location information by using a neural network to obtain first channel information, wherein the first channel information is information of a wireless channel between the first communication apparatus and a second communication apparatus; and communicating according to the first channel information.
- The method according to claim 1, characterized in that parameters of the neural network are obtained by training on historical data, the historical data comprises one or more mappings between historical location information and historical channel information, the historical location information is location information of a training apparatus during communication with the second communication apparatus, and the historical channel information is channel information of the training apparatus during communication with the second communication apparatus.
- The method according to claim 1 or 2, characterized in that processing the first location information by using the neural network to obtain the first channel information comprises: processing the first location information to obtain second channel information, wherein a dimension of the second channel information is lower than a dimension of the first channel information; and processing the second channel information to obtain the first channel information.
- The method according to claim 3, characterized in that the method further comprises: performing first training on the neural network according to the historical channel information, wherein the first training comprises changing the dimension of the historical channel information from a first dimension to a second dimension and from the second dimension back to the first dimension; and performing second training on the neural network according to the historical location information corresponding to the historical channel information and the historical channel information of the second dimension.
- The method according to any one of claims 1 to 4, characterized in that an activation function of the neural network is a periodic function.
- The method according to claim 1 or 2, characterized in that processing the first location information by using the neural network to obtain the first channel information comprises: processing the first location information to obtain one or more pieces of second location information, wherein the second location information is a function of a mirror point of the first location information; and processing the one or more pieces of second location information to obtain the first channel information.
- The method according to claim 7, characterized in that a dimension of the second location information is the same as a dimension of the first location information.
- The method according to claim 7 or 8, characterized in that the neural network comprises an intermediate layer, and a quantity of neurons in the intermediate layer is an integer multiple of the dimension of the first location information.
- The method according to claim 9, characterized in that the neural network further comprises a radial basis function (RBF) layer, and the RBF layer is configured to process an output of the intermediate layer.
- The method according to claim 10, characterized in that an activation function of the RBF layer is a periodic kernel function.
- The method according to any one of claims 1 to 12, characterized in that the first channel information comprises uplink channel information and/or downlink channel information.
- A communication apparatus, characterized by comprising: a processing module, configured to acquire first location information of a first communication apparatus, and configured to process the first location information by using a neural network to obtain first channel information, wherein the first channel information is information of a wireless channel between the first communication apparatus and a second communication apparatus; and a communication module, further configured to communicate according to the first channel information.
- The apparatus according to claim 14, characterized in that parameters of the neural network are obtained by training on historical data, the historical data comprises one or more mappings between historical location information and historical channel information, the historical location information is location information of a training apparatus during communication with the second communication apparatus, and the historical channel information is channel information of the training apparatus during communication with the second communication apparatus.
- The apparatus according to claim 14 or 15, characterized in that, when processing the first location information by using the neural network to obtain the first channel information, the processing module is specifically configured to: process the first location information to obtain second channel information, wherein a dimension of the second channel information is lower than a dimension of the first channel information; and process the second channel information to obtain the first channel information.
- The apparatus according to claim 16, characterized in that the processing module is further configured to: perform first training on the neural network according to the historical channel information, wherein the first training comprises changing the dimension of the historical channel information from a first dimension to a second dimension and from the second dimension back to the first dimension; and perform second training on the neural network according to the historical location information corresponding to the historical channel information and the historical channel information of the second dimension.
- The apparatus according to any one of claims 14 to 17, characterized in that an activation function of the neural network is a periodic function.
- The apparatus according to claim 14 or 15, characterized in that, when processing the first location information by using the neural network to obtain the first channel information, the processing module is configured to: process the first location information to obtain one or more pieces of second location information, wherein the second location information is a function of a mirror point of the first location information; and process the one or more pieces of second location information to obtain the first channel information.
- The apparatus according to claim 20, characterized in that a dimension of the second location information is the same as a dimension of the first location information.
- The apparatus according to claim 20 or 21, characterized in that the neural network comprises an intermediate layer, and a quantity of neurons in the intermediate layer is an integer multiple of the dimension of the first location information.
- The apparatus according to claim 22, characterized in that the neural network further comprises a radial basis function (RBF) layer, and the RBF layer is configured to process an output of the intermediate layer.
- The apparatus according to claim 23, characterized in that an activation function of the RBF layer is a periodic kernel function.
- The apparatus according to any one of claims 14 to 25, characterized in that the first channel information comprises uplink channel information and/or downlink channel information.
- A communication apparatus, characterized by comprising a processor and a communication interface, wherein the communication interface is configured to communicate with other apparatuses, and the processor is configured to run a set of programs so that the method according to any one of claims 1 to 13 is performed.
- The apparatus according to claim 27, characterized by further comprising a memory configured to store the programs run by the processor.
- A computer-readable storage medium, characterized in that the computer storage medium stores computer-readable instructions, and when the computer-readable instructions are run on a communication apparatus, the method according to any one of claims 1 to 13 is performed.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21919088.1A EP4270824A4 (en) | 2021-01-15 | 2021-12-15 | CHANNEL ESTIMATION METHOD BASED ON A NEURAL NETWORK AND COMMUNICATION DEVICE |
US18/352,825 US20230362039A1 (en) | 2021-01-15 | 2023-07-14 | Neural network-based channel estimation method and communication apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110057615.8A CN114764610A (zh) | 2021-01-15 | 2021-01-15 | 一种基于神经网络的信道估计方法及通信装置 |
CN202110057615.8 | 2021-01-15 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/352,825 Continuation US20230362039A1 (en) | 2021-01-15 | 2023-07-14 | Neural network-based channel estimation method and communication apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022151900A1 true WO2022151900A1 (zh) | 2022-07-21 |
Family ID: 82365428
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/138374 WO2022151900A1 (zh) | Neural network-based channel estimation method and communication apparatus | | 2021-12-15 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230362039A1 (zh) |
EP (1) | EP4270824A4 (zh) |
CN (1) | CN114764610A (zh) |
WO (1) | WO2022151900A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117479182A (zh) * | 2022-07-22 | 2024-01-30 | Huawei Technologies Co., Ltd. | Information transmission method and communication apparatus |
- 2021-01-15: CN application CN202110057615.8A filed (publication CN114764610A); status: active, pending
- 2021-12-15: EP application EP21919088.1A filed (publication EP4270824A4); status: active, pending
- 2021-12-15: PCT application PCT/CN2021/138374 filed (publication WO2022151900A1); status: active, application filing
- 2023-07-14: US application US18/352,825 filed (publication US20230362039A1); status: active, pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108512621A (zh) * | 2018-03-02 | 2018-09-07 | Southeast University | Neural network-based wireless channel modeling method |
CN108768750A (zh) * | 2018-06-22 | 2018-11-06 | Guangdong Power Grid Co., Ltd. | Communication network fault locating method and apparatus |
WO2020051776A1 (en) * | 2018-09-11 | 2020-03-19 | Intel Corporation | Method and system of deep supervision object detection for reducing resource usage |
CN110061946A (zh) * | 2019-03-28 | 2019-07-26 | Nanjing University of Posts and Telecommunications | Deep signal detection method oriented to high-speed railways |
CN112153616A (zh) * | 2020-09-15 | 2020-12-29 | Binjiang College, Nanjing University of Information Science and Technology | Deep learning-based power control method in a millimeter-wave communication system |
Non-Patent Citations (1)
Title |
---|
See also references of EP4270824A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024158340A1 (en) * | 2023-01-25 | 2024-08-02 | Nanyang Technological University | Method of receiving a transmitted signal over a time-varying frequency-selective channel and receiver thereof |
Also Published As
Publication number | Publication date |
---|---|
CN114764610A (zh) | 2022-07-19 |
EP4270824A4 (en) | 2024-06-19 |
EP4270824A1 (en) | 2023-11-01 |
US20230362039A1 (en) | 2023-11-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Teganya et al. | | Deep completion autoencoders for radio map estimation |
CN112152948B (zh) | | Wireless communication processing method and apparatus |
WO2022151900A1 (zh) | | Neural network-based channel estimation method and communication apparatus |
WO2022174642A1 (zh) | | Data processing method based on space division, and communication apparatus |
CN114729982A (zh) | | Method and apparatus for positioning |
WO2024021440A1 (zh) | | Iterative focusing-type millimeter-wave integrated communication and sensing method |
CN112911608A (zh) | | Massive access method for edge intelligent networks |
CN115278526A (zh) | | Terminal positioning method and apparatus, electronic device, and storage medium |
CN113825236B (zh) | | Method for fusing sensing, computing, and communication in a wireless network |
CN116996142A (zh) | | Wireless channel parameter prediction method and apparatus, electronic device, and storage medium |
Vankayala et al. | | Deep-learning based proactive handover for 5G/6G mobile networks using wireless information |
Nagao et al. | | A study on path loss modeling using ResNet and pre-training with free space path loss |
WO2023097634A1 (zh) | | Positioning method, model training method, and device |
CN113030853B (zh) | | Multi-radiation-source passive positioning method based on joint RSS and AOA measurements |
WO2024059969A1 (zh) | | Channel estimation method, apparatus, and system |
WO2024109682A9 (zh) | | Method and apparatus for positioning |
CN113346970B (zh) | | User-level channel spatial-domain feature modeling method for wireless three-dimensional channels |
Ding et al. | | Multi-Base Station Radio Map Prediction Based on Residual Enhanced Dual Path UNet |
WO2023160656A1 (zh) | | Communication method and apparatus |
CN117200845B (zh) | | Millimeter-wave beam alignment method based on low-frequency signal position sensing |
WO2023115254A1 (zh) | | Method and apparatus for processing data |
WO2023070675A1 (zh) | | Data processing method and apparatus |
WO2024011565A1 (en) | | On-demand labelling for channel classification training |
US20240323719A1 (en) | | Radio frequency radiance field models for communication system control |
CN116684019A (zh) | | Channel prediction method and apparatus |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21919088; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 2021919088; Country of ref document: EP |
ENP | Entry into the national phase | Ref document number: 2021919088; Country of ref document: EP; Effective date: 20230727 |
NENP | Non-entry into the national phase | Ref country code: DE |