WO2022141397A1 - Neural network training method and related apparatus - Google Patents

Neural network training method and related apparatus

Info

Publication number
WO2022141397A1
WO2022141397A1 (application PCT/CN2020/142103)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
information
network
channel
reference signal
Prior art date
Application number
PCT/CN2020/142103
Other languages
English (en)
French (fr)
Inventor
孙琰
吴艺群
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN202080107089.5A priority Critical patent/CN116458103A/zh
Priority to PCT/CN2020/142103 priority patent/WO2022141397A1/zh
Priority to EP20967736.8A priority patent/EP4262121A4/en
Publication of WO2022141397A1 publication Critical patent/WO2022141397A1/zh
Priority to US18/345,904 priority patent/US20230342593A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202Channel estimation
    • H04L25/0224Channel estimation using sounding signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0613Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B7/0619Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B7/0621Feedback content
    • H04B7/0626Channel coefficients, e.g. channel state information [CSI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202Channel estimation
    • H04L25/024Channel estimation channel estimation algorithms
    • H04L25/0254Channel estimation channel estimation algorithms using neural network algorithms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/094Adversarial learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0613Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B7/0619Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00Modulated-carrier systems
    • H04L27/26Systems using multi-frequency codes
    • H04L27/2601Multicarrier modulation systems
    • H04L27/2626Arrangements specific to the transmitter only
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00Modulated-carrier systems
    • H04L27/26Systems using multi-frequency codes
    • H04L27/2601Multicarrier modulation systems
    • H04L27/2647Arrangements specific to the receiver only
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L5/00Arrangements affording multiple use of the transmission path
    • H04L5/003Arrangements for allocating sub-channels of the transmission path
    • H04L5/0048Allocation of pilot signals, i.e. of signals known to the receiver
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L5/00Arrangements affording multiple use of the transmission path
    • H04L5/003Arrangements for allocating sub-channels of the transmission path
    • H04L5/0048Allocation of pilot signals, i.e. of signals known to the receiver
    • H04L5/0051Allocation of pilot signals, i.e. of signals known to the receiver of dedicated pilots, i.e. pilots destined for a single user or terminal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L5/00Arrangements affording multiple use of the transmission path
    • H04L5/003Arrangements for allocating sub-channels of the transmission path
    • H04L5/0053Allocation of signaling, i.e. of overhead other than pilot signals
    • H04L5/0057Physical resource allocation for CQI

Definitions

  • the present application relates to the field of communication technologies, and in particular, to a neural network training method and related devices.
  • a wireless communication system may include three parts: a sending end, a channel, and a receiving end. The channel carries the signals exchanged between the sending end and the receiving end.
  • the sending end may be an access network device, such as a base station (base station, BS), and the receiving end may be a terminal device.
  • the sending end may be a terminal device, and the receiving end may be an access network device.
  • the above-mentioned sending end and receiving end may be optimized. The sending end and the receiving end each have an independent mathematical model, so they are usually optimized independently based on their respective models. For example, a mathematical channel model can be used to generate channel samples with which the sending end and the receiving end are optimized.
  • however, the channel samples generated by the protocol-defined mathematical channel model cannot reflect the real channel environment, while transmitting a large number of actual channel samples between the sending end and the receiving end occupies too many air interface resources and reduces data transmission efficiency.
  • a first aspect of the embodiments of the present application provides a method for training a neural network, including:
  • the first device receives first channel sample information from a second device; the first device determines a first neural network, where the first neural network is obtained by training based on the first channel sample information, and the first neural network is used for inference to obtain second channel sample information.
  • the first device is an access network device and the second device is a terminal device, as an example for description. It can be understood that the first device may be an access network device, a chip used in the access network device, or a circuit used in the access network device, etc., and the second device may be a terminal device, a chip used in the terminal device, or a circuit used in the terminal device, etc.
  • the method includes: the first device obtains the first neural network by training according to the first channel sample information.
  • the first neural network is used to generate new channel sample information, such as second channel sample information.
  • the method includes: the first device receives information of the first neural network from the third device, and determines the first neural network according to the information of the first neural network.
  • the first neural network is obtained by training the third device according to the first channel sample information.
  • the first neural network can be obtained by training according to the first channel sample information.
  • the first neural network is used for inference to obtain second channel sample information.
  • the second channel sample information is used to train a second neural network and/or a third neural network, and the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
  • the method includes: the first device trains the second neural network and/or the third neural network according to the second channel sample information, or according to the second channel sample information and the first channel sample information, where the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
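As a greatly simplified, hypothetical illustration of this training step (not the patent's actual networks), the sketch below trains a linear "encoder" standing in for the second neural network and a linear "decoder" standing in for the third, on stand-in generated channel samples, minimizing reconstruction error by plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the second channel sample information: N samples of dimension D,
# drawn from a correlated Gaussian in place of a real generative network's output.
N, D, K = 2000, 16, 4                  # K = compressed feedback dimension
A = rng.normal(size=(D, D)) / np.sqrt(D)
H = rng.normal(size=(N, D)) @ A        # channel samples

W_enc = rng.normal(size=(D, K)) * 0.1  # "second neural network" (encoder)
W_dec = rng.normal(size=(K, D)) * 0.1  # "third neural network" (decoder)

def mse():
    return np.mean((H @ W_enc @ W_dec - H) ** 2)

lr = 1e-2
loss0 = mse()
for _ in range(500):
    Z = H @ W_enc                      # compressed representation
    E = Z @ W_dec - H                  # reconstruction error
    W_dec -= lr * (Z.T @ E) / N        # gradient step on decoder
    W_enc -= lr * (H.T @ (E @ W_dec.T)) / N  # gradient step on encoder

assert mse() < loss0                   # training reduced reconstruction error
```

A real system would use deep nonlinear networks and quantized feedback; this only shows the shape of the joint training loop driven by generated channel samples.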
  • the method includes: the first device receives information of the second neural network and/or information of the third neural network from a third device, and the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
  • the second neural network and/or the third neural network is obtained by training the third device according to the second channel sample information, or the second channel sample information and the first channel sample information.
  • in this way, the air interface signaling overhead can be effectively reduced while adapting to the channel environment; the second neural network and the third neural network obtained by training better match the actual channel environment, which improves communication performance.
  • the speed of training the second neural network and the third neural network is also greatly improved.
  • the method further includes: the first device sends the first reference signal to the second device.
  • the first reference signal includes a demodulation reference signal DMRS or a channel state information reference signal CSI-RS.
  • the sequence type of the first reference signal includes a ZC (Zadoff-Chu) sequence or a Gold sequence.
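For context on these sequence types: a root-u Zadoff-Chu sequence of odd length N_zc is x[n] = exp(-j·pi·u·n·(n+1)/N_zc); it has constant amplitude and ideal periodic autocorrelation, which is why such sequences are favored for reference signals. A minimal generator (illustrative, not the exact 3GPP construction):

```python
import numpy as np

def zadoff_chu(u: int, n_zc: int) -> np.ndarray:
    """Root-u Zadoff-Chu sequence of odd length n_zc (gcd(u, n_zc) = 1)."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

zc = zadoff_chu(u=25, n_zc=139)

# Constant amplitude (CAZAC property)
assert np.allclose(np.abs(zc), 1.0)

# Ideal periodic autocorrelation: zero at every nonzero cyclic lag
corr = np.fft.ifft(np.fft.fft(zc) * np.conj(np.fft.fft(zc)))
assert np.allclose(np.abs(corr[1:]), 0.0, atol=1e-9)
```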
  • the first channel sample information includes but is not limited to a second reference signal and/or channel state information (channel state information, CSI).
  • the second reference signal is the first reference signal after propagation through the channel; in other words, the second reference signal is the first reference signal as received by the second device from the first device.
  • the information of the first neural network includes: a model change amount of the first neural network relative to the reference neural network.
  • the information of the first neural network includes one or more of the following: the weights of the neural network, the activation functions of the neurons, the number of neurons in each layer of the neural network, the cascade relationship between the layers of the neural network, and the network type of each layer in the neural network.
  • the first neural network is a generative neural network.
  • the first neural network is a Generative Adversarial Network (GAN) or a Variational Autoencoder (VAE).
  • second channel sample information with the same distribution as, or a distribution similar to, the first channel sample information can be obtained, so that the second channel sample information is closer to the actual channel environment.
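The generative model (GAN or VAE) learns the distribution of the first channel sample information and draws new samples from it. As a greatly simplified numpy stand-in for that idea (a real implementation would train a neural generator), one can fit a complex Gaussian to the observed channel samples and draw "second" samples with matched first- and second-order statistics:

```python
import numpy as np

rng = np.random.default_rng(1)

# "First" channel sample information: complex channel vectors from the terminal.
n_obs, dim = 5000, 8
L = rng.normal(size=(dim, dim)) * 0.3
h_first = (rng.normal(size=(n_obs, dim)) + 1j * rng.normal(size=(n_obs, dim))) @ L

# Fit mean and covariance -- the "learned distribution".
mu = h_first.mean(axis=0)
Hc = h_first - mu
cov = Hc.conj().T @ Hc / n_obs

# Draw "second" channel samples from the fitted distribution.
Lc = np.linalg.cholesky(cov + 1e-9 * np.eye(dim))
w = (rng.normal(size=(20000, dim)) + 1j * rng.normal(size=(20000, dim))) / np.sqrt(2)
h_second = mu + w @ Lc.conj().T

# The generated samples reproduce the observed second-order statistics.
Hc2 = h_second - h_second.mean(axis=0)
cov2 = Hc2.conj().T @ Hc2 / 20000
assert np.allclose(cov2, cov, atol=0.1 * np.abs(cov).max())
```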
  • the method further includes: the first device receives capability information of the second device from the second device.
  • the capability information is used to indicate one or more of the following information:
  • the communication modules include but are not limited to: an OFDM modulation module, an OFDM demodulation module, a constellation mapping module, a constellation demapping module, a channel coding module, a channel decoding module, a precoding module, an equalization module, an interleaving module, and/or a deinterleaving module.
  • the first device can receive the capability information sent by the second device.
  • the capability information is used to notify the first device of relevant information of the second device, and the first device can perform operations on the third neural network according to the capability information to ensure that the second device can normally use the third neural network.
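For illustration, the OFDM modulation/demodulation pair from the module list above can be sketched as an IFFT plus cyclic prefix at the transmitter and the inverse at the receiver (simplified: no pulse shaping, no channel):

```python
import numpy as np

def ofdm_modulate(symbols: np.ndarray, cp_len: int) -> np.ndarray:
    """Map frequency-domain symbols to a time-domain OFDM symbol with cyclic prefix."""
    x = np.fft.ifft(symbols)
    return np.concatenate([x[-cp_len:], x])  # prepend cyclic prefix

def ofdm_demodulate(rx: np.ndarray, n_fft: int, cp_len: int) -> np.ndarray:
    """Strip the cyclic prefix and return frequency-domain symbols."""
    return np.fft.fft(rx[cp_len:cp_len + n_fft])

rng = np.random.default_rng(2)
n_fft, cp = 64, 16
# Random QPSK symbols on all subcarriers
qpsk = (rng.choice([-1, 1], n_fft) + 1j * rng.choice([-1, 1], n_fft)) / np.sqrt(2)

tx = ofdm_modulate(qpsk, cp)
assert len(tx) == n_fft + cp
assert np.allclose(ofdm_demodulate(tx, n_fft, cp), qpsk)  # ideal-channel roundtrip
```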
  • the method further includes: the first device sends the information of the third neural network to the second device.
  • the information of the third neural network includes but is not limited to: the weights of the neural network, the activation functions of the neurons, the number of neurons in each layer of the neural network, the cascade relationship between the layers of the neural network, and/or the network type of each layer of the neural network.
  • the information of the third neural network may indicate different activation functions for different neurons.
  • the information of the third neural network may also be the model variation of the third neural network.
  • the model change amount includes but is not limited to: the changed weights of the neural network, the changed activation functions, the changed number of neurons in one or more layers of the neural network, the changed cascade connections between the layers of the neural network, and/or the changed network type of one or more layers in the neural network.
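A hypothetical sketch of signaling a model change amount: rather than sending full weights, the sender transmits only the difference from a shared reference network, and the receiver applies it. The layer names and structure here are purely illustrative:

```python
import numpy as np

# Reference network known to both sides (weights per named layer)
reference = {"layer0": np.ones((4, 4)), "layer1": np.ones((4, 2))}

# Updated network after local training (here: a small perturbation of layer0 only)
updated = {k: v.copy() for k, v in reference.items()}
updated["layer0"] += 0.05

# Sender: encode only the layers that changed, as deltas versus the reference
delta = {k: updated[k] - reference[k]
         for k in reference if not np.array_equal(updated[k], reference[k])}
assert list(delta) == ["layer0"]          # unchanged layers are not signaled

# Receiver: reconstruct the updated network from reference + delta
rebuilt = {k: reference[k] + delta.get(k, 0) for k in reference}
assert all(np.allclose(rebuilt[k], updated[k]) for k in reference)
```

The payload shrinks in proportion to how few layers changed, which matches the stated motivation of reducing air interface overhead.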
  • the pre-configuration may be configured by the access network device through signaling, and the pre-definition may be defined in a protocol.
  • the protocol pre-defines the third neural network in the terminal device as neural network A.
  • the information of the third neural network may have various implementation schemes, which improves the implementation flexibility of the scheme.
  • embodiments of the present application provide a method for training a neural network, including: a second device performs channel estimation according to a first reference signal received from a first device, and determines first channel sample information; the second device sends the first channel sample information to the first device; the second device receives information of a third neural network from the first device, where the third neural network is used by the second device and the first device to transmit target information.
  • the method further includes: the second device sends capability information of the second device to the first device.
  • for introductions to the information of the third neural network and the capability information of the second device, please refer to the first aspect; details are not repeated here.
  • an embodiment of the present application proposes a method for training a neural network, including:
  • the first device sends the first reference signal to the second device; the first device receives the information of the first neural network from the second device; the first neural network is used for inference to obtain the second channel sample information.
  • the second device (e.g., a terminal device) obtains the first neural network by training according to the first channel sample information.
  • the first neural network is used for inference to obtain second channel sample information.
  • the second channel sample information is used to train a second neural network and/or a third neural network, and the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
  • the method includes: the first device trains the second neural network and/or the third neural network according to the second channel sample information, or according to the second channel sample information and the first channel sample information, where the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
  • the method includes: the first device receives information of the second neural network and/or information of the third neural network from a third device, and the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
  • the second neural network and/or the third neural network is obtained by training the third device according to the second channel sample information, or the second channel sample information and the first channel sample information.
  • in this way, the air interface signaling overhead can be effectively reduced while adapting to the channel environment; the second neural network and the third neural network obtained by training better match the actual channel environment, which improves communication performance.
  • the speed of training the second neural network and the third neural network is also greatly improved.
  • for introductions to the first reference signal, the first neural network, the information of the first neural network, and the second neural network and/or the third neural network, reference may be made to the first aspect; details are not repeated here.
  • the method further includes: the first device sends the information of the third neural network to the second device.
  • the method also includes:
  • the first device receives capability information from the second device, where the capability information is used to indicate one or more of the following information of the second device:
  • an embodiment of the present application proposes a method for training a neural network, including:
  • the second device performs channel estimation according to the first reference signal received from the first device, and determines the first channel sample information; the second device determines the first neural network, and the first neural network is obtained by training according to the first channel sample information; The second device sends the information of the first neural network to the first device.
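The channel-estimation step can be illustrated with per-subcarrier least-squares estimation from a known reference signal, a simplification of what the second device would do (pilot values and noise level here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sc = 64                                   # subcarriers carrying the reference signal

# First reference signal, known to both devices (unit-amplitude pilots)
x = np.exp(1j * 2 * np.pi * rng.random(n_sc))

# True channel, and the second reference signal observed after propagation
h = (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)) / np.sqrt(2)
noise = 0.01 * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))
y = h * x + noise

# Per-subcarrier least-squares estimate: h_hat = y / x
h_hat = y / x                               # first channel sample information
assert np.mean(np.abs(h_hat - h) ** 2) < 1e-3
```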
  • for introductions to the first reference signal, the first neural network, the first channel sample information, and the information of the first neural network, reference may be made to the third aspect; details are not repeated here.
  • the method also includes:
  • the second device receives information of the third neural network from the first device.
  • for an introduction to the information of the third neural network, please refer to the third aspect; details are not repeated here.
  • the method further includes: the second device sends capability information to the first device.
  • for an introduction to the capability information, please refer to the third aspect; details are not repeated here.
  • an embodiment of the present application proposes a method for training a neural network, including: a third device receives first channel sample information from a first device; the third device obtains a first neural network by training according to the first channel sample information, where the first neural network is used for inference to obtain second channel sample information.
  • the method further includes: training a second neural network and/or a third neural network according to the second channel sample information, and sending information of the second neural network and/or the third neural network to the first device, where the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
  • in a sixth aspect, an apparatus is provided. The apparatus may be an access network device, a device in an access network device, or a device that can be used in combination with the access network device.
  • the apparatus may include modules that perform, in one-to-one correspondence, the methods/operations/steps/actions described in the first aspect; the modules may be hardware circuits, software, or a combination of hardware circuits and software.
  • In one design, the apparatus may include a processing module and a transceiver module.
  • the transceiver module is configured to receive the first channel sample information from the second device;
  • the processing module is used for determining the first neural network, the first neural network is obtained by training according to the first channel sample information, and the first neural network is used for inference to obtain the second channel sample information.
  • for introductions to the first channel sample information, the second channel sample information, and other operations, please refer to the first aspect; details are not repeated here.
  • the apparatus may include modules that perform, in one-to-one correspondence, the methods/operations/steps/actions described in the third aspect; the modules may be hardware circuits, software, or a combination of hardware circuits and software.
  • In one design, the apparatus may include a processing module and a transceiver module.
  • the transceiver module is used for sending the first reference signal to the second device; and receiving the information of the first neural network from the second device; the first neural network is used for inference to obtain the second channel sample information.
  • for introductions to the first neural network, the first channel sample information, the second channel sample information, and other operations, please refer to the third aspect; details are not repeated here.
  • in a seventh aspect, an apparatus is provided. The apparatus may be a terminal device, a device in a terminal device, or a device that can be used in combination with the terminal device.
  • the apparatus may include modules that perform, in one-to-one correspondence, the methods/operations/steps/actions described in the second aspect; the modules may be hardware circuits, software, or a combination of hardware circuits and software.
  • In one design, the apparatus may include a processing module and a transceiver module.
  • a processing module, configured to perform channel estimation according to the first reference signal received from the first device, and determine first channel sample information;
  • a transceiver module, configured to send the first channel sample information to the first device;
  • the transceiver module is further configured to receive information of the third neural network from the first device, where the third neural network is used by the second device and the first device to transmit target information.
  • for introductions to the first reference signal, the first channel sample information, the third neural network, and other operations, please refer to the second aspect; details are not repeated here.
  • the apparatus may include modules that perform, in one-to-one correspondence, the methods/operations/steps/actions described in the fourth aspect; the modules may be hardware circuits, software, or a combination of hardware circuits and software.
  • In one design, the apparatus may include a processing module and a transceiver module.
  • a processing module, configured to perform channel estimation according to the first reference signal received from the first device, and determine first channel sample information;
  • the processing module is also used to determine the first neural network, and the first neural network is obtained by training according to the first channel sample information;
  • the transceiver module is used for sending the information of the first neural network to the first device.
  • for introductions to the first reference signal, the first channel sample information, the first neural network, and other operations, please refer to the fourth aspect; details are not repeated here.
  • in an eighth aspect, an apparatus is provided. The apparatus may be an AI node, a device in an AI node, or a device that can be used in combination with an AI node.
  • the apparatus may include modules that perform, in one-to-one correspondence, the methods/operations/steps/actions described in the fifth aspect; the modules may be hardware circuits, software, or a combination of hardware circuits and software.
  • In one design, the apparatus may include a processing module and a transceiver module.
  • the transceiver module is configured to receive the first channel sample information from the first device
  • the processing module is used for obtaining a first neural network by training according to the first channel sample information, and the first neural network is used for inference to obtain the second channel sample information.
  • the processing module is further configured to train a second neural network and/or a third neural network according to the second channel sample information; the transceiver module is further configured to send information of the second neural network and/or the third neural network to the first device, where the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
  • an embodiment of the present application provides an apparatus.
  • the apparatus includes a processor for implementing the method described in the first aspect above.
  • the apparatus may also include a memory for storing instructions and data.
  • the memory is coupled to the processor, and when the processor executes the instructions stored in the memory, the method described in the first aspect can be implemented.
  • the apparatus may also include a communication interface for the apparatus to communicate with other devices, for example, the communication interface may be a transceiver, circuit, bus, module or other type of communication interface.
  • the device includes:
  • a processor for receiving first channel sample information from the second device using the communication interface
  • the processor is further configured to determine a first neural network, the first neural network is obtained by training according to the first channel sample information, and the first neural network is used for inference to obtain the second channel sample information.
  • for introductions to the first channel sample information, the second channel sample information, and other operations, please refer to the first aspect; details are not repeated here.
  • the apparatus includes a processor for implementing the method described in the third aspect.
  • the apparatus may also include a memory for storing instructions and data.
  • the memory is coupled to the processor, and when the processor executes the instructions stored in the memory, the method described in the third aspect can be implemented.
  • the apparatus may also include a communication interface for the apparatus to communicate with other devices, for example, the communication interface may be a transceiver, circuit, bus, module or other type of communication interface.
  • the device includes:
  • the processor is used for sending the first reference signal to the second device by using the communication interface; and receiving the information of the first neural network from the second device; the first neural network is used for inference to obtain the second channel sample information.
  • for introductions to the first neural network, the first channel sample information, the second channel sample information, and other operations, please refer to the third aspect; details are not repeated here.
  • an embodiment of the present application provides an apparatus.
  • the apparatus includes a processor for implementing the method described in the second aspect above.
  • the apparatus may also include a memory for storing instructions and data.
  • the memory is coupled to the processor, and when the processor executes the instructions stored in the memory, the method described in the second aspect above can be implemented.
  • the apparatus may also include a communication interface for the apparatus to communicate with other devices, for example, the communication interface may be a transceiver, circuit, bus, module or other type of communication interface.
  • the device includes:
  • the processor is configured to use the communication interface to perform channel estimation according to the first reference signal received from the first device, determine the first channel sample information, send the first channel sample information to the first device, and receive the information of the third neural network from the first device; the third neural network is used for the second device and the first device to transmit target information.
  • For the introduction of the first reference signal, the first channel sample information, the third neural network, and other operations, please refer to the second aspect, which will not be repeated here.
  • the apparatus includes a processor for implementing the method described in the fourth aspect above.
  • the apparatus may also include a memory for storing instructions and data.
  • the memory is coupled to the processor, and when the processor executes the instructions stored in the memory, the method described in the fourth aspect can be implemented.
  • the apparatus may also include a communication interface for the apparatus to communicate with other devices, for example, the communication interface may be a transceiver, circuit, bus, module or other type of communication interface.
  • the device includes:
  • a processor configured to use the communication interface to perform channel estimation according to the first reference signal received from the first device, determine the first channel sample information, determine the first neural network, and send the information of the first neural network to the first device; the first neural network is obtained by training according to the first channel sample information.
  • For the introduction of the first reference signal, the first channel sample information, the first neural network, and other operations, please refer to the fourth aspect, which will not be repeated here.
  • an embodiment of the present application provides an apparatus.
  • the apparatus includes a processor for implementing the method described in the fifth aspect.
  • the apparatus may also include a memory for storing instructions and data.
  • the memory is coupled to the processor, and when the processor executes the instructions stored in the memory, the method described in the fifth aspect can be implemented.
  • the apparatus may also include a communication interface for the apparatus to communicate with other devices, for example, the communication interface may be a transceiver, circuit, bus, module or other type of communication interface.
  • the device includes:
  • a processor configured to receive the first channel sample information from the first device by using the communication interface, and obtain a first neural network by training according to the first channel sample information; the first neural network is used for inference to obtain the second channel sample information.
  • the processor is further configured to train a second neural network and/or a third neural network according to the second channel sample information, and send information of the second neural network and/or the third neural network; the second neural network and/or the third neural network is used for the first device and the second device to transmit target information.
  • embodiments of the present application further provide a computer-readable storage medium, including instructions, which, when executed on a computer, cause the computer to execute any one of the methods in the first to fifth aspects.
  • the embodiments of the present application further provide a computer program product, including instructions, which, when executed on a computer, cause the computer to execute any one of the methods in the first to fifth aspects.
  • an embodiment of the present application provides a chip system, where the chip system includes a processor, and may also include a memory, for implementing any of the methods in the first to fifth aspects above.
  • the chip system can be composed of chips, and can also include chips and other discrete devices.
  • an embodiment of the present application further provides a communication system, which includes:
  • the apparatus of the ninth aspect, the apparatus of the tenth aspect, and the apparatus of the eleventh aspect.
  • FIG. 1 is a schematic diagram of a network architecture provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a hardware structure of a communication device provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a neuron structure provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a layer relationship of a neural network provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a convolutional neural network CNN provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a recurrent neural network RNN provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a generative adversarial network GAN provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a variational autoencoder provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a neural network architecture for jointly optimizing constellation modulation/demodulation for transceivers according to an embodiment of the present application.
  • FIGS. 10-12 are schematic flowcharts of a training method for a neural network provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a generator network of the first neural network provided by an embodiment of the application.
  • FIG. 14 is a schematic structural diagram of a discriminator network of the first neural network provided by an embodiment of the application.
  • FIG. 15a is a schematic structural diagram of a generator network of a first neural network provided by an embodiment of the application.
  • FIG. 15b is a schematic structural diagram of a discriminator network of a first neural network provided by an embodiment of the application.
  • FIGS. 16a and 16b are schematic diagrams of network structures provided by the embodiments of the present application.
  • FIG. 17 is a schematic diagram of a communication apparatus according to an embodiment of the present application.
  • a wireless communication system includes communication devices, and wireless resources can be used for wireless communication between the communication devices.
  • the communication devices may include access network devices and terminal devices, and the access network devices may also be referred to as access-side devices.
  • Radio resources may include link resources and/or air interface resources.
  • the air interface resources may include at least one of time domain resources, frequency domain resources, code resources and space resources.
  • “at least one” may also be described as “one or more”, and “a plurality” may be two, three, four or more, which is not limited in this application.
  • “/” may indicate an “or” relationship between the associated objects, for example, A/B may indicate A or B; “and/or” may be used to describe three relationships between the associated objects, for example, A and/or B may indicate that A exists alone, A and B exist at the same time, or B exists alone, where A and B may be singular or plural.
  • words such as “first” and “second” may be used to distinguish technical features with the same or similar functions. The words “first”, “second” and the like do not limit the quantity or execution order, and the objects modified by “first” and “second” are not necessarily different.
  • words such as “exemplary” or “for example” are used to represent examples, illustrations or descriptions, and any embodiment or design solution described as “exemplary” or “for example” should not be construed as preferred or advantageous over other embodiments or design solutions.
  • the use of words such as “exemplary” or “such as” is intended to present the relevant concepts in a specific manner to facilitate understanding.
  • FIG. 1 it is a schematic diagram of a network architecture to which this embodiment of the present application is applied.
  • the communication system in this embodiment of the present application may be a system including an access network device (such as a base station as shown in FIG. 1 ) and a terminal device, or may be a system including two or more terminal devices.
  • the access network device can send configuration information to the terminal device, and the terminal device performs corresponding configuration according to the configuration information.
  • the access network device may send downlink data to the terminal device, and/or the terminal device may send uplink data to the access network device.
  • terminal device 1 can send configuration information to terminal device 2, and terminal device 2 performs corresponding configuration according to the configuration information; terminal device 1 can send data to terminal device 2, and terminal device 2 can also send data to terminal device 1.
  • the access network device may implement one or more of the following artificial intelligence (artificial intelligence, AI) functions: model training and reasoning.
  • the network side may include a node independent of the access network device for implementing one or more of the following AI functions: model training and reasoning.
  • the node may be called an AI node, a model training node, an inference node, a wireless intelligent controller, or other names without limitation.
  • the model training function and the inference function can both be realized by the access network device; or, both can be realized by the AI node; or, the model training function can be realized by the AI node, the information of the model can be sent to the access network device, and the inference function is implemented by the access network device.
  • when the AI node implements the inference function, the AI node can send the inference result to the access network device for use by the access network device, and/or the AI node can send the inference result to the terminal device through the access network device for use by the terminal device.
  • the access network device can use the inference result, or send the inference result to the terminal device for the terminal device to use.
  • the AI node can be separated into two nodes, one of which implements the model training function and the other node implements the inference function.
  • the embodiments of the present application do not limit the specific number of network elements in the involved communication system.
  • the terminal device involved in the embodiments of the present application may also be referred to as a terminal or an access terminal, and may be a device with a wireless transceiver function.
  • the terminal equipment may communicate with one or more core networks (CNs) via access network equipment.
  • the terminal equipment may be a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, user equipment (UE), a user agent or a user device, and the like.
  • Terminal equipment can be deployed on land, including indoor or outdoor, handheld or vehicle-mounted; can also be deployed on water (such as ships, etc.); and can also be deployed in the air (such as aircraft, balloons and satellites, etc.).
  • the terminal device can be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a smart phone (smart phone), a mobile phone (mobile phone), a wireless local loop (WLL) station, personal digital assistant (PDA).
  • the terminal device can also be a handheld device with wireless communication function, a computing device or other device, a vehicle-mounted device, a wearable device, a drone device, a terminal in the Internet of Things or the Internet of Vehicles, a terminal in a fifth generation (5G) mobile communication network, a relay user equipment, or a terminal in a future evolved mobile communication network, etc.
  • the relay user equipment may be, for example, a 5G residential gateway (RG).
  • the terminal equipment may be a virtual reality (VR) terminal, an augmented reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in smart city, a wireless terminal in smart home, etc.
  • the device for realizing the function of the terminal may be a terminal; it may also be a device capable of supporting the terminal to realize the function, such as a chip system, and the device may be installed in the terminal or used in matching with the terminal.
  • the chip system may be composed of chips, and may also include chips and other discrete devices.
  • the access network device can be regarded as a sub-network of the operator's network, and is the implementation system between the service node and the terminal device in the operator's network.
  • the terminal device may first pass through the access network device, and then connect to the service node of the operator's network through the access network device.
  • the access network device in the embodiments of the present application is a device located in a (radio) access network ((radio) access network, (R)AN) and capable of providing a wireless communication function for a terminal device.
  • Access network equipment includes base stations, including but not limited to: a next generation node B (gNB) in a 5G system, an evolved node B (eNB) in a long term evolution (LTE) system, a radio network controller (RNC), a node B (NB), a base station controller (BSC), a base transceiver station (BTS), a home base station (for example, a home evolved nodeB or a home node B, HNB), a base band unit (BBU), a transmitting and receiving point (TRP), a transmitting point (TP), a small base station device (pico), a mobile switching center, or access network equipment in a future network.
  • the names of devices with access network device functions may be different.
  • the device for implementing the function of the access network device may be the access network device; it may also be a device capable of supporting the access network device to realize the function, such as a chip system, and the device may be installed in the access network device or used in matching with the access network device.
  • 5G can also be called new radio (NR).
  • the technical solutions provided in the embodiments of the present application can be applied to various communication scenarios, for example, to one or more of the following: enhanced mobile broadband (eMBB) communication, ultra-reliable low-latency communication (URLLC), machine type communication (MTC), massive machine-type communication (mMTC), device-to-device (D2D) communication, vehicle-to-everything (V2X) communication, vehicle-to-vehicle (V2V) communication, and the internet of things (IoT), etc.
  • the term “communication” may also be described as “transmission”, “information transmission”, or the like.
  • the technical solution is described by taking the communication between the access network device and the terminal device as an example.
  • Those skilled in the art can also use this technical solution for communication between other scheduling entities and subordinate entities, such as communication between a macro base station and a micro base station, and/or communication between terminal equipment 1 and terminal equipment 2, and so on.
  • FIG. 2 is a schematic diagram of a hardware structure of a communication device according to an embodiment of the present application.
  • the communication apparatus may be a possible implementation manner of an AI node, an access network device, or a terminal device in the embodiment of the present application.
  • the communication device may be an AI node, may be a device in the AI node, or may be a device that can be matched and used with the AI node.
  • the communication device may be an access network device, a device in the access network device, or a device that can be matched and used with the access network device.
  • the communication device may be a terminal device, may be a device in the terminal device, or may be a device that can be matched and used with the terminal device. Wherein, the device may be a chip system.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the connections in the embodiments of the present application are indirect couplings or communication connections between devices, units, or modules, which may be electrical, mechanical, or other forms, and are used for information exchange between devices, units, or modules.
  • the communication apparatus includes at least one processor 204 for implementing the technical solutions provided by the embodiments of the present application.
  • the communication device may further include a memory 203 .
  • Memory 203 is used to store instructions 2031 and/or data 2032 .
  • the memory 203 is connected to the processor 204 .
  • the processor 204 may cooperate with the memory 203 .
  • the processor 204 may execute the instructions stored in the memory 203 to implement the technical solutions provided by the embodiments of the present application.
  • the communication device may also include a transceiver 202 for receiving and/or transmitting signals.
  • the communication device may further include one or more of the following: an antenna 206 , an input/output (I/O) interface 210 and a bus 212 .
  • the transceiver 202 further includes a transmitter 2021 and a receiver 2022.
  • the processor 204 , the transceiver 202 , the memory 203 and the I/O interface 210 are communicatively connected to each other through the bus 212 , and the antenna 206 is connected to the transceiver 202 .
  • the bus 212 may include an address bus, a data bus, and/or a control bus, etc.
  • the bus 212 is represented by only one thick line in FIG. 2 , but this does not mean that there is only one bus or one type of bus.
  • the processor 204 may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array, and/or other programmable logic devices, such as discrete gate or transistor logic devices and/or discrete hardware components.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the processor 204 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The steps of the method disclosed in conjunction with the embodiments of the present application may be embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the processor 204 may be a central processing unit (CPU), or a dedicated processor, such as, but not limited to, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA), etc.
  • the processor 204 may also be a neural network processing unit (NPU).
  • the processor 204 may also be a combination of multiple processors. In the technical solutions provided in the embodiments of the present application, the processor 204 may be configured to execute relevant steps in subsequent method embodiments.
  • the processor 204 may be a processor specially designed to perform the above steps and/or operations, or may be a processor that performs the above steps and/or operations by reading and executing the instructions 2031 stored in the memory 203; the processor 204 may require the data 2032 in performing the above steps and/or operations.
  • the transceiver 202 includes a transmitter 2021 and a receiver 2022 .
  • the transmitter 2021 is configured to transmit signals through at least one of the antennas 206 .
  • the receiver 2022 is configured to receive signals through at least one of the antennas 206.
  • the transceiver 202 is used to support the communication device to perform a receiving function and a sending function.
  • a component with a processing function may be regarded as the processor 204 .
  • the receiver 2022 can also be referred to as an input port, a receiving circuit, a receiving bus or other devices that implement a receiving function, and the transmitter 2021 can be referred to as a transmitting port, a transmitting circuit, a transmitting bus, or other devices that implement a transmitting function.
  • Transceiver 202 may also be referred to as a communication interface.
  • the processor 204 may be configured to execute the instructions stored in the memory 203, for example, to control the transceiver 202 to receive messages and/or send messages, so as to complete the functions of the communication apparatus in the method embodiments of the present application.
  • the function of the transceiver 202 may be implemented by a transceiver circuit or a dedicated transceiver chip.
  • receiving a message by the transceiver 202 may be understood as an input message by the transceiver 202
  • sending a message by the transceiver 202 may be understood as an output message by the transceiver 202 .
  • the memory 203 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), etc., or may be a volatile memory, such as a random-access memory (RAM).
  • The memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • the memory in this embodiment of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
  • the memory 203 is specifically used to store the instructions 2031 and the data 2032, and the processor 204 can perform the steps and/or operations in the method embodiments of the present application by reading and executing the instructions 2031 stored in the memory 203; the data 2032 may be required in performing the operations and/or steps in the method embodiments.
  • the communication apparatus may further include an I/O interface 210, and the I/O interface 210 is used for receiving instructions and/or data from peripheral devices, and outputting instructions and/or data to peripheral devices.
  • Machine learning has attracted extensive attention from academia and industry in recent years. Due to the huge advantages of machine learning in processing structured information and massive data, many researchers in the field of communication have also turned their attention to machine learning.
  • Machine learning-based communication technologies have great potential for signal classification, channel estimation, and/or performance optimization.
  • Most communication systems are designed block by block, which means that these communication systems consist of multiple modules.
  • In a module-based design, many techniques can be developed to optimize the performance of each module. However, the best performance of each module does not necessarily mean the best performance of the entire communication system. End-to-end optimization (i.e., optimizing the entire communication system) may be better than optimizing a single module.
  • Machine learning provides an advanced and powerful tool for maximizing end-to-end performance.
  • Machine learning is an important technological approach to realize artificial intelligence.
  • Machine learning can include supervised learning, unsupervised learning, and reinforcement learning.
  • Supervised learning uses machine learning algorithms to learn the mapping relationship between sample values and sample labels based on the collected sample values and sample labels, and uses a machine learning model to express the learned mapping relationship.
  • the sample label may also be referred to as a label for short.
  • the process of training a machine learning model is the process of learning this mapping relationship. For example, in signal detection, the received signal containing noise is the sample, and the real constellation point corresponding to the signal is the label. Machine learning expects to learn the mapping relationship between the sample value and the label through training, that is, the machine learning model learns a signal detector. At training time, the model parameters are optimized by calculating the error between the model's predicted value and the true label. After the learning of the mapping relationship is completed, the learned mapping can be used to predict the sample label of a new sample.
  • the mapping relationship learned by supervised learning can include linear mapping or nonlinear mapping.
  • the learned tasks can be divided into classification tasks and regression tasks according to the type of labels.
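The training loop described above (predict, compare the prediction with the label, adjust the parameters) can be sketched in a few lines; the linear model y = w·x + b, the data, and the learning rate below are illustrative choices for this sketch, not taken from this application:

```python
# Toy supervised learning: learn the mapping y = 2x + 1 from
# (sample, label) pairs by minimizing the prediction error.
samples = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]

w, b, lr = 0.0, 0.0, 0.01            # model parameters and learning rate
for _ in range(2000):
    for x, y in samples:
        y_hat = w * x + b            # model's predicted value
        err = y_hat - y              # error against the true label
        w -= lr * err * x            # gradient step on the weight
        b -= lr * err                # gradient step on the bias
```

Because the labels here follow y = 2x + 1 exactly, gradient descent drives w toward 2 and b toward 1; a classification task would differ only in the label type and the error function.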
  • Unsupervised learning uses the algorithm to discover the internal pattern of the sample by itself based on the collected sample values.
  • the model parameters are optimized by calculating the error between the predicted value of the model and the sample itself.
  • self-supervised learning can be used for signal compression and decompression recovery applications.
  • Common algorithms include autoencoders and adversarial generative networks.
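The autoencoder idea can be illustrated minimally as follows; the scalar encoder, decoder, and weight values are hypothetical, chosen only so the sketch is self-contained. The key point is that the reconstruction is compared with the input itself, so no label is needed:

```python
def encode(x, w=0.5):
    # Compress two input values into one code value.
    return w * (x[0] + x[1])

def decode(code, w=1.0):
    # Expand the code back into two values.
    return [w * code, w * code]

x = [3.0, 3.0]
recon = decode(encode(x))
# Unsupervised loss: error between the reconstruction and the sample itself.
loss = sum((a - b) ** 2 for a, b in zip(x, recon))
```

In a real autoencoder the encoder and decoder are neural networks, and their weights are learned by minimizing this reconstruction loss over many samples; here the weights were hand-picked so this particular input reconstructs exactly.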
  • Reinforcement learning is a class of algorithms that learn strategies to solve problems by interacting with the environment. Unlike supervised and unsupervised learning, reinforcement learning problems do not have clear "correct" action label data.
  • the algorithm needs to interact with the environment to obtain reward signals from environmental feedback, and then adjust decision-making actions to obtain larger reward signal values.
  • the reinforcement learning model adjusts the downlink transmission power of each user according to the total system throughput rate fed back by the wireless network, and then expects to obtain a higher system throughput rate.
  • the goal of reinforcement learning is also to learn the mapping relationship between the state of the environment and the optimal decision-making action. But because the label of the “correct action” cannot be obtained in advance, the network cannot be optimized by calculating the error between the action and the “correct action”. Training in reinforcement learning is instead achieved through iterative interactions with the environment.
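The interaction loop described above (act, receive a reward from the environment, adjust the decision) can be sketched with a two-armed bandit; the reward values, exploration rate, and learning rate are illustrative assumptions, not taken from this application:

```python
import random

random.seed(0)
true_reward = [0.2, 0.8]   # hidden environment: action 1 yields more reward
q = [0.0, 0.0]             # learned value estimate per action
eps, lr = 0.1, 0.1         # exploration rate and learning rate

for _ in range(1000):
    # Mostly act greedily, but sometimes explore a random action.
    a = random.randrange(2) if random.random() < eps else q.index(max(q))
    # The only feedback is a noisy reward signal from the environment.
    r = true_reward[a] + random.gauss(0, 0.1)
    q[a] += lr * (r - q[a])   # move the estimate toward the observed reward
```

There is no “correct action” label anywhere in the loop: the estimates q move only in response to the reward signal, and the greedy choice gradually shifts to the action with the larger reward.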
  • Deep neural network (DNN) is a specific implementation form of machine learning. According to the universal approximation theorem, a DNN can theoretically approximate any continuous function, so that the DNN has the ability to learn any mapping.
  • Traditional communication systems need to rely on rich expert knowledge to design communication modules, while DNN-based deep learning communication systems can automatically discover implicit pattern structures from a large number of data sets, establish mapping relationships between data, and obtain performance better than that of traditional modeling methods.
  • the form of the activation function can be diversified.
  • DNN generally has a multi-layer structure, and each layer of DNN can contain one or more neurons.
  • the input layer of DNN processes the received values by neurons and then passes them to the middle hidden layer.
  • the hidden layer passes the calculation result to the adjacent next hidden layer or the adjacent output layer to generate the final output of the DNN.
  • FIG. 4 is a schematic diagram of the layer relationship of the neural network.
  • DNNs generally have one or more hidden layers, which can affect the ability to extract information and fit functions. Increasing the number of hidden layers of DNN or expanding the number of neurons in each layer can improve the function fitting ability of DNN.
  • the parameters of each neuron include weights, biases and activation functions, and the set of parameters of all neurons in the DNN is called DNN parameters (or neural network parameters).
  • the weights and biases of neurons can be optimized through the training process, so that DNN has the ability to extract data features and express the mapping relationship.
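The neuron parameters just described (weights, a bias, and an activation function) can be sketched as follows; the sigmoid activation and the numeric weight values are illustrative choices for this sketch:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through an activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

def layer(inputs, neurons):
    # A layer is a list of (weights, bias) pairs, one per neuron.
    return [neuron(inputs, w, b) for w, b in neurons]

# Two inputs -> hidden layer with two neurons -> one output neuron.
hidden = layer([1.0, -2.0], [([0.5, 0.1], 0.0), ([-0.3, 0.8], 0.2)])
output = layer(hidden, [([1.0, -1.0], 0.0)])
```

Each layer's output list becomes the next layer's input, which is the layer relationship of FIG. 4; training would adjust the weights and biases rather than the structure.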
  • DNNs generally use supervised learning or unsupervised learning strategies to optimize neural network parameters.
  • DNNs can include feedforward neural networks (FNN), convolutional neural networks (CNN), and recurrent neural networks (RNN).
  • Figure 4 shows an FNN, which is characterized by full connections between neurons in adjacent layers; this makes the FNN usually require a large amount of storage space and results in high computational complexity.
  • FIG. 5 is a schematic diagram of the CNN.
  • CNN is a type of neural network for processing data with grid-like structure. For example, time series data (time axis discrete sampling) and image data (two-dimensional discrete sampling) can both be considered as grid-like data.
  • CNN does not use all the input information to perform operations at one time, but uses a window (such as a fixed-size window) to intercept part of the information for convolution operations, which greatly reduces the computational complexity of neural network parameters.
  • each window can use different convolution kernel operations, which enables CNN to better extract the features of the input data.
  • the convolutional layer is used for feature extraction to obtain feature maps.
  • the pooling layer is used to compress the input feature map to make the feature map smaller and simplify the computational complexity of the network.
  • Fully connected layers are used to map the learned "distributed feature representation" to the sample label space. Exemplarily, in FIG. 5 , the probability of judging that the image is the sun is 0.7, the probability that it is the moon is 0.1, the probability that it is a car is 0.05, and the probability that it is a house is 0.02.
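The windowed convolution and pooling just described can be illustrated with a small NumPy sketch. The single 3x3 averaging kernel and 2x2 max pooling below are arbitrary illustrative choices, not values from this application.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Slide a fixed-size window over the image and convolve (valid mode)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # only the windowed portion of the input is used per output value
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2(fmap):
    """2x2 max pooling: compresses the feature map."""
    H, W = fmap.shape
    return fmap[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # toy "image" (grid-like data)
feat = conv2d_valid(img, np.ones((3, 3)) / 9.0)  # 4x4 feature map
pooled = max_pool2(feat)                         # 2x2 after pooling
print(feat.shape, pooled.shape)  # (4, 4) (2, 2)
```

A fully connected layer would then flatten `pooled` and map it to class scores, as in the sun/moon/car/house example above.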
  • FIG. 6 is a schematic diagram of the RNN.
  • RNN is a type of DNN that uses feedback time series information, and its input includes the new input value at the current moment and its own output value at the previous moment.
  • RNN is suitable for extracting sequence features that are correlated in time, and is especially suitable for applications such as speech recognition and channel coding and decoding. Referring to Figure 6, a neuron's inputs at successive times produce successive outputs: at the first time step the input is (x_0, s_0) and the output is (y_0, s_1); at the next time step the input is (x_1, s_1) and the output is (y_1, s_2); in general, at time t the inputs are (x_t, s_t) and the outputs are (y_t, s_{t+1}).
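The recurrence above, where the inputs at time t are (x_t, s_t) and the outputs are (y_t, s_{t+1}), can be sketched as a single step function. The dimensions and the tanh state update are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical dimensions: input x_t has 3 features, state s_t has 5, output y_t has 2.
Wx, Ws = rng.standard_normal((5, 3)), rng.standard_normal((5, 5))
Wy = rng.standard_normal((2, 5))

def rnn_step(x_t, s_t):
    """One time step: inputs (x_t, s_t) -> outputs (y_t, s_{t+1})."""
    s_next = np.tanh(Wx @ x_t + Ws @ s_t)  # new state mixes current input and previous state
    y_t = Wy @ s_next
    return y_t, s_next

s = np.zeros(5)                  # initial state s_0
outputs = []
for t in range(4):               # feed a short sequence x_0 .. x_3
    y, s = rnn_step(rng.standard_normal(3), s)
    outputs.append(y)
print(len(outputs), outputs[0].shape)  # 4 (2,)
```

The feedback of `s` from one iteration to the next is what lets the network carry time-correlated features forward.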
  • Generative neural network is a special kind of deep learning neural network. Different from the classification tasks and prediction tasks mainly performed by general neural networks, generative neural networks can learn the probability distribution function obeyed by a set of training samples. Thus, it can be used to model random variables and can also be used to establish conditional probability distributions between variables.
  • Common generative neural networks include generative adversarial network (GAN) and variational autoencoder (VAE).
  • FIG. 7 is a schematic diagram of a generative adversarial network GAN.
  • the GAN network can include two parts according to the function: the generator network (referred to as the generator) and the discriminator network (referred to as the discriminator). Specifically, the generator network processes the input random noise and outputs generated samples. The discriminator network compares the generated samples output by the generator network with the training samples in the training set, determines whether the generated samples and the training samples approximately obey a similar probability distribution, and outputs a true or false judgment.
  • the generator network hopes to generate samples that obey the distribution of the training set as much as possible, and the discriminator network hopes to distinguish the difference between the generated samples and the training set as much as possible.
  • the two can reach an equilibrium state, that is, the probability distribution of the generated samples output by the generator network approximates the probability distribution of the training samples, and the discriminator network considers that the generated samples and the training set obey a similar distribution.
  • this similar distribution may also be referred to as an identical distribution.
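The adversarial objective described above, where the generator tries to match the training distribution while the discriminator tries to tell generated samples from training samples, can be sketched with toy linear "networks" and the standard binary cross-entropy losses. Everything here is a hypothetical illustration: a real GAN uses deep networks and alternating gradient updates rather than a single loss evaluation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear "networks" for illustration only.
g_w = rng.standard_normal((2, 2))                 # generator weights
d_w = rng.standard_normal(2)                      # discriminator weights

def generator(z):
    return z @ g_w.T                              # random noise -> generated samples

def discriminator(x):
    return 1.0 / (1.0 + np.exp(-(x @ d_w)))       # estimated P(sample is real)

real = rng.normal(3.0, 1.0, size=(64, 2))         # training-set samples
fake = generator(rng.standard_normal((64, 2)))    # generated samples

# Discriminator wants real -> 1 and fake -> 0; generator wants fake -> 1.
d_loss = -np.mean(np.log(discriminator(real) + 1e-9)
                  + np.log(1.0 - discriminator(fake) + 1e-9))
g_loss = -np.mean(np.log(discriminator(fake) + 1e-9))
print(d_loss > 0 and g_loss > 0)  # True
```

At the equilibrium described in the text, the discriminator output approaches 0.5 for both sample sources, so neither loss can be driven further down.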
  • FIG. 8 is a schematic diagram of a variational autoencoder (VAE).
  • VAE can include three parts according to functions: encoder network (referred to as encoder), decoder network (referred to as decoder) and discriminator network (referred to as discriminator).
  • encoder network compresses the samples in the input training set into intermediate variables, and the decoder network tries to restore the intermediate variables to the samples in the training set.
  • certain constraints can be placed on the form of the intermediate variables.
  • a discriminant network can also be used in VAE to judge whether the intermediate variables obey the distribution of random noise.
  • the decoder network can use the input random noise to generate generated samples that conform to the distribution of the training set.
  • the above-mentioned FNN, CNN, RNN, GAN and VAE are neural network structures, and these network structures are constructed based on neurons.
  • a communication scheme based on machine learning can be designed, which can achieve better performance.
  • These schemes optimize transmission performance or reduce processing complexity by replacing the sending or receiving module in the original communication system with a neural network model.
  • different neural network model information can be pre-defined or configured, so that the neural network model can adapt to the requirements of different scenarios.
  • FIG. 9 is a schematic diagram of a neural network architecture for jointly optimizing constellation modulation/demodulation for transceivers.
  • the constellation mapping neural network at the originating end (also called the sending end) and the constellation demapping neural network at the receiving end both use neural network models.
  • the originating constellation mapping neural network maps the bit stream into constellation symbols
  • the constellation demapping neural network demaps (or demodulates) the received constellation symbols into log-likelihood ratios of bit information.
  • the originating end performs a series of processing on the bit stream to be sent, which may include one or more of the following: channel coding, constellation symbol mapping modulation, orthogonal frequency division multiplexing (OFDM) modulation, layer mapping, precoding, up-conversion, etc.
  • the neural network involved in the embodiments of the present application is not limited to any specific application scenario, but can be applied to any communication scenario, such as CSI compression feedback, adaptive constellation point design, and/or robust precoding, etc.
  • the above neural network needs to be adaptively trained to ensure communication performance. For example, for each different wireless system parameters (including one or more of wireless channel type, bandwidth, number of receiving antennas, number of transmitting antennas, modulation order, number of paired users, channel coding method, and coding rate ), define a set of corresponding neural network model information (including neural network structure information and neural network parameters).
  • in the artificial intelligence (AI)-based adaptive modulation constellation design, when the number of antennas at the transmitting end is 1 and the number of antennas at the receiving end is 2, one set of neural network model information is required to generate the corresponding modulation constellation; when the number of antennas at the transmitting end is 1 and the number of antennas at the receiving end is 4, another set of neural network model information is required to generate the corresponding modulation constellation.
  • different wireless channel types, bandwidths, modulation orders, number of paired users, channel coding methods, and/or coding rates may have different corresponding neural network model information.
  • the sources of the channel sample information include: 1. Obtained by the receiving end according to actual measurements; 2. Obtained by using a mathematical (channel) model.
  • a receiver (e.g., UE) of a signal (e.g., a reference signal or a synchronization signal) measures the real channel to obtain channel sample information.
  • the channel sample information can more accurately reflect the channel environment. If the network element used to train the neural network by using the channel sample information is the signal transmitter, the receiver will feed back the channel sample information to the transmitter (eg, base station) after measuring the channel sample information.
  • the sender trains the neural network according to the channel sample information.
  • the receiver of the signal needs to feed back a large amount of channel sample information to the transmitter of the signal. Feeding back a large amount of channel sample information will occupy a large amount of air interface resources and affect the data transmission efficiency between the sending and receiving ends.
  • a mathematical expression can be used to model the channel model. That is, the real channel is simulated using a mathematical (channel) model.
  • mathematical channel models such as tapped delay line (TDL) and clustered delay line (CDL) can be defined in the protocol.
  • TDL and CDL channels each include five sub-categories A, B, C, D, and E; each sub-category is further subdivided into multiple typical channel scenarios according to specific parameters, for example into scenarios with multipath delay spreads of 10 nanoseconds (ns), 30 ns, 100 ns, 300 ns, or 1000 ns. Therefore, when training the neural network at the transceiver end, a mathematical channel model similar to the actual environment can be selected to generate a large amount of channel sample information similar to the actual channel, and this channel sample information can be used for training.
  • the use of the mathematical channel model can greatly reduce the signaling overhead of acquiring channel samples, there is also the disadvantage of mismatch between the mathematical channel model and the actual channel model.
  • the TDL channel model assumes a finite number of reflection paths, and the channel coefficients of each path obey a simple Rayleigh distribution.
  • the number of reflection paths of the actual channel varies in different environments, and the Rayleigh distribution cannot accurately describe the distribution of the channel coefficients per path.
  • the multipath delay spread often varies with the scene, and quantizing it into a few coarse-grained typical values inevitably leads to modeling errors. Therefore, it is difficult for the mathematical channel modeling method to accurately describe the actual channel, which in turn affects the training effect of the neural network; that is, there is a problem of data-model mismatch.
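As a sketch of the modeling assumptions being discussed: a TDL-style model draws each tap as a complex Gaussian (so its magnitude is Rayleigh distributed), with a fixed, coarse power-delay profile. The 3-tap profile below is purely illustrative, not a 3GPP-defined one, and the delays are carried only as metadata in this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def tdl_channel_samples(n_samples, delays_ns, powers_db):
    """Draw TDL-style channel realizations: each tap is complex Gaussian
    (Rayleigh-distributed magnitude), scaled by its average power.
    delays_ns is kept only as metadata here (not used in the flat draw)."""
    p = 10.0 ** (np.asarray(powers_db) / 10.0)
    p = p / p.sum()                                   # normalize total power to 1
    taps = (rng.standard_normal((n_samples, len(p)))
            + 1j * rng.standard_normal((n_samples, len(p)))) / np.sqrt(2.0)
    return taps * np.sqrt(p)                          # shape (n_samples, n_taps)

# Hypothetical 3-tap profile (delays and powers are illustrative only).
h = tdl_channel_samples(5000, delays_ns=[0, 30, 100], powers_db=[0.0, -3.0, -6.0])
print(h.shape)  # (5000, 3)
```

The mismatch problem noted above is visible here: the tap count, the Rayleigh assumption, and the coarse delay grid are all fixed by the model, whereas a real channel need not follow any of them.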
  • FIG. 10 is a schematic flowchart of a method for training a neural network according to an embodiment of the present application.
  • the training method of a neural network proposed by the embodiment of the present application includes:
  • a first device sends a first reference signal to a second device.
  • the first device is an access network device and the second device is a terminal device as an example for description. It can be understood that the first device may be an access network device or a chip in the access network device, and the second device may be a terminal device or a chip in the terminal device. The first device may also be a terminal device or a chip in the terminal device, and the second device may also be an access network device or a chip in the access network device, which is not limited here.
  • the first device sends a first reference signal to the second device, where the first reference signal may be a synchronization signal, a synchronization signal block (synchronization signal and PBCH block, SSB), a demodulation reference signal (DMRS), or a channel state information reference signal (CSI-RS), etc.
  • the first reference signal may also be referred to as the first signal.
  • the first reference signal may also be a newly defined reference signal, which is not limited here.
  • the sequence type of the first reference signal includes but is not limited to: ZC (Zadoff-Chu) sequence or Gold sequence.
  • the DMRS may be a DMRS of a physical downlink control channel (PDCCH) or a DMRS of a physical downlink shared channel (PDSCH).
  • when the second device needs to use a neural network (the neural network used by the second device is called the third neural network), the second device sends capability information to the first device.
  • the capability information is used to indicate one or more of the following information of the second device:
  • the communication module includes but is not limited to: OFDM modulation module, OFDM demodulation module, constellation mapping module, constellation demapping module, channel coding module, channel decoding module, precoding module, equalization module, interleaving module and/or deinterleaving module .
  • the network type of the third neural network includes one or more of the following: fully connected neural network, radial basis neural network, convolutional neural network, recurrent neural network, Hopfield neural network, restricted Boltzmann machine, or deep belief network, etc.
  • the third neural network can be any of the above-mentioned neural networks, and the third neural network can also be a combination of the above-mentioned multiple neural networks, which is not limited here.
  • the capability information may carry the identifier or index of the third neural network.
  • the capability information may also carry identifiers or indexes of other predefined or preconfigured neural networks already existing in the second device.
  • the computing power information refers to computing capability information for running the neural network, for example, including information such as the computing speed of the processor and/or the amount of data that the processor can process.
  • the second device performs channel estimation according to the received first reference signal from the first device, and determines first channel sample information.
  • the second device after receiving the first reference signal, performs channel estimation on the first reference signal to determine the first channel sample information.
  • the signal received at the second device is called the second reference signal.
  • the channel propagating the first reference signal can be understood as a function with transition probability P(y|x): for each input x, the channel produces an output y with probability P(y|x).
  • the second device may configure the information of the first reference signal for the first device in advance through signaling, or the information of the first reference signal may be stipulated by a protocol, which is not limited.
  • the type of signaling is not limited; for example, it may be a broadcast message, system information, radio resource control (RRC) signaling, a media access control (MAC) control element (CE), or downlink control information (DCI).
  • since the second device already knows in advance the information of the first reference signal x sent by the first device, after receiving the second reference signal y, the second device performs channel estimation according to the second reference signal y and the sent first reference signal x to determine the first channel sample information. In this way, the channel experienced by the first reference signal from being sent to being received, such as the amplitude variation and phase rotation, can be estimated.
  • the first channel sample information includes, but is not limited to, the second reference signal and/or channel state information (channel state information, CSI).
  • the channel estimation algorithm used by the second device for channel estimation includes but is not limited to: least square estimation method (least square, LS) or linear minimum mean square error estimation method (linear minimum mean square error, LMMSE).
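For unit-modulus pilots, the least-squares (LS) estimate mentioned above reduces to dividing each received sample by the known transmitted sample, ĥ = y / x, per pilot position. The sketch below simulates this with QPSK pilots over a flat Rayleigh channel; the pilot count, noise level, and constellation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n_pilots = 64
# Known QPSK pilots (the "first reference signal" x, unit modulus).
x = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, n_pilots) + 1))
# Flat Rayleigh channel coefficient per pilot position.
h = (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
y = h * x + noise                 # "second reference signal" observed at the receiver

h_ls = y / x                      # LS estimate: h_hat = y / x per pilot
err = np.mean(np.abs(h_ls - h) ** 2)
print(err < 0.05)  # True
```

LMMSE would additionally weight the estimate by the channel and noise statistics, trading the simplicity of this division for lower estimation error at low SNR.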
  • the second device sends the first channel sample information to the first device.
  • the first device obtains a first neural network by training according to the first channel sample information.
  • the first neural network is used to generate new channel sample information.
  • the first neural network is a generative neural network.
  • the first neural network is GAN as an example for description. It can be understood that the first neural network may be other types of neural networks, such as VAE, etc., which is not limited here.
  • the first neural network includes a generator network and a discriminator network, wherein the generator network is used to generate the second channel sample information, and the discriminator network is used to determine whether the newly generated second channel sample information and the first channel sample information from the second device obey similar probability and statistical characteristics.
  • jointly training the generator network and the discriminator network of the first neural network can make the second channel sample information output by the generator network converge to the probability distribution of the first channel sample information from the second device. Examples are given below.
  • FIG. 13 is a schematic structural diagram of a generator network of the first neural network in an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a discriminator network of the first neural network in an embodiment of the present application.
  • the generator network includes a 5-layer convolutional layer network.
  • the input of the generator network is random noise z, and the random noise includes but is not limited to Gaussian white noise.
  • the output of the generator network is the second channel sample information (denoted ĥ).
  • the discriminator network includes a 3-layer convolutional layer network and a 3-layer fully connected layer network.
  • a part of the input of the discriminator network is the output ĥ of the generator network.
  • Another part of the input signal of the discriminator network includes first channel sample information h from the second device.
  • the output c of the discriminator network is a binary variable, which indicates whether the second channel sample information ĥ output by the generator network obeys the probability distribution of the first channel sample information, that is, whether it obeys the probability distribution of h.
  • FIGS. 15a-15b are schematic structural diagrams of the first neural network.
  • FIG. 15a is a schematic structural diagram of a generator network of a first neural network in an embodiment of the present application.
  • FIG. 15b is a schematic structural diagram of a discriminator network of the first neural network in the embodiment of the present application.
  • the generator network in the first neural network shown in Figures 15a-15b is composed of a 7-layer convolutional layer network.
  • the input of the generator network includes random noise z and the training sequence x, and the output is a generated sample ŷ.
  • the discriminator network includes a 4-layer convolutional layer network and a 4-layer fully connected layer network; its input includes the sample ŷ generated by the generator network, the second reference signal y from the second device, and the corresponding first reference signal x.
  • the output c of the discriminator network is a binary variable representing whether ŷ obeys the conditional probability distribution P(y|x).
  • the channel can be understood as a function with transition probability P(y|x).
  • the first neural network aims to learn this probability transition characteristic of the channel: for each input x, an output y can be generated with probability P(y|x).
  • the first device uses the first channel sample information to train the reference neural network.
  • the neural network obtained after the training is completed is called the first neural network.
  • the neural network before training using the first channel sample information may be referred to as a reference neural network.
  • the first device uses the first channel sample information including 5000 samples from the second device to train the reference neural network, thereby obtaining the first neural network by training.
  • the reference neural network may preconfigure the parameters of some neurons or the structure of the neural network.
  • the reference neural network may also be preconfigured with information of other neural networks, which is not limited here.
  • the reference neural network may be a neural network trained under a predefined channel environment and used to generate channel sample information (eg, CSI or a second reference signal, etc.).
  • the first device further trains the reference neural network using the first channel sample information to obtain a first neural network.
  • the reference neural network may be an initialized neural network.
  • the parameters of each neuron in the reference neural network are zero, random or other predetermined values, and/or the structure of the neural network is a random or predetermined initial structure, etc., which are not limited here.
  • the first device starts training the reference neural network using the first channel sample information, so as to obtain the first neural network through training.
  • the first device uses the first neural network to infer to obtain second channel sample information.
  • after the first device obtains the trained first neural network, it performs inference using the first neural network to obtain the second channel sample information.
  • the first device inputs random noise into the first neural network
  • the second channel sample information can be obtained by inference from the first neural network.
  • the first neural network is a GAN
  • random noise is input into the generator network of the first neural network
  • the second channel sample information can be obtained.
  • the first neural network is a VAE
  • random noise is input into the decoder network of the first neural network to obtain the second channel sample information.
  • the form of the second channel sample information is the same as that of the first channel sample information.
  • for example, if the first channel sample information is CSI (denoted h), the second channel sample information is generated CSI (denoted ĥ); if the first channel sample information is the second reference signal (y), the second channel sample information is a generated second reference signal (ŷ).
  • the first device trains the second neural network and/or the third neural network according to the second channel sample information.
  • the first channel sample information is CSI (with the CSI denoted as h).
  • FIG. 16a a schematic structural diagram of the first device training the second neural network and/or the third neural network according to the second channel sample information
  • the input of the first neural network is Gaussian white noise
  • the output is the second channel sample information ĥ; the second channel sample information and Gaussian white noise are input to the signal over-channel module.
  • the originating signal is multiplied by the second channel sample information ĥ, Gaussian white noise is added to the product, and the result is used as the input of the receiving end.
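The "signal over-channel" step just described, multiplying the transmit signal by the generated channel sample and adding Gaussian white noise, can be sketched directly. The QPSK symbols, shapes, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def over_channel(x, h_hat, noise_std):
    """Over-channel module: multiply the transmit signal by the generated
    channel sample, then add complex Gaussian white noise."""
    n = noise_std * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))
    return h_hat * x + n

x = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))   # QPSK transmit symbols
# Channel samples standing in for the generator-network output h_hat.
h_hat = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
y = over_channel(x, h_hat, noise_std=0.1)             # receiving-end input
print(y.shape)  # (4,)
```

Because this module is differentiable in `x`, gradients can flow from the receiving-end network back through it to the sending-end network, which is what enables the end-to-end training described next.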
  • the receiver neural network (e.g., constellation demapping neural network) and the sender neural network (e.g., constellation mapping neural network) can then be trained end-to-end.
  • the first channel sample information is the second reference signal (y) described above
  • a schematic structural diagram of the first device training the second neural network and/or the third neural network according to the second channel sample information is shown in Figure 16b.
  • the input of the first neural network (the generator network of the GAN) is white Gaussian noise and the transmission signal x
  • the output of the first neural network is the second reference signal
  • the first neural network is connected between the sending end and the receiving end, and through machine learning, the receiving-end neural network (for example, the constellation demapping neural network) and the sending-end neural network (the constellation mapping neural network) can be trained end-to-end.
  • the first device trains the second neural network and/or the third neural network according to the second channel sample information.
  • the second neural network and/or the third neural network are used for the first device and the second device to transmit target information.
  • the first device trains the second neural network and/or the third neural network according to the first channel sample information and the second channel sample information.
  • the second neural network and the third neural network are described below:
  • the second neural network is applied to the sending end, which can be understood as the second neural network performing data sending processing, for example, the constellation mapping neural network shown in FIG. 9 .
  • the second neural network may perform operations such as rate matching and/or OFDM modulation.
  • rate matching means that the bits to be sent are repeated and/or punctured to match the carrying capacity of the physical channel, so that the bit rate required by the transmission format is achieved during channel mapping.
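The repetition/puncturing behavior described above can be sketched with a toy cyclic-repetition scheme; this is an illustration only, not the 3GPP rate-matching algorithm.

```python
def rate_match(bits, target_len):
    """Repeat bits cyclically when the channel carries more than the code
    length, or puncture (truncate) when it carries fewer."""
    if target_len >= len(bits):
        reps = -(-target_len // len(bits))      # ceiling division
        return (bits * reps)[:target_len]       # repetition
    return bits[:target_len]                    # puncturing

coded = [1, 0, 1, 1, 0, 0]
print(rate_match(coded, 10))  # [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
print(rate_match(coded, 4))   # [1, 0, 1, 1]
```

Either way, the output length exactly matches the carrying capacity of the physical channel, which is the property rate matching exists to guarantee.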
  • OFDM modulation mainly realizes moving the baseband spectrum to the radio frequency band, thereby realizing the wireless transmission function.
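At baseband, the OFDM modulation step amounts to an IFFT plus a cyclic prefix; the up-conversion to the radio frequency band mentioned above is omitted here. The 8 subcarriers and cyclic-prefix length of 2 are arbitrary illustrative numbers.

```python
import numpy as np

def ofdm_modulate(symbols, cp_len):
    """Map frequency-domain symbols to a time-domain OFDM symbol via IFFT
    and prepend a cyclic prefix."""
    time = np.fft.ifft(symbols)
    return np.concatenate([time[-cp_len:], time])   # CP + body

def ofdm_demodulate(signal, cp_len):
    """Drop the cyclic prefix and return to the frequency domain."""
    return np.fft.fft(signal[cp_len:])

sym = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7, 1, 3, 5, 7]))  # 8 QPSK subcarriers
tx = ofdm_modulate(sym, cp_len=2)
rx = ofdm_demodulate(tx, cp_len=2)
print(np.allclose(rx, sym))  # True
```

The round trip is lossless over an ideal channel; in practice the cyclic prefix absorbs multipath delay so each subcarrier sees only a flat (single-tap) channel.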
  • the second neural network may perform one or more functions of source coding (eg, scrambling, and/or signal compression, etc.), channel coding, constellation mapping, OFDM modulation, precoding, and/or filtering.
  • the network type of the second neural network includes but is not limited to: fully connected neural network, radial basis function (RBF) neural network, CNN, recurrent neural network, Hopfield neural network, restricted Boltzmann machine, or deep belief network, etc.
  • the second neural network can be any of the above-mentioned neural networks, and the second neural network can also be a combination of the above-mentioned multiple neural networks, which is not limited here.
  • the first device is used as the sending end for description, that is, the second neural network is applied to the first device.
  • the second device can also be used as the sending end, that is, the second neural network is applied to the second device.
  • the third neural network is applied to the receiving end, which can be understood as the third neural network receiving and processing data from the sending end.
  • the third neural network may perform processing such as source decoding (eg, descrambling, and/or signal decompression), channel decoding, de-constellation mapping, OFDM demodulation, signal detection, and/or equalization.
  • the network type of the third neural network includes but is not limited to: fully connected neural network, radial basis neural network, convolutional neural network, recurrent neural network, Hopfield neural network, restricted Boltzmann machine, or deep belief network, etc.
  • the third neural network can be any of the above-mentioned neural networks, and the third neural network can also be a combination of the above-mentioned multiple neural networks, which is not limited here.
  • the second neural network and the third neural network may use the same neural network, may also use a different neural network, and may also use part of the same neural network, which is not limited here.
  • the second device is used as the receiving end for description, that is, the third neural network is applied to the second device.
  • the first device can also be used as a receiving end, that is, the third neural network is applied to the first device.
  • the second neural network and/or the third neural network are used to transmit target information between the first device and the second device. Based on the above description, it can be understood that when the second neural network and the third neural network perform different processing, the target information is different.
  • the target information is a modulation symbol sequence.
  • the target information is compressed CSI or CSI.
  • the target information is bit information to be sent before encoding or bit information to be sent after encoding.
  • the target information is an OFDM symbol.
  • the target information is a transmission signal
  • the transmission signal reaches the receiving end after being transmitted through the channel.
  • the first device can train the second neural network independently, the first device can also train the third neural network independently, and the first device can also train the second neural network and the third neural network jointly.
  • the first device may train the second neural network independently.
  • the function of the precoding neural network is to generate transmit weighting coefficients of each transmit antenna according to the spatial characteristics of the channel, so that multiple transmit data streams can be spatially isolated and the received signal-to-interference-noise ratio of the signal is improved.
  • the input of the precoding neural network is the CSI of the channel, and the output is the precoding weight on each transmit antenna. That is, the target information of the second neural network is the precoding weight on each transmit antenna.
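As a classical, non-neural baseline for the function just described (channel CSI in, per-antenna transmit weights out), zero-forcing precoding derives the weights from the pseudo-inverse of the channel matrix so that the transmit streams are spatially separated. The antenna and user counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def zf_precoding_weights(H):
    """Zero-forcing precoding: W = pinv(H), so that H @ W is (approximately)
    the identity and the streams do not interfere; normalized to unit
    total transmit power. A classical baseline, not the neural method."""
    W = np.linalg.pinv(H)
    return W / np.linalg.norm(W)

# CSI for 2 receive streams (users) and 4 transmit antennas.
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
W = zf_precoding_weights(H)          # precoding weight per transmit antenna
eff = H @ W                          # effective channel after precoding
off_diag = np.abs(eff[0, 1]) + np.abs(eff[1, 0])
print(off_diag < 1e-9)  # True
```

The vanishing off-diagonal terms are the spatial isolation the text describes; a precoding neural network would aim to trade some of this strict isolation for a better received signal-to-interference-noise ratio.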
  • the first device may train the third neural network independently.
  • the function of the equalizer neural network is to cancel the distortion caused by the channel to the propagated signal and restore the signal emitted by the transmitting end.
  • the input of the equalizer neural network is the CSI of the channel and the second reference signal at the receiving end, and the output is the signal recovered after equalization. That is, the target information of the third neural network is the signal recovered after equalization.
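A classical, non-neural counterpart of the equalizer function just described (channel CSI and the received signal in, equalized signal out) is zero-forcing equalization, which simply divides out the channel estimate. The noiseless flat-fading setup below is an illustrative sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

def zf_equalize(y, h):
    """Zero-forcing equalization: divide the received signal by the channel
    estimate to undo the channel distortion (classical baseline, not the
    neural method)."""
    return y / h

x = np.exp(1j * np.pi / 4 * np.array([1, 1, 3, 7]))   # transmitted QPSK symbols
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
y = h * x                                             # noiseless received signal
x_hat = zf_equalize(y, h)                             # recovered signal
print(np.allclose(x_hat, x))  # True
```

With noise present, this division amplifies noise wherever |h| is small, which is one motivation for learning a better equalizer from channel samples.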
  • the first device can use the second channel sample information generated by the first neural network to independently train the second neural network, or can use the second channel sample information and locally generated transmission data to independently train the third neural network, and send the information of the trained third neural network to the second device for use.
  • the first device can also jointly train the second and third neural networks to achieve optimal end-to-end communication performance.
  • step 1007 may be performed.
  • the completion of training of the third neural network includes but is not limited to: the number of training samples used reaches a preset threshold, for example, when the number of second channel sample information items used to train the third neural network reaches 45,000, the third neural network is considered trained; or the number of training iterations reaches a preset threshold, for example, when the third neural network has been trained 30,000 times, it is considered trained.
  • the first device sends the information of the third neural network to the second device.
  • the information of the third neural network includes but is not limited to: the weights of the neural network, the activation functions of the neurons, the number of neurons in each layer of the neural network, the cascade relationship between the layers of the neural network, and/or the network type of each layer of the neural network.
  • the information of the third neural network may indicate the same or different activation functions for different neurons, which is not limited.
  • the information of the third neural network may also be a model variation of the third neural network.
  • the variation of the model includes, but is not limited to: the changed weights of the neural network, the changed activation functions, the changed number of neurons in each layer of the neural network, the changed inter-layer cascade relationships of the neural network, and/or the changed network type of each layer in the neural network.
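Sending only the model variation rather than the complete network can be sketched as a weight-delta exchange. Representing the weights as a flat name-to-value dict is an assumption made purely for illustration:

```python
# Sketch of exchanging a model variation instead of a full model.
# Weights are modeled as a flat name->value dict for illustration only.

def model_delta(old: dict, new: dict) -> dict:
    """Keep only the entries that changed between the two models."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def apply_delta(old: dict, delta: dict) -> dict:
    """Reconstruct the updated model from the old model plus the variation."""
    updated = dict(old)
    updated.update(delta)
    return updated

old_weights = {"layer1.w0": 0.5, "layer1.w1": -0.2, "layer2.w0": 1.0}
new_weights = {"layer1.w0": 0.5, "layer1.w1": -0.25, "layer2.w0": 1.1}
delta = model_delta(old_weights, new_weights)   # only the changed entries
```

Transmitting `delta` instead of `new_weights` is what reduces the signaling overhead when only part of the network changes.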
  • the pre-configuration may be that the access network device pre-configures the terminal device through signaling, and the pre-definition may be a protocol pre-definition, for example, the protocol pre-defines the third neural network in the terminal device as neural network A.
  • the first device sends the information of the third neural network to the second device through signaling.
  • signaling includes but is not limited to: broadcast messages (such as master information blocks (MIBs)), system messages (such as system information blocks (SIBs)), radio resource control (RRC) signaling, medium access control control elements (MAC CEs), and/or downlink control information (DCI).
  • the MAC CE and/or DCI may be a message common to multiple devices, or may be a device-specific message (such as one specific to the second device).
  • the second device applies the third neural network in the second device according to the information of the third neural network.
  • after the first device completes the training of the second neural network, the first device applies the second neural network.
  • the first device updates the local second neural network according to the training result in step 1006 .
  • the second device updates the local third neural network according to the information of the third neural network in step 1007 .
  • the information of the third neural network in step 1007 may include a complete third neural network. The second device locally configures the trained third neural network according to the information of the third neural network.
  • the first device can use the second neural network to process the target information (ie, the input of the second neural network) or obtain the target information (ie, the output of the second neural network).
  • the second device may process the target information (ie the input of the third neural network) or restore the target information (ie the output of the third neural network) using the third neural network.
  • the first device uses a second neural network to obtain modulation symbols
  • the second device uses a third neural network to recover bits from the received modulation symbols.
  • the second device can use the third neural network to process the target information (ie, the input of the third neural network) or obtain the target information (ie, the output of the third neural network).
  • the first device may process the target information (ie, the input of the second neural network) or restore the target information (ie, the output of the second neural network) using the second neural network.
  • the second device uses the third neural network to compress the CSI to obtain the compressed CSI; the first device uses the second neural network to recover the CSI from the received compressed CSI.
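The CSI-feedback roles just described (the third neural network compresses at the second device; the second neural network recovers at the first device) can be illustrated with a simple stand-in codec. Top-k magnitude selection here substitutes for the trained encoder/decoder pair and is purely illustrative:

```python
# Stand-in for the learned CSI encoder/decoder pair: the "third neural
# network" keeps only the k strongest CSI coefficients (compression), and
# the "second neural network" re-expands them to full length (recovery).
# Top-k selection is an illustrative substitute for the trained networks.

def compress_csi(csi: list, k: int):
    """Second device: keep indices and values of the k largest-magnitude taps."""
    idx = sorted(range(len(csi)), key=lambda i: -abs(csi[i]))[:k]
    return [(i, csi[i]) for i in sorted(idx)], len(csi)

def recover_csi(compressed, length: int) -> list:
    """First device: rebuild a full-length CSI vector, zero elsewhere."""
    csi_hat = [0.0] * length
    for i, v in compressed:
        csi_hat[i] = v
    return csi_hat

csi = [0.9, 0.05, -0.8, 0.01, 0.3]
compressed, n = compress_csi(csi, k=2)   # feedback payload: 2 taps, not 5
csi_hat = recover_csi(compressed, n)
```

The feedback payload (`compressed`) is smaller than the full CSI vector, which is the source of the air interface overhead reduction.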
  • the second device may send a relatively small amount of first channel sample information to the first device (for example, an access network device).
  • the first device obtains a first neural network by training according to the first channel sample information.
  • the first neural network is used for inference to obtain second channel sample information.
  • the first device trains the second neural network and/or the third neural network according to the second channel sample information, or according to both the second channel sample information and the first channel sample information; the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
  • in this way, the air interface signaling overhead can be effectively reduced while adapting to the channel environment: the second neural network and the third neural network obtained by training are closer to the actual channel environment, which improves communication performance.
  • the speed of training the second neural network and the third neural network has also been greatly improved.
  • the relevant steps of training to obtain the first neural network, the relevant training steps of the second neural network and/or the relevant training steps of the third neural network can alternatively be implemented by other devices independent of the first device.
  • the other device is referred to as a third device.
  • the third device may be the aforementioned AI node, a mobile edge computing (MEC) device, or a cloud server, etc., without limitation.
  • the reference signal pool learned by the third device may be agreed in the protocol after offline learning, or sent to the first device through the interface between the third device and the first device, or forwarded to the first device through other network elements, without restriction.
  • samples that the third device needs to use when performing model training, for example the first channel sample information, may be sent directly or indirectly by the first device to the third device, or directly or indirectly by the second device to the third device, without restriction.
  • FIG. 11 is a schematic flowchart of another method for training a neural network according to an embodiment of the present application.
  • the training method of a neural network proposed by the embodiment of the present application includes:
  • the first device sends a first reference signal to the second device.
  • the second device performs channel estimation according to the received first reference signal from the first device, and determines first channel sample information.
  • the second device sends the first channel sample information to the first device.
  • Steps 1101-1103 are the same as the aforementioned steps 1001-1003, and are not repeated here.
  • the first device sends the first channel sample information to the third device.
  • after the first device receives the first channel sample information from the second device, the first device sends the first channel sample information to the third device.
  • the third device obtains a first neural network by training according to the first channel sample information.
  • the third device uses the first neural network to infer to obtain the second channel sample information.
  • step 1106 can be replaced with: the third device sends the information of the first neural network to the first device.
  • the first device uses the first neural network to infer the second channel sample information.
  • the first device sends the second channel sample information to the third device.
  • the third device trains the second neural network and/or the third neural network according to the second channel sample information.
  • Steps 1105-1107 are similar to the aforementioned steps 1004-1006, and are not repeated here.
  • the third device sends the information of the second neural network and/or the information of the third neural network to the first device.
  • the third device may send the information of the third neural network to the second device through the first device.
  • the third device may also send the information of the third neural network to the first device through other devices (for example, other access network devices).
  • the third device may also directly send the information of the third neural network to the first device.
  • the third device may send the information of the second neural network to the second device.
  • the first device sends the information of the third neural network to the second device.
  • Step 1109 is the same as the aforementioned step 1007, and will not be repeated here.
  • the embodiment shown in FIG. 11 may be replaced with: the subject for training the first neural network is different from the subject for training the second neural network and/or the third neural network.
  • the former is the first device and the latter is the third device; or, the former is the third device and the latter is the first device.
  • the neural network is trained in the above manner, which effectively reduces the overhead of air interface signaling, and can adapt to the channel environment at the same time.
  • the second neural network and the third neural network obtained by training are closer to the actual channel environment, which improves communication performance.
  • the speed of training the second neural network and the third neural network has also been greatly improved.
  • the steps involving neural network training can be performed by other devices (third devices), which effectively reduces the computing load and power consumption of the first device (eg, access network devices).
  • the relevant steps of training the first neural network can alternatively be implemented by a second device (eg, a terminal device).
  • FIG. 12 is a schematic flowchart of another method for training a neural network according to an embodiment of the present application.
  • the first neural network is trained by the second device.
  • the training method of a neural network proposed by the embodiment of the present application includes:
  • the second device sends capability information to the first device.
  • the first device is an access network device and the second device is a terminal device as an example for description.
  • the first device may be an access network device, a chip in an access network device, a module or circuit in an access network device, etc.
  • the second device may be a terminal device, a chip in a terminal device, a module or circuit in a terminal, etc.
  • the first device may also be a terminal device, a chip in the terminal device, a module or circuit in the terminal, etc.
  • the second device may also be an access network device, a chip in an access network device, or a module or circuit in an access network device, etc., which is not limited here.
  • the first neural network that is not trained using the first channel sample information, or the first neural network before updating may be referred to as a reference neural network. After training the reference neural network using the first channel sample information, the first neural network is obtained.
  • the reference neural network is similar to the reference neural network in the aforementioned step 1004, and the second device starts to train the reference neural network using the first channel sample information, thereby obtaining the first neural network through training.
  • in step 1201, the second device sends capability information to the first device, where the capability information is used to indicate one or more of the following information:
  • the communication module includes but is not limited to: an OFDM modulation module, OFDM demodulation module, constellation mapping module, constellation demapping module, channel coding module, channel decoding module, precoding module, equalization module, interleaving module, and/or deinterleaving module.
  • the capability information indicates whether the second device supports VAE or whether it supports GAN.
  • the capability information indicates that the second device supports GAN, supports VAE, supports both GAN and VAE, or supports neither GAN nor VAE.
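The generative-model part of the capability report above can be sketched as a small structure covering the four cases (GAN only, VAE only, both, neither). The field and function names are illustrative, not from the signaling specification:

```python
# Sketch of the capability information described above: which generative
# model types (GAN and/or VAE) the second device supports. Field names
# are illustrative placeholders, not from any signaling specification.

from dataclasses import dataclass

@dataclass
class CapabilityInfo:
    supports_gan: bool
    supports_vae: bool

def supported_types(cap: CapabilityInfo) -> list:
    """List the generative model types the reporting device supports."""
    types = []
    if cap.supports_gan:
        types.append("GAN")
    if cap.supports_vae:
        types.append("VAE")
    return types

cap = CapabilityInfo(supports_gan=True, supports_vae=False)
```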
  • the network type of the third neural network includes one or more of the following: fully connected neural network, radial basis neural network, convolutional neural network, recurrent neural network, Hopfield neural network, restricted Boltzmann machine, or deep belief network, etc.
  • the third neural network can be any of the above-mentioned neural networks, and the third neural network can also be a combination of the above-mentioned multiple neural networks, which is not limited here.
  • the reference neural network is the first neural network before it has been trained with the first channel sample information, i.e., the network used to train and obtain the first neural network;
  • the capability information may carry an identifier or index of the reference neural network.
  • the capability information may also carry identifiers or indexes of other predefined or preconfigured neural networks already existing in the second device.
  • the computing power information refers to computing capability information for running the neural network, for example, including information such as the computing speed of the processor and/or the amount of data that the processor can process.
  • the longitude, latitude, and/or altitude of the second device are reported.
  • the channel environment in which the second device is located is reported, for example, an office, a subway station, a park, or a street.
  • the capability information includes, but is not limited to, the above-mentioned information.
  • the capability information may further include: a type of channel sample information supported by the second device for training to obtain the first neural network.
  • for example, the type of channel sample information supported by the second device for training the first neural network is CSI, or the channel sample information supported by the second device for training the first neural network is the second reference signal (y).
  • Step 1201 is an optional step.
  • the second device may send the capability information when accessing the first device for the first time, and the second device may also periodically send the capability information to the first device, which is not limited here.
  • if the capability information indicates that the reference neural network is not configured (not stored) in the second device, the first device can send the information of the reference neural network to the second device.
  • the information of the reference neural network includes: the weight of the neural network, the activation function of the neurons, the number of neurons in each layer of the neural network, the cascade relationship between the layers of the neural network, and/or the network type of each layer in the neural network.
  • the information of the reference neural network may be referred to as specific information of the reference neural network. Among them, the activation functions of different neurons may be the same or different, which is not limited.
  • the first device sends a first reference signal to the second device.
  • the second device performs channel estimation according to the received first reference signal from the first device, and determines first channel sample information.
  • these steps are the same as the aforementioned steps 1001-1002, and are not repeated here.
  • the second device obtains a first neural network by training according to the first channel sample information.
  • the method for the second device to obtain the first neural network by training according to the first channel sample information is similar to the method for the first device to obtain the first neural network by training according to the first channel sample information in 1004 . It will not be repeated here.
  • the second device obtains the first neural network by training according to the first channel sample information and the reference neural network.
  • the second device may determine the reference neural network according to any one of the following manners 1 to 3:
  • Manner 1: the information of the reference neural network is stipulated in the protocol, or the second device determines the information of the reference neural network.
  • the protocol specifies (predefines) information about a reference neural network, that is, specifies the specific information of the reference neural network.
  • the information of the reference neural network includes: the weight of the neural network, the activation function of neurons, the number of neurons in each layer of the neural network, the cascade relationship between the layers of the neural network, and/or the network type of each layer in the neural network.
  • the information of the reference neural network may also be referred to as specific information of the reference neural network.
  • the activation functions of different neurons may be the same or different, which is not limited.
  • the protocol stipulates the information of multiple reference neural networks.
  • Each reference neural network corresponds to a channel environment.
  • the channel environment includes one or more of the following: offices, shopping malls, subway stations, city streets, squares, and suburbs.
  • the second device reports the location information of the second device to the first device
  • the first device can determine the channel environment in which the second device is located, thereby obtaining a reference neural network corresponding to the channel environment.
  • the access network device may maintain a piece of map information, in which different location information is marked with its corresponding channel environment.
  • the map information is shown in Table 1:
  • the first device may determine the information of the reference neural network corresponding to the channel environment.
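The lookup chain described above (reported location, to channel environment, to reference neural network) can be sketched as two mappings. Table 1 is not reproduced here, so every entry below is a hypothetical example, as are the network names:

```python
# Illustrative sketch of the map-information lookup: location markers map
# to channel environments, and channel environments map to reference
# neural networks. All entries are hypothetical examples (Table 1 is not
# reproduced in this text).

LOCATION_TO_ENVIRONMENT = {
    (31.23, 121.47): "office",
    (31.20, 121.44): "subway station",
    (31.25, 121.50): "city street",
}

ENVIRONMENT_TO_REFERENCE_NET = {
    "office": "reference_net_A",
    "subway station": "reference_net_B",
    "city street": "reference_net_C",
}

def reference_net_for(location) -> str:
    """First device: reported location -> channel environment -> reference net."""
    env = LOCATION_TO_ENVIRONMENT[location]
    return ENVIRONMENT_TO_REFERENCE_NET[env]
```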
  • the second device determines the information of the reference neural network, that is, determines the specific information of the reference neural network. As described above, the second device may report the information of the reference neural network to the first device through the capability information.
  • Manner 2: the reference neural network is indicated by the first device for the second device.
  • the first device sends the information of the reference neural network to the second device, that is, the first device sends the specific information of the reference neural network to the second device.
  • the protocol stipulates the information of multiple candidate reference neural networks, and each candidate reference neural network corresponds to a unique index.
  • the first device sends an identifier to the second device through signaling, where the identifier is used to indicate an index of a reference neural network configured for the second device among the multiple candidate reference neural networks.
  • the first device configures information of multiple candidate reference neural networks for the second device through the first signaling, that is, configures the specific information of the multiple candidate reference neural networks.
  • each candidate reference neural network corresponds to a unique index.
  • the first device sends an identifier to the second device through the second signaling, where the identifier is used to indicate an index of a reference neural network configured for the second device among the plurality of candidate reference neural networks.
  • the second device may send the information of multiple candidate reference neural networks stored or possessed by the second device to the first device through capability information.
  • each candidate reference neural network corresponds to a unique index.
  • the first device sends an identifier to the second device through signaling, where the identifier is used to indicate an index of a reference neural network configured for the second device among the multiple candidate reference neural networks.
  • Manner 3: the first device sends the type of the reference neural network to the second device through signaling.
  • the protocol stipulates information of various types of reference neural networks, or the second device autonomously determines information of various types of reference neural networks.
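The index-based indication in Manner 2 can be sketched as a lookup against a candidate pool shared by both devices, with only the index crossing the air interface. The candidate names and pool contents are illustrative placeholders:

```python
# Sketch of Manner 2: each candidate reference neural network has a unique
# index known to both devices, and the first device signals only the index
# of the network configured for the second device. Names are placeholders.

CANDIDATE_REFERENCE_NETS = {
    0: "candidate_net_0",
    1: "candidate_net_1",
    2: "candidate_net_2",
}

def select_reference_net(signaled_index: int) -> str:
    """Second device: resolve the signaled index to a candidate network."""
    return CANDIDATE_REFERENCE_NETS[signaled_index]
```

Signaling a single index instead of the full model information is what keeps the overhead of configuring the reference neural network small.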
  • the second device sends the information of the first neural network to the first device.
  • after the second device obtains the first neural network through training, the second device sends the information of the first neural network to the first device.
  • the information of the first neural network may be specific information of the first neural network.
  • the specific information may include: the weight of the neural network, the activation function of the neurons, the number of neurons in each layer of the neural network, the cascade relationship between the layers of the neural network, and/or the network type of each layer in the neural network.
  • the activation functions of different neurons may be the same or different, which is not limited.
  • the information of the first neural network may also be a model change amount of the first neural network relative to the reference neural network.
  • the variation of the model includes but is not limited to: the changed weights of the neural network, the changed activation functions, the changed number of neurons in one or more layers of the neural network, the changed inter-layer cascade connections of the neural network, and/or the changed network type of one or more layers in the neural network.
  • the first device determines the first neural network according to the information of the first neural network.
  • the first device determines the first neural network based on the information of the first neural network received from the second device.
  • the first device determines the first neural network according to the specific information.
  • the first device determines the first neural network according to the model variation and the reference neural network.
  • the first device obtains second channel sample information by reasoning according to the first neural network.
  • the first device trains the second neural network and/or the third neural network according to the second channel sample information.
  • the first device sends the information of the third neural network to the second device.
  • Steps 1207-1209 are the same as the aforementioned steps 1005-1007, and will not be repeated here.
  • the second device obtains the first neural network by training according to the first channel sample information and the reference neural network.
  • the second device obtains the first neural network by training according to the first channel sample information.
  • the first neural network is used for inference to obtain second channel sample information.
  • the first device trains the second neural network and/or the third neural network according to the second channel sample information, and the second neural network and/or the third neural network are used by the first device and the second device to transmit target information. This can effectively reduce air interface signaling overhead while adapting to the channel environment; the second neural network and the third neural network obtained by training are closer to the actual channel environment, which improves communication performance.
  • the speed of training the second neural network and the third neural network has also been greatly improved.
  • the computing task of training the first neural network is allocated to the second device, which effectively reduces the computing burden of the first device (eg, access network device) and improves communication performance.
  • the methods provided by the embodiments of the present application are respectively introduced from the perspectives of the first device, the second device, the third device, and the interaction between them.
  • the first device, the second device, and the third device may include hardware structures and/or software modules, and implement the above functions in the form of hardware structures, software modules, or hardware structures plus software modules. Whether one of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • FIG. 17 is a schematic diagram of a communication device provided by an embodiment of the present application.
  • the communication device 1700 includes a transceiver module 1710 and a processing module 1720, wherein:
  • the communication apparatus 1700 is configured to implement the function of the first device in the foregoing method embodiment.
  • the transceiver module 1710 is configured to send the first reference signal to the second device
  • the transceiver module 1710 is further configured to receive first channel sample information from the second device;
  • the processing module 1720 is used to determine the first neural network, where the first neural network is obtained by training according to the first channel sample information, the first neural network is used for inference to obtain the second channel sample information, and the second channel sample information is used to train the second neural network and/or the third neural network; the second neural network and/or the third neural network are used for the first device and the second device to transmit target information.
  • the transceiver module 1710 is configured to send the first reference signal to the second device
  • the transceiver module 1710 is further configured to receive the information of the first neural network from the second device;
  • a processing module 1720 configured to determine a second neural network and/or a third neural network, where the second neural network and/or the third neural network are used for the first device and the second device to transmit target information; the second neural network and/or the third neural network are obtained by training according to the second channel sample information, and the second channel sample information is obtained by inference from the first neural network.
  • the communication apparatus 1700 is configured to implement the function of the second device in the foregoing method embodiment.
  • the processing module 1720 is configured to perform channel estimation according to the first reference signal received from the first device, and determine the first channel sample information
  • a transceiver module 1710 configured to send the first channel sample information to the first device
  • the transceiver module 1710 is further configured to receive information of the third neural network from the first device, and the third neural network is used for the second device and the first device to transmit target information.
  • the processing module 1720 is configured to perform channel estimation according to the first reference signal received from the first device, and determine the first channel sample information
  • the processing module 1720 is further configured to determine a first neural network, where the first neural network is obtained by training according to the first channel sample information;
  • the transceiver module 1710 is configured to send the information of the first neural network to the first device.
  • the communication apparatus 1700 is configured to implement the function of the third device in the foregoing method embodiment.
  • the transceiver module 1710 is configured to receive the first channel sample information from the first device
  • the processing module 1720 is used to determine the first neural network, where the first neural network is obtained by training according to the first channel sample information, the first neural network is used for inference to obtain the second channel sample information, and the second channel sample information is used to train the second neural network and/or the third neural network; the second neural network and/or the third neural network are used for the first device and the second device to transmit target information.
  • the above-mentioned communication apparatus may further include a storage unit for storing data and/or instructions (also referred to as codes or programs).
  • the foregoing units may interact or be coupled with the storage unit to implement corresponding methods or functions.
  • the processing module 1720 may read data or instructions in the storage unit, so that the communication apparatus implements the methods in the above embodiments.
  • the coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information exchange between devices, units or modules.
  • each functional module in each embodiment of this application may be integrated into one processing unit. In the device, it can also exist physically alone, or two or more modules can be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules.
  • a unit in any of the above communication devices may be one or more integrated circuits configured to implement the above methods, e.g., one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field programmable gate arrays (FPGAs), or a combination of at least two of these integrated circuit forms.
  • a unit in the communication device can be implemented in the form of a processing element scheduling a program.
  • the processing element can be a general-purpose processor, such as a central processing unit (CPU) or other processors that can invoke programs.
  • these units can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • the technical solutions provided in the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented in software, the solutions can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, an AI node, an access network device, a terminal device, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.).
  • the computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, digital video discs (DVDs)), or semiconductor media, and the like.
  • the embodiments may refer to each other; for example, the methods and/or terms between the method embodiments may refer to each other, and the functions and/or terms between an apparatus embodiment and a method embodiment may refer to each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Power Engineering (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A neural network training method and a related apparatus. The method includes: a first device receives first channel sample information from a second device; the first device determines a first neural network, where the first neural network is trained on the first channel sample information and is used to infer second channel sample information from the first channel sample information. With this method, air-interface signaling overhead can be effectively reduced while adapting to the channel environment, improving communication performance.

Description

Neural network training method and related apparatus

Technical field
This application relates to the field of communication technologies, and in particular, to a neural network training method and a related apparatus.
Background
A wireless communication system may include three parts: a transmitter, a channel, and a receiver. The channel is used to carry the signals exchanged between the transmitter and the receiver. For example, the transmitter may be an access network device, e.g., a base station (BS), and the receiver may be a terminal device. As another example, the transmitter may be a terminal device and the receiver an access network device.
To optimize the performance of the communication system, the above transmitter and receiver can be optimized. The transmitter and the receiver can each have an independent mathematical model, so they are usually optimized independently based on their respective models. For example, channel samples produced by a mathematical channel model can be used to optimize the transmitter and the receiver.
Because actual channels are non-ideal and nonlinear, channel samples produced by a protocol-defined mathematical channel model can hardly reflect the real channel environment. Yet transmitting a large number of actual channel samples between the transmitter and the receiver occupies excessive air-interface resources and affects data transmission efficiency.
Summary
A first aspect of the embodiments of this application provides a neural network training method, including:
a first device receives first channel sample information from a second device; the first device determines a first neural network, where the first neural network is trained on the first channel sample information and is used to infer second channel sample information.
Optionally, the description takes the first device as an access network device and the second device as a terminal device. It can be understood that the first device may be an access network device, a chip used in an access network device, or a circuit used in an access network device, and the second device may be a terminal device, a chip used in a terminal device, or a circuit used in a terminal device.
In one possible design, the method includes: the first device trains the first neural network on the first channel sample information. The first neural network is used to generate new channel sample information, for example the second channel sample information.
In one possible design, the method includes: the first device receives information about the first neural network from a third device and determines the first neural network based on that information. The first neural network is trained by the third device on the first channel sample information.
In this method, the first neural network can be trained on the first channel sample information and used to infer the second channel sample information, effectively reducing the air-interface signaling overhead of transmitting channel sample information.
In one possible design, the second channel sample information is used to train a second neural network and/or a third neural network, and the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
In one possible design, the method includes: the first device trains the second neural network and/or the third neural network on the second channel sample information, or on both the second channel sample information and the first channel sample information; the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
In one possible design, the method includes: the first device receives information about the second neural network and/or the third neural network from the third device; the second neural network and/or the third neural network are used by the first device and the second device to transmit target information, and are trained by the third device on the second channel sample information, or on both the second channel sample information and the first channel sample information.
With this method, air-interface signaling overhead can be effectively reduced while adapting to the channel environment; the trained second and third neural networks match the actual channel environment more closely, improving communication performance, and the speed of training the second and third neural networks is also greatly increased.
In one possible design, the method further includes: the first device sends a first reference signal to the second device. Optionally, the first reference signal includes a demodulation reference signal (DMRS) or a channel state information reference signal (CSI-RS). Optionally, the sequence type of the first reference signal includes a ZC sequence or a Gold sequence.
In one possible design, the first channel sample information includes, but is not limited to, a second reference signal and/or channel state information (CSI). The second reference signal is the first reference signal after propagation through the channel, or, in other words, the first reference signal as received by the second device from the first device.
In one possible design, the information about the first neural network includes: the model change of the first neural network relative to a reference neural network.
In one possible design, the information about the first neural network includes one or more of the following: weights of the neural network, activation functions of the neurons, the number of neurons in each layer of the neural network, the inter-layer cascade relationship of the neural network, and the network type of each layer of the neural network.
In one possible design, the first neural network is a generative neural network. Optionally, the first neural network is a generative adversarial network (GAN) or a variational autoencoder (VAE).
With this method, second channel sample information can be obtained that follows the same distribution as, or a distribution similar to, the first channel sample information, so that the second channel sample information more closely matches the actual channel environment.
In one possible design, the method further includes: the first device receives capability information of the second device from the second device. The capability information indicates one or more of the following:
1) whether the second device supports using a neural network to replace or implement the function of a communication module;
the communication module includes, but is not limited to: an OFDM modulation module, an OFDM demodulation module, a constellation mapping module, a constellation demapping module, a channel encoding module, a channel decoding module, a precoding module, an equalization module, an interleaving module, and/or a deinterleaving module;
2) whether the second device supports the network type of the third neural network;
3) whether the second device supports receiving the information about the third neural network through signaling;
4) the reference neural network stored by the second device;
5) the memory space the second device can use to store the third neural network; and
6) computing-capability information the second device can use to run neural networks.
With this method, the first device can receive the capability information sent by the second device. The capability information notifies the first device of relevant information about the second device, and the first device can perform the operations concerning the third neural network based on the capability information, ensuring that the second device can use the third neural network normally.
In one possible design, the method further includes: the first device sends information about the third neural network to the second device.
In this embodiment, after the first device finishes training the third neural network, the first device sends the information about the third neural network to the second device. This information includes, but is not limited to: weights of the neural network, activation functions of the neurons, the number of neurons in each layer of the neural network, the inter-layer cascade relationship of the neural network, and/or the network type of each layer of the neural network. For example, the information about the third neural network may indicate different activation functions for different neurons.
In one possible design, when the third neural network is preconfigured (or predefined) in the second device, the information about the third neural network may instead be the model change of the third neural network. The model change includes, but is not limited to: changed weights of the neural network, changed activation functions, the changed number of neurons in one or more layers, the changed inter-layer cascade relationship, and/or the changed network type of one or more layers. For example, the preconfiguration may be performed in advance by the access network device through signaling, and the predefinition may be specified in advance by a protocol, e.g., the protocol predefines that the third neural network in the terminal device is neural network A.
In the embodiments of this application, the information about the third neural network can be realized in multiple ways, improving the implementation flexibility of the solution.
In a second aspect, the embodiments of this application provide a neural network training method, including: a second device performs channel estimation based on a first reference signal received from a first device and determines first channel sample information; the second device sends the first channel sample information to the first device; the second device receives information about a third neural network from the first device, where the third neural network is used by the second device and the first device to transmit target information.
In one possible design, the method further includes: the second device sends capability information of the second device to the first device.
For descriptions of the first channel sample information, the information about the third neural network, the capability information of the second device, and so on, see the first aspect; details are not repeated here.
In a third aspect, the embodiments of this application propose a neural network training method, including:
a first device sends a first reference signal to a second device; the first device receives information about a first neural network from the second device, where the first neural network is used to infer second channel sample information.
In this method, the second device (e.g., a terminal device) trains the first neural network on the first channel sample information, and the first neural network is used to infer the second channel sample information, effectively reducing the air-interface signaling overhead of transmitting channel sample information.
In one possible design, the second channel sample information is used to train a second neural network and/or a third neural network, and the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
In one possible design, the method includes: the first device trains the second neural network and/or the third neural network on the second channel sample information, or on both the second channel sample information and the first channel sample information; the second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
In one possible design, the method includes: the first device receives information about the second neural network and/or the third neural network from a third device; the second neural network and/or the third neural network are used by the first device and the second device to transmit target information, and are trained by the third device on the second channel sample information, or on both the second channel sample information and the first channel sample information.
With this method, air-interface signaling overhead can be effectively reduced while adapting to the channel environment; the trained second and third neural networks match the actual channel environment more closely, improving communication performance, and the speed of training the second and third neural networks is also greatly increased.
Specifically, for descriptions of the first reference signal, the first neural network, the information about the first neural network, the second neural network and/or the third neural network, and so on, see the first aspect; details are not repeated.
In one possible design, the method further includes: the first device sends information about the third neural network to the second device.
Specifically, for the information about the third neural network, see the first aspect; details are not repeated.
In one possible design, the method further includes:
the first device receives capability information from the second device, indicating one or more of the following about the second device:
1) whether it supports using a neural network to replace or implement the function of a communication module;
2) whether it supports the network type of the first neural network;
3) whether it supports the network type of the third neural network;
4) whether it supports receiving information about a reference neural network through signaling, where the reference neural network is used to train the first neural network;
5) whether it supports receiving the information about the third neural network through signaling;
6) the stored reference neural network;
7) the memory space available for storing the first neural network and/or the third neural network;
8) computing-capability information available for running neural networks; and
9) location information of the second device.
In a fourth aspect, the embodiments of this application propose a neural network training method, including:
a second device performs channel estimation based on a first reference signal received from a first device and determines first channel sample information; the second device determines a first neural network, where the first neural network is trained on the first channel sample information; the second device sends information about the first neural network to the first device.
Specifically, for descriptions of the first reference signal, the first neural network, the first channel sample information, and the information about the first neural network, see the third aspect; details are not repeated.
In one possible design, the method further includes:
the second device receives information about a third neural network from the first device. For the information about the third neural network, see the third aspect; details are not repeated here.
In one possible design, the method further includes: the second device sends capability information to the first device. For the capability information, see the third aspect; details are not repeated here.
In a fifth aspect, the embodiments of this application propose a neural network training method, including: a third device receives first channel sample information from a first device; the third device trains a first neural network on the first channel sample information, where the first neural network is used to infer second channel sample information.
In one possible design, the method further includes: training a second neural network and/or a third neural network on the second channel sample information, and sending information about the second neural network and/or the third neural network to the first device, where the second neural network and/or the third neural network are used by the first device and a second device to transmit target information.
In a sixth aspect, an apparatus is provided. The apparatus may be an access network device, an apparatus in an access network device, or an apparatus that can be used in cooperation with an access network device.
In one possible design, the apparatus may include modules corresponding one-to-one to the methods/operations/steps/actions described in the first aspect; each module may be a hardware circuit, software, or a combination of hardware circuits and software. In one design, the apparatus may include a processing module and a transceiver module. For example,
the transceiver module is configured to receive first channel sample information from a second device;
the processing module is configured to determine a first neural network, where the first neural network is trained on the first channel sample information and is used to infer second channel sample information.
For descriptions of the first neural network, the first channel sample information, the second channel sample information, and other operations, see the first aspect; details are not repeated here.
In one possible design, the apparatus may include modules corresponding one-to-one to the methods/operations/steps/actions described in the third aspect; each module may be a hardware circuit, software, or a combination of hardware circuits and software. In one design, the apparatus may include a processing module and a transceiver module. For example,
the transceiver module is configured to send a first reference signal to the second device, and to receive information about the first neural network from the second device, where the first neural network is used to infer second channel sample information.
For descriptions of the first neural network, the first channel sample information, the second channel sample information, and other operations, see the third aspect; details are not repeated here.
In a seventh aspect, an apparatus is provided. The apparatus may be a terminal device, an apparatus in a terminal device, or an apparatus that can be used in cooperation with a terminal device.
In one possible design, the apparatus may include modules corresponding one-to-one to the methods/operations/steps/actions described in the second aspect; each module may be a hardware circuit, software, or a combination of hardware circuits and software. In one design, the apparatus may include a processing module and a transceiver module. For example,
the processing module is configured to perform channel estimation based on a first reference signal received from a first device and determine first channel sample information;
the transceiver module is configured to send the first channel sample information to the first device;
the transceiver module is further configured to receive information about a third neural network from the first device, where the third neural network is used by the second device and the first device to transmit target information.
For descriptions of the first reference signal, the first channel sample information, the third neural network, and other operations, see the second aspect; details are not repeated here.
In one possible design, the apparatus may include modules corresponding one-to-one to the methods/operations/steps/actions described in the fourth aspect; each module may be a hardware circuit, software, or a combination of hardware circuits and software. In one design, the apparatus may include a processing module and a transceiver module. For example,
the processing module is configured to perform channel estimation based on the first reference signal received from the first device and determine the first channel sample information;
the processing module is further configured to determine a first neural network, where the first neural network is trained on the first channel sample information;
the transceiver module is configured to send information about the first neural network to the first device.
For descriptions of the first reference signal, the first channel sample information, the first neural network, and other operations, see the fourth aspect; details are not repeated here.
In an eighth aspect, an apparatus is provided. The apparatus may be an AI node, an apparatus in an AI node, or an apparatus that can be used in cooperation with an AI node.
In one possible design, the apparatus may include modules corresponding one-to-one to the methods/operations/steps/actions described in the fifth aspect; each module may be a hardware circuit, software, or a combination of hardware circuits and software. In one design, the apparatus may include a processing module and a transceiver module. For example,
the transceiver module is configured to receive first channel sample information from a first device;
the processing module is configured to train a first neural network on the first channel sample information, where the first neural network is used to infer second channel sample information.
In one possible design, the processing module is further configured to train a second neural network and/or a third neural network on the second channel sample information; the transceiver module is further configured to send information about the second neural network and/or the third neural network to the first device, where the second neural network and/or the third neural network are used by the first device and a second device to transmit target information.
In a ninth aspect, an embodiment of this application provides an apparatus.
In one possible design, the apparatus includes a processor configured to implement the method described in the first aspect. The apparatus may further include a memory for storing instructions and data. The memory is coupled to the processor, and when the processor executes the instructions stored in the memory, the method described in the first aspect can be implemented. The apparatus may further include a communication interface for communicating with other devices; for example, the communication interface may be a transceiver, a circuit, a bus, a module, or another type of communication interface. In one possible design, the apparatus includes:
a memory for storing program instructions;
a processor configured to receive, through the communication interface, first channel sample information from a second device;
the processor is further configured to determine a first neural network, where the first neural network is trained on the first channel sample information and is used to infer second channel sample information.
For descriptions of the first neural network, the first channel sample information, the second channel sample information, and other operations, see the first aspect; details are not repeated here.
In one possible design, the apparatus includes a processor configured to implement the method described in the third aspect, and may likewise include a memory coupled to the processor and a communication interface as described above. In one possible design, the apparatus includes:
a memory for storing program instructions;
a processor configured to send, through the communication interface, a first reference signal to the second device, and to receive information about the first neural network from the second device, where the first neural network is used to infer second channel sample information.
For descriptions of the first neural network, the first channel sample information, the second channel sample information, and other operations, see the third aspect; details are not repeated here.
In a tenth aspect, an embodiment of this application provides an apparatus.
In one possible design, the apparatus includes a processor configured to implement the method described in the second aspect, and may likewise include a memory coupled to the processor and a communication interface as described above. In one possible design, the apparatus includes:
a memory for storing program instructions;
a processor configured to: perform channel estimation based on a first reference signal received, through the communication interface, from a first device and determine first channel sample information; send the first channel sample information to the first device; and receive information about a third neural network from the first device, where the third neural network is used by the second device and the first device to transmit target information.
For descriptions of the first reference signal, the first channel sample information, the third neural network, and other operations, see the second aspect; details are not repeated here.
In one possible design, the apparatus includes a processor configured to implement the method described in the fourth aspect, and may likewise include a memory coupled to the processor and a communication interface as described above. In one possible design, the apparatus includes:
a memory for storing program instructions;
a processor configured to: perform channel estimation based on the first reference signal received, through the communication interface, from the first device and determine the first channel sample information; determine a first neural network and send information about the first neural network to the first device, where the first neural network is trained on the first channel sample information.
For descriptions of the first reference signal, the first channel sample information, the first neural network, and other operations, see the fourth aspect; details are not repeated here.
In an eleventh aspect, an embodiment of this application provides an apparatus. The apparatus includes a processor configured to implement the method described in the fifth aspect, and may likewise include a memory coupled to the processor and a communication interface as described above. In one possible design, the apparatus includes:
a memory for storing program instructions;
a processor configured to: receive, through the communication interface, first channel sample information from a first device, and train a first neural network on the first channel sample information, where the first neural network is used to infer second channel sample information.
In one possible design, the processor is further configured to train a second neural network and/or a third neural network on the second channel sample information, and to send, through the communication interface, information about the second neural network and/or the third neural network to the first device, where the second neural network and/or the third neural network are used by the first device and a second device to transmit target information.
In a twelfth aspect, an embodiment of this application further provides a computer-readable storage medium including instructions that, when run on a computer, cause the computer to perform any one of the methods of the first to fifth aspects.
In a thirteenth aspect, an embodiment of this application further provides a computer program product including instructions that, when run on a computer, cause the computer to perform any one of the methods of the first to fifth aspects.
In a fourteenth aspect, an embodiment of this application provides a chip system. The chip system includes a processor and may further include a memory, configured to implement any one of the methods of the first to fifth aspects. The chip system may consist of a chip, or may include a chip and other discrete components.
In a fifteenth aspect, an embodiment of this application further provides a communication system, including:
the apparatus of the sixth aspect and the apparatus of the seventh aspect; or
the apparatus of the sixth aspect, the apparatus of the seventh aspect, and the apparatus of the eighth aspect; or
the apparatus of the ninth aspect and the apparatus of the tenth aspect; or
the apparatus of the ninth aspect, the apparatus of the tenth aspect, and the apparatus of the eleventh aspect.
Brief description of drawings
Fig. 1 is a schematic diagram of a network architecture provided by an embodiment of this application;
Fig. 2 is a schematic diagram of the hardware structure of a communication apparatus provided by an embodiment of this application;
Fig. 3 is a schematic diagram of a neuron structure provided by an embodiment of this application;
Fig. 4 is a schematic diagram of the layer relationships of a neural network provided by an embodiment of this application;
Fig. 5 is a schematic diagram of a convolutional neural network (CNN) provided by an embodiment of this application;
Fig. 6 is a schematic diagram of a recurrent neural network (RNN) provided by an embodiment of this application;
Fig. 7 is a schematic diagram of a generative adversarial network (GAN) provided by an embodiment of this application;
Fig. 8 is a schematic diagram of a variational autoencoder provided by an embodiment of this application;
Fig. 9 is a schematic diagram of a neural network architecture for jointly optimized constellation modulation/demodulation at the transmitter and receiver, provided by an embodiment of this application;
Figs. 10-12 are flowcharts of neural network training methods provided by embodiments of this application;
Fig. 13 is a schematic structural diagram of a generator network of the first neural network provided by an embodiment of this application;
Fig. 14 is a schematic structural diagram of a discriminator network of the first neural network provided by an embodiment of this application;
Fig. 15a is a schematic diagram of a generator network structure of the first neural network provided by an embodiment of this application;
Fig. 15b is a schematic diagram of a discriminator network structure of the first neural network provided by an embodiment of this application;
Figs. 16a and 16b are schematic diagrams of network structures provided by embodiments of this application;
Fig. 17 is a schematic diagram of a communication apparatus provided by an embodiment of this application.
Description of embodiments
A wireless communication system includes communication devices, which can communicate wirelessly using radio resources. The communication devices may include access network devices and terminal devices; an access network device may also be called an access-side device. The radio resources may include link resources and/or air-interface resources, and the air-interface resources may include at least one of time-domain resources, frequency-domain resources, code resources, and spatial resources. In the embodiments of this application, "at least one (item)" may also be described as "one (item) or more (items)", where "more" may mean two, three, four, or more; this application does not limit it.
In the embodiments of this application, "/" may indicate an "or" relationship between the associated objects; for example, A/B may mean A or B. "And/or" may describe three relationships between associated objects; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. For ease of describing the technical solutions, the terms "first", "second", and so on may be used to distinguish technical features with identical or similar functions. These terms do not limit quantity or execution order, nor do they imply that the features are necessarily different. In the embodiments of this application, words such as "exemplary" or "for example" indicate an example, illustration, or explanation; any embodiment or design described as "exemplary" or "for example" should not be interpreted as preferable to, or more advantageous than, other embodiments or designs. Such words are intended to present related concepts in a concrete way for ease of understanding.
Fig. 1 is a schematic diagram of a network architecture applicable to the embodiments of this application. The communication system in the embodiments may be a system including an access network device (the base station shown in Fig. 1) and terminal devices, or a system including two or more terminal devices. In this communication system, the access network device may send configuration information to a terminal device, and the terminal device configures itself accordingly. The access network device may send downlink data to the terminal device, and/or the terminal device may send uplink data to the access network device. In a communication system including two or more terminal devices (such as a vehicle network), terminal device 1 may send configuration information to terminal device 2, which configures itself accordingly; terminal device 1 may send data to terminal device 2, and terminal device 2 may also send data to terminal device 1. Optionally, in the communication system shown in Fig. 1, the access network device may implement one or more of the following artificial intelligence (AI) functions: model training and inference. Optionally, the network side may include a node independent of the access network device for implementing one or more of these AI functions: model training and inference. This node may be called an AI node, a model training node, an inference node, a radio intelligent controller, or another name, without limitation. For example, the access network device may implement the model training function and the inference function; or the AI node may implement both; or the AI node may implement the model training function and send the model information to the access network device, which implements the inference function. Optionally, if the AI node implements the inference function, the AI node may send the inference result to the access network device for the access network device to use, and/or the AI node may send the inference result through the access network device to the terminal device for the terminal device to use; if the access network device implements the inference function, it may use the inference result itself, or send the inference result to the terminal device for the terminal device to use. If the AI node implements both the model training function and the inference function, the AI node may be separated into two nodes, one implementing model training and the other implementing inference.
The embodiments of this application do not limit the specific number of network elements in the communication system involved.
The terminal device involved in the embodiments of this application may also be called a terminal or an access terminal, and may be a device with wireless transceiver functionality. The terminal device may communicate with one or more core networks (CNs) through an access network device. The terminal device may be a subscriber unit, subscriber station, mobile station, mobile, remote station, remote terminal, mobile device, user terminal, user equipment (UE), user agent, or user apparatus. The terminal device may be deployed on land (indoors or outdoors, handheld or vehicle-mounted), on water (e.g., on ships), or in the air (e.g., on aircraft, balloons, and satellites). The terminal device may be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a smart phone, a mobile phone, a wireless local loop (WLL) station, or a personal digital assistant (PDA). The terminal device may also be a handheld device with wireless communication functionality, a computing device or other device, a vehicle-mounted device, a wearable device, an unmanned aerial vehicle, a terminal in the Internet of things or a vehicle network, a terminal in a fifth generation (5G) mobile communication network, a relay user equipment, or a terminal in a future evolved mobile communication network, etc. The relay user equipment may, for example, be a 5G residential gateway (RG). As further examples, the terminal device may be a virtual reality (VR) terminal, an augmented reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, or a wireless terminal in a smart home, etc. The embodiments of this application do not limit this. In the embodiments of this application, the apparatus implementing the functions of the terminal may be a terminal, or an apparatus capable of supporting the terminal in implementing those functions, such as a chip system; the apparatus may be installed in the terminal or used in cooperation with the terminal. In the embodiments of this application, a chip system may consist of a chip or include a chip and other discrete components.
An access network device can be regarded as a sub-network of the operator network and as the implementation system between the service nodes of the operator network and the terminal devices. To access the operator network, a terminal device may first pass through the access network device and can then connect to the operator network's service nodes through the access network device. The access network device in the embodiments of this application is a device located in a (radio) access network ((R)AN) that can provide wireless communication functionality for terminal devices. Access network devices include base stations, including but not limited to: a next generation node B (gNB) in a 5G system, an evolved node B (eNB) in a long term evolution (LTE) system, a radio network controller (RNC), a node B (NB), a base station controller (BSC), a base transceiver station (BTS), a home base station (e.g., home evolved node B or home node B, HNB), a baseband unit (BBU), a transmitting and receiving point (TRP), a transmitting point (TP), a pico base station, a mobile switching center, or an access network device in a future network, etc. In systems using different radio access technologies, devices with access network device functionality may have different names. In the embodiments of this application, the apparatus implementing the functions of the access network device may be an access network device, or an apparatus capable of supporting the access network device in implementing those functions, such as a chip system, installed in the access network device or used in cooperation with the access network device.
The technical solutions provided in the embodiments of this application can be applied to various communication systems, e.g., an LTE system, a 5G system, a wireless-fidelity (WiFi) system, a future sixth generation mobile communication system, or a system converging multiple communication systems; the embodiments of this application do not limit this. 5G may also be called new radio (NR).
The technical solutions provided in the embodiments of this application can be applied to various communication scenarios, for example one or more of the following: enhanced mobile broadband (eMBB) communication, ultra-reliable low-latency communication (URLLC), machine type communication (MTC), massive machine-type communications (mMTC), device-to-device (D2D) communication, vehicle to everything (V2X) communication, vehicle to vehicle (V2V) communication, and the Internet of things (IoT). In the embodiments of this application, the term "communication" may also be described as "transmission", "information transmission", "data transmission", or "signal transmission". Transmission may include sending and/or receiving. The technical solutions are described in the embodiments using communication between an access network device and a terminal device as an example; those skilled in the art may also apply the solutions to communication between other scheduling and subordinate entities, such as communication between a macro base station and a micro base station, and/or communication between terminal device 1 and terminal device 2.
Moreover, the network architectures and service scenarios described in this application are intended to explain the technical solutions of this application more clearly and do not limit the technical solutions provided in this application. Those skilled in the art will appreciate that, as network architectures evolve and new service scenarios emerge, the technical solutions provided in this application remain applicable to similar technical problems.
Fig. 2 is a schematic diagram of the hardware structure of a communication apparatus in an embodiment of this application. The communication apparatus is one possible implementation of the AI node, access network device, or terminal device in the embodiments of this application. The communication apparatus may be an AI node, an apparatus in an AI node, or an apparatus that can be used in cooperation with an AI node. The communication apparatus may be an access network device, an apparatus in an access network device, or an apparatus that can be used in cooperation with an access network device. The communication apparatus may be a terminal device, an apparatus in a terminal device, or an apparatus that can be used in cooperation with a terminal device. The apparatus may be a chip system; in the embodiments of this application, a chip system may consist of a chip or include a chip and other discrete components. The connections in the embodiments of this application are indirect couplings or communication connections between apparatuses, units, or modules, which may be electrical, mechanical, or in other forms, and are used for information exchange between apparatuses, units, or modules.
As shown in Fig. 2, the communication apparatus includes at least one processor 204, configured to implement the technical solutions provided in the embodiments of this application. Optionally, the communication apparatus may further include a memory 203. The memory 203 stores instructions 2031 and/or data 2032 and is connected to the processor 204. The processor 204 may operate in cooperation with the memory 203 and may execute the instructions stored in the memory 203 to implement the technical solutions provided in the embodiments of this application. The communication apparatus may further include a transceiver 202 for receiving and/or sending signals. Optionally, the communication apparatus may further include one or more of the following: an antenna 206, an input/output (I/O) interface 210, and a bus 212. The transceiver 202 further includes a transmitter 2021 and a receiver 2022. The processor 204, the transceiver 202, the memory 203, and the I/O interface 210 are communicatively connected to one another through the bus 212, and the antenna 206 is connected to the transceiver 202. The bus 212 may include an address bus, a data bus, and/or a control bus; in Fig. 2 only one thick line represents the bus 212, but this does not mean the bus 212 is only one bus or one type of bus.
The processor 204 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array, and/or another programmable logic device, e.g., a discrete gate or transistor logic device and/or a discrete hardware component. A general-purpose processor may be a microprocessor or any conventional processor. The processor 204 can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The steps of the methods disclosed in the embodiments of this application may be embodied as being completed by a hardware processor, or completed by a combination of hardware and software modules in a processor. For example, the processor 204 may be a central processing unit (CPU), or a dedicated processor, such as, but not limited to, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), and/or a field-programmable gate array (FPGA). The processor 204 may also be a neural processing unit (NPU). In addition, the processor 204 may be a combination of multiple processors. In the technical solutions provided in the embodiments of this application, the processor 204 may be configured to perform the relevant steps in the subsequent method embodiments. The processor 204 may be a processor specially designed to perform the above steps and/or operations, or a processor that performs them by reading and executing the instructions 2031 stored in the memory 203; the processor 204 may need the data 2032 in the process of performing the above steps and/or operations.
The transceiver 202 includes the transmitter 2021 and the receiver 2022. In an optional implementation, the transmitter 2021 is configured to send signals through at least one of the antennas 206, and the receiver 2022 is configured to receive signals (e.g., the second reference signal) through at least one of the antennas 206.
In the embodiments of this application, the transceiver 202 is configured to support the communication apparatus in performing the receiving function and the sending function, and a processor with processing functionality is regarded as the processor 204. The receiver 2022 may also be called an input port, a receive circuit, a receive bus, or another apparatus implementing the receiving function; the transmitter 2021 may be called a transmit port, a transmit circuit, a transmit bus, or another apparatus implementing the transmitting function. The transceiver 202 may also be called a communication interface.
The processor 204 may be configured to execute the instructions stored in the memory 203, for example to control the transceiver 202 to receive and/or send messages, completing the functions of the communication apparatus in the method embodiments of this application. As one implementation, the functions of the transceiver 202 may be implemented by a transceiver circuit or a dedicated transceiver chip. In the embodiments of this application, the transceiver 202 receiving a message can be understood as a message being input to the transceiver 202, and the transceiver 202 sending a message as the transceiver 202 outputting a message.
The memory 203 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory such as a random-access memory (RAM). The memory is any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of this application may also be a circuit or any other apparatus capable of implementing a storage function, for storing program instructions and/or data. The memory 203 is specifically configured to store the instructions 2031 and the data 2032; the processor 204 can perform the steps and/or operations in the method embodiments of this application by reading and executing the instructions 2031 stored in the memory 203, and the data 2032 may be needed in the process of performing the operations and/or steps in the method embodiments of this application.
Optionally, the communication apparatus may further include the I/O interface 210, which is configured to receive instructions and/or data from peripheral devices and to output instructions and/or data to peripheral devices.
Next, some concepts involved in the embodiments of this application are introduced:
Machine learning (ML) has attracted wide attention from academia and industry in recent years. Because of machine learning's great advantages when facing structured information and massive data, many researchers in the communication field have also turned their attention to machine learning. Machine-learning-based communication technology has great potential in signal classification, channel estimation, and/or performance optimization. Most communication systems are designed block by block, meaning that these systems consist of multiple modules. For such module-based communication architectures, many techniques can be developed to optimize the performance of each module. But optimal performance of each module does not necessarily mean optimal performance of the whole communication system; end-to-end optimization (i.e., optimizing the entire communication system) may outperform optimizing individual models. Machine learning provides an advanced and powerful tool for maximizing end-to-end performance as far as possible. In wireless communication systems, channel conditions change rapidly in complex, large-scale communication scenarios. Many traditional communication models, such as massive multiple input multiple output (MIMO) models, rely heavily on channel state information, and their performance deteriorates under nonlinear time-varying channels. Therefore, accurately obtaining the channel state information (CSI) of time-varying channels is critical to system performance. By leveraging machine learning techniques, it may become possible for a communication system to learn abruptly changing channel models and feed back the channel state in time.
Based on these considerations, using machine learning techniques in wireless communication can meet the new demands of future wireless communication scenarios.
Machine learning is an important technical path to realizing artificial intelligence. Machine learning can include supervised learning, unsupervised learning, and reinforcement learning.
Supervised learning uses collected sample values and sample labels to learn, with a machine learning algorithm, the mapping from sample values to sample labels, and expresses the learned mapping with a machine learning model. A sample label may also simply be called a label. Training a machine learning model is the process of learning this mapping. In signal detection, for example, the noisy received signal is the sample and the true constellation point corresponding to the signal is the label; machine learning is expected to learn, through training, the mapping between sample values and labels, i.e., to make the machine learning model learn a signal detector. During training, the model parameters are optimized by computing the error between the model's prediction and the true label. After the mapping is learned, it can be used to predict the labels of new samples. The mapping learned by supervised learning may include a linear or nonlinear mapping. Depending on the type of label, learning tasks can be divided into classification tasks and regression tasks.
Unsupervised learning uses algorithms to discover the intrinsic patterns of collected sample values on its own. One class of unsupervised algorithms uses the samples themselves as the supervision signal, i.e., the model learns a mapping from samples to samples; this is also called self-supervised learning. During training, model parameters are optimized by computing the error between the model's prediction and the sample itself. For example, self-supervised learning can be used in applications of signal compression and decompression/recovery; common algorithms include autoencoders and generative adversarial networks.
Reinforcement learning differs from supervised learning: it is a class of algorithms that learn problem-solving policies by interacting with the environment. Unlike supervised and unsupervised learning, a reinforcement learning problem has no explicit "correct" action label data; the algorithm must interact with the environment, obtain the reward signal fed back by the environment, and then adjust its decision actions to obtain a larger reward value. In downlink power control, for example, a reinforcement learning model adjusts the downlink transmit power of each user according to the total system throughput fed back by the wireless network, aiming to obtain higher system throughput. The goal of reinforcement learning is likewise to learn the mapping between environment states and optimal decision actions. But because the "correct action" labels cannot be obtained in advance, the network cannot be optimized by computing the error between actions and "correct actions"; reinforcement learning training is realized through iterative interaction with the environment.
A deep neural network (DNN) is a concrete implementation form of machine learning. According to the universal approximation theorem, a DNN can in theory approximate any continuous function, giving it the ability to learn arbitrary mappings. Traditional communication systems require rich expert knowledge to design communication modules, whereas a DNN-based deep learning communication system can automatically discover implicit pattern structures from large data sets, establish mappings between data, and achieve performance superior to traditional modeling methods.
The idea of DNNs comes from the neuron structure of brain tissue. Each neuron performs a weighted sum over its input values and passes the weighted-sum result through an activation function to produce an output. Fig. 3 is a schematic diagram of a neuron structure. Suppose the input of a neuron is x = [x_0, x_1, …, x_n], the corresponding weights are w = [w_0, w_1, …, w_n], and the bias of the weighted sum is b. The activation function can take many forms; as an example, if the activation function of a neuron is y = f(x) = max{0, x}, then the output of the neuron is:

y = f( Σ_{i=0..n} w_i · x_i + b ) = max{ 0, Σ_{i=0..n} w_i · x_i + b }

where w_i · x_i denotes the product of w_i and x_i, the data type of b is integer or floating point, and the data type of w_i is integer or floating point. A DNN generally has a multi-layer structure, and each layer of the DNN may contain one or more neurons. The input layer of the DNN passes the received values, after neuron processing, to the intermediate hidden layer. Similarly, a hidden layer passes its computation results to the adjacent next hidden layer or the adjacent output layer, producing the final output of the DNN. Fig. 4 is a schematic diagram of the layer relationships of a neural network.
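The weighted-sum-plus-ReLU computation described above can be sketched in a few lines of pure Python; the input, weight, and bias values below are illustrative and not taken from the patent:

```python
def neuron_output(x, w, b):
    """One neuron: weighted sum of inputs plus bias, passed through
    the ReLU activation y = max{0, sum_i(w_i * x_i) + b}."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(0.0, z)

# Illustrative values: 1*0.5 + 2*(-0.25) + 3*0.1 + 0.2 ≈ 0.5
out = neuron_output([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], 0.2)
```

With a negative pre-activation the ReLU clamps the output to zero, which is what gives the network its nonlinearity.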
A DNN generally has one or more hidden layers, which affect its ability to extract information and fit functions. Increasing the number of hidden layers of a DNN or expanding the number of neurons in each layer can both improve the DNN's function-fitting capability. Each neuron's parameters include its weights, bias, and activation function; the set of parameters of all neurons in a DNN is called the DNN parameters (or neural network parameters). Neuron weights and biases can be optimized through the training process, giving the DNN the ability to extract data features and express mappings. A DNN generally uses supervised or unsupervised learning strategies to optimize its neural network parameters.
According to how the network is constructed, DNNs may include feedforward neural networks (FNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs).
Fig. 4 shows an FNN, characterized by full pairwise connections between the neurons of adjacent layers, which usually requires a large amount of storage space and leads to high computational complexity.
Fig. 5 is a schematic diagram of a CNN. A CNN is a neural network for processing data with a grid-like structure. For example, both time-series data (discrete sampling on the time axis) and image data (two-dimensional discrete sampling) can be regarded as grid-like data. Instead of using all the input information at once, a CNN uses a window (e.g., of fixed size) to capture part of the information for convolution, greatly reducing the computation of neural network parameters. Moreover, depending on the type of information captured by the window (e.g., people and objects in the same picture are different types of information), each window can use a different convolution kernel, enabling the CNN to better extract features of the input data. The convolutional layers perform feature extraction to obtain feature maps. The pooling layers compress the input feature maps, making them smaller and simplifying the network's computational complexity. The fully connected layers map the learned "distributed feature representation" to the sample label space. For example, in Fig. 5, the probability that the image is the sun is judged to be 0.7, the moon 0.1, a car 0.05, and a house 0.02.
Fig. 6 is a schematic diagram of an RNN. An RNN is a class of DNN that uses fed-back time-series information: its input includes the new input value at the current time and its own output value at the previous time. RNNs are suited to capturing sequence features with temporal correlation, and are especially applicable to speech recognition, channel encoding/decoding, and similar applications. Referring to Fig. 6, the inputs of a neuron at multiple times produce multiple outputs. For example, at time 0 the input is x_0 and s_0 and the output is y_0 and s_1; at time 1 the input is x_1 and s_1 and the output is y_1 and s_2; …; at time t the input is x_t and s_t and the output is y_t and s_{t+1}.
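The recurrence just described (input x_t and state s_t producing output y_t and next state s_{t+1}) can be sketched as a minimal recurrent cell; the tanh activation and the scalar weights are illustrative assumptions, not values from the patent:

```python
import math

def rnn_step(x_t, s_t, w_x=0.5, w_s=0.8):
    """One RNN time step: the next state s_{t+1} mixes the current input
    x_t with the previous state s_t; the output y_t is read from it."""
    s_next = math.tanh(w_x * x_t + w_s * s_t)
    return s_next, s_next  # (y_t, s_{t+1}); here the output equals the state

# Unroll over a short sequence x_0, x_1, x_2 starting from s_0 = 0:
s = 0.0
outputs = []
for x in [1.0, 0.5, -1.0]:
    y, s = rnn_step(x, s)
    outputs.append(y)
```

The carried state `s` is what lets the cell capture temporal correlation across the sequence.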
A generative neural network is a special class of deep learning neural network. Unlike ordinary neural networks, which mainly perform classification and prediction tasks, a generative neural network can learn the probability distribution function followed by a set of training samples, and can therefore be used to model random variables or to build conditional probability distributions between variables. Common generative neural networks include the generative adversarial network (GAN) and the variational autoencoder (VAE).
Fig. 7 is a schematic diagram of a generative adversarial network (GAN). By function, a GAN includes two parts: a generator network (generator for short) and a discriminator network (discriminator for short). Specifically, the generator network processes input random noise and outputs generated samples. The discriminator network compares the generated samples output by the generator network with the training samples in the training set, judges whether the generated samples and the training samples approximately follow similar probability distributions, and outputs a true/false judgment. For example, taking generated samples and training samples that follow a normal distribution: when the mean of the generated samples is consistent with the mean of the training samples, and the difference between the variance of the generated samples and the variance of the training samples is less than a threshold (e.g., 0.01), the generated samples and training samples are regarded as following similar probability distributions. The generator network and the discriminator network are in a game relationship: the generator wants to generate samples following the training-set distribution as closely as possible, while the discriminator wants to distinguish generated samples from the training set as well as possible. By jointly training the generator and discriminator networks, the two can reach an equilibrium state, i.e., the probability distribution followed by the generator's output samples is similar to that followed by the training samples, and the discriminator judges the generated samples and the training set to follow similar distributions. Optionally, such similar distributions may be called identical distributions.
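The acceptance criterion in the normal-distribution example above (means agree, variance gap below a threshold such as 0.01) can be sketched as a simple check; the mean tolerance is an illustrative assumption, since the text only says the means are "consistent":

```python
def similar_distribution(generated, training, var_threshold=0.01, mean_tol=1e-9):
    """Crude acceptance check: treat two sample sets as similarly
    distributed when their means agree (within mean_tol) and the gap
    between their variances is below var_threshold."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((u - m) ** 2 for u in v) / len(v)
    means_agree = abs(mean(generated) - mean(training)) <= mean_tol
    return means_agree and abs(var(generated) - var(training)) < var_threshold
```

A real GAN discriminator learns a far richer criterion than matching two moments; this sketch only mirrors the threshold example given in the text.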
Fig. 8 is a schematic diagram of a variational autoencoder. By function, a VAE includes three parts: an encoder network (encoder for short), a decoder network (decoder for short), and a discriminator network (discriminator for short). The encoder network compresses the samples of the input training set into an intermediate variable, and the decoder network tries to restore the intermediate variable into samples of the training set. At the same time, certain constraints can be imposed on the form of the intermediate variable: similar to the discriminator network in a GAN, a VAE may also use a discriminator network to judge whether the intermediate variable follows the distribution of random noise. By jointly training the encoder, decoder, and discriminator networks, the decoder network can use input random noise to generate samples that conform to the training-set distribution.
The FNN, CNN, RNN, GAN, and VAE above are neural network structures, all constructed on the basis of neurons.
Thanks to machine learning's advantages in modeling and extracting information features, communication schemes based on machine learning can be designed that achieve good performance, including but not limited to CSI compression feedback, adaptive constellation design, and/or robust precoding. These schemes replace the sending or receiving modules of the original communication system with neural network models, optimizing transmission performance or reducing processing complexity. To support different application scenarios, different neural network model information can be predefined or configured so that the neural network models can adapt to the requirements of different scenarios.
In the embodiments of this application, in a neural-network-based communication system, some communication modules of the access network device and/or the terminal device may adopt neural network models. Fig. 9 is a schematic diagram of a neural network architecture for jointly optimized constellation modulation/demodulation at the transmitter and receiver, where both the constellation mapping neural network at the transmitter (also called the sending end) and the constellation demapping neural network at the receiver (also called the receiving end) adopt neural network models. The transmitter's constellation mapping neural network maps the bit stream into constellation symbols, and the constellation demapping neural network demaps (or demodulates) the received constellation symbols into log-likelihood ratios of the bit information. By collecting channel data to train the neural networks, good end-to-end communication performance can be achieved. In a communication system, the transmitter performs a series of processing on the bit stream to be sent, which may include one or more of the following: channel coding, constellation symbol mapping and modulation, orthogonal frequency division multiplexing (OFDM) modulation, layer mapping, precoding, and up-conversion. For convenience of presentation, Fig. 9 lists only constellation mapping symbol modulation and OFDM modulation.
It should be noted that the neural networks involved in the embodiments of this application are not limited to any particular application scenario and can be applied to any communication scenario, such as CSI compression feedback, adaptive constellation design, and/or robust precoding.
The above neural networks require adaptive training to guarantee communication performance. For example, for each different set of radio system parameters (including one or more of radio channel type, bandwidth, number of receive antennas, number of transmit antennas, modulation order, number of paired users, channel coding method, and code rate), a corresponding set of neural network model information (including neural network structure information and neural network parameters) is defined. Taking artificial intelligence (AI)-based adaptive modulation constellation design as an example: when there is 1 transmit antenna and 2 receive antennas, one set of neural network model information is needed to generate the corresponding modulation constellation; when there is 1 transmit antenna and 4 receive antennas, another set of neural network model information is needed. Similarly, different radio channel types, bandwidths, modulation orders, numbers of paired users, channel coding methods, and/or code rates may all correspond to different neural network model information.
To make the (transmitter and receiver) neural networks perform well under actual channel conditions, real channel sample information must be used for joint training. Generally speaking, the more channel sample information used for training, the better the training effect. Sources of channel sample information include: 1. measured by the receiver in practice; 2. generated using a mathematical (channel) model.
Specifically: 1. The receiver (e.g., a UE) of a signal (e.g., a reference signal or a synchronization signal) measures the real channel and obtains channel sample information, which can reflect the channel environment fairly accurately. If the network element that trains the neural network using the channel sample information is the transmitter of the signal, after the receiver measures the channel sample information it feeds the information back to the transmitter (e.g., a base station), and the transmitter trains the neural network on it. But to improve the training effect and obtain a better-performing neural network, the signal's receiver must feed back a large amount of channel sample information to the signal's transmitter, which occupies a large amount of air-interface resources and affects data transmission efficiency between the two ends.
2. In one possible implementation, channel models can be built with mathematical expressions for different channel types, i.e., a mathematical (channel) model is used to simulate the real channel. For example, a protocol may define mathematical channel models such as the tapped delay line (TDL) and the clustered delay line (CDL). Each channel type can be subdivided into multiple subclasses; e.g., the TDL and CDL channels each include five subclasses A, B, C, D, and E. Each subclass is further subdivided by specific parameters into multiple typical channel scenarios, e.g., by multipath delay spread into typical scenarios of 10 nanoseconds (ns), 30 ns, 100 ns, 300 ns, or 1000 ns. Therefore, when training the transmitter/receiver neural networks, a mathematical channel model close to the actual environment can be chosen to produce a large amount of channel sample information approximating the actual channel, and training is performed with that information.
Although using a mathematical channel model can greatly reduce the signaling overhead of obtaining channel samples, it suffers from mismatch between the mathematical channel model and the actual channel model. For example, the TDL channel model assumes a finite number of reflection paths, and the channel coefficient of each path follows a simple Rayleigh distribution; but the number of reflection paths of real channels differs between environments, the Rayleigh distribution cannot accurately characterize the distribution of per-path channel coefficients, and the multipath delay spread also often varies with the scenario, so coarsely dividing it into a few typical values inevitably causes modeling error. Hence the mathematical channel modeling method can hardly characterize the actual channel model accurately, which in turn affects the training effect of the neural network; i.e., there is a data-model mismatch problem.
In summary, how to generate channel sample information close to the actual channel scenario while saving air-interface resources has become a problem to be solved urgently.
Next, the technical solutions proposed in this application are described with reference to the drawings. Fig. 10 is a flowchart of a neural network training method proposed in an embodiment of this application. The method includes:
Optionally, 1001: The first device sends a first reference signal to the second device.
In this embodiment, the description takes the first device as an access network device and the second device as a terminal device. It can be understood that the first device may be an access network device or a chip in an access network device, and the second device may be a terminal device or a chip in a terminal device. The first device may also be a terminal device or a chip in a terminal device, and the second device an access network device or a chip in an access network device, without limitation.
The first device sends a first reference signal to the second device. The first reference signal may be a synchronization signal, a synchronization signal and PBCH block (SSB), a demodulation reference signal (DMRS), a channel state information reference signal (CSI-RS), etc. The first reference signal may also be called a first signal, and may also be a newly defined reference signal, without limitation. The sequence type of the first reference signal includes, but is not limited to, a ZC (Zadoff-Chu) sequence or a Gold sequence. Optionally, the DMRS may be a DMRS of a physical downlink control channel (PDCCH) or a DMRS of a physical downlink shared channel (PDSCH).
Optionally, before step 1001, when the second device needs to use a neural network (the neural network used by the second device is called the third neural network), the second device sends capability information to the first device. The capability information indicates one or more of the following about the second device:
1) whether it supports using a neural network to replace or implement the function of a communication module;
the communication module includes, but is not limited to: an OFDM modulation module, an OFDM demodulation module, a constellation mapping module, a constellation demapping module, a channel encoding module, a channel decoding module, a precoding module, an equalization module, an interleaving module, and/or a deinterleaving module;
2) whether it supports the network type of the third neural network, or an indication of the supported network types of the third neural network;
the network type of the third neural network includes one or more of the following: a fully connected neural network, a radial basis function neural network, a convolutional neural network, a recurrent neural network, a Hopfield neural network, a restricted Boltzmann machine, or a deep belief network, etc. The third neural network may be any one of the above, or a combination of several of the above, without limitation;
3) whether it supports receiving the information about the third neural network through signaling;
4) the stored preconfigured third neural network;
for example, if a predefined or preconfigured third neural network already exists in the second device, the capability information may carry the identifier or index of the third neural network;
optionally, the capability information may also carry the identifiers or indexes of other predefined or preconfigured neural networks already existing in the second device;
5) the memory space available for storing the third neural network;
6) computing-capability information available for running neural networks;
the computing-capability information refers to information about the computing capability for running neural networks, e.g., the processor's computation speed and/or the amount of data the processor can handle.
Optionally, 1002: The second device performs channel estimation based on the first reference signal received from the first device and determines the first channel sample information.
In this embodiment, after receiving the first reference signal, the second device performs channel estimation on the first reference signal and determines the first channel sample information.
Specifically, for ease of description, the signal received at the second device after the first reference signal has been transmitted through the channel is called the second reference signal. Let the first reference signal be x and the second reference signal be y; the channel propagating the first reference signal can then be understood as a function with transition probability P(y|x).
The second device may configure the information of the first reference signal for the first device in advance through signaling, or the information of the first reference signal may be agreed by protocol, without limitation. The type of signaling is not limited in the embodiments of this application; for example, it may be a broadcast message, system information, radio resource control (RRC) signaling, a media access control (MAC) control element (CE), or downlink control information (DCI).
Since the second device knows in advance the information of the first reference signal x sent by the first device, after the second device receives the second reference signal y, the second device performs channel estimation based on the second reference signal y and the transmitted first reference signal x and determines the first channel sample information. At this point, the channel experienced by the first reference signal between being sent and being received can be estimated, for example the amplitude change and phase rotation experienced. The first channel sample information includes, but is not limited to, the second reference signal and/or channel state information (CSI).
The channel estimation algorithms used by the second device include, but are not limited to: least squares (LS) estimation or linear minimum mean square error (LMMSE) estimation.
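A minimal per-tone least-squares (LS) sketch of this estimation step, assuming a flat per-tone model y = h·x + n with a known pilot x; the pilot and channel values below are illustrative, not from the patent:

```python
def ls_channel_estimate(x_pilot, y_received):
    """Per-tone least-squares (LS) estimate: with a known pilot x and a
    flat per-tone model y = h*x + n, the LS solution is h_hat = y / x."""
    return [y / x for x, y in zip(x_pilot, y_received)]

# Noise-free illustration with made-up pilot and channel values:
x = [1 + 0j, 0 - 1j, -1 + 0j]
h_true = [0.8 + 0.3j] * 3
y = [h * xi for h, xi in zip(h_true, x)]
h_hat = ls_channel_estimate(x, y)
```

With noise present, LS returns h + n/x per tone; LMMSE additionally weights the estimate by channel and noise statistics to reduce that error.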
1003: The second device sends the first channel sample information to the first device.
In this embodiment, the second device sends the first channel sample information to the first device.
1004: The first device trains the first neural network on the first channel sample information.
In this embodiment, the first device trains the first neural network on the first channel sample information. The first neural network is used to generate new channel sample information.
Optionally, the first neural network is a generative neural network. The embodiments of this application take a GAN as an example of the first neural network; it can be understood that the first neural network may be another type of neural network, such as a VAE, without limitation. By function, the first neural network includes a generator network and a discriminator network, where the generator network is used to produce the second channel sample information, and the discriminator network is used to judge whether the newly produced second channel sample information and the first channel sample information from the second device follow similar probabilistic statistical characteristics. Using machine learning methods to jointly train the generator and discriminator networks of the first neural network can make the second channel sample information output by the generator network converge to the probability distribution of the first channel sample information from the second device. Examples are given below.
For example, when the first channel sample information is CSI (denote the CSI as h), see Figs. 13-14 for the structure of the first neural network. Fig. 13 is a schematic structural diagram of a generator network of the first neural network in the embodiments of this application, and Fig. 14 is a schematic structural diagram of a discriminator network of the first neural network. In the first neural network shown in Figs. 13-14, the generator network includes 5 convolutional layers. The input of the generator network is random noise z, including but not limited to white Gaussian noise; the output of the generator network is the second channel sample information ĥ.
The discriminator network includes 3 convolutional layers and 3 fully connected layers. One part of the discriminator's input is the generator network's output information ĥ; another part of its input is the first channel sample information h from the second device. The discriminator's output c is a binary variable indicating whether the second channel sample information output by the generator follows the probability distribution of the first channel samples, i.e., whether ĥ follows the probability distribution of h.
For example, when the first channel sample information is the second reference signal (y) introduced above, see Figs. 15a-15b for the structure of the first neural network. Fig. 15a is a schematic diagram of a generator network structure of the first neural network in the embodiments of this application, and Fig. 15b is a schematic diagram of a discriminator network structure of the first neural network.
In the first neural network shown in Figs. 15a-15b, the generator network consists of 7 convolutional layers; its input includes random noise z and the training sequence x, and its output is the sample ŷ. The discriminator network includes 4 convolutional layers and 4 fully connected layers; its input includes the sample ŷ produced by the generator network, the second reference signal sample y from the second device, and the corresponding first reference signal x. The discriminator's output c is a binary variable indicating whether ŷ follows the probability distribution P(y|x).
Suppose the first reference signal sent by the first device is x, and after propagation through the channel, the second reference signal received at the second device is y; the channel can then be understood as a function with transition probability P(y|x). The first neural network is expected to learn this probabilistic transfer characteristic of the channel: for each input x, it can produce an output y with probability P(y|x), i.e., the first neural network simulates the process of a signal propagating through the channel.
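The view of the channel as a transition probability P(y|x) can be sketched by a simple stochastic channel y = h·x + n; a trained generator would aim to reproduce this input-conditioned output distribution. The scalar gain and noise level here are illustrative assumptions:

```python
import random

def channel(x, h=0.9, noise_std=0.1, rng=None):
    """Stochastic channel: for each input x, an output y is drawn from
    P(y|x), modeled here as y = h*x + Gaussian noise."""
    rng = rng or random
    return h * x + rng.gauss(0.0, noise_std)

# Empirically, the conditional mean E[y|x] approaches h*x:
rng = random.Random(0)
x_in = 2.0
ys = [channel(x_in, rng=rng) for _ in range(20000)]
avg = sum(ys) / len(ys)
```

A generator that has learned P(y|x) would produce samples statistically indistinguishable from `ys` when given the same x.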
Specifically, the first device trains a reference neural network using the first channel sample information; the neural network obtained when training completes is called the first neural network. In the embodiments of this application, for ease of description, the neural network before training with the first channel sample information may be called the reference neural network. For example, the first device trains the reference neural network using first channel sample information from the second device comprising 5000 samples, thereby obtaining the first neural network.
In one optional implementation, the reference neural network may have the parameters of some neurons or the structure of the neural network preconfigured; it may also have other neural network information preconfigured, without limitation. For example, the reference neural network may be a neural network trained under a predefined channel environment for generating channel sample information (e.g., CSI or the second reference signal). The first device further trains this reference neural network with the first channel sample information to obtain the first neural network.
In another optional implementation, the reference neural network may be an initialized neural network. For example, the parameters of its neurons may be zero, random values, or other agreed values, and/or its structure may be a random or agreed initial structure, without limitation. The first device begins training this reference neural network with the first channel sample information, thereby obtaining the first neural network.
1005: The first device uses the first neural network to infer the second channel sample information.
In this embodiment, after the first device obtains the trained first neural network, it uses the first neural network to infer the second channel sample information.
For example, the first device inputs random noise into the first neural network, and the first neural network infers the second channel sample information. For example, when the first neural network is a GAN, inputting random noise into its generator network yields the second channel sample information; as another example, when the first neural network is a VAE, inputting random noise into its decoder network yields the second channel sample information.
The form of the second channel sample information is the same as that of the first channel sample information. For example, when the first channel sample information is CSI (h), the second channel sample information is CSI ĥ; when the first channel sample information is the second reference signal (y), the second channel sample information is the second reference signal ŷ.
Optionally, 1006: The first device trains the second neural network and/or the third neural network on the second channel sample information.
For example, when the first channel sample information is CSI (denote the CSI as h), see Fig. 16a for a schematic structure in which the first device trains the second neural network and/or the third neural network on the second channel sample information. The input of the first neural network (the generator network of the GAN) is white Gaussian noise, and its output is the second channel sample information ĥ. The second channel sample information ĥ and white Gaussian noise are input to a signal-over-channel module. For example, in the signal-over-channel module of Fig. 16a, the transmit-side signal is multiplied by the second channel sample ĥ, the product is added to white Gaussian noise, and the result serves as the input of the receiver. Using machine learning methods, the receiver-side neural network (e.g., the constellation demapping neural network) and the transmitter-side neural network (the constellation mapping neural network) can be trained end to end.
For example, when the first channel sample information is the second reference signal (y) introduced above, see Fig. 16b for a schematic structure in which the first device trains the second neural network and/or the third neural network on the second channel sample information. The first neural network (the generator network of the GAN) takes white Gaussian noise and the transmitted signal x as input, and its output is the second reference signal ŷ. By connecting the first neural network between the receiver and the transmitter, machine learning methods can be used to train the receiver-side neural network (e.g., the constellation demapping neural network) and the transmitter-side neural network (the constellation mapping neural network) end to end.
In this embodiment, after the first device generates the second channel sample information with the first neural network, the first device trains the second neural network and/or the third neural network on the second channel sample information. The second neural network and/or the third neural network are used by the first device and the second device to transmit target information.
Optionally, the first device trains the second neural network and/or the third neural network on the first channel sample information and the second channel sample information.
The second neural network and the third neural network are described separately below:
In the embodiments of this application, the second neural network is applied at the transmitter; it can be understood that the second neural network performs transmit processing of data, e.g., the constellation mapping neural network shown in Fig. 9. Optionally, the second neural network may perform operations such as rate matching and/or OFDM modulation. Rate matching means that the bits to be transmitted are repeated and/or punctured to match the carrying capacity of the physical channel, so that the bit rate required by the transport format is achieved during channel mapping. OFDM modulation mainly shifts the baseband spectrum to the radio-frequency band, implementing the wireless sending function. For example, the second neural network may perform one or more of the following functions: source coding (e.g., scrambling and/or signal compression), channel coding, constellation mapping, OFDM modulation, precoding, and/or filtering.
The network type of the second neural network includes, but is not limited to: a fully connected neural network, a radial basis function (RBF) neural network, a CNN, a recurrent neural network, a Hopfield neural network, a restricted Boltzmann machine, or a deep belief network, etc. The second neural network may be any one of the above, or a combination of several of the above, without limitation.
In the embodiments of this application, for ease of description, the first device is described as the transmitter, i.e., the second neural network is applied in the first device. It can be understood that the second device may also serve as the transmitter, i.e., the second neural network may be applied in the second device.
In the embodiments of this application, the third neural network is applied at the receiver; it can be understood that the third neural network performs receive processing on data from the transmitter, e.g., the constellation demapping neural network shown in Fig. 9. Specifically, the third neural network may perform processing such as source decoding (e.g., descrambling and/or signal decompression), channel decoding, constellation demapping, OFDM demodulation, signal detection, and/or equalization.
The network type of the third neural network includes, but is not limited to: a fully connected neural network, a radial basis function neural network, a convolutional neural network, a recurrent neural network, a Hopfield neural network, a restricted Boltzmann machine, or a deep belief network, etc. The third neural network may be any one of the above, or a combination of several of the above, without limitation.
It should be noted that the second neural network may use the same neural network as the third neural network, a different neural network, or a partially identical neural network, without limitation.
In the embodiments of this application, for ease of description, the second device is described as the receiver, i.e., the third neural network is applied in the second device. It can be understood that the first device may also serve as the receiver, i.e., the third neural network may be applied in the first device.
The second neural network and/or the third neural network are used to transmit target information between the first device and the second device. Based on the above description, it can be understood that the target information differs when the second and third neural networks perform different processing.
For example, when the second neural network is a constellation mapping neural network and the third neural network is a constellation demapping neural network, the target information is a sequence of modulation symbols.
For example, when the second neural network is a CSI compression neural network and the third neural network is a CSI decompression neural network, the target information is the compressed CSI or the CSI.
For example, when the second neural network is a channel encoding neural network and the third neural network is a channel decoding neural network, the target information is the bit information to be sent before encoding or the bit information to be sent after encoding.
For example, when the second neural network is an OFDM modulation neural network and the third neural network is an OFDM demodulation neural network, the target information is OFDM symbols.
For example, when the second neural network is a precoding neural network and the third neural network is an equalization neural network, the target information is the transmitted signal, which reaches the receiver after transmission through the channel.
The first device may train the second neural network alone, the third neural network alone, or the second and third neural networks jointly.
For example, when the second neural network is a precoding neural network at the transmitter (e.g., the first device), the first device may train the second neural network alone. The purpose of the precoding neural network is to generate, based on the spatial characteristics of the channel, the transmit weighting coefficients of each transmit antenna, so that multiple transmit data streams can be spatially separated and the received signal-to-interference-plus-noise ratio is improved. The input of the precoding neural network is the CSI of the channel, and its output is the precoding weights on each transmit antenna, i.e., the target information of the second neural network is the precoding weights on each transmit antenna.
For example, when the third neural network is an equalizer neural network at the receiver (e.g., the second device), the first device may train the third neural network alone. The purpose of the equalizer neural network is to cancel the distortion the channel imposes on the propagated signal and recover the signal transmitted by the transmitter. The input of the equalizer neural network is the CSI of the channel and the second reference signal at the receiver, and its output is the equalized, recovered signal, i.e., the target information of the third neural network is the equalized, recovered signal.
The first device may train the second neural network alone using the second channel sample information produced by the first neural network; it may also train the third neural network alone using the second channel sample information and locally generated transmit data, and send the information of the trained third neural network to the second device for use. The first device may also jointly train the second and third neural networks to achieve optimal end-to-end communication performance.
After the first device finishes training the third neural network, step 1007 may be performed. Training completion of the third neural network includes, but is not limited to: the number of training samples used reaches a preset threshold, e.g., when the number of second channel samples used to train the third neural network reaches 45000, the third neural network is regarded as trained; or the number of training iterations reaches a preset threshold, e.g., when the third neural network has been trained 30000 times, the third neural network is regarded as trained.
Optionally, 1007: The first device sends the information about the third neural network to the second device.
In this embodiment, after the first device finishes training the third neural network, the first device sends the information about the third neural network to the second device. This information includes, but is not limited to: weights of the neural network, activation functions of the neurons, the number of neurons in each layer of the neural network, the inter-layer cascade relationship of the neural network, and/or the network type of each layer of the neural network. For example, the information about the third neural network may indicate the same or different activation functions for different neurons, without limitation.
Optionally, when the third neural network is preconfigured (or predefined) in the second device, the information about the third neural network may instead be the model change of the third neural network. The model change includes, but is not limited to: changed weights of the neural network, changed activation functions, the changed number of neurons in each layer, the changed inter-layer cascade relationship, and/or the changed network type of each layer. For example, the preconfiguration may be performed in advance for the terminal device by the access network device through signaling, and the predefinition may be specified in advance by a protocol, e.g., the protocol predefines that the third neural network in the terminal device is neural network A.
The first device sends the information about the third neural network to the second device through signaling.
In the embodiments of this application, signaling includes, but is not limited to: broadcast messages (e.g., master information block (MIB)), system messages (e.g., system information block (SIB)), radio resource control (RRC) signaling, medium access control control element (MAC CE), and/or downlink control information (DCI). The MAC CE and/or DCI may be a common message for multiple devices or a message specific to a certain device (e.g., the second device).
The second device applies the third neural network in the second device based on the information about the third neural network.
After the first device finishes training the second neural network, the first device applies the second neural network.
When the second neural network is preconfigured (or predefined) in the first device, the first device updates its local second neural network based on the training result of step 1006. When the third neural network is preconfigured in the second device, the second device updates its local third neural network based on the information about the third neural network in step 1007. When there is no third neural network in the second device, the information about the third neural network in step 1007 may include the complete third neural network, and the second device configures the trained third neural network locally based on this information.
Afterwards, if the target information is downlink information, the first device can use the second neural network to process the target information (i.e., as the input of the second neural network) or to obtain the target information (i.e., as the output of the second neural network). The second device can use the third neural network to process the target information (i.e., as the input of the third neural network) or to recover the target information (i.e., as the output of the third neural network). For example, the first device uses the second neural network to obtain modulation symbols, and the second device uses the third neural network to recover bits from the received modulation symbols.
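As a conventional (non-neural) stand-in for these two roles, the sketch below maps bits to QPSK symbols on the transmit side and recovers the bits on the receive side by nearest-symbol decision; the Gray mapping used is an illustrative assumption, not the patent's learned constellation:

```python
# Illustrative Gray-mapped QPSK constellation: 2 bits -> one complex symbol.
QPSK = {
    (0, 0): 1 + 1j,
    (0, 1): -1 + 1j,
    (1, 1): -1 - 1j,
    (1, 0): 1 - 1j,
}

def modulate(bits):
    """Transmit-side role (cf. the second neural network): bits -> symbols."""
    return [QPSK[pair] for pair in zip(bits[0::2], bits[1::2])]

def demodulate(symbols):
    """Receive-side role (cf. the third neural network): recover bit pairs
    by nearest-symbol decision."""
    bits = []
    for s in symbols:
        pair = min(QPSK, key=lambda b: abs(QPSK[b] - s))
        bits.extend(pair)
    return bits

tx_bits = [0, 1, 1, 0, 1, 1]
rx_bits = demodulate(modulate(tx_bits))
```

A learned constellation mapping/demapping network would replace this fixed table and decision rule with functions trained end to end over the channel samples.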
If the target information is uplink information, the second device can use the third neural network to process the target information (i.e., as the input of the third neural network) or to obtain the target information (i.e., as the output of the third neural network). The first device can use the second neural network to process the target information (i.e., as the input of the second neural network) or to recover the target information (i.e., as the output of the second neural network). For example, the second device uses the third neural network to compress CSI, obtaining compressed CSI; the first device uses the second neural network to recover the CSI from the received compressed CSI.
In the embodiments of this application, the second device (e.g., a terminal device) can send a relatively small amount of first channel sample information to the first device (e.g., an access network device). The first device trains the first neural network on the first channel sample information, and the first neural network is used to infer the second channel sample information. The first device trains the second neural network and/or the third neural network on the second channel sample information, or on both the second channel sample information and the first channel sample information; the second neural network and/or the third neural network are used by the first device and the second device to transmit target information. With this method, air-interface signaling overhead can be effectively reduced while adapting to the channel environment; the trained second and third neural networks match the actual channel environment more closely, improving communication performance, and the speed of training the second and third neural networks is also greatly increased.
Similar to Fig. 10 above, the steps of training the first neural network, the training steps of the second neural network, and/or the training steps of the third neural network can alternatively be performed by another device independent of the first device. In the embodiments of this application, for ease of description, this other device is called the third device. The third device may be the AI node described above, a mobile edge computing device, a cloud server, etc., without limitation. The reference signal pool learned by the third device may be agreed in the protocol after offline learning, sent to the first device over the interface between the third device and the first device, or forwarded to the first device through another network element, without limitation. The samples the third device needs for model training, e.g., the first channel sample information, may be sent to the third device directly or indirectly by the first device, or directly or indirectly by the second device, without limitation.
Fig. 11 is another flowchart of a neural network training method proposed in an embodiment of this application. The method includes:
Optionally, 1101: The first device sends a first reference signal to the second device.
1102: The second device performs channel estimation based on the first reference signal received from the first device and determines the first channel sample information.
Optionally, 1103: The second device sends the first channel sample information to the first device.
Steps 1101-1103 are the same as steps 1001-1003 above and are not repeated here.
1104: The first device sends the first channel sample information to the third device.
In this embodiment, after the first device receives the first channel sample information from the second device, the first device sends the first channel sample information to the third device.
1105: The third device trains the first neural network on the first channel sample information.
1106: The third device uses the first neural network to infer the second channel sample information.
Optionally, step 1106 may be replaced by: the third device sends the information about the first neural network to the first device, and the first device uses the first neural network to infer the second channel sample information. Optionally, the first device sends the second channel sample information to the third device.
1107: The third device trains the second neural network and/or the third neural network on the second channel sample information.
Steps 1105-1107 are similar to steps 1004-1006 above and are not repeated here.
1108: The third device sends the information about the second neural network and/or the third neural network to the first device.
In this embodiment, after the third device trains the third neural network, the third device may send the information about the third neural network to the second device through the first device. Optionally, the third device may also send the information about the third neural network to the first device through other devices (e.g., other access network devices). Optionally, if a direct communication link exists between the third device and the first device, the third device may also send the information about the third neural network directly to the first device.
After the third device trains the second neural network, the third device may send the information about the second neural network to the first device.
1109: The first device sends the information about the third neural network to the second device.
Step 1109 is the same as step 1007 above and is not repeated here.
Optionally, the embodiment shown in Fig. 11 may be replaced such that the entity training the first neural network differs from the entity training the second neural network and/or the third neural network; for example, the former is the first device and the latter the third device, or the former is the third device and the latter the first device.
In the embodiments of this application, training the neural networks in the above manner effectively reduces air-interface signaling overhead while adapting to the channel environment; the trained second and third neural networks match the actual channel environment more closely, improving communication performance, and the speed of training the second and third neural networks is also greatly increased. The steps involving neural network training can be performed by another device (the third device), effectively reducing the computational load and power consumption of the first device (e.g., an access network device).
类似上述图10或图11,训练得到第一神经网络的相关步骤可替换地可以由第二设备(例如终端设备)实现。
请参阅图12,图12为本申请实施例提出的一种神经网络的训练方法的又一种流程示意图。在该方法中,第一神经网络是由第二设备训练的。本申请实施例提出的一种神经网络的训练方法包括:
可选的,1201、第二设备向第一设备发送能力信息。
本实施例中,与前述图10-11所示的实施例类似,以第一设备为接入网设备,第二设备为终端设备为例进行说明。第一设备可以是接入网设备、接入网设备中的芯片、接入网设备中的模块或电路等,第二设备可以是终端设备、终端设备中的芯片、终端中的模块或电路等。第一设备也可以是终端设备、终端设备中的芯片、终端中的模块或电路等,第二设备也可以是接入网设备或接入网设备中的芯片、接入网设备中的模块或电路等,此处不作限制。
本申请实施例中,为了便于描述,可以将未使用第一信道样本信息训练的第一神经网络、或者更新前的第一神经网络称为参考神经网络。使用第一信道样本信息训练参考神经网络后,得到第一神经网络。
该参考神经网络与前述步骤1004中的参考神经网络类似,第二设备使用第一信道样本信息对该参考神经网络进行训练,从而得到第一神经网络。
在步骤1201中,第二设备向第一设备发送能力信息,该能力信息用于指示以下信息中的一项或多项:
1)是否支持利用神经网络代替或实现通信模块的功能;
该通信模块包括但不限于:OFDM调制模块、OFDM解调模块、星座映射模块、星座解映射模块、信道编码模块、信道译码模块、预编码模块、均衡模块、交织模块和/或解交织模块。
2)是否支持第一神经网络的网络类型,或者所支持的第一神经网络的网络类型;
例如:能力信息指示第二设备是否支持VAE或是否支持GAN。再例如,能力信息指示第二设备支持GAN、支持VAE、支持GAN和VAE、或者GAN和VAE都不支持。
3)是否支持第三神经网络的网络类型,或者指示所支持的第三神经网络的网络类型;
其中,第三神经网络的网络类型包括以下一项或多项:全连接神经网络、径向基神经网络、卷积神经网络、循环神经网络、霍普菲尔德神经网络、受限玻尔兹曼机或深度置信网络等。该第三神经网络可以是上述的任一种神经网络,该第三神经网络也可以是上述多种神经网络的结合,此处不做限制。
4)是否支持通过信令接收参考神经网络的信息,该参考神经网络为未经第一信道样本信息训练的第一神经网络,或者为用于训练第一神经网络的神经网络;
5)是否支持通过信令接收第三神经网络的信息;
6)存储的预配置的参考神经网络;
例如:若第二设备中已经存在预定义或预配置的参考神经网络,则在能力信息中可以携带参考神经网络的标识或者索引。
可选的,在能力信息中还可以携带第二设备中已经存在的预定义或预配置的其它神经网络的标识或者索引。
7)可用于存储第一神经网络和/或第三神经网络的内存空间;
8)可用于运行神经网络的算力信息;
该算力信息指的是运行神经网络的计算能力信息,比如包括处理器的运算速度、和/或处理器能够处理的数据量大小等信息。
9)第二设备的位置信息。
比如,上报第二设备所处的经度、纬度、和/或海拔高度。再比如,上报第二设备所处的信道环境,例如:办公室、地铁站、公园、或街道。
该能力信息包括但不限于上述信息,例如该能力信息还可以包括:第二设备支持的用于训练得到第一神经网络的信道样本信息的类型。示例性的:第二设备(终端设备)支持的用于训练得到第一神经网络的信道样本信息为CSI,或者,第二设备支持的用于训练得到第一神经网络的信道样本信息为第二参考信号(y)。
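上述能力信息的一种可能的承载方式可以用如下草图示意。其中的字段名、取值均为便于说明而作出的假设,并非协议定义的格式:

```python
from dataclasses import dataclass, asdict
from typing import List, Optional, Tuple

@dataclass
class CapabilityInfo:
    """示意性的能力信息结构,字段名均为假设。"""
    nn_replaces_modules: bool        # 是否支持用神经网络代替/实现通信模块功能
    gen_model_types: List[str]       # 支持的第一神经网络类型,如 ["GAN", "VAE"]
    third_nn_types: List[str]        # 支持的第三神经网络类型
    can_rx_ref_nn: bool              # 是否支持通过信令接收参考神经网络的信息
    can_rx_third_nn: bool            # 是否支持通过信令接收第三神经网络的信息
    stored_ref_nn_id: Optional[int]  # 已预配置的参考神经网络索引(无则为None)
    memory_bytes: int                # 可用于存储神经网络的内存空间
    flops: float                     # 可用于运行神经网络的算力
    location: Optional[Tuple[float, float, float]]  # (经度, 纬度, 海拔)

cap = CapabilityInfo(True, ["GAN", "VAE"], ["CNN"], True, True, None,
                     2**20, 1e9, (116.3, 39.9, 50.0))
msg = asdict(cap)  # 序列化为字典,示意经空口上报给第一设备的内容
print(msg["stored_ref_nn_id"] is None)  # True:第一设备据此决定是否下发参考神经网络
```

当 `stored_ref_nn_id` 为空时,即对应前文"能力信息指示第二设备中未配置(未存储)参考神经网络"的情形,第一设备可向第二设备发送参考神经网络的信息。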
步骤1201为可选步骤。当步骤1201执行时,第二设备可以在初次接入第一设备时发送该能力信息,第二设备也可以周期性向第一设备发送该能力信息,此处不作限制。
可选的,第一设备接收来自第二设备的该能力信息后,若该能力信息指示第二设备中未配置(未存储)参考神经网络,则第一设备可以向该第二设备发送该参考神经网络的信息。该参考神经网络的信息包括:神经网络的权值、神经元的激活函数、神经网络中每层神经元的个数、神经网络的层间级联关系,和/或神经网络中每层的网络类型。该参考神经网络的信息可以称为参考神经网络的具体信息。其中,不同神经元的激活函数可以相同,也可以不同,不予限制。
可选的,1202、第一设备向第二设备发送第一参考信号。
同上述步骤1001,此处不作赘述。
可选的,1203、第二设备根据接收到的来自第一设备的第一参考信号进行信道估计,确定第一信道样本信息。
同步骤1002,此处不作赘述。
1204、第二设备根据第一信道样本信息,训练得到第一神经网络。
第二设备根据第一信道样本信息训练得到第一神经网络的方法类似1004中第一设备根据第一信道样本信息训练得到第一神经网络的方法。此处不予赘述。
示例性地,第二设备根据第一信道样本信息和参考神经网络,训练得到第一神经网络。
可选的,第二设备可以根据以下方式一至方式三中的任一项方式,确定参考神经网络:
方式一:参考神经网络的信息是协议约定的,或者第二设备确定参考神经网络的信息。
一种可能的实现中,协议约定(预定义)一个参考神经网络的信息,即约定该参考神经网络的具体信息。
本申请实施例中,参考神经网络的信息包括:神经网络的权值、神经元激活函数、神经网络中每层神经元的个数、神经网络的层间级联关系,和/或神经网络中每层的网络类型。该参考神经网络的信息还可以称为参考神经网络的具体信息。其中,不同神经元的激活函数可以相同,也可以不同,不予限制。
一种可能的实现中,协议约定多个参考神经网络的信息。每个参考神经网络对应于一种信道环境。其中,信道环境包括以下一种或多种:办公室、商场室内、地铁站、城市街道、广场和郊区等。如前文所述,若第二设备向第一设备上报了第二设备的位置信息,则第一设备可以确定出第二设备所处的信道环境,从而得到该信道环境对应的参考神经网络。例如,接入网设备可以维护一份地图信息,该地图信息中针对不同的位置信息标记了对应的信道环境。例如,该地图信息如表1所示:
表1
位置信息/{经度,纬度,海拔高度} 信道环境
{x1,y1,z1} 办公室
{x2,y2,z2} 地铁站
{x3,y3,z3} 公园
{x4,y4,z4} 街道
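表1所示的"位置信息→信道环境"映射,可示意性地实现为按位置的最近邻查询。以下草图中的坐标值均为占位的假设数据:

```python
import math

# 示意性的地图信息:标记位置 -> 信道环境(坐标为占位值)
env_map = {
    (116.30, 39.98, 50.0): "办公室",
    (116.33, 39.90, 20.0): "地铁站",
    (116.39, 39.91, 45.0): "公园",
    (116.40, 39.95, 10.0): "街道",
}

def lookup_env(lon, lat, alt):
    """按欧氏距离返回与上报位置最近的标记位置对应的信道环境。"""
    return min(env_map.items(),
               key=lambda kv: math.dist(kv[0], (lon, lat, alt)))[1]

# 第二设备上报位置信息,第一设备据此确定信道环境及对应的参考神经网络
env = lookup_env(116.31, 39.97, 49.0)
print(env)  # 办公室
```

确定信道环境后,第一设备即可按"每个参考神经网络对应一种信道环境"的约定,选出该环境对应的参考神经网络。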
可选的,如果第二设备通过能力信息向第一设备上报了第二设备的位置信息,即第二设备的信道环境,则第一设备可以确定出该信道环境对应的参考神经网络的信息。
一种可能的实现中,第二设备确定参考神经网络的信息,即确定参考神经网络的具体信息。如前文所述,第二设备可以通过能力信息向第一设备上报该参考神经网络的信息。
方式二:参考神经网络是由第一设备为第二设备指示的。
一种可能的实现中,第一设备为第二设备发送参考神经网络的信息,即第一设备为第二设备发送参考神经网络的具体信息。
一种可能的实现中,协议约定多个候选参考神经网络的信息,每个候选参考神经网络对应唯一的索引。第一设备通过信令为第二设备发送标识,该标识用于指示该多个候选参考神经网络中为第二设备配置的参考神经网络的索引。
一种可能的实现中,第一设备通过第一信令为第二设备配置多个候选参考神经网络的信息,即配置该多个候选参考神经网络的具体信息。其中,每个候选参考神经网络对应唯一的索引。第一设备通过第二信令为第二设备发送标识,该标识用于指示该多个候选参考神经网络中为第二设备配置的参考神经网络的索引。
一种可能的实现中,如前文所述,第二设备可以通过能力信息向第一设备发送第二设备存储或具有的多个候选参考神经网络的信息。其中,每个候选参考神经网络对应唯一的索引。第一设备通过信令为第二设备发送标识,该标识用于指示该多个候选参考神经网络中为第二设备配置的参考神经网络的索引。
一种可能的实现中,第一设备通过信令为第二设备发送参考神经网络的类型。协议约定各种类型的参考神经网络的信息,或者第二设备自主确定各种类型的参考神经网络的信息。
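方式二中"协议约定多个候选参考神经网络、信令只下发索引"的做法可用如下草图示意。候选配置的内容与选择规则均为便于说明而作出的假设:

```python
# 协议约定的候选参考神经网络(配置内容为示意性假设)
CANDIDATE_REF_NNS = {
    0: {"type": "VAE", "layers": [64, 16, 64]},
    1: {"type": "GAN", "layers": [32, 32]},
    2: {"type": "VAE", "layers": [128, 8, 128]},
}

def first_device_pick(env):
    """第一设备按信道环境选择候选索引并通过信令发送(选择规则为示意)。"""
    return 0 if env == "办公室" else 1

def second_device_resolve(index):
    """第二设备收到索引后,解析出协议约定的参考神经网络的具体信息。"""
    return CANDIDATE_REF_NNS[index]

idx = first_device_pick("办公室")    # 空口上只需传输一个索引
ref_nn = second_device_resolve(idx)  # 而无需传输完整的网络参数
print(idx, ref_nn["type"])  # 0 VAE
```

由于空口上只传输索引而非完整的参考神经网络参数,该方式同样有助于减小信令开销。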
1205、第二设备向第一设备发送第一神经网络的信息。
当第二设备训练得到第一神经网络后,第二设备向第一设备发送第一神经网络的信息。
可选的,该第一神经网络的信息可以是第一神经网络的具体信息。该具体信息可以包括:神经网络的权值、神经元的激活函数、神经网络中每层神经元的个数、神经网络的层间级联关系,和/或神经网络中每层的网络类型。其中,不同神经元的激活函数可以相同,也可以不同,不予限制。
可选的,该第一神经网络的信息还可以是第一神经网络相对于参考神经网络的模型变化量。该模型变化量包括但不限于:发生变化的神经网络的权值、发生变化的激活函数、发生变化的神经网络中一层或多层神经元的个数、发生变化的神经网络的层间级联关系,和/或发生变化的神经网络中一层或多层的网络类型。当第一神经网络的信息为模型变化量时,可以有效减少空口信令的开销。
1206、第一设备根据第一神经网络的信息,确定第一神经网络。
第一设备根据来自第二设备的第一神经网络的信息,确定第一神经网络。
可选的,当第一神经网络的信息为第一神经网络的具体信息时,第一设备根据该具体信息确定第一神经网络。
可选的,当第一神经网络的信息是模型变化量时,第一设备根据该模型变化量和参考神经网络,确定第一神经网络。
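步骤1205中"以模型变化量上报第一神经网络的信息"及步骤1206中"根据模型变化量和参考神经网络恢复第一神经网络",可用如下纯Python草图示意。其中权值以标量占位,仅说明"只传差异、按差异恢复"的思路:

```python
# 参考神经网络与训练后的第一神经网络的权值(以标量占位示意)
reference = {"w1": 0.10, "w2": -0.30, "w3": 0.50}
trained   = {"w1": 0.10, "w2": -0.25, "w3": 0.47}

# 第二设备:只计算并上报发生变化的权值(模型变化量),减小空口信令开销
delta = {k: v for k, v in trained.items() if v != reference[k]}

# 第一设备:根据参考神经网络与模型变化量,恢复出第一神经网络
recovered = {**reference, **delta}

print(len(delta), recovered == trained)  # 2 True:只传2个权值即可无损恢复
```

可见只需传输发生变化的权值(本例中2个)即可在第一设备侧无损恢复第一神经网络,这正是模型变化量方式减小空口开销的原理。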
1207、第一设备根据第一神经网络推理得到第二信道样本信息。
1208、第一设备根据第二信道样本信息训练第二神经网络和/或第三神经网络。
1209、第一设备向第二设备发送第三神经网络的信息。
步骤1207-1209同前述步骤1005-1007,此处不作赘述。
本实施例中,第二设备根据第一信道样本信息和参考神经网络,训练得到第一神经网络。
本实施例中,第二设备(例如终端设备)根据该第一信道样本信息,训练得到第一神经网络。该第一神经网络用于推理得到第二信道样本信息。第一设备根据该第二信道样本信息,训练第二神经网络和/或第三神经网络,第二神经网络和/或第三神经网络用于第一设备与第二设备传输目标信息。有效减小空口信令开销,同时能适配所处信道环境,训练得到的第二神经网络与第三神经网络更贴近实际信道环境,提升通信性能。训练第二神经网络与第三神经网络的速度也得到大幅度提升。将训练第一神经网络的计算任务,分配至第二设备,有效减轻第一设备(如接入网设备)的计算负担,提升通信性能。
上述本申请提供的实施例中,分别从第一设备、第二设备、第三设备、以及它们之间交互的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,第一设备、第二设备、和第三设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
请参阅图17,图17为本申请实施例提供的一种通信装置示意图,该通信装置1700包括收发模块1710和处理模块1720,其中:
可选的,该通信装置1700用于实现上述方法实施例中第一设备的功能。
在一种可能的实现中,收发模块1710,用于向第二设备发送第一参考信号;
收发模块1710,还用于从第二设备接收第一信道样本信息;
处理模块1720,用于确定第一神经网络,第一神经网络是根据第一信道样本信息训练得到的,第一神经网络用于推理得到第二信道样本信息,第二信道样本信息用于训练第二神经网络和/或第三神经网络,第二神经网络和/或第三神经网络用于第一设备与第二设备传输目标信息。
在一种可能的实现中,收发模块1710,用于向第二设备发送第一参考信号;
收发模块1710,还用于从所述第二设备接收第一神经网络的信息;
处理模块1720,用于确定第二神经网络和/或第三神经网络,所述第二神经网络和/或所述第三神经网络用于所述第一设备与所述第二设备传输目标信息,所述第二神经网络和/或所述第三神经网络是根据所述第二信道样本信息训练得到的,所述第二信道样本信息是通过所述第一神经网络推理得到的。
可选的,该通信装置1700用于实现上述方法实施例中第二设备的功能。
在一种可能的实现中,处理模块1720,用于根据从第一设备接收到的第一参考信号进行信道估计,确定第一信道样本信息;
收发模块1710,用于向第一设备发送第一信道样本信息;
收发模块1710,还用于从第一设备接收第三神经网络的信息,第三神经网络用于第二设备与第一设备传输目标信息。
在一种可能的实现中,处理模块1720,用于根据从第一设备接收到的第一参考信号进行信道估计,确定第一信道样本信息;
处理模块1720,还用于确定第一神经网络,所述第一神经网络是根据所述第一信道样本信息训练得到的;
收发模块1710,用于向第一设备发送所述第一神经网络的信息。
可选的,该通信装置1700用于实现上述方法实施例中第三设备的功能。
在一种可能的实现中,收发模块1710,用于从第一设备接收第一信道样本信息;
处理模块1720,用于确定第一神经网络,第一神经网络是根据第一信道样本信息训练得到的,第一神经网络用于推理得到第二信道样本信息,第二信道样本信息用于训练第二神经网络和/或第三神经网络,第二神经网络和/或第三神经网络用于第一设备与第二设备传输目标信息。
可选地,上述通信装置还可以包括存储单元,该存储单元用于存储数据和/或指令(也可以称为代码或者程序)。上述各个模块可以和存储单元交互或者耦合,以实现对应的方法或者功能。例如,处理模块1720可以读取存储单元中的数据或者指令,使得通信装置实现上述实施例中的方法。本申请实施例中的耦合是装置、单元或模块之间的间接耦合或通信连接,可以是电性,机械或其它的形式,用于装置、单元或模块之间的信息交互。
本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,另外,在本申请各个实施例中的各功能模块可以集成在一个处理器中,也可以是单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
在一个例子中,以上任一通信装置中的单元可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个专用集成电路(application specific integrated circuit,ASIC),或,一个或多个数字信号处理器(digital signal processor,DSP),或,一个或多个现场可编程门阵列(field programmable gate array,FPGA),或这些集成电路形式中至少两种的组合。再如,当通信装置中的单元可以通过处理元件调度程序的形式实现时,该处理元件可以是通用处理器,例如中央处理器(central processing unit,CPU)或其它可以调用程序的处理器。再如,这些单元可以集成在一起,以片上系统(system-on-a-chip,SOC)的形式实现。
本申请实施例提供的技术方案可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、AI节点、接入网设备、终端设备或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机可以存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,数字视频光盘(digital video disc,DVD))、或者半导体介质等。
在本申请实施例中,在无逻辑矛盾的前提下,各实施例之间可以相互引用,例如方法实施例之间的方法和/或术语可以相互引用,例如装置实施例之间的功能和/或术语可以相互引用,例如装置实施例和方法实施例之间的功能和/或术语可以相互引用。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (42)

  1. 一种神经网络的训练方法,其特征在于,包括:
    第一设备向第二设备发送第一参考信号;
    所述第一设备从所述第二设备接收第一信道样本信息;
    所述第一设备确定第一神经网络,所述第一神经网络是根据所述第一信道样本信息训练得到的,所述第一神经网络用于推理得到第二信道样本信息,所述第二信道样本信息用于训练第二神经网络和/或第三神经网络,所述第二神经网络和/或所述第三神经网络用于所述第一设备与所述第二设备传输目标信息。
  2. 根据权利要求1所述的方法,其特征在于,所述第一信道样本信息还用于训练所述第二神经网络和/或所述第三神经网络。
  3. 根据权利要求1-2中任一项所述的方法,其特征在于,所述方法还包括:
    所述第一设备向所述第二设备发送所述第三神经网络的信息。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述第一信道样本信息包括信道状态信息CSI和/或第二参考信号,所述第二参考信号为经过信道传播的所述第一参考信号。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述第一神经网络为生成对抗网络或变分自编码器。
  6. 根据权利要求1-5中任一项所述的方法,其特征在于,所述第一参考信号包括解调参考信号DMRS或者信道状态信息参考信号CSI-RS。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述第一参考信号的序列类型包括ZC序列或黄金序列。
  8. 一种神经网络的训练方法,其特征在于,包括:
    第二设备根据从第一设备接收到的第一参考信号进行信道估计,确定第一信道样本信息;
    所述第二设备向所述第一设备发送所述第一信道样本信息;
    所述第二设备从所述第一设备接收第三神经网络的信息,所述第三神经网络用于所述第二设备与所述第一设备传输目标信息。
  9. 根据权利要求8所述的方法,其特征在于,所述第一信道样本信息包括信道状态信息CSI和/或第二参考信号,所述第二参考信号为所述第二设备接收到的第一参考信号。
  10. 根据权利要求8-9中任一项所述的方法,其特征在于,所述第一神经网络为生成对抗网络或变分自编码器。
  11. 根据权利要求8-10中任一项所述的方法,其特征在于,所述第一参考信号包括解调参考信号DMRS或者信道状态信息参考信号CSI-RS。
  12. 根据权利要求8-11中任一项所述的方法,其特征在于,所述第一参考信号的序列类型包括ZC序列或黄金序列。
  13. 一种神经网络的训练方法,其特征在于,包括:
    第一设备向第二设备发送第一参考信号;
    所述第一设备从所述第二设备接收第一神经网络的信息;
    所述第一设备确定第二神经网络和/或第三神经网络,所述第二神经网络和/或所述第三神经网络用于所述第一设备与所述第二设备传输目标信息,所述第二神经网络和/或所述第三神经网络是根据第二信道样本信息训练得到的,所述第二信道样本信息是通过所述第一神经网络推理得到的。
  14. 根据权利要求13所述的方法,其特征在于,所述第一神经网络的信息包括:所述第一神经网络相对参考神经网络的模型变化量,所述参考神经网络用于训练所述第一神经网络。
  15. 根据权利要求13所述的方法,其特征在于,所述第一神经网络的信息包括以下一项或多项:神经网络的权值、神经元的激活函数、神经网络中每层神经元的个数、神经网络的层间级联关系,和神经网络中每层的网络类型。
  16. 根据权利要求13-15中任一项所述的方法,其特征在于,所述方法还包括:
    所述第一设备向所述第二设备发送所述第三神经网络的信息。
  17. 根据权利要求13-16中任一项所述的方法,其特征在于,所述方法还包括:
    所述第一设备从所述第二设备接收能力信息,所述能力信息用于指示所述第二设备的以下信息中的一项或多项:
    是否支持利用神经网络代替或实现通信模块的功能;
    是否支持所述第一神经网络的网络类型;
    是否支持所述第三神经网络的网络类型;
    是否支持通过信令接收参考神经网络的信息,所述参考神经网络用于训练所述第一神经网络;
    是否支持通过信令接收所述第三神经网络的信息;
    存储的所述参考神经网络;
    可用于存储所述第一神经网络和/或所述第三神经网络的内存空间;
    可用于运行神经网络的算力信息;和,
    所述第二设备的位置信息。
  18. 根据权利要求13-17中任一项所述的方法,其特征在于,所述第一神经网络为生成对抗网络或变分自编码器。
  19. 根据权利要求13-18中任一项所述的方法,其特征在于,所述第一参考信号包括解调参考信号DMRS或者信道状态信息参考信号CSI-RS。
  20. 根据权利要求13-19中任一项所述的方法,其特征在于,所述第一参考信号的序列类型包括ZC序列或黄金序列。
  21. 一种神经网络的训练方法,其特征在于,包括:
    第二设备根据从第一设备接收的第一参考信号进行信道估计,确定第一信道样本信息;
    所述第二设备确定第一神经网络,所述第一神经网络是根据所述第一信道样本信息训练得到的;
    所述第二设备向所述第一设备发送所述第一神经网络的信息。
  22. 根据权利要求21所述的方法,其特征在于,所述第一神经网络的信息包括:所述第一神经网络相对参考神经网络的模型变化量,所述参考神经网络用于训练所述第一神经网络。
  23. 根据权利要求21所述的方法,其特征在于,所述第一神经网络的信息包括以下一项或多项:神经网络的权值、神经元的激活函数、神经网络中每层神经元的个数、神经网络的层间级联关系,和/或神经网络中每层的网络类型。
  24. 根据权利要求21-23中任一项所述的方法,其特征在于,所述方法还包括:
    所述第二设备从所述第一设备接收第三神经网络的信息。
  25. 根据权利要求21-24中任一项所述的方法,其特征在于,所述方法还包括:
    所述第二设备向所述第一设备发送能力信息,所述能力信息用于指示所述第二设备的以下信息中的一项或多项:
    是否支持利用神经网络代替或实现通信模块的功能;
    是否支持所述第一神经网络的网络类型;
    是否支持所述第三神经网络的网络类型;
    是否支持通过信令接收参考神经网络的信息,所述参考神经网络用于训练所述第一神经网络;
    是否支持通过信令接收所述第三神经网络的信息;
    存储的所述参考神经网络;
    可用于存储所述第一神经网络和/或所述第三神经网络的内存空间;
    可用于运行神经网络的算力信息;和,
    所述第二设备的位置信息。
  26. 根据权利要求21-25中任一项所述的方法,其特征在于,所述第一神经网络为生成对抗网络或变分自编码器。
  27. 根据权利要求21-26中任一项所述的方法,其特征在于,所述第一参考信号包括解调参考信号DMRS或者信道状态信息参考信号CSI-RS。
  28. 根据权利要求21-26中任一项所述的方法,其特征在于,所述第一参考信号的序列类型包括ZC序列或黄金序列。
  29. 一种神经网络的训练方法,其特征在于,包括:
    第三设备从第一设备接收第一信道样本信息;
    所述第三设备根据所述第一信道样本信息训练得到第一神经网络,所述第一神经网络用于推理得到第二信道样本信息,所述第二信道样本信息用于训练第二神经网络和/或第三神经网络,所述第二神经网络和/或所述第三神经网络用于所述第一设备与第二设备传输目标信息。
  30. 一种通信装置,其特征在于,用于实现权利要求1-7任一项所述的方法。
  31. 一种通信装置,其特征在于,包括处理器和存储器,所述存储器和所述处理器耦合,所述处理器用于执行权利要求1-7任一项所述的方法。
  32. 一种通信装置,其特征在于,用于实现权利要求8-12任一项所述的方法。
  33. 一种通信装置,其特征在于,包括处理器和存储器,所述存储器和所述处理器耦合,所述处理器用于执行权利要求8-12任一项所述的方法。
  34. 一种通信装置,其特征在于,用于实现权利要求29所述的方法。
  35. 一种通信装置,其特征在于,包括处理器和存储器,所述存储器和所述处理器耦合,所述处理器用于执行权利要求29所述的方法。
  36. 一种通信装置,其特征在于,用于实现权利要求13-20任一项所述的方法。
  37. 一种通信装置,其特征在于,包括处理器和存储器,所述存储器和所述处理器耦合,所述处理器用于执行权利要求13-20任一项所述的方法。
  38. 一种通信装置,其特征在于,用于实现权利要求21-28任一项所述的方法。
  39. 一种通信装置,其特征在于,包括处理器和存储器,所述存储器和所述处理器耦合,所述处理器用于执行权利要求21-28任一项所述的方法。
  40. 一种通信系统,其特征在于,包括:
    权利要求30或31所述的通信装置、和权利要求32或33所述的通信装置;
    权利要求30或31所述的通信装置、权利要求32或33所述的通信装置、和权利要求34或35所述的通信装置;或,
    权利要求36或37所述的通信装置、和权利要求38或39所述的通信装置。
  41. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有指令,当所述指令在计算机上运行时,使得计算机执行权利要求1-29任一项所述的方法。
  42. 一种计算机程序产品,其特征在于,包括指令,当所述指令在计算机上运行时,使得计算机执行权利要求1-29任一项所述的方法。
PCT/CN2020/142103 2020-12-31 2020-12-31 一种神经网络的训练方法以及相关装置 WO2022141397A1 (zh)
