WO2022073496A1 - Neural network information transmission method and apparatus, communication device, and storage medium - Google Patents

Neural network information transmission method and apparatus, communication device, and storage medium

Info

Publication number
WO2022073496A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
neural network
communication device
target unit
following
Prior art date
Application number
PCT/CN2021/122765
Other languages
English (en)
French (fr)
Inventor
杨昂
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co., Ltd. (维沃移动通信有限公司)
Priority: EP21877013.9A (EP4228217A4)
Publication of WO2022073496A1
Priority: US18/129,247 (US20230244911A1)

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
            • H04L41/14: Network analysis or design
              • H04L41/142: Network analysis or design using statistical or mathematical methods
            • H04L41/16: Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00: Computing arrangements based on biological models
            • G06N3/02: Neural networks
              • G06N3/04: Architecture, e.g. interconnection topology
                • G06N3/045: Combinations of networks
                  • G06N3/0455: Auto-encoder networks; Encoder-decoder networks
                • G06N3/0464: Convolutional networks [CNN, ConvNet]
              • G06N3/08: Learning methods
                • G06N3/092: Reinforcement learning

Definitions

  • the present application belongs to the field of communication technologies, and in particular relates to a neural network information transmission method and apparatus, a communication device, and a storage medium.
  • a large amount of information often needs to be transmitted between communication devices, for example between a terminal and a network device, between two terminals, or between two network devices.
  • although neural networks have been introduced into communication devices in some communication systems, these neural networks are all trained during the device development stage using experimental data, which results in low communication performance of the terminal.
  • the embodiments of the present application provide a neural network information transmission method and apparatus, a communication device, and a storage medium, which can solve the problem of low communication performance of communication devices.
  • In a first aspect, an embodiment of the present application provides a neural network information transmission method, applied to a second communication device, including: receiving first information sent by a first communication device, where the first information is output information of a first neural network of the first communication device; and sending second information to the first communication device, where the second information is information obtained by training using the first information or third information as an input of a second neural network of the second communication device, and the third information is information obtained based on the first information.
  • In a second aspect, an embodiment of the present application provides a neural network information transmission method, applied to a first communication device, including: sending first information to a second communication device, where the first information is output information of a first neural network of the first communication device; and receiving second information sent by the second communication device, where the second information is information obtained by training using the first information or third information as an input of a second neural network of the second communication device, and the third information is information obtained based on the first information.
  • In a third aspect, an embodiment of the present application provides a neural network information transmission apparatus, applied to a second communication device, including:
  • a receiving module configured to receive first information sent by a first communication device, wherein the first information is output information of a first neural network of the first communication device;
  • a sending module, configured to send second information to the first communication device, where the second information is information obtained by training using the first information or third information as the input of a second neural network of the second communication device, and the third information is information obtained based on the first information.
  • In a fourth aspect, an embodiment of the present application provides a neural network information transmission apparatus, applied to a first communication device, including:
  • a sending module configured to send first information to a second communication device, wherein the first information is output information of a first neural network of the first communication device;
  • a receiving module, configured to receive second information sent by the second communication device, where the second information is information obtained by training using the first information or third information as the input of a second neural network of the second communication device, and the third information is information obtained based on the first information.
  • In a fifth aspect, an embodiment of the present application provides a communication device, where the communication device is a second communication device, including: a memory, a processor, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the neural network information transmission method on the second communication device side provided by the embodiments of the present application.
  • In a sixth aspect, an embodiment of the present application provides a communication device, where the communication device is a first communication device, including: a memory, a processor, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the neural network information transmission method on the first communication device side provided by the embodiments of the present application.
  • In a seventh aspect, an embodiment of the present application provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and the program or instruction, when executed by a processor, implements the steps of the neural network information transmission method on the second communication device side or on the first communication device side provided by the embodiments of the present application.
  • In an eighth aspect, a chip is provided, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction of a network-side device to implement the method described in the first aspect or the method described in the second aspect.
  • In a ninth aspect, a computer program product is provided, where the computer program product is stored in a non-volatile storage medium, and the computer program product is executed by at least one processor to implement the method described in the first aspect or the method described in the second aspect.
  • In the embodiments of the present application, the first information sent by the first communication device is received, where the first information is output information of the first neural network of the first communication device; and the second information is sent to the first communication device, where the second information is information obtained by training using the first information or the third information as the input of the second neural network of the second communication device, and the third information is information obtained based on the first information.
  • FIG. 1 shows a block diagram of a wireless communication system to which an embodiment of the present application can be applied
  • FIG. 2 is a flowchart of a neural network information transmission method provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of another neural network information transmission method provided by an embodiment of the present application.
  • FIG. 4 is a structural diagram of a neural network information transmission device provided by an embodiment of the present application.
  • FIG. 5 is a structural diagram of another neural network information transmission device provided by an embodiment of the present application.
  • FIG. 6 is a structural diagram of a communication device provided by an embodiment of the present application.
  • FIG. 7 is a structural diagram of another communication device provided by an embodiment of the present application.
  • the terms "first", "second" and the like in the description and claims of the present application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first" and "second" are usually of one class, and the number of objects is not limited; for example, the first object may be one or multiple.
  • "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
  • the techniques described in the embodiments of the present application may be used in various wireless communication systems, such as Long Term Evolution (LTE), LTE-Advanced (LTE-A), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency Division Multiple Access (OFDMA), Single-carrier Frequency-Division Multiple Access (SC-FDMA), and other systems. The terms "system" and "network" in the embodiments of the present application are often used interchangeably, and the described techniques can be used not only for the above-mentioned systems and radio technologies, but also for other systems and radio technologies, for example a New Radio (NR) system or a subsequent evolved communication system such as a 6th Generation (6G) system.
  • FIG. 1 shows a block diagram of a wireless communication system to which the embodiments of the present application can be applied.
  • the wireless communication system includes a terminal 11 and a network device 12.
  • the terminal 11 may also be called a terminal device or user equipment (UE). The terminal 11 may be a mobile phone, a tablet personal computer, a laptop computer (also called a notebook computer), a personal digital assistant (PDA), a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (MID), a vehicle-mounted terminal (VUE), a pedestrian terminal (PUE), a RedCap UE, or another terminal-side device, where a RedCap UE can include wearable devices, industrial sensors, video monitoring devices, etc., and wearable devices include bracelets, headphones, glasses, etc. It should be noted that the embodiments of the present application do not limit the specific type of the terminal 11.
  • the network device 12 may be a base station or a core network element, where the base station may be referred to as a Node B, an evolved Node B (eNB), an access point, a base transceiver station (BTS), a radio base station, a radio transceiver, a basic service set (BSS), an extended service set (ESS), a home Node B, a home evolved Node B, a WLAN access point, a WiFi node, a transmission and reception point (TRP), or some other suitable term in the field. As long as the same technical effect is achieved, the base station is not limited to a specific technical vocabulary. It should be noted that in the embodiments of this application, only the base station in the NR system is taken as an example, but the specific type of the base station is not limited.
  • the embodiments of this application can be applied to scenarios that support broadcast/multicast features, such as public safety and mission critical, V2X applications, transparent IPv4/IPv6 multicast delivery, IPTV, software delivery over wireless, group communications, and IoT applications. The embodiments of the present application are not limited to these scenarios; for example, unicast scenarios may also be used.
  • FIG. 2 is a flowchart of a neural network information transmission method provided by an embodiment of the present application. The method is applied to a second communication device. As shown in FIG. 2, the method includes the following steps:
  • Step 201: Receive first information sent by a first communication device, where the first information is output information of a first neural network of the first communication device.
  • the first communication device may be a terminal and the second communication device a network device; or the first communication device may be a terminal and the second communication device another terminal; or the first communication device may be a network device and the second communication device another network device.
  • the above-mentioned first communication device may also be one or more devices; that is, the second communication device may receive first information sent by one or more devices and send second information to those one or more devices. For example, it may receive the first information sent by a terminal group and, after obtaining the second information, send the second information to the terminal group.
  • the above-mentioned first neural network can be a convolutional neural network (CNN) or a recurrent neural network (RNN), which is not limited in the embodiments of the present application; for example, it can also be another deep neural network, such as a generative adversarial network (GAN) or a long short-term memory (LSTM) network.
  • the information output by the first neural network may be information sent by the first communication device to the second communication device, for example: information sent by a terminal to a network device, information sent by a network device to a terminal, information transmitted between terminals, or information transmitted between network devices.
  • receiving the first information may include at least one of demodulation, channel decoding, source decoding, decompression, verification, etc., to obtain the first information. This is because, on the first communication device side, the first communication device may perform at least one of source coding, compression, channel coding, modulation, etc. on the first information, so as to convert it into a signal and send it to the second communication device.
  • Step 202: Send second information to the first communication device, where the second information is information obtained by training using the first information or third information as the input of a second neural network of the second communication device, and the third information is information obtained based on the first information.
  • the second communication device may input the first information into the above-mentioned second neural network for training to obtain the above-mentioned second information; or, after receiving the first information, it may perform relevant operations on it to obtain third information, and then input the third information into the second neural network for training to obtain the second information.
  • since the above-mentioned second information is obtained through training and is sent to the first communication device, it may also be referred to as training interaction information.
  • the first communication device may update the first neural network based on the second information, that is, the second information may be used by the first communication device to update the first neural network.
  • the neural network may also be understood as an artificial intelligence (Artificial Intelligence, AI) network.
  • AI Artificial Intelligence
  • in the embodiments of the present application, the first information or the third information can be used as the input of the second neural network for training, and the second information obtained through training can be sent to the first communication device, so as to realize real-time training of the second neural network. This training optimizes the second neural network in real time, improving its service performance and thereby the communication performance of the communication device.
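  • The end-to-end interaction can be summarized in a short sketch. The following is a minimal illustration only, assuming PyTorch and toy network shapes; the names (first_net, second_net), the noise operation standing in for the derivation of the "third information", and the use of the original input as the training label are all illustrative assumptions, not part of the present application.

```python
import torch
import torch.nn as nn

first_net = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))   # first neural network (device 1)
second_net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 32))  # second neural network (device 2)
opt = torch.optim.Adam(second_net.parameters(), lr=1e-3)

source = torch.randn(4, 32)                  # e.g. channel state information at device 1
first_info = first_net(source)               # "first information": output of the first NN

# Device 2 side: derive "third information" from the received first information
# (here: an assumed additive-noise operation), then train the second NN on it.
third_info = first_info.detach() + 0.01 * torch.randn_like(first_info)
third_info.requires_grad_(True)
loss = nn.functional.mse_loss(second_net(third_info), source)
opt.zero_grad()
loss.backward()
opt.step()                                   # real-time update of the second NN

second_info = third_info.grad                # loss gradient w.r.t. the second NN's input;
                                             # returned so device 1 can update its first NN
```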
  • the first information includes at least one of the following: a signal, information, and signaling.
  • the above-mentioned signals include but are not limited to at least one of the following: a first signal carried on a reference signal resource, and a second signal carried on a channel.
  • the above-mentioned first signal may include at least one of the following:
  • Demodulation reference signal (DMRS);
  • Sounding reference signal (SRS);
  • Synchronization signal block (SSB);
  • Tracking reference signal (TRS);
  • Phase-tracking reference signal (PTRS);
  • Channel state information reference signal (CSI-RS).
  • the first signal may be used for signal processing, such as signal detection, filtering, equalization, and the like.
  • the above-mentioned second signal may include at least one of the following:
  • Physical downlink control channel (PDCCH);
  • Physical downlink shared channel (PDSCH);
  • Physical uplink control channel (PUCCH);
  • Physical uplink shared channel (PUSCH);
  • Physical random access channel (PRACH);
  • Physical broadcast channel (PBCH).
  • the above information includes but is not limited to at least one of the following:
  • Channel state information, beam information, channel prediction information, interference information, positioning information, trajectory information, service prediction information, service management information, parameter prediction information, and parameter management information.
  • the above channel state information may include channel state information feedback information, for example: channel related information, channel matrix related information, channel characteristic information, channel matrix characteristic information, a precoding matrix indicator (PMI), a rank indicator (RI), a CSI-RS resource indicator (CRI), a channel quality indicator (CQI), a layer indicator (LI), etc.
  • the above channel state information may also include frequency division duplex (FDD) uplink and downlink partial reciprocity channel state information.
  • for FDD partial reciprocity, the network device obtains angle and delay information from the uplink channel, and can convey the angle information and the delay information, or other information with partial reciprocity, through CSI-RS precoding or direct indication.
  • the terminal can report according to the indication of the base station, or select and report within the range indicated by the base station, thereby reducing the computational burden of the terminal and the overhead of CSI reporting.
  • the above beam information may include beam quality, beam indication information (for example, a reference signal ID), beam failure indication information, and new beam indication information in beam failure recovery.
  • the above beam information can be used for beam management, including beam measurement, beam reporting, beam prediction, beam failure detection, beam failure recovery, and new beam indication in beam failure recovery.
  • the above-mentioned channel prediction information may include at least one of channel state information prediction, beam prediction, and the like.
  • the above interference information may include interference information such as intra-cell interference, inter-cell interference, out-of-band interference, and intermodulation interference.
  • the above-mentioned positioning information may be information about the specific position of the terminal estimated through a reference signal (e.g., SRS), for example including at least one of a horizontal position and a vertical position, or information used to assist position estimation.
  • the above-mentioned trajectory information may be a possible future trajectory of the terminal estimated through a reference signal (e.g., SRS), or information used to assist trajectory estimation.
  • the above-mentioned service prediction information may be prediction information of high-level services, such as: predicted throughput, required data packet size, service requirements, etc.
  • the above-mentioned service management information may be management information of high-level services, such as moving speed, noise information, and the like.
  • the above-mentioned parameter prediction information may be predicted moving speed, noise information, and the like.
  • the above-mentioned parameter management information may be moving speed, noise information, and the like.
  • the above-mentioned signaling includes but is not limited to control signaling, for example: power control related signaling, beam management related signaling, and the like.
  • the third information includes information obtained by performing operations on at least one of the above signal, information, and signaling.
  • the above operations may be operations such as signal detection, signal processing, equalization, modulation, and demodulation, which are not specifically limited here. Further, the above operations may be corresponding operations performed according to the influence of noise, interference, fading, time delay, etc. of the wireless channel on the first information.
  • part or all of the first information can be input to the second neural network after such operations are performed.
  • some data and information can also be directly used as the input of the second neural network.
  • the above-mentioned second information includes information of at least one target unit of the second neural network; when the second information includes information of multiple target units, it is a combination of the information of the multiple target units.
  • the target unit may include at least one of the following:
  • Neurons, multiplicative coefficients of neurons, additive coefficients of neurons, deviations of neurons, weighting coefficients of neurons, and parameters of activation functions.
  • the above-mentioned multiplicative coefficients may also be referred to as weights, and the above-mentioned additive coefficients may also be referred to as biases.
  • the parameters of the above activation function are, for example, the leaky parameter of a leaky rectified linear unit (Leaky ReLU) or of a parametric rectified linear unit (PReLU).
  • the above-mentioned neurons may include at least one of the following: a convolution kernel, a pooling unit, and a recurrent unit.
  • for example, the above-mentioned neurons may include convolution kernels, and in this case the above-mentioned target unit may include: convolution kernels, weight coefficients of convolution kernels (also called multiplicative coefficients of convolution kernels), and bias quantities of convolution kernels (also called additive coefficients of convolution kernels).
  • the above-mentioned neurons may include pooling units, and in this case the above-mentioned target units may include: convolution kernels, pooling methods, and parameters thereof.
  • the above-mentioned neurons may include recurrent units, and in this case the target unit may include the recurrent unit and the weighting coefficients of the recurrent unit, where the weighting coefficients may include the multiplicative weighting coefficients of the recurrent unit (e.g., including the weight of the previous state's influence on the current state and the weight of the previous state's effect on the current input) and the additive weighting coefficient (i.e., bias) of the recurrent unit. The recurrent unit in an RNN is a special neuron whose input includes not only the current input but also the previous input and/or the previous intermediate information.
  • the information of the target unit includes: the information of the loss function with respect to the target unit; or
  • the information of the target unit includes: the identifier of the target unit and the information of the loss function with respect to the target unit.
  • the target unit here may be any target unit corresponding to the second information; that is, the information of each target unit may include the information of the loss function with respect to that target unit, or may include the identifier of the target unit together with that information.
  • the information of the loss function with respect to the target unit may include at least one of the following: gradient information, partial derivative information, and derivative information.
  • the gradient information may include a combination of the gradient and at least one of the following: the historical information of the target unit, the learning rate, the learning step size, the exponential decay rate, and a constant.
  • the above-mentioned combinations may include combinations of common mathematical operations such as addition, subtraction, multiplication, division, N-th power, N-th root, logarithm, derivation, and partial derivation, where N is any number; for example, N can be positive, negative, or 0, real or complex.
  • the above-mentioned gradient includes at least one of the following: the current gradient for obtaining the second information through training, and the gradient before the second information is obtained through training;
  • the gradient before the second information is obtained through training may be the gradient obtained in one or more training iterations before the second information is obtained.
  • the above-mentioned partial derivative information may include a combination of the partial derivative and at least one of the following: the historical information of the target unit, the learning rate, the learning step size, the exponential decay rate, and a constant.
  • the above-mentioned partial derivative may include at least one of the following: the current partial derivative of the second information obtained by training, and the partial derivative before the second information is obtained by training.
  • the above derivative information includes a combination of the derivative and at least one of the following: the historical information of the target unit, the learning rate, the learning step size, the exponential decay rate, and a constant.
  • the above derivative may include at least one of the following: the current derivative of the second information obtained by training, and the derivative before the second information is obtained by training.
  • the history information of the target unit is the information of the target unit included in fourth information sent to the first communication device before the second information is sent, where the fourth information is information obtained by training the second neural network before obtaining the second information.
  • the above-mentioned fourth information may be information obtained in the previous K rounds of training based on the above-mentioned first information or third information, where K is a positive integer. For example, the above-mentioned first information may be the n-th piece of information sent by the first communication device.
  • the above-mentioned exponential decay rate may include a first estimated exponential decay rate, a second estimated exponential decay rate, ..., an N-th estimated exponential decay rate in the training process, where N is a positive integer.
  • the above constants may be one or more predefined constants.
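  • As one concrete, hypothetical instance of such a combination, an Adam-style update mixes the current gradient with the target unit's historical information (first and second moments from earlier rounds), a learning rate, two exponential decay rates, and a small constant. The sketch below assumes this optimizer purely for illustration; the present application does not mandate any particular formula.

```python
import math

def gradient_info(grad, m_prev, v_prev, t, lr=1e-3,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Combine the current gradient with historical information (m_prev,
    v_prev), the learning rate (lr), exponential decay rates (beta1,
    beta2) and a constant (eps) into a single per-unit update value."""
    m = beta1 * m_prev + (1 - beta1) * grad        # decayed first-moment history
    v = beta2 * v_prev + (1 - beta2) * grad ** 2   # decayed second-moment history
    m_hat = m / (1 - beta1 ** t)                   # bias correction at step t
    v_hat = v / (1 - beta2 ** t)
    update = lr * m_hat / (math.sqrt(v_hat) + eps) # candidate content of the second information
    return update, m, v                            # new history for the next round
```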
  • the loss function includes: a combined function of at least one of the error between the output of the second neural network and the label, the mean square error, the normalized mean square error, correlation, entropy, and mutual information; or
  • the loss function includes: a combined function of a constant and at least one of the error between the output of the second neural network and the label, the mean square error, the normalized mean square error, correlation, entropy, and mutual information; or
  • the loss function includes: a loss function obtained by weighted combination of loss information of multiple parts of the output of the second neural network, where the loss information includes at least one of the following: a loss value and a loss-related function.
  • the above-mentioned combinations may include combinations of common mathematical operations such as addition, subtraction, multiplication, division, N-th power, N-th root, logarithm, derivation, and partial derivation, where N is any number; for example, N can be positive, negative, or 0, real or complex.
  • the function associated with the loss of each part may include a combined function of at least one of the error between the output and the label, the mean square error, the normalized mean square error, correlation, entropy, and mutual information; or it may include a combined function of a constant and at least one of those quantities.
  • the weighting of the weighted combination of loss information may include linear averaging, multiplicative averaging, and other common averaging methods.
  • when the loss function included in the second information is a loss function obtained by a weighted combination of multiple parts of loss information, the first neural network updated by the first communication device can be more accurate.
  • the multiple parts of the output include divisions of the output in, for example, the spatial, code, frequency, and time domains. In the spatial domain, the output of the second neural network can be divided by antenna, antenna element, antenna panel, transmitter and receiver unit (TXRU), beam (analog beam or digital beam), layer, rank, antenna angle (such as tilt angle), and the like.
  • the output of the second neural network can be divided into different orthogonal or non-orthogonal code domains.
  • for example, the code domains of Code Division Multiple Access (CDMA).
  • in the frequency domain, the output of the second neural network may be divided according to resource blocks (RB), subbands, physical resource groups (PRG), or the like.
  • the output of the second neural network may also be divided by subcarrier, symbol, time slot, or half time slot.
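  • A weighted multi-part loss of this kind could look as follows; the split into equal parts along the last dimension (e.g. subbands), the MSE per part, and the linear-average weighting are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def multipart_loss(output, label, num_parts=4, weights=None):
    """Divide the output (e.g. by subband along the frequency axis) and
    combine the per-part losses with a weighted (here linear) average."""
    parts_out = torch.chunk(output, num_parts, dim=-1)
    parts_lbl = torch.chunk(label, num_parts, dim=-1)
    if weights is None:
        weights = [1.0 / num_parts] * num_parts    # linear averaging
    return sum(w * F.mse_loss(o, l)
               for w, o, l in zip(weights, parts_out, parts_lbl))
```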
  • when the second information includes information of multiple target units of the second neural network:
  • the information of the plurality of target units is sorted according to the identifier of the target unit in the second information;
  • the information of the multiple target units is sorted in the second information according to at least one of the following items:
  • Neurons, multiplicative coefficients of neurons, additive coefficients of neurons, deviations of neurons, weighting coefficients of neurons, and parameters of activation functions.
  • the second information can be sorted according to the target unit ID; for example, the smaller the ID, the earlier the position, or the larger the ID, the earlier the position.
  • the sorting of the information content can also be independent of the ID.
  • for example, the multiplicative coefficients of all neurons can be listed first, and then the additive coefficients of all neurons.
  • the multiplicative coefficients can come before the additive coefficients, or the additive coefficients can come first, or there may be no fixed order between the multiplicative and additive coefficients.
  • the order of the multiplicative coefficients may follow the IDs of the neurons in the previous layer associated with them, or the IDs of the neurons in the current layer associated with them, or any order; the order of the additive coefficients may follow the IDs of the current-layer neurons associated with them.
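  • The ordering conventions above amount to a serialization rule known to both sides. A minimal sketch, assuming a plain dictionary keyed by target-unit ID with hypothetical 'mult'/'add' fields:

```python
def serialize_second_info(units):
    """Order target-unit information by unit ID (smaller ID first), and
    within each unit list the multiplicative coefficients before the
    additive coefficients, so the receiver can rebuild the mapping."""
    payload = []
    for uid in sorted(units):                  # smaller ID ranked earlier
        payload.extend(units[uid]["mult"])     # multiplicative coefficients (weights)
        payload.extend(units[uid]["add"])      # additive coefficients (biases)
    return payload

msg = serialize_second_info({2: {"mult": [0.3, -0.1], "add": [0.05]},
                             1: {"mult": [1.2], "add": [-0.4]}})
# -> [1.2, -0.4, 0.3, -0.1, 0.05]
```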
  • the second information includes: the gradient information of target units whose gradient is greater than or equal to a preset threshold.
  • that is, only the gradient information of the target units whose gradient is greater than or equal to the preset threshold may be sent to the first communication device, so as to compress the second information.
  • the second information may also be compressed by other compression methods, such as entropy coding, Huffman coding and other lossy and lossless methods.
  • the second information may also indicate a compression method, for example, one or more bits are used to indicate a compression method of the second information.
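  • A sketch of this thresholding, with a leading bit indicating the compression method; reading "greater than or equal to the preset threshold" as a test on the gradient magnitude is an assumption here, as is the message layout.

```python
def compress_second_info(grads, threshold, method_bit=0):
    """Keep only (unit ID, gradient) pairs whose magnitude reaches the
    preset threshold, prefixed by a bit naming the compression method."""
    kept = {uid: g for uid, g in grads.items() if abs(g) >= threshold}
    return {"method": method_bit, "units": kept}

msg = compress_second_info({0: 0.002, 1: 0.21, 2: -0.35}, threshold=0.1)
# -> {'method': 0, 'units': {1: 0.21, 2: -0.35}}
```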
  • the at least one target unit is determined by configuration on the network side or reported by the terminal.
  • Dropout is a means of preventing overfitting in deep learning and provides good fault tolerance. The principle of Dropout is to let the activation value of a neuron stop working with a certain probability p during training.
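  • For reference, inverted dropout (a common formulation, assumed here) zeroes each activation with probability p during training and rescales the survivors so the expected layer output is unchanged:

```python
import torch

def dropout_train(activations, p=0.5):
    """Zero each activation with probability p; rescale by 1/(1-p)."""
    mask = (torch.rand_like(activations) >= p).float()
    return activations * mask / (1.0 - p)
```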
  • the above network-side configuration may be done in advance, or the protocol may default to a set containing some of the target units; during training, only the ID of the set needs to be configured or reported.
  • the gradients of the target units in the second information may use the same bit overhead or different bit overheads. Different target units can be assigned different bit overheads according to whether they are neurons, multiplicative coefficients, or additive coefficients; for example, the gradient bit overhead of multiplicative coefficients may be higher than that of additive coefficients.
  • the second information may be carried in downlink control information (DCI), a media access control control element (MAC CE), radio resource control (RRC) signaling, PUSCH, PDSCH, PDCCH, or PUCCH.
  • the second information may be sent periodically, semi-persistently or aperiodically.
  • when the second information is sent periodically, the period can be configured and updated by RRC or MAC CE; once configured, the second information is sent periodically according to the configured parameters. When it is sent semi-persistently, it can be activated by MAC CE or DCI and sent periodically after activation, until it is deactivated by MAC CE or DCI, or deactivated after a configured number of cycles.
  • the second information may be sent, or neural network training may be performed, automatically according to a network instruction, according to a terminal report, or when the device is idle.
  • the above-mentioned idle state includes at least one of the following: the power exceeds a certain threshold, or the device is in a charging state.
  • it should be noted that the information of the target unit is not limited to the identifier of the target unit and the information of the loss function with respect to the target unit; for example, it may also be specific parameters of the target unit.
  • in this embodiment, the first information sent by the first communication device is received, where the first information is output information of the first neural network of the first communication device, and the second information is sent to the first communication device, where the second information is information obtained by training using the first information or the third information as the input of the second neural network of the second communication device, and the third information is information obtained based on the first information, so that the communication performance of the communication device can be improved.
  • FIG. 3 is a flowchart of another neural network information transmission method provided by an embodiment of the present application. The method is applied to a first communication device, as shown in FIG. 3, and includes the following steps:
  • Step 301: Send first information to a second communication device, where the first information is output information of a first neural network of the first communication device;
  • Step 302: Receive second information sent by the second communication device, where the second information is information obtained by training using the first information or third information as the input of a second neural network of the second communication device, and the third information is information obtained based on the first information.
  • the method further includes:
  • the first neural network is updated according to the second information.
  • the above-mentioned updating of the first neural network according to the second information may be: updating the output layer of the first neural network according to the information of the input layer of the second neural network carried in the second information, for example updating the information of the corresponding units of the output layer of the first neural network according to the information of the target units of the input layer of the second neural network; or updating the neuron parameters of the output layer of the first neural network (biases, weights, parameters of the activation function, etc.) based on the derivative, gradient, or partial derivative of the loss function of the second neural network with respect to the input-layer neurons of the second neural network; or updating according to the derivative, gradient, or partial derivative of the loss function of the second neural network with respect to the neurons of the output layer of the first neural network.
  • the above-mentioned neurons are not limited to the output layer of the first neural network or the input layer of the second neural network; they may also be in the input layer or a hidden layer of the first neural network, or in the output layer or a hidden layer of the second neural network. For example, the information of the corresponding units of a hidden layer of the first neural network may be updated according to the information of the target units of the input layer of the second neural network, or the information of the corresponding units of the input layer of the first neural network may be updated according to that same information; other combinations are not repeated here.
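  • In autograd terms, one reading of this update is that the first communication device resumes backpropagation from the received gradient of the second network's loss with respect to the first network's output. A minimal sketch, assuming PyTorch; the placeholder gradient stands in for the received second information.

```python
import torch
import torch.nn as nn

first_net = nn.Sequential(nn.Linear(32, 8))    # stand-in for the first NN
opt = torch.optim.SGD(first_net.parameters(), lr=1e-2)

x = torch.randn(4, 32)
first_info = first_net(x)                      # output previously sent to device 2

# Received "second information": gradient of device 2's loss w.r.t. first_info
# (a placeholder tensor here; in practice it is decoded from the message).
second_info = torch.randn_like(first_info)

opt.zero_grad()
first_info.backward(gradient=second_info)      # continue backpropagation locally
opt.step()                                     # update weights, biases, etc. of the output layer
```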
  • the first information includes at least one of the following: a signal, information, and signaling;
  • the signal includes at least one of the following: a first signal carried on a reference signal resource, and a second signal carried on a channel;
  • the information includes at least one of the following:
  • Channel state information, beam information, channel prediction information, interference information, positioning information, trajectory information, service prediction information, service management information, parameter prediction information, and parameter management information.
  • the signaling includes: control signaling.
  • the third information includes information obtained by operating at least one of the signal, information and signaling.
  • the second information includes: information of at least one target unit of the second neural network;
  • the target unit includes at least one of the following:
  • Neurons, multiplicative coefficients of neurons, additive coefficients of neurons, deviations of neurons, weighting coefficients of neurons, and parameters of activation functions.
  • the neuron includes at least one of the following: a convolution kernel, a pooling unit, and a recurrent unit.
  • the information of the target unit includes: the information of the loss function with respect to the target unit; or
  • the information of the target unit includes: the identifier of the target unit and the information of the loss function with respect to the target unit.
  • the information of the loss function with respect to the target unit includes at least one of the following: gradient information, partial derivative information, and derivative information.
  • the gradient information includes: a combination of the gradient and at least one of the following: the historical information of the target unit, the learning rate, the learning step size, the exponential decay rate, and a constant;
  • the partial derivative information includes: a combination of the partial derivative and at least one of the foregoing items;
  • the derivative information includes: a combination of the derivative and at least one of the foregoing items.
  • the history information of the target unit is: the information of the target unit included in fourth information sent to the first communication device before the second information is sent, where the fourth information is information obtained by training the second neural network before obtaining the second information.
  • the gradient includes at least one of the following: the current gradient for obtaining the second information through training, and the gradient before the second information is obtained through training;
  • the partial derivation includes at least one of the following: the current partial derivation of the second information obtained by training, and the partial derivation before the second information is obtained through training;
  • the derivative includes at least one of the following: a current derivative of the second information obtained by training, and a derivative before the second information is obtained by training.
  • the loss function includes: a combined function of at least one of the error between the output of the second neural network and the label, the mean square error, the normalized mean square error, correlation, entropy, and mutual information; or
  • the loss function includes: a combined function of a constant and at least one of the error between the output of the second neural network and the label, the mean square error, the normalized mean square error, correlation, entropy, and mutual information; or
  • the loss function includes: a loss function obtained by weighted combination of loss information of multiple parts of the output of the second neural network, where the loss information includes at least one of the following: a loss value and a loss-related function.
  • the multiple parts of the output include:
  • when the second information includes information of multiple target units of the second neural network:
  • the information of the plurality of target units is sorted according to the identifier of the target unit in the second information;
  • the information of the multiple target units is sorted in the second information according to at least one of the following items:
  • Neurons, multiplicative coefficients of neurons, additive coefficients of neurons, deviations of neurons, weighting coefficients of neurons, and parameters of activation functions.
  • the second information includes: the gradient information of target units whose gradient is greater than or equal to a preset threshold.
  • the at least one target unit is determined by configuration on the network side or reported by the terminal.
  • this embodiment is an implementation on the first communication device side corresponding to the embodiment shown in FIG. 2; for the specific implementation, reference may be made to the relevant description of the embodiment shown in FIG. 2, which is not repeated here to avoid repetition. This embodiment can likewise improve the communication performance of the communication device.
  • FIG. 4 is a structural diagram of a neural network information transmission apparatus provided by an embodiment of the present application.
  • the apparatus is applied to a second communication device.
  • the neural network information transmission apparatus 400 includes:
  • a receiving module 401 configured to receive first information sent by a first communication device, wherein the first information is output information of a first neural network of the first communication device;
  • a sending module 402, configured to send second information to the first communication device, where the second information is information obtained by training using the first information or the third information as the input of the second neural network of the second communication device, and the third information is information obtained based on the first information.
  • the first information includes at least one of the following: a signal, information, and signaling;
  • the signal includes at least one of the following: a first signal carried on a reference signal resource, and a second signal carried on a channel;
  • the information includes at least one of the following:
  • Channel state information, beam information, channel prediction information, interference information, positioning information, trajectory information, service prediction information, service management information, parameter prediction information, parameter management information;
  • the signaling includes: control signaling.
  • the third information includes information obtained by operating at least one of the signal, information and signaling.
  • the second information includes: information of at least one target unit of the second neural network;
  • the target unit includes at least one of the following:
  • Neurons, multiplicative coefficients of neurons, additive coefficients of neurons, deviations of neurons, weighting coefficients of neurons, and parameters of activation functions.
  • the neuron includes at least one of the following: a convolution kernel, a pooling unit, and a recurrent unit.
  • the information of the target unit includes: the information of the loss function with respect to the target unit; or
  • the information of the target unit includes: the identifier of the target unit and the information of the loss function with respect to the target unit.
  • the information of the loss function with respect to the target unit includes at least one of the following: gradient information, partial derivative information, and derivative information.
  • the gradient information includes: a combination of the gradient and at least one of the following: the historical information of the target unit, the learning rate, the learning step size, the exponential decay rate, and a constant;
  • the partial derivative information includes: a combination of the partial derivative and at least one of the foregoing items;
  • the derivative information includes: a combination of the derivative and at least one of the foregoing items.
  • the history information of the target unit is: the information of the target unit included in fourth information sent to the first communication device before the second information is sent, where the fourth information is information obtained by training the second neural network before obtaining the second information.
  • the gradient includes at least one of the following: the current gradient for obtaining the second information through training, and the gradient before the second information is obtained through training;
  • the partial derivation includes at least one of the following: the current partial derivation of the second information obtained by training, and the partial derivation before the second information is obtained through training;
  • the derivative includes at least one of the following: a current derivative of the second information obtained by training, and a derivative before the second information is obtained by training.
  • the loss function includes: a combined function of at least one of the error between the output of the second neural network and the label, the mean square error, the normalized mean square error, correlation, entropy, and mutual information; or
  • the loss function includes: a combined function of a constant and at least one of the error between the output of the second neural network and the label, the mean square error, the normalized mean square error, correlation, entropy, and mutual information; or
  • the loss function includes: a loss function obtained by weighted combination of loss information of multiple parts of the output of the second neural network, where the loss information includes at least one of the following: a loss value and a loss-related function.
  • the multiple parts of the output include:
  • when the second information includes information of multiple target units of the second neural network:
  • the information of the plurality of target units is sorted according to the identifier of the target unit in the second information;
  • the information of the multiple target units is sorted in the second information according to at least one of the following items:
  • Neurons, multiplicative coefficients of neurons, additive coefficients of neurons, deviations of neurons, weighting coefficients of neurons, and parameters of activation functions.
  • the second information includes: the gradient information of target units whose gradient is greater than or equal to a preset threshold.
  • the at least one target unit is determined by configuration on the network side or reported by the terminal.
  • the neural network information transmission apparatus provided in the embodiment of the present application can implement each process in the method embodiment of FIG. 2 , which is not repeated here to avoid repetition, and can improve the communication performance of the communication device.
  • the neural network information transmission apparatus in this embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in the second communication device.
  • FIG. 5 is a structural diagram of another neural network information transmission apparatus provided by an embodiment of the present invention.
  • the apparatus is applied to a first communication device.
  • the neural network information transmission apparatus 500 includes:
  • a sending module 501 configured to send first information to a second communication device, wherein the first information is output information of a first neural network of the first communication device;
  • the receiving module 502 is configured to receive the second information sent by the second communication device, where the second information is information obtained by training using the first information or the third information as the input of the second neural network of the second communication device, and the third information is information obtained based on the first information.
  • the first information includes at least one of the following: a signal, information, and signaling;
  • the signal includes at least one of the following: a first signal carried on a reference signal resource, and a second signal carried on a channel;
  • the information includes at least one of the following:
  • Channel state information, beam information, channel prediction information, interference information, positioning information, trajectory information, service prediction information, service management information, parameter prediction information, and parameter management information.
  • the signaling includes: control signaling.
  • the third information includes information obtained by operating at least one of the signal, information and signaling.
  • the second information includes: information of at least one target unit of the second neural network;
  • the target unit includes at least one of the following:
  • Neurons, multiplicative coefficients of neurons, additive coefficients of neurons, deviations of neurons, weighting coefficients of neurons, and parameters of activation functions.
  • the neuron includes at least one of the following: a convolution kernel, a pooling unit, and a recurrent unit.
  • the information of the target unit includes: the information of the loss function with respect to the target unit; or
  • the information of the target unit includes: the identifier of the target unit and the information of the loss function with respect to the target unit.
  • the information of the loss function with respect to the target unit includes at least one of the following: gradient information, partial derivative information, and derivative information.
  • the gradient information includes: a combination of the gradient and at least one of the following: the historical information of the target unit, the learning rate, the learning step size, the exponential decay rate, and a constant;
  • the partial derivative information includes: a combination of the partial derivative and at least one of the foregoing items;
  • the derivative information includes: a combination of the derivative and at least one of the foregoing items.
  • the history information of the target unit is: the information of the target unit included in fourth information sent to the first communication device before the second information is sent, where the fourth information is information obtained by training the second neural network before obtaining the second information.
  • the gradient includes at least one of the following: the current gradient for obtaining the second information through training, and the gradient before the second information is obtained through training;
  • the partial derivation includes at least one of the following: the current partial derivation of the second information obtained by training, and the partial derivation before the second information is obtained through training;
  • the derivative includes at least one of the following: a current derivative of the second information obtained by training, and a derivative before the second information is obtained by training.
  • the loss function includes: a combined function of at least one of the error between the output of the second neural network and the label, the mean square error, the normalized mean square error, correlation, entropy, and mutual information; or
  • the loss function includes: a combined function of a constant and at least one of the error between the output of the second neural network and the label, the mean square error, the normalized mean square error, correlation, entropy, and mutual information; or
  • the loss function includes: a loss function obtained by weighted combination of loss information of multiple parts of the output of the second neural network, where the loss information includes at least one of the following: a loss value and a loss-related function.
  • the multiple parts of the output include:
  • in a case where the second information includes information of multiple target units of the second neural network:
  • the information of the multiple target units is sorted in the second information according to identifiers of the target units; or
  • the information of the multiple target units is sorted in the second information according to at least one of the following: a neuron, a multiplicative coefficient of a neuron, an additive coefficient of a neuron, a deviation of a neuron, a weighting coefficient of a neuron, and a parameter of an activation function.
  • the second information includes: gradient information of a target unit whose gradient is greater than or equal to a preset threshold.
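  • A minimal sketch of such threshold-based reporting, with hypothetical names, is given below; target units whose gradients fall below the preset threshold are simply not carried in the second information:

```python
def report_gradients_above_threshold(gradients, threshold):
    # Illustrative sketch only: 'gradients' maps a target-unit
    # identifier to its gradient; only units at or above the preset
    # threshold are included in the second information.
    return {uid: g for uid, g in gradients.items() if abs(g) >= threshold}
```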
  • the at least one target unit is determined by network-side configuration or by terminal reporting.
  • the device further includes:
  • An update module configured to update the first neural network according to the second information.
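  • A minimal sketch of one possible update step follows, assuming the second information carries the gradient of the loss with respect to the first neural network's output and that the output layer is linear (all names are hypothetical, not part of this application):

```python
import numpy as np

def update_output_layer(W, b, layer_input, grad_wrt_output, lr=0.01):
    # Illustrative sketch only: the received gradient dL/dy (second
    # information) is propagated to the first network's output-layer
    # weights W and biases b by the chain rule, then one SGD step is
    # applied. W has shape (n_out, n_in); layer_input has shape (n_in,).
    W -= lr * np.outer(grad_wrt_output, layer_input)
    b -= lr * grad_wrt_output
    return W, b
```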
  • the neural network information transmission apparatus provided in this embodiment of the present application can implement each process in the method embodiment of FIG. 3 and can improve the communication performance of the communication device; to avoid repetition, details are not described here again.
  • the neural network information transmission apparatus in this embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in the first communication device.
  • FIG. 6 is a schematic diagram of a hardware structure of a communication device implementing an embodiment of the present application.
  • the communication device 600 includes but is not limited to components such as a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 608, and a processor 610.
  • the communication device 600 may also include a power source (such as a battery) for supplying power to the components, and the power source may be logically connected to the processor 610 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
  • the structure of the electronic device shown in FIG. 6 does not constitute a limitation on the electronic device, and the electronic device may include more or fewer components than those shown in the figure, combine some components, or use a different arrangement of components, which will not be repeated here.
  • a radio frequency unit 601 configured to receive first information sent by a first communication device, where the first information is output information of a first neural network of the first communication device;
  • the radio frequency unit 601 is further configured to send second information to the first communication device, where the second information is information obtained by training with the first information or the third information as the input of the second neural network of the second communication device, and the third information is information obtained based on the first information.
  • the first information includes at least one of the following: a signal, information, and signaling.
  • the signal includes at least one of the following: a first signal carried on a reference signal resource, and a second signal carried on a channel;
  • the information includes at least one of the following: channel state information, beam information, channel prediction information, interference information, positioning information, trajectory information, service prediction information, service management information, parameter prediction information, and parameter management information;
  • the signaling includes: control signaling.
  • the third information includes information obtained by performing an operation on at least one of the signal, the information, and the signaling.
  • the second information includes: information of at least one target unit of the second neural network.
  • the target unit includes at least one of the following: a neuron, a multiplicative coefficient of a neuron, an additive coefficient of a neuron, a deviation of a neuron, a weighting coefficient of a neuron, and a parameter of an activation function.
  • the neuron includes at least one of the following: a convolution kernel, a pooling unit, and a recurrent unit.
  • the information of the target unit includes: information of a loss function with respect to the target unit; or
  • the information of the target unit includes: an identifier of the target unit and target information of a loss function with respect to the target unit.
  • the information of the loss function with respect to the target unit includes at least one of the following: gradient information of the loss function with respect to the target unit, partial derivative information of the loss function with respect to the target unit, and derivative information of the loss function with respect to the target unit.
  • the gradient information includes: a combination of the gradient and at least one of the following: history information of the target unit, a learning rate, a learning step size, an exponential decay rate, and a constant;
  • the partial derivative information includes: a combination of the partial derivative and at least one of the following: history information of the target unit, a learning rate, a learning step size, an exponential decay rate, and a constant;
  • the derivative information includes: a combination of the derivative and at least one of the following: history information of the target unit, a learning rate, a learning step size, an exponential decay rate, and a constant;
  • the history information of the target unit is: the information of the target unit included in fourth information sent to the first communication device before the second information is sent, where the fourth information is information obtained by training the second neural network before the second information is obtained.
  • the gradient includes at least one of the following: the current gradient with which the second information is obtained through training, and a gradient obtained before the second information is obtained through training;
  • the partial derivative includes at least one of the following: the current partial derivative with which the second information is obtained through training, and a partial derivative obtained before the second information is obtained through training;
  • the derivative includes at least one of the following: the current derivative with which the second information is obtained through training, and a derivative obtained before the second information is obtained through training.
  • the loss function includes: a combined function of at least one of the error between the output of the second neural network and a label, a mean square error, a normalized mean square error, a correlation, an entropy, and mutual information; or
  • the loss function includes: a combined function of a constant and at least one of the error between the output of the second neural network and a label, a mean square error, a normalized mean square error, a correlation, an entropy, and mutual information; or
  • the loss function includes: a loss function obtained by a weighted combination of loss information of multiple parts of the output of the second neural network, where the loss information includes at least one of the following: a loss value and a loss-related function.
  • the multiple parts of the output include: multiple parts divided according to at least one of space domain resources, code domain resources, frequency domain resources, and time domain resources.
  • in a case where the second information includes information of multiple target units of the second neural network:
  • the information of the multiple target units is sorted in the second information according to identifiers of the target units; or
  • the information of the multiple target units is sorted in the second information according to at least one of the following: a neuron, a multiplicative coefficient of a neuron, an additive coefficient of a neuron, a deviation of a neuron, a weighting coefficient of a neuron, and a parameter of an activation function.
  • the second information includes: gradient information of a target unit whose gradient is greater than or equal to a preset threshold.
  • the at least one target unit is determined by network-side configuration or by terminal reporting.
  • This embodiment can improve the communication performance of the communication device.
  • an embodiment of the present invention further provides a communication device, where the communication device is a second communication device and includes a processor 610, a memory 608, and a program or instruction stored in the memory 608 and executable on the processor 610. When the program or instruction is executed by the processor 610, each process of the above embodiment of the neural network information transmission method is implemented, and the same technical effect can be achieved. To avoid repetition, details are not described here again.
  • FIG. 7 is a structural diagram of a communication device provided by an embodiment of the present invention.
  • the communication device 700 includes: a processor 701, a transceiver 702, a memory 703, and a bus interface, wherein:
  • a transceiver 702 configured to: send first information to a second communication device, where the first information is output information of a first neural network of the first communication device;
  • the transceiver 702 is further configured to: receive second information sent by the second communication device, where the second information is information obtained by training with the first information or the third information as the input of the second neural network of the second communication device, and the third information is information obtained based on the first information.
  • the first information includes at least one of the following: a signal, information, and signaling.
  • the signal includes at least one of the following: a first signal carried on a reference signal resource, and a second signal carried on a channel;
  • the information includes at least one of the following: channel state information, beam information, channel prediction information, interference information, positioning information, trajectory information, service prediction information, service management information, parameter prediction information, and parameter management information;
  • the signaling includes: control signaling.
  • the third information includes information obtained by performing an operation on at least one of the signal, the information, and the signaling.
  • the second information includes: information of at least one target unit of the second neural network.
  • the target unit includes at least one of the following: a neuron, a multiplicative coefficient of a neuron, an additive coefficient of a neuron, a deviation of a neuron, a weighting coefficient of a neuron, and a parameter of an activation function.
  • the neuron includes at least one of the following: a convolution kernel, a pooling unit, and a recurrent unit.
  • the information of the target unit includes: information of a loss function with respect to the target unit; or
  • the information of the target unit includes: an identifier of the target unit and target information of a loss function with respect to the target unit.
  • the information of the loss function with respect to the target unit includes at least one of the following: gradient information of the loss function with respect to the target unit, partial derivative information of the loss function with respect to the target unit, and derivative information of the loss function with respect to the target unit.
  • the gradient information includes: a combination of the gradient and at least one of the following: history information of the target unit, a learning rate, a learning step size, an exponential decay rate, and a constant;
  • the partial derivative information includes: a combination of the partial derivative and at least one of the following: history information of the target unit, a learning rate, a learning step size, an exponential decay rate, and a constant;
  • the derivative information includes: a combination of the derivative and at least one of the following: history information of the target unit, a learning rate, a learning step size, an exponential decay rate, and a constant;
  • the history information of the target unit is: the information of the target unit included in fourth information sent to the first communication device before the second information is sent, where the fourth information is information obtained by training the second neural network before the second information is obtained.
  • the gradient includes at least one of the following: the current gradient with which the second information is obtained through training, and a gradient obtained before the second information is obtained through training;
  • the partial derivative includes at least one of the following: the current partial derivative with which the second information is obtained through training, and a partial derivative obtained before the second information is obtained through training;
  • the derivative includes at least one of the following: the current derivative with which the second information is obtained through training, and a derivative obtained before the second information is obtained through training.
  • the loss function includes: a combined function of at least one of the error between the output of the second neural network and a label, a mean square error, a normalized mean square error, a correlation, an entropy, and mutual information; or
  • the loss function includes: a combined function of a constant and at least one of the error between the output of the second neural network and a label, a mean square error, a normalized mean square error, a correlation, an entropy, and mutual information; or
  • the loss function includes: a loss function obtained by a weighted combination of loss information of multiple parts of the output of the second neural network, where the loss information includes at least one of the following: a loss value and a loss-related function.
  • the multiple parts of the output include: multiple parts divided according to at least one of frequency domain resources and time domain resources.
  • in a case where the second information includes information of multiple target units of the second neural network:
  • the information of the multiple target units is sorted in the second information according to identifiers of the target units; or
  • the information of the multiple target units is sorted in the second information according to at least one of the following: a neuron, a multiplicative coefficient of a neuron, an additive coefficient of a neuron, a deviation of a neuron, a weighting coefficient of a neuron, and a parameter of an activation function.
  • the second information includes: gradient information of a target unit whose gradient is greater than or equal to a preset threshold.
  • the at least one target unit is determined by network-side configuration or by terminal reporting.
  • the processor 701 is configured to update the first neural network according to the second information.
  • This embodiment can improve the communication performance of the communication device.
  • the transceiver 702 is used for receiving and transmitting data under the control of the processor 701, and the transceiver 702 includes at least two antenna ports.
  • the bus architecture may include any number of interconnected buses and bridges, specifically linking together various circuits of one or more processors represented by the processor 701 and of a memory represented by the memory 703.
  • the bus architecture may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and, therefore, will not be described further herein.
  • the bus interface provides the interface.
  • Transceiver 702 may be a number of elements, including a transmitter and a receiver, that provide means for communicating with various other devices over a transmission medium.
  • for different user equipments, the user interface 704 may also be an interface capable of externally or internally connecting required devices, where the connected devices include but are not limited to a keypad, a display, a speaker, a microphone, a joystick, and the like.
  • the processor 701 is responsible for managing the bus architecture and general processing, and the memory 703 may store data used by the processor 701 in performing operations.
  • an embodiment of the present invention further provides a communication device, where the communication device is a first communication device and includes a processor 701, a memory 703, and a program or instruction stored in the memory 703 and executable on the processor 701. When the program or instruction is executed by the processor 701, each process of the above embodiment of the neural network information transmission method is implemented, and the same technical effect can be achieved. To avoid repetition, details are not described here again.
  • the second communication device may also have the structure shown in FIG. 7, and the first communication device may also have the structure shown in FIG. 6, which will not be repeated here.
  • Embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium; when the program or instruction is executed by a processor, the steps in the neural network information transmission method on the second communication device side provided by the embodiments of the present application are implemented, or the steps in the neural network information transmission method on the first communication device side provided by the embodiments of the present application are implemented.
  • the embodiments of the present application further provide a computer program product, where the computer program product is stored in a non-volatile storage medium, and the computer program product is executed by at least one processor to implement the steps in the neural network information transmission method on the second communication device side provided by the embodiments of the present application, or to implement the steps in the neural network information transmission method on the first communication device side provided by the embodiments of the present application.
  • the processor is the processor in the terminal or the network device described in the foregoing embodiment.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
  • An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the neural network information transmission method on the first communication device side or on the second communication device side provided by the embodiments of the present application, with the same technical effect, which is not repeated here.
  • the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
  • the method of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or a CD-ROM) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application provides a neural network information transmission method and apparatus, a communication device, and a storage medium. The method includes: receiving first information sent by a first communication device, where the first information is output information of a first neural network of the first communication device; and sending second information to the first communication device, where the second information is information obtained by training with the first information or third information as the input of a second neural network of a second communication device, and the third information is information obtained based on the first information.

Description

神经网络信息传输方法、装置、通信设备和存储介质
相关申请的交叉引用
本申请主张在2020年10月09日在中国提交的中国专利申请No.202011074715.3的优先权,其全部内容通过引用包含于此。
技术领域
本申请属于通信技术领域,具体涉及一种神经网络信息传输方法、装置、通信设备和存储介质。
背景技术
在通信系统中通信设备之间往往需要传递大量的信息,例如:终端与网络设备之间传递信息,终端与终端之间传递信息,网络设备与网络设备之间传递信息。虽然在一些通信系统中在通信设备中引入了神经网络,但这些神经网络都是在终端研发阶段训练的,且是采用实验数据进行训练的,这样导致终端的通信性能比较低。
发明内容
本申请实施例提供一种神经网络信息传输方法、装置、通信设备和存储介质,能够解决通信设备的通信性能比较低的问题。
第一方面,本申请实施例提供一种神经网络信息传输方法,应用于第二通信设备,包括:
接收第一通信设备发送的第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
向所述第一通信设备发送第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
第二方面,本申请实施例提供一种神经网络信息传输方法,应用于第一通信设备,包括:
向第二通信设备发送第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
接收所述第二通信设备发送的第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
第三方面,本申请实施例提供一种神经网络信息传输装置,应用于第二通信设备,包括:
接收模块,用于接收第一通信设备发送的第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
发送模块,用于向所述第一通信设备发送第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
第四方面,本申请实施例提供一种神经网络信息传输装置,应用于第一通信设备,包括:
发送模块,用于向第二通信设备发送第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
接收模块,用于接收所述第二通信设备发送的第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
第五方面,本申请实施例提供一种通信设备,所述通信设备为第二通信设备,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的程序或者指令,所述程序或者指令被所述处理器执行时实现本申请实施例提供的第二通信设备侧的神经网络信息传输方法中的步骤。
第六方面,本申请实施例提供一种通信设备,所述通信设备为第一通信设备,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的程序或者指令,所述程序或者指令被所述处理器执行时实现本申请实施例提供的第一通信设备侧的神经网络信息传输方法中的步骤。
第七方面,本申请实施例提供一种可读存储介质,所述可读存储介质上 存储有程序或指令,所述程序或指令被处理器执行时实现本申请实施例提供的第二通信设备侧的神经网络信息传输方法中的步骤,或者,所述程序或指令被处理器执行时实现本申请实施例提供的第一通信设备侧的神经网络信息传输方法中的步骤。
第八方面,提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行网络侧设备程序或指令,实现如第一方面所述的方法,或实现如第二方面所述的方法。
第九方面,提供了一种计算机程序产品,所述计算机程序产品被存储在非易失的存储介质中,所述计算机程序产品被至少一个处理器执行以实现如第一方面所述的方法,或实现如第二方面所述的方法。
本申请实施例中,接收第一通信设备发送的第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;向所述第一通信设备发送第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。这样,由于可以将所述第一信息或者第三信息作为第二神经网络的输入进行训练,并向第一通信设备发送训练得到的第二信息,从而实现实时对第二神经网络进行训练,以提高通信设备的通信性能。
附图说明
图1示出本申请实施例可应用的一种无线通信系统的框图;
图2是本申请实施例提供的一种神经网络信息传输方法的流程图;
图3是本申请实施例提供的另一种神经网络信息传输方法的流程图;
图4是本申请实施例提供的一种神经网络信息传输装置的结构图;
图5是本申请实施例提供的另一种神经网络信息传输装置的结构图;
图6是本申请实施例提供的一种通信设备的结构图;
图7是本申请实施例提供的另一种通信设备的结构图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,以便本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施,且“第一”、“第二”所区别的对象通常为一类,并不限定对象的个数,例如第一对象可以是一个,也可以是多个。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”一般表示前后关联对象是一种“或”的关系。
值得指出的是,本申请实施例所描述的技术不限于长期演进型(Long Term Evolution,LTE)/LTE的演进(LTE-Advanced,LTE-A)系统,还可用于其他无线通信系统,诸如码分多址(Code Division Multiple Access,CDMA)、时分多址(Time Division Multiple Access,TDMA)、频分多址(Frequency Division Multiple Access,FDMA)、正交频分多址(Orthogonal Frequency Division Multiple Access,OFDMA)、单载波频分多址(Single-carrier Frequency-Division Multiple Access,SC-FDMA)和其他系统。本申请实施例中的术语“系统”和“网络”常被可互换地使用,所描述的技术既可用于以上提及的系统和无线电技术,也可用于其他系统和无线电技术。然而,以下描述出于示例目的描述了新空口(New Radio,NR)系统,并且在以下大部分描述中使用NR术语,这些技术也可应用于NR系统应用以外的应用,如第6代(6th Generation,6G)通信系统。
图1示出本申请实施例可应用的一种无线通信系统的框图。无线通信系统包括终端11和网络设备12。其中,终端11也可以称作终端设备或者用户终端(User Equipment,UE),终端11可以是手机、平板电脑(Tablet Personal Computer)、膝上型电脑(Laptop Computer)或称为笔记本电脑、个人数字助理(Personal Digital Assistant,PDA)、掌上电脑、上网本、超级移动个人计算机(ultra-mobile personal computer,UMPC)、移动上网装置(Mobile Internet  Device,MID)或车载设备(VUE)、行人终端(PUE)、RedCap UE等终端侧设备,其中,RedCap UE可以包括:穿戴设备、工业传感器、视频监控设备等,穿戴设备包括:手环、耳机、眼镜等。需要说明的是,在本申请实施例并不限定终端11的具体类型。
网络设备12可以是基站或核心网,其中,基站可被称为节点B、演进节点B、接入点、基收发机站(Base Transceiver Station,BTS)、无线电基站、无线电收发机、基本服务集(Basic Service Set,BSS)、扩展服务集(Extended Service Set,ESS)、B节点、演进型B节点(eNB)、家用B节点、家用演进型B节点、WLAN接入点、WiFi节点、发送接收点(Transmitting Receiving Point,TRP)或所述领域中其他某个合适的术语,只要达到相同的技术效果,所述基站不限于特定技术词汇,需要说明的是,在本申请实施例中仅以NR系统中的基站为例,但是并不限定基站的具体类型。
另外,本申请实施例中,可以应用于支持广播/多播(broadcast/multicast)特性的场景,例如:公共安全和关键任务(public safety and mission critical)、V2X应用(V2X applications),透明IPv4/IPv6多播传送(transparent IPv4/IPv6 multicast delivery)、IPTV、无线软件传送(software delivery over wireless)、组通信和物联网应用(group communications and IoT applications)等支持中broadcast/multicast特性的场景。当然,对此本申请实施例不作限定,例如:还可以其他单播的场景。
下面结合附图,通过具体的实施例及其应用场景对本申请实施例提供的神经网络信息传输方法、装置、通信设备和存储介质进行详细地说明。
请参见图2,图2是本申请实施例提供的一种神经网络信息传输方法的流程图,该方法应用于第二通信设备,如图2所示,包括以下步骤:
步骤201、接收第一通信设备发送的第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息。
上述方法应用于第二通信设备可以理解为该方法由第二通信设备执行。
本申请实施例中,第一通信设备可以是终端,第二通信设备可以是网络设备,或者第一通信设备可以是终端,第二通信设备可以是另一终端,或者第一通信设备可以是网络设备,第二通信设备可以是另一网络设备。
进一步,上述第一通信设备还可以是一个或者多个设备,即接收一个或者多个设备发送的第一信息,并向一个或者多个设备发送第二信息。例如:接收一个终端组发送的第一信息,在得到第二信息之后,再向该终端组发送第二信息。
上述第一神经网络可以是卷积神经网络(Convolutional Neural Networks,CNN)或者循环神经网络(Recurrent Neural Network,RNN),本发明实施例对此不作限定,例如:还可以是其他深度神经网络,如生成式对抗网络(Generative Adversarial Network,GAN)或者长短期记忆网络(Long Short-Term Memory,LSTM)等等。
另外,第一神经网络输出的信息可以是第一通信设备向第二通信设备发送的信息,例如:终端向网络设备发送的信息,网络设备向终端发送的信息,终端之间传递的信息,或者网络设备之间传递的信息。
需要说明的是,本申请实施例中,接收第一信息可以包括解调、信道译码、信源译码、解压缩、校验等中至少一项处理,以得到第一信息,因为,在第一通信设备端,第一通信设备可能会对第一信息进行信源编码、压缩、信道编码、调制等至少一项处理,以转化为信号发送给第二通信设备。
步骤202、向所述第一通信设备发送第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
第二通信设备接收到上述第一信息之后,可以将第一信息输入至上述第二神经网络进行训练,以得到上述第二信息,或者可以是在接收到上述第一信息后,对第一信息执行相关操作得到第三信息,再将第三信息输入至上述第二神经网络进行训练,以得到上述第二信息。另外,由于上述第二信息是训练得到的,且用于发送给第一通信设备,从而上述第二信息也可以称作训练交互信息。进一步的,上述第一通信设备接收到上述第二信息后,可以基于第二信息更新第一神经网络,即上述第二信息可以用于第一通信设备更新第一神经网络。
本申请实施例中,神经网络也可以理解为人工智能(Artificial Intelligence,AI)网络。
本申请实施例中,可以将所述第一信息或者第三信息作为第二神经网络的输入进行训练,并向第一通信设备发送训练得到的第二信息,从而实现实时对第二神经网络进行训练,即实时优化第二神经网络,以提高第二神经网络的业务性能,进而提高通信设备的通信性能。
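为便于理解上述训练交互过程,下面给出一个示意性的Python示例(其中第二神经网络被简化为单层线性网络,所有函数名、变量名与维度均为假设,仅用于说明,并非本申请的限定):

```python
import numpy as np

rng = np.random.default_rng(0)
W2 = rng.standard_normal((4, 8)) * 0.1     # 假设:第二神经网络为单层线性网络
first_info = rng.standard_normal(8)        # 第一信息:第一神经网络的输出
label = rng.standard_normal(4)             # 标签

y = W2 @ first_info                        # 将第一信息作为第二神经网络的输入
err = y - label
loss = np.mean(err ** 2)                   # 损失函数:输出与标签的均方误差

grad_y = 2.0 * err / err.size              # 损失函数对第二神经网络输出的梯度
second_info = W2.T @ grad_y                # 第二信息:损失函数对输入(即第一神经网络输出)的梯度
W2 -= 0.01 * np.outer(grad_y, first_info)  # 对第二神经网络进行一次训练(SGD)
```

第一通信设备收到上述second_info后,即可据此继续反向传播并更新第一神经网络。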
作为一种可选的实施方式,所述第一信息包括如下至少一项:
信号、信息和信令。
其中,上述信号包括但不限于如下至少一项:
参考信号资源上承载的第一信号、信道承载的第二信号。
例如:上述第一信号可以包括如下至少一项:
解调参考信号(Demodulation Reference Signal,DMRS)、信道探测参考信号(Sounding Reference Signal,SRS)、同步信号块(synchronization signal block,SSB)、跟踪参考信号(tracking reference signal TRS)、相位跟踪参考信号(Phase-tracking reference signal,PTRS)、信道状态信息参考信号(Channel State Information reference signal,CSI-RS)等。
另外,上述第一信号可以是用于信号处理,例如:信号检测、滤波、均衡等。
上述第二信号可以包括如下至少一项:
物理下行控制信道(Physical Downlink Control Channel,PDCCH)、物理下行共享信道(Physical Downlink Shared Channel,PDSCH)、物理上行控制信道(Physical Uplink Control Channel,PUCCH)、物理上行共享信道(Physical Uplink Shared Channel,PUSCH)、物理随机接入信道(Physical Random Access Channel,PRACH)、物理广播信道(Physical Broadcast Channel,PBCH)等。
其中,上述信息包括但不限于如下至少一项:
信道状态信息、波束信息、信道预测信息、干扰信息、定位信息、轨迹信息、业务预测信息、业务管理信息、参数预测信息、参数管理信息。
上述信道状态信息可以包括信道状态信息反馈信息。例如:包括信道相关信息、信道矩阵相关信息、信道特征信息、信道矩阵特征信息、预编码矩阵指示(Precoding Matrix Indicator,PMI)、秩指示(rank indication,RI)、CSI-RS资源指示符(CSI-RS resource indicator,CRI)、信道质量指示(Channel  Quality Indicator,CQI)、层指示(layer indicator,LI)等。
上述信道状态信息也可以包括频分双工(Frequency-division Duplex,FDD)上下行部分互易性的信道状态信息。例如:对于FDD系统,根据部分互异性,网络设备根据上行信道获取角度和时延信息,可以通过CSI-RS预编码或者直接指示的方法,将角度信息和时延信息、或其它具有部分互易性的信道状态信息、或直接估计的下行信道信息通知终端,终端可以根据基站的指示上报或者在基站的指示范围内选择并上报,从而减少终端的计算量和CSI上报的开销。
上述波束信息可以包括波束质量、波束的指示信息(例如:参考信号ID)、波束失败指示信息、波束失败恢复中的新波束指示信息。
进一步的,上述波束信息可以用于波束管理,如包括波束测量、波束上报、波束预测、波束失败检测、波束失败恢复、波束失败恢复中的新波束指示。
上述信道预测信息可以包括信道状态信息的预测和波束预测等中的至少一项。
上述干扰信息可以包括小区内干扰、小区间干扰、带外干扰、交调干扰等干扰信息。
上述定位信息可以是通过参考信号(例如SRS)估计出的终端的具体位置的信息,如包括水平位置和垂直位置中的至少一项,或者用于辅助位置估计的信息。
上述轨迹信息可以是通过参考信号(例如SRS),估计出的终端未来可能的轨迹,或用于辅助轨迹估计的信息。
上述业务预测信息可以是高层业务的预测信息,例如:预测的吞吐量、所需数据包大小、业务需求等等
上述业务管理信息可以是高层业务的管理信息,例如:移动速度、噪声信息等等。
上述参数预测信息可以是预测的移动速度、噪声信息等等
上述参数管理信息可以是移动速度、噪声信息等等。
其中,上述信令但不限于包括:控制信令。例如:功率控制的相关信令, 波束管理的相关信令等。
可选的,所述第三信息包括对所述信号、信息和信令中的至少一项操作得到的信息。
上述操作可以是信号检测、信号处理、均衡、调制、解调等操作,具体对此不作限定。进一步的,上述操作可以是根据第一信息受到的噪声、干扰、无线信道的衰落、时延等影响执行的相应操作。
该实施方式中,可以实现将第一信息中的部分或者全部执行操作后,再输入至第二神经网络。
需要说明的是,本申请实施例中,一些数据和信息(例如:信道状态信息、波束信息等),可以直接作为第二神经网络的输入。
作为一种可选的实施方式,上述第二信息包括:
所述第二神经网络的至少一个目标单元的信息。
第二信息包括上述至少一个目标单元的信息可以理解为,第二信息是多个目标单元的信息的组合。
其中,目标单元可以包括如下至少一项:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
上述乘性系数也可以称作权值(weight),上述加性系数也可以称作偏置(bias)。
上述激活函数的参数,例如,泄露修正线性单元(leaky Rectified Linear Unit,leaky ReLU)或参数化修正线性单元(Parametric Rectified Linear Unit,PReLU)的泄露参数。
上述神经元可以包括如下至少一项:
卷积核、池化(pooling)单元、循环单元。
例如:对于CNN,上述神经元可以包括卷积核,如上述目标单元可以包括:卷积核、卷积核的权重系数(也可以称作卷积核的乘性系数)、卷积核的偏差量(也可以称作卷积核的加性系数)。
对于CNN,上述神经元可以包括池化单元,而上述目标单元可以包括:卷积核、池化方法、及卷积核的参数。
例如:对于循环递归网络/循环神经网络(Recurrent Neural Network,RNN),上述神经元可以包括循环单元,如目标单元可以包括:循环单元和循环单元的加权系数,其中,循环单元的加权系数可以包括循环单元的乘性加权系数(如包括之前的状态对现在状态影响的权重、之前的状态对现在的输入影响的权重),以及循环单元的加性加权系数(即偏置)。需要说明的是,在RNN中循环单元为某个特殊的神经元,其输入不仅包括当前输入,还可以包括上一次的输入,和/或上一次的中间信息。
可选的,所述目标单元的信息包括:损失函数对所述目标单元的信息;或者
所述目标单元的信息包括:所述目标单元的标识和损失函数对所述目标单元的目标信息。
其中,上述目标单元可以是上述第二信息对应的任一目标单元,即每个目标单元的信息均可以包括损失函数对目标单元的信息,或者可以包括目标单元的标识和损失函数对目标单元的目标信息。
可选的,上述损失函数对所述目标单元的信息,可以包括如下至少一项:
损失函数对所述目标单元的梯度信息;
损失函数对所述目标单元的偏导信息;
损失函数对所述目标单元的导数信息。
其中,所述梯度信息可以包括:梯度与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数。
上述组合的方式可以包括加减乘数、N次方、N次开根号、对数、求导、求偏导等各种常见数学操作的组合,其中,N为任意数,例如,N可以为正数或负数或0、实数或复数。
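例如,梯度与目标单元的历史信息、学习速率、指数衰减率的一种可能的组合方式如下(示意性Python示例,其中的名称均为假设):

```python
def combine_gradient(grad, history, learning_rate=0.01, decay=0.9):
    # 示意性示例:按指数衰减率将本次梯度与目标单元的历史信息组合,
    # 再与学习速率相乘;实际的组合方式不限于此。
    momentum = decay * history + (1.0 - decay) * grad
    return learning_rate * momentum
```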
其中,上述梯度包括如下至少一项:训练得到所述第二信息的本次梯度、训练得到所述第二信息之前的梯度;
上述训练得到所述第二信息之前的梯度可以是,可以是训练得到上述第二信息之前的一次或者多次训练得到的梯度。
其中,上述偏导信息可以包括:偏导与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数。
上述组合方式可以参见上述梯度的组合方式,此处不作赘述。
进一步的,上述偏导可以包括如下至少一项:训练得到所述第二信息的本次偏导、训练得到所述第二信息之前的偏导。
其中,上述导数信息包括:导数与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数。
上述组合方式可以参见上述梯度的组合方式,此处不作赘述。
进一步的,上述导数可以包括如下至少一项:训练得到所述第二信息的本次导数、训练得到所述第二信息之前的导数。
可选的,目标单元的历史信息为:在发送所述第二信息之前向所述第一通信设备发送的第四信息中包括的所述目标单元的信息,其中,所述第四信息为在获得到所述第二信息之前对所述第二神经网络进行训练得到的信息。
上述第四信息可以是在基于上述第一信息或者第三信息进行训练的前K次训练得到的信息,K为正整数。例如:上述第一信息为第一通信设备发送的第n个信息,而上述第四信息可以是第一通信设备发送第n-k(k=1,2,3,..,K)个信息进行训练得到的信息;又例如,上述第四信息可以是第一通信设备发送第n-k(k=1,2,3,..,K)个信息、第n-k+1个信息、…、第n-k+L(L=1,2,…,k-1)个信息进行训练得到的信息。
该实施方式中,由于包括目标单元的历史信息的组合,这样使得第一通信使用第二信息更新第一神经网络时更加准确。
另外,上述指数衰减率可以包括训练过程中的第一次估计的指数衰减率、第二次估计的指数衰减率、第N次估计的指数衰减率等等,其中N为正整数。
而上述常数可以是预先定义的一个或者多个常数。
可选的,所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项的组合函数;或者
所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项与常数的组合函数;或者
所述损失函数包括:所述第二神经网络的输出的多个部分的损失信息加 权组合得到的损失函数,所述损失信息包括如下至少一项:损失值、损失关联的函数。
其中,上述组合的方式可以包括加减乘数、N次方、N次开根号、对数、求导、求偏导等各种常见数学操作的组合,N为任意数,例如,N可以为正数或负数或0、实数或复数。
在损失函数包括所述第二神经网络的输出的多个部分的损失信息加权组合得到的损失函数的实施方式中,每个部分的损失关联的函数均可以包括输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项的组合函数;或者每个部分的损失关联的函数均可以包括输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项与常数的组合函数。
另外,损失信息加权组合的加权可以包括线性平均、乘性平均及其它常见平均方法的组合。
由于第二信息包括的损失函数包括多个部分的损失信息加权组合得到的损失函数,这样可以使得第一通信设备更新的第一神经网络更加准确。
可选的,所述输出的多个部分包括:
按照空域资源、码域资源、频域资源和时域资源中的至少一项进行划分的多个部分。
例如:第二神经网络的输出可以按照天线、天线元、天线面板(panel)、发送接收单元(Transmitter and Receiver Unit,TXRU)、波束(模拟波束、数字波束)、层(Layer)、秩(Rank)、天线角度(如倾角)等方式划分。
例如:第二神经网络的输出可以按照不同的正交或非正交的码域等方式划分。码域有多种划分方法,如码分多址(Code Division Multiple Access,CDMA)等。
例如:第二神经网络的输出可以按照资源块(Resource Block,RB)、子带或者物理资源组(physical resource group,PRG)等方式划分。
例如:第二神经网络的输出可以按照子载波、符号、时隙或者半时隙等方式划分。
作为一种可选的实施方式,在所述第二信息包括所述第二神经网络的多 个目标单元的信息的情况下:
所述多个目标单元的信息在所述第二信息中按照目标单元的标识进行排序;或者
所述多个目标单元的信息在所述第二信息中按照如下至少一项进行排序:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
例如:第二信息中可以根据目标单元ID进行排序,如ID越小,顺序越在前面,或ID越大,顺序越在前面,当然,信息内容的排序也可以与ID无关。
上述按照上述至少一项进行排序时,可以是在目标单元包括乘性系数、加性系数时,对于多个神经元可以是先列出所有神经元的乘性系数,再列出所有神经元的加性系数,或者先列出所有神经元的加性系数,再列出所有神经元的乘性系数,或者一个神经元的乘性系数和加性系数列完,再列下一个个神经元的乘性系数和加性系数。而对于单个神经元可以是乘性系数在前,加性系数在后,或者加性系数在前,乘性系数在后,或者乘性系数和加性系数前后没有固定要求。
而对于乘性系数的顺序可以是乘性系数关联的上一层神经元的ID,或者乘性系数关联的当前层神经元的ID,或者任意顺序。
而对于加性系数的顺序可以是加性系数关联的当前层神经元的ID。
需要说明的是,上述顺序仅是举例说明,本申请实施例中对第二信息中的内容的排序不作限定。
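作为其中一种排序方式的示意(先按神经元ID列出所有神经元的乘性系数,再列出所有神经元的加性系数;以下Python示例中的名称与数据结构均为假设):

```python
def serialize_second_info(neurons):
    # neurons: 神经元ID -> {"mult": [乘性系数列表], "add": 加性系数} 的字典(假设的数据结构)
    payload = []
    for nid in sorted(neurons):               # 按神经元ID从小到大
        payload.extend(neurons[nid]["mult"])  # 先列出所有乘性系数
    for nid in sorted(neurons):
        payload.append(neurons[nid]["add"])   # 再列出所有加性系数
    return payload
```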
作为一种可选的实施方式,所述第二信息包括:
梯度大于或者等于预设门限的目标单元的梯度信息。
该实施方式中,可以实现只向第一通信设备发送梯度大于或者等于预设门限的目标单元的梯度信息,以实现对第二信息的压缩。
当然,第二信息还可以采用其他压缩方法,例如:使用熵编码、霍夫曼编码等各类有损、无损的方式等方式进行压缩。
进一步的,第二信息还可以指示压缩方法,例如:采用一个或者多个比特指示第二信息的压缩方法。
作为一种可选的实施方式,所述至少一个目标单元为网络侧配置或者终端上报确定的。
该实施方式中,可以实现仅发送部分目标单元的信息给第一通信设备,以达到随机释放(Dropout)的效果,其中,Dropout为在深度学习中是一种防止过拟合的手段,具有很好的容错能力。Dropout的原理是在训练中,让某个神经元的激活值以一定的概率p停止工作。
当然,上述网络侧配置可以是提前配置,或协议默认一些部分目标单元的集合。且在训练时,可以仅配置或上报集合的ID即可。
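下面给出按概率选取部分目标单元进行上报的一个示意性示例(其中的名称均为假设;实际中也可按网络侧配置或协议默认的集合ID选取):

```python
import random

def select_target_units(unit_ids, keep_prob=0.8, seed=None):
    # 示意性示例:每个目标单元以概率 keep_prob 被保留上报,
    # 未被选中的目标单元不包含在第二信息中,达到类似Dropout的效果。
    rng = random.Random(seed)
    return [uid for uid in unit_ids if rng.random() < keep_prob]
```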
可选的,在第二信息中目标单元的梯度使用相同的比特开销,或不同的比特开销,例如:不同的目标单元可以根据神经元、乘性系数、加性系数,分不同的比特开销。例如,乘性系数的梯度比特开销高于加性系数的梯度。
可选的,第二信息可以承载于下行控制信息(Downlink Control Information,DCI)、媒体接入控制控制单元(media access control control element,MAC CE)无线资源控制(Radio Resource Control,RRC)、PUSCH,PDSCH,PDCCH或者PUCCH中。
可选的,第二信息可以为周期、半持续或者非周期发送。
其中,周期发送时,周期可以由RRC或者MAC CE配置和更新;另外,一旦配置,则根据配置的参数,周期性发送。
半持续发送时,由RRC或者MAC CE配置和更新;另外,配置后,由MAC CE或者DCI激活,激活后周期性发送,直至MAC CE/或者DCI去激活,或配置的周期数目之后去激活。
非周期发送时,由RRC或者MAC CE配置和更新;另外,配置后,由MAC CE或者DCI激活,激活后发送L次,L=1,具体的,L可以由网络配置。
可选的,可以根据网络指示、根据终端上报或者闲时自动发送第二信息或者进行神经网络训练。
其中,上述闲时包括如下至少一项:
处于wifi状态;
电量超过某个门限值,或处于充电状态;
支持或处于某种特殊流量模式(例如第二信息的发送不算流量开销);
业务数据没有需求,或需求少于某个门限值。
需要说明的是,本申请实施例中,目标单元的信息并不限定包括:目标单元的标识和损失函数对目标单元的目标信息,例如:还可以是目标单元的具体参数。
本申请实施例中,接收第一通信设备发送的第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;向所述第一通信设备发送第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。这样由于可以将所述第一信息或者第三信息作为第二神经网络的输入进行训练,并向第一通信设备发送训练得到的第二信息,从而实现实时对第二神经网络进行训练,以提高通信设备的通信性能。
请参见图3,图3是本申请实施例提供的另一种神经网络信息传输方法的流程图,该方法应用于第一通信设备,如图3所示,包括以下步骤:
步骤301、向第二通信设备发送第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
步骤302、接收所述第二通信设备发送的第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
可选的,所述接收所述第二通信设备发送的第二信息之后,所述方法还包括:
依据所述第二信息更新所述第一神经网络。
上述依据所述第二信息更新所述第一神经网络可以是,依据第二信息中第二神经网络的输入层更新第一神经网络的输出层,如根据第二神经网络的输入层的目标单元的信息更新第一神经网络的输出层相应单元的信息;或者可以是根据第二神经网络的输出损失函数相对于第二神经网络的输入层神经元的导数或梯度或偏导,更新第一神经网络的输出层的神经元参数(偏置、权值、激活函数的参数等等),或更新第二神经网络的输出损失函数相对于第 一神经网络的输出层的神经元的导数或梯度或偏导。其中,上述神经元也可以不限于第一神经网络的输出层或第二神经网络的输入层,也可以为第一神经网络的输入层或隐藏层,或第二神经网络的输出层或隐藏层,如根据第二神经网络的输入层的目标单元的信息更新第一神经网络的隐藏层相应单元的信息,或根据第二神经网络的输入层的目标单元的信息更新第一神经网络的输入层相应单元的信息,其它组合不再赘述。
可选的,所述第一信息包括如下至少一项:
信号、信息和信令。
可选的,所述信号包括如下至少一项:
参考信号资源上承载的第一信号、信道承载的第二信号;
和/或,
所述信息包括如下至少一项:
信道状态信息、波束信息、信道预测信息、干扰信息、定位信息、轨迹信息、业务预测信息、业务管理信息、参数预测信息、参数管理信息。
和/或,
所述信令包括:控制信令。
可选的,所述第三信息包括对所述信号、信息和信令中的至少一项操作得到的信息。
可选的,所述第二信息包括:
所述第二神经网络的至少一个目标单元的信息。
可选的,所述目标单元包括如下至少一项:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
可选的,所述神经元包括如下至少一项:
卷积核、池化单元、循环单元。
可选的,所述目标单元的信息包括:损失函数对所述目标单元的信息;或者
所述目标单元的信息包括:所述目标单元的标识和损失函数对所述目标单元的目标信息。
可选的,所述损失函数对所述目标单元的信息,包括如下至少一项:
损失函数对所述目标单元的梯度信息;
损失函数对所述目标单元的偏导信息;
损失函数对所述目标单元的导数信息。
可选的,所述梯度信息包括:梯度与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
和/或,
所述偏导信息包括:偏导与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
和/或,
所述导数信息包括:导数与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
其中,所述目标单元的历史信息为:在发送所述第二信息之前向所述第一通信设备发送的第四信息中包括的所述目标单元的信息,其中,所述第四信息为在获得到所述第二信息之前对所述第二神经网络进行训练得到的信息。
可选的,所述梯度包括如下至少一项:训练得到所述第二信息的本次梯度、训练得到所述第二信息之前的梯度;
和/或,
所述偏导包括如下至少一项:训练得到所述第二信息的本次偏导、训练得到所述第二信息之前的偏导;
和/或,
所述导数包括如下至少一项:训练得到所述第二信息的本次导数、训练得到所述第二信息之前的导数。
可选的,所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项的组合函数;或者
所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项与常数的组合函数;或者
所述损失函数包括:所述第二神经网络的输出的多个部分的损失信息加权组合得到的损失函数,所述损失信息包括如下至少一项:损失值、损失关联的函数。
可选的,所述输出的多个部分包括:
按照频域资源和时域资源中的至少一项进行划分的多个部分。
可选的,在所述第二信息包括所述第二神经网络的多个目标单元的信息的情况下:
所述多个目标单元的信息在所述第二信息中按照目标单元的标识进行排序;或者
所述多个目标单元的信息在所述第二信息中按照如下至少一项进行排序:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
可选的,所述第二信息包括:
梯度大于或者等于预设门限的目标单元的梯度信息。
可选的,所述至少一个目标单元为网络侧配置或者终端上报确定的。
需要说明的是,本实施例作为与图2所示的实施例中对应的网络设备侧的实施方式,其具体的实施方式可以参见图2所示的实施例的相关说明,以为避免重复说明,本实施例不再赘述。本实施例中,同样可以通信设备的通信性能。
请参见图4,图4是本发明实施例提供一种神经网络信息传输装置的结构图,该装置应用于第二通信设备,如图4所示,神经网络信息传输装置400包括:
接收模块401,用于接收第一通信设备发送的第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
发送模块402,用于向所述第一通信设备发送第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
可选的,所述第一信息包括如下至少一项:
信号、信息和信令。
可选的,所述信号包括如下至少一项:
参考信号资源上承载的第一信号、信道承载的第二信号;
和/或,
所述信息包括如下至少一项:
信道状态信息、波束信息、信道预测信息、干扰信息、定位信息、轨迹信息、业务预测信息、业务管理信息、参数预测信息、参数管理信息;
和/或,
所述信令包括:控制信令。
可选的,所述第三信息包括对所述信号、信息和信令中的至少一项操作得到的信息。
可选的,所述第二信息包括:
所述第二神经网络的至少一个目标单元的信息。
可选的,所述目标单元包括如下至少一项:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
可选的,所述神经元包括如下至少一项:
卷积核、池化单元、循环单元。
可选的,所述目标单元的信息包括:损失函数对所述目标单元的信息;或者
所述目标单元的信息包括:所述目标单元的标识和损失函数对所述目标单元的目标信息。
可选的,所述损失函数对所述目标单元的信息,包括如下至少一项:
损失函数对所述目标单元的梯度信息;
损失函数对所述目标单元的偏导信息;
损失函数对所述目标单元的导数信息。
可选的,所述梯度信息包括:梯度与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
和/或,
所述偏导信息包括:偏导与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
和/或,
所述导数信息包括:导数与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
其中,所述目标单元的历史信息为:在发送所述第二信息之前向所述第一通信设备发送的第四信息中包括的所述目标单元的信息,其中,所述第四信息为在获得到所述第二信息之前对所述第二神经网络进行训练得到的信息。
可选的,所述梯度包括如下至少一项:训练得到所述第二信息的本次梯度、训练得到所述第二信息之前的梯度;
和/或,
所述偏导包括如下至少一项:训练得到所述第二信息的本次偏导、训练得到所述第二信息之前的偏导;
和/或,
所述导数包括如下至少一项:训练得到所述第二信息的本次导数、训练得到所述第二信息之前的导数。
可选的,所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项的组合函数;或者
所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项与常数的组合函数;或者
所述损失函数包括:所述第二神经网络的输出的多个部分的损失信息加权组合得到的损失函数,所述损失信息包括如下至少一项:损失值、损失关联的函数。
可选的,所述输出的多个部分包括:
按照空域资源、码域资源、频域资源和时域资源中的至少一项进行划分的多个部分。
可选的,在所述第二信息包括所述第二神经网络的多个目标单元的信息 的情况下:
所述多个目标单元的信息在所述第二信息中按照目标单元的标识进行排序;或者
所述多个目标单元的信息在所述第二信息中按照如下至少一项进行排序:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
可选的,所述第二信息包括:
梯度大于或者等于预设门限的目标单元的梯度信息。
可选的,所述至少一个目标单元为网络侧配置或者终端上报确定的。
本申请实施例提供的神经网络信息传输装置能够实现图2的方法实施例中的各个过程,为避免重复,这里不再赘述,且可以提高通信设备的通信性能。
需要说明的是,本申请实施例中的神经网络信息传输装置可以是装置,也可以是第二通信设备中的部件、集成电路、或芯片。
请参见图5,图5是本发明实施例提供的另一种神经网络信息传输装置的结构图,该装置应用于第一通信设备,如图5所示,神经网络信息传输装置500包括:
发送模块501,用于向第二通信设备发送第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
接收模块502,用于接收所述第二通信设备发送的第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
可选的,所述第一信息包括如下至少一项:
信号、信息和信令。
可选的,所述信号包括如下至少一项:
参考信号资源上承载的第一信号、信道承载的第二信号;
和/或,
所述信息包括如下至少一项:
信道状态信息、波束信息、信道预测信息、干扰信息、定位信息、轨迹信息、业务预测信息、业务管理信息、参数预测信息、参数管理信息。
和/或,
所述信令包括:控制信令。
可选的,所述第三信息包括对所述信号、信息和信令中的至少一项操作得到的信息。
可选的,所述第二信息包括:
所述第二神经网络的至少一个目标单元的信息。
可选的,所述目标单元包括如下至少一项:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
可选的,所述神经元包括如下至少一项:
卷积核、池化单元、循环单元。
可选的,所述目标单元的信息包括:损失函数对所述目标单元的信息;或者
所述目标单元的信息包括:所述目标单元的标识和损失函数对所述目标单元的目标信息。
可选的,所述损失函数对所述目标单元的信息,包括如下至少一项:
损失函数对所述目标单元的梯度信息;
损失函数对所述目标单元的偏导信息;
损失函数对所述目标单元的导数信息。
可选的,所述梯度信息包括:梯度与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
和/或,
所述偏导信息包括:偏导与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
和/或,
所述导数信息包括:导数与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
其中,所述目标单元的历史信息为:在发送所述第二信息之前向所述第一通信设备发送的第四信息中包括的所述目标单元的信息,其中,所述第四信息为在获得到所述第二信息之前对所述第二神经网络进行训练得到的信息。
可选的,所述梯度包括如下至少一项:训练得到所述第二信息的本次梯度、训练得到所述第二信息之前的梯度;
和/或,
所述偏导包括如下至少一项:训练得到所述第二信息的本次偏导、训练得到所述第二信息之前的偏导;
和/或,
所述导数包括如下至少一项:训练得到所述第二信息的本次导数、训练得到所述第二信息之前的导数。
可选的,所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项的组合函数;或者
所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项与常数的组合函数;或者
所述损失函数包括:所述第二神经网络的输出的多个部分的损失信息加权组合得到的损失函数,所述损失信息包括如下至少一项:损失值、损失关联的函数。
可选的,所述输出的多个部分包括:
按照频域资源和时域资源中的至少一项进行划分的多个部分。
可选的,在所述第二信息包括所述第二神经网络的多个目标单元的信息的情况下:
所述多个目标单元的信息在所述第二信息中按照目标单元的标识进行排序;或者
所述多个目标单元的信息在所述第二信息中按照如下至少一项进行排序:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
可选的,所述第二信息包括:
梯度大于或者等于预设门限的目标单元的梯度信息。
可选的,所述至少一个目标单元为网络侧配置或者终端上报确定的。
可选的,所述装置还包括:
更新模块,用于依据所述第二信息更新所述第一神经网络。
本申请实施例提供的神经网络信息传输装置能够实现图3的方法实施例中的各个过程,为避免重复,这里不再赘述,且可以提高通信设备的通信性能。
需要说明的是,本申请实施例中的神经网络信息传输装置可以是装置,也可以是第一通信设备中的部件、集成电路、或芯片。
图6为实现本申请实施例的一种通信设备的硬件结构示意图。
该通信设备600包括但不限于:射频单元601、网络模块602、音频输出单元603、输入单元604、传感器605、显示单元606、用户输入单元607、接口单元608、存储器608、以及处理器610等部件。
本领域技术人员可以理解,通信设备600还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器610逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。图6中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
射频单元601,用于接收第一通信设备发送的第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
射频单元601还用于向所述第一通信设备发送第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
可选的,所述第一信息包括如下至少一项:
信号、信息和信令。
可选的,所述信号包括如下至少一项:
参考信号资源上承载的第一信号、信道承载的第二信号;
和/或,
所述信息包括如下至少一项:
信道状态信息、波束信息、信道预测信息、干扰信息、定位信息、轨迹信息、业务预测信息、业务管理信息、参数预测信息、参数管理信息;
和/或,
所述信令包括:控制信令。
可选的,所述第三信息包括对所述信号、信息和信令中的至少一项操作得到的信息。
可选的,所述第二信息包括:
所述第二神经网络的至少一个目标单元的信息。
可选的,所述目标单元包括如下至少一项:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
可选的,所述神经元包括如下至少一项:
卷积核、池化单元、循环单元。
可选的,所述目标单元的信息包括:损失函数对所述目标单元的信息;或者
所述目标单元的信息包括:所述目标单元的标识和损失函数对所述目标单元的目标信息。
可选的,所述损失函数对所述目标单元的信息,包括如下至少一项:
损失函数对所述目标单元的梯度信息;
损失函数对所述目标单元的偏导信息;
损失函数对所述目标单元的导数信息。
可选的,所述梯度信息包括:梯度与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
和/或,
所述偏导信息包括:偏导与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
和/或,
所述导数信息包括:导数与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
其中,所述目标单元的历史信息为:在发送所述第二信息之前向所述第一通信设备发送的第四信息中包括的所述目标单元的信息,其中,所述第四信息为在获得到所述第二信息之前对所述第二神经网络进行训练得到的信息。
可选的,所述梯度包括如下至少一项:训练得到所述第二信息的本次梯度、训练得到所述第二信息之前的梯度;
和/或,
所述偏导包括如下至少一项:训练得到所述第二信息的本次偏导、训练得到所述第二信息之前的偏导;
和/或,
所述导数包括如下至少一项:训练得到所述第二信息的本次导数、训练得到所述第二信息之前的导数。
可选的,所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项的组合函数;或者
所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项与常数的组合函数;或者
所述损失函数包括:所述第二神经网络的输出的多个部分的损失信息加权组合得到的损失函数,所述损失信息包括如下至少一项:损失值、损失关联的函数。
可选的,所述输出的多个部分包括:
按照空域资源、码域资源、频域资源和时域资源中的至少一项进行划分的多个部分。
可选的,在所述第二信息包括所述第二神经网络的多个目标单元的信息的情况下:
所述多个目标单元的信息在所述第二信息中按照目标单元的标识进行排序;或者
所述多个目标单元的信息在所述第二信息中按照如下至少一项进行排序:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
可选的,所述第二信息包括:
梯度大于或者等于预设门限的目标单元的梯度信息。
可选的,所述至少一个目标单元为网络侧配置或者终端上报确定的。
本实施例可以提高通信设备的通信性能。
可选的,本发明实施例还提供一种通信设备,所述通信设备为第二通信设备,包括处理器610,存储器608,存储在存储器608上并可在所述处理器610上运行的程序或指令,该程序或指令被处理器610执行时实现上述神经网络信息传输方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
参见图7,图7是本发明实施例提供的一种通信设备的结构图,该通信设备700包括:处理器701、收发机702、存储器703和总线接口,其中:
收发机702,用于:向第二通信设备发送第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
收发机702还用于:接收所述第二通信设备发送的第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
可选的,所述第一信息包括如下至少一项:
信号、信息和信令。
可选的,所述信号包括如下至少一项:
参考信号资源上承载的第一信号、信道承载的第二信号;
和/或,
所述信息包括如下至少一项:
信道状态信息、波束信息、信道预测信息、干扰信息、定位信息、轨迹信息、业务预测信息、业务管理信息、参数预测信息、参数管理信息。
和/或,
所述信令包括:控制信令。
可选的,所述第三信息包括对所述信号、信息和信令中的至少一项操作得到的信息。
可选的,所述第二信息包括:
所述第二神经网络的至少一个目标单元的信息。
可选的,所述目标单元包括如下至少一项:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
可选的,所述神经元包括如下至少一项:
卷积核、池化单元、循环单元。
可选的,所述目标单元的信息包括:损失函数对所述目标单元的信息;或者
所述目标单元的信息包括:所述目标单元的标识和损失函数对所述目标单元的目标信息。
可选的,所述损失函数对所述目标单元的信息,包括如下至少一项:
损失函数对所述目标单元的梯度信息;
损失函数对所述目标单元的偏导信息;
损失函数对所述目标单元的导数信息。
可选的,所述梯度信息包括:梯度与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
和/或,
所述偏导信息包括:偏导与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
和/或,
所述导数信息包括:导数与如下至少一项的组合:
所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
其中,所述目标单元的历史信息为:在发送所述第二信息之前向所述第一通信设备发送的第四信息中包括的所述目标单元的信息,其中,所述第四信息为在获得到所述第二信息之前对所述第二神经网络进行训练得到的信息。
可选的,所述梯度包括如下至少一项:训练得到所述第二信息的本次梯度、训练得到所述第二信息之前的梯度;
和/或,
所述偏导包括如下至少一项:训练得到所述第二信息的本次偏导、训练得到所述第二信息之前的偏导;
和/或,
所述导数包括如下至少一项:训练得到所述第二信息的本次导数、训练得到所述第二信息之前的导数。
可选的,所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项的组合函数;或者
所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项与常数的组合函数;或者
所述损失函数包括:所述第二神经网络的输出的多个部分的损失信息加权组合得到的损失函数,所述损失信息包括如下至少一项:损失值、损失关联的函数。
可选的,所述输出的多个部分包括:
按照频域资源和时域资源中的至少一项进行划分的多个部分。
可选的,在所述第二信息包括所述第二神经网络的多个目标单元的信息的情况下:
所述多个目标单元的信息在所述第二信息中按照目标单元的标识进行排序;或者
所述多个目标单元的信息在所述第二信息中按照如下至少一项进行排序:
神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
可选的,所述第二信息包括:
梯度大于或者等于预设门限的目标单元的梯度信息。
可选的,所述至少一个目标单元为网络侧配置或者终端上报确定的。
可选的,处理器701用于:更新模块,用于依据所述第二信息更新所述第一神经网络。
本实施例可以提高通信设备的通信性能。
其中,收发机702,用于在处理器701的控制下接收和发送数据,所述收发机702包括至少两个天线端口。
在图7中,总线架构可以包括任意数量的互联的总线和桥,具体由处理器701代表的一个或多个处理器和存储器703代表的存储器的各种电路链接在一起。总线架构还可以将诸如外围设备、稳压器和功率管理电路等之类的各种其他电路链接在一起,这些都是本领域所公知的,因此,本文不再对其进行进一步描述。总线接口提供接口。收发机702可以是多个元件,即包括发送机和接收机,提供用于在传输介质上与各种其他装置通信的单元。针对不同的用户设备,用户接口704还可以是能够外接内接需要设备的接口,连接的设备包括但不限于小键盘、显示器、扬声器、麦克风、操纵杆等。
处理器701负责管理总线架构和通常的处理,存储器703可以存储处理器701在执行操作时所使用的数据。
优选的,本发明实施例还提供一种通信设备,所述通信设备为第一通信设备,包括处理器701,存储器703,存储在存储器703上并可在所述处理器701上运行的程序或者指令,该程序或者指令被处理器701执行时实现上述神经网络信息传输方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要说明的是,本申请实施例中第二通信设备也可以是如图7所示的结构,第一通信设备也可以是如图6所示的结构,此处不作赘述。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,所述程序或指令被处理器执行时实现本申请实施例提供的第二通信设备侧的神经网络信息传输方法中的步骤,或者,所述程序或指令被处理器执行时实现本申请实施例提供的第一通信设备侧的神经网络信息传输方法中的步骤。
本申请实施例还提供一种计算机程序产品,所述计算机程序产品被存储在非易失的存储介质中,所述计算机程序产品被至少一个处理器执行以实现 本申请实施例提供的第二通信设备侧的神经网络信息传输方法中的步骤,或者,所述计算机程序产品被至少一个处理器执行以实现本申请实施例提供的第一通信设备侧的神经网络信息传输方法中的步骤。
其中,所述处理器为上述实施例中所述的终端或者网络设备中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现本申请实施例提供的第一通信设备侧的神经网络信息传输方法或者本申请实施例提供的第二通信设备侧的神经网络信息传输方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片、系统芯片、芯片系统或片上系统芯片等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体 现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (40)

  1. 一种神经网络信息传输方法,应用于第二通信设备,其中,所述神经网络信息传输方法包括:
    接收第一通信设备发送的第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
    向所述第一通信设备发送第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
  2. 如权利要求1所述的方法,其中,所述第一信息包括如下至少一项:
    信号、信息和信令。
  3. 如权利要求2所述的方法,其中,所述信号包括如下至少一项:
    参考信号资源上承载的第一信号、信道承载的第二信号;
    和/或,
    所述信息包括如下至少一项:
    信道状态信息、波束信息、信道预测信息、干扰信息、定位信息、轨迹信息、业务预测信息、业务管理信息、参数预测信息、参数管理信息;
    和/或,
    所述信令包括:控制信令。
  4. 如权利要求2所述的方法,其中,所述第三信息包括对所述信号、信息和信令中的至少一项操作得到的信息。
  5. 如权利要求1所述的方法,其中,所述第二信息包括:
    所述第二神经网络的至少一个目标单元的信息。
  6. 如权利要求5所述的方法,其中,所述目标单元包括如下至少一项:
    神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
  7. 如权利要求6所述的方法,其中,所述神经元包括如下至少一项:
    卷积核、池化单元、循环单元。
  8. 如权利要求5所述的方法,其中,所述目标单元的信息包括:损失函 数对所述目标单元的信息;或者
    所述目标单元的信息包括:所述目标单元的标识和损失函数对所述目标单元的目标信息。
  9. 如权利要求8所述的方法,其中,所述损失函数对所述目标单元的信息,包括如下至少一项:
    损失函数对所述目标单元的梯度信息;
    损失函数对所述目标单元的偏导信息;
    损失函数对所述目标单元的导数信息。
  10. 如权利要求9所述的方法,其中,所述梯度信息包括:梯度与如下至少一项的组合:
    所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
    和/或,
    所述偏导信息包括:偏导与如下至少一项的组合:
    所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
    和/或,
    所述导数信息包括:导数与如下至少一项的组合:
    所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
    其中,所述目标单元的历史信息为:在发送所述第二信息之前向所述第一通信设备发送的第四信息中包括的所述目标单元的信息,其中,所述第四信息为在获得到所述第二信息之前对所述第二神经网络进行训练得到的信息。
  11. 如权利要求10所述的方法,其中,所述梯度包括如下至少一项:训练得到所述第二信息的本次梯度、训练得到所述第二信息之前的梯度;
    和/或,
    所述偏导包括如下至少一项:训练得到所述第二信息的本次偏导、训练得到所述第二信息之前的偏导;
    和/或,
    所述导数包括如下至少一项:训练得到所述第二信息的本次导数、训练得到所述第二信息之前的导数。
  12. 如权利要求8所述的方法,其中,所述损失函数包括:所述第二神 经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项的组合函数;或者
    所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项与常数的组合函数;或者
    所述损失函数包括:所述第二神经网络的输出的多个部分的损失信息加权组合得到的损失函数,所述损失信息包括如下至少一项:损失值、损失关联的函数。
  13. 如权利要求12所述的方法,其中,所述输出的多个部分包括:
    按照空域资源、码域资源、频域资源和时域资源中的至少一项进行划分的多个部分。
  14. 如权利要求5所述的方法,其中,在所述第二信息包括所述第二神经网络的多个目标单元的信息的情况下:
    所述多个目标单元的信息在所述第二信息中按照目标单元的标识进行排序;或者
    所述多个目标单元的信息在所述第二信息中按照如下至少一项进行排序:
    神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
  15. 如权利要求9所述的方法,其中,所述第二信息包括:
    梯度大于或者等于预设门限的目标单元的梯度信息。
  16. 如权利要求5所述的方法,其中,所述至少一个目标单元为网络侧配置或者终端上报确定的。
  17. 一种神经网络信息传输方法,应用于第一通信设备,其中,所述神经网络信息传输方法包括:
    向第二通信设备发送第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
    接收所述第二通信设备发送的第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
  18. 如权利要求17所述的方法,其中,所述第一信息包括如下至少一项:
    信号、信息和信令。
  19. 如权利要求18所述的方法,其中,所述信号包括如下至少一项:
    参考信号资源上承载的第一信号、信道承载的第二信号;
    和/或,
    所述信息包括如下至少一项:
    信道状态信息、波束信息、信道预测信息、干扰信息、定位信息、轨迹信息、业务预测信息、业务管理信息、参数预测信息、参数管理信息;
    和/或,
    所述信令包括:控制信令。
  20. 如权利要求18所述的方法,其中,所述第三信息包括对所述信号、信息和信令中的至少一项操作得到的信息。
  21. 如权利要求17所述的方法,其中,所述第二信息包括:
    所述第二神经网络的至少一个目标单元的信息。
  22. 如权利要求21所述的方法,其中,所述目标单元包括如下至少一项:
    神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
  23. 如权利要求22所述的方法,其中,所述神经元包括如下至少一项:
    卷积核、池化单元、循环单元。
  24. 如权利要求21所述的方法,其中,所述目标单元的信息包括:损失函数对所述目标单元的信息;或者
    所述目标单元的信息包括:所述目标单元的标识和损失函数对所述目标单元的目标信息。
  25. 如权利要求24所述的方法,其中,所述损失函数对所述目标单元的信息,包括如下至少一项:
    损失函数对所述目标单元的梯度信息;
    损失函数对所述目标单元的偏导信息;
    损失函数对所述目标单元的导数信息。
  26. 如权利要求25所述的方法,其中,所述梯度信息包括:梯度与如下 至少一项的组合:
    所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
    和/或,
    所述偏导信息包括:偏导与如下至少一项的组合:
    所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
    和/或,
    所述导数信息包括:导数与如下至少一项的组合:
    所述目标单元的历史信息、学习速率、学习步长、指数衰减率、常数;
    其中,所述目标单元的历史信息为:在发送所述第二信息之前向所述第一通信设备发送的第四信息中包括的所述目标单元的信息,其中,所述第四信息为在获得到所述第二信息之前对所述第二神经网络进行训练得到的信息。
  27. 如权利要求26所述的方法,其中,所述梯度包括如下至少一项:训练得到所述第二信息的本次梯度、训练得到所述第二信息之前的梯度;
    和/或,
    所述偏导包括如下至少一项:训练得到所述第二信息的本次偏导、训练得到所述第二信息之前的偏导;
    和/或,
    所述导数包括如下至少一项:训练得到所述第二信息的本次导数、训练得到所述第二信息之前的导数。
  28. 如权利要求24所述的方法,其中,所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项的组合函数;或者
    所述损失函数包括:所述第二神经网络的输出与标签的误差、均方误差、归一化均方误差、相关性、熵、互信息中的至少一项与常数的组合函数;或者
    所述损失函数包括:所述第二神经网络的输出的多个部分的损失信息加权组合得到的损失函数,所述损失信息包括如下至少一项:损失值、损失关联的函数。
  29. 如权利要求28所述的方法,其中,所述输出的多个部分包括:
    按照频域资源和时域资源中的至少一项进行划分的多个部分。
  30. 如权利要求21所述的方法,其中,在所述第二信息包括所述第二神经网络的多个目标单元的信息的情况下:
    所述多个目标单元的信息在所述第二信息中按照目标单元的标识进行排序;或者
    所述多个目标单元的信息在所述第二信息中按照如下至少一项进行排序:
    神经元、神经元的乘性系数、神经元的加性系数、神经元的偏差量、神经元的加权系数、激活函数的参数。
  31. 如权利要求25所述的方法,其中,所述第二信息包括:
    梯度大于或者等于预设门限的目标单元的梯度信息。
  32. 如权利要求17所述的方法,其中,所述第二神经网络的至少一个目标单元为网络侧配置或者终端上报确定的。
  33. 如权利要求17所述的方法,其中,所述接收所述第二通信设备发送的第二信息之后,所述方法还包括:
    依据所述第二信息更新所述第一神经网络。
  34. 一种神经网络信息传输装置,应用于第二通信设备,其中,所述神经网络信息传输装置包括:
    接收模块,用于接收第一通信设备发送的第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
    发送模块,用于向所述第一通信设备发送第二信息,其中,所述第二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
  35. 如权利要求34所述的装置,其中,所述第二信息包括:
    所述第二神经网络的至少一个目标单元的信息。
  36. 一种神经网络信息传输装置,应用于第一通信设备,其中,所述神经网络信息传输装置包括:
    发送模块,用于向第二通信设备发送第一信息,其中,所述第一信息为所述第一通信设备的第一神经网络的输出信息;
    接收模块,用于接收所述第二通信设备发送的第二信息,其中,所述第 二信息为将所述第一信息或者第三信息作为所述第二通信设备的第二神经网络的输入进行训练得到的信息,所述第三信息为基于所述第一信息获得的信息。
  37. 如权利要求36所述的装置,其中,所述第二信息包括:
    所述第二神经网络的至少一个目标单元的信息。
  38. 一种通信设备,所述通信设备为第二通信设备,其中,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的程序或者指令,所述程序或者指令被所述处理器执行时实现如权利要求1至16中任一项所述的神经网络信息传输方法中的步骤。
  39. 一种通信设备,所述通信设备为第一通信设备,其中,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的程序或者指令,所述程序或者指令被所述处理器执行时实现如权利要求17至33中任一项所述的神经网络信息传输方法中的步骤。
  40. 一种可读存储介质,其中,所述可读存储介质上存储有程序或指令,所述程序或指令被处理器执行时实现如权利要求1至16中任一项所述的神经网络信息传输方法中的步骤,或者,所述程序或指令被处理器执行时实现如权利要求17至33中任一项所述的神经网络信息传输方法中的步骤。
PCT/CN2021/122765 2020-10-09 2021-10-09 神经网络信息传输方法、装置、通信设备和存储介质 WO2022073496A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21877013.9A EP4228217A4 (en) 2020-10-09 2021-10-09 METHOD AND DEVICE FOR TRANSMITTING INFORMATION VIA A NEURAL NETWORK, COMMUNICATIONS DEVICE AND STORAGE MEDIUM
US18/129,247 US20230244911A1 (en) 2020-10-09 2023-03-31 Neural network information transmission method and apparatus, communication device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011074715.3 2020-10-09
CN202011074715.3A CN114422380B (zh) 2020-10-09 2020-10-09 神经网络信息传输方法、装置、通信设备和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/129,247 Continuation US20230244911A1 (en) 2020-10-09 2023-03-31 Neural network information transmission method and apparatus, communication device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022073496A1 true WO2022073496A1 (zh) 2022-04-14

Family

ID=81125632

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/122765 WO2022073496A1 (zh) 2020-10-09 2021-10-09 神经网络信息传输方法、装置、通信设备和存储介质

Country Status (4)

Country Link
US (1) US20230244911A1 (zh)
EP (1) EP4228217A4 (zh)
CN (1) CN114422380B (zh)
WO (1) WO2022073496A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023134761A1 (zh) * 2022-01-17 2023-07-20 维沃移动通信有限公司 波束信息交互方法、装置、设备及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117544973A (zh) * 2022-08-01 2024-02-09 维沃移动通信有限公司 模型更新方法、装置、通信设备及可读存储介质

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109246048A (zh) * 2018-10-30 2019-01-18 广州海格通信集团股份有限公司 一种基于深度学习的物理层安全通信方法和系统
US10510002B1 (en) * 2019-02-14 2019-12-17 Capital One Services, Llc Stochastic gradient boosting for deep neural networks

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9870537B2 (en) * 2014-01-06 2018-01-16 Cisco Technology, Inc. Distributed learning in a computer network
CN113627458A (zh) * 2017-10-16 2021-11-09 因美纳有限公司 基于循环神经网络的变体致病性分类器
KR102034955B1 (ko) * 2018-03-27 2019-10-21 경상대학교산학협력단 무선 통신 시스템에서 신경망 기반의 송신전력 제어 방법 및 장치
US10911266B2 (en) * 2018-05-18 2021-02-02 Parallel Wireless, Inc. Machine learning for channel estimation
CN110874550A (zh) * 2018-08-31 2020-03-10 华为技术有限公司 数据处理方法、装置、设备和系统
CN111106864B (zh) * 2018-11-16 2023-02-24 维沃移动通信有限公司 上行波束训练方法、终端设备和网络侧设备
CN111490798B (zh) * 2019-01-29 2022-04-22 华为技术有限公司 译码的方法和译码装置
CN112054863B (zh) * 2019-06-06 2021-12-21 华为技术有限公司 一种通信方法及装置

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109246048A (zh) * 2018-10-30 2019-01-18 广州海格通信集团股份有限公司 一种基于深度学习的物理层安全通信方法和系统
US10510002B1 (en) * 2019-02-14 2019-12-17 Capital One Services, Llc Stochastic gradient boosting for deep neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
3GPP: "Technical Specification Group Services and System Aspects; Study on traffic characteristics and performance requirements for AI/ML model transfer in 5GS (Release 18)", 3GPP TR 22.874 V0.1.0, 30 September 2020 (2020-09-30), pages 1 - 55, XP051961408 *
OPPO: "FS_AMMT: Use case – General principle of split AI/ML operation", 3GPP TSG-SA WG1 E-MEETING #91 S1-203144, 2 September 2020 (2020-09-02), XP051920776 *
See also references of EP4228217A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023134761A1 (zh) * 2022-01-17 2023-07-20 维沃移动通信有限公司 波束信息交互方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN114422380B (zh) 2023-06-09
CN114422380A (zh) 2022-04-29
US20230244911A1 (en) 2023-08-03
EP4228217A4 (en) 2024-04-03
EP4228217A1 (en) 2023-08-16

Similar Documents

Publication Publication Date Title
WO2022078276A1 (zh) Ai网络参数的配置方法和设备
JP2018107832A (ja) 動的非直交多元接続通信のためのユーザ機器および方法
US20230244911A1 (en) Neural network information transmission method and apparatus, communication device, and storage medium
WO2022184009A1 (zh) 量化的方法、装置、设备及可读存储介质
WO2022105913A1 (zh) 通信方法、装置及通信设备
CN112806050B (zh) 信道状态信息报告计算
US20240088970A1 (en) Method and apparatus for feeding back channel information of delay-doppler domain, and electronic device
US20230299910A1 (en) Communications data processing method and apparatus, and communications device
WO2014129945A1 (en) Determination of network parameters in mobile communication networks
WO2019084733A1 (zh) 用于传输信号的方法、网络设备和终端设备
WO2023040887A1 (zh) 信息上报方法、装置、终端及可读存储介质
WO2022083619A1 (zh) 通信信息的发送、接收方法及通信设备
WO2022105907A1 (zh) Ai网络部分输入缺失的处理方法和设备
JP2018534840A (ja) 重み値取得方法及び装置
WO2022116875A1 (zh) 传输方法、装置、设备及可读存储介质
CN115866776A (zh) 用于上行链路频率选择性预编码的控制信令
US20180316463A1 (en) Methods and Apparatus for Control Bit Detection
WO2018027804A1 (en) Apparatus and method for unified csi feedback framework for control and data channel
WO2024032469A1 (zh) 测量参数的反馈方法、装置、终端和存储介质
WO2024041362A1 (zh) 信道状态信息处理方法、装置、通信节点及存储介质
US20240188006A1 (en) Power control for transmissions with time-based artificial noise
WO2024140578A1 (zh) 基于ai模型的csi反馈方法、终端及网络侧设备
WO2024078405A1 (zh) 传输方法、装置、通信设备及可读存储介质
WO2023213239A1 (zh) 参考信号的配置方法、状态信息的上报方法及相关设备
WO2024032694A1 (zh) Csi预测处理方法、装置、通信设备及可读存储介质

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021877013

Country of ref document: EP

Effective date: 20230509