WO2024016974A1 - Information transmission method and communication apparatus - Google Patents

Information transmission method and communication apparatus

Info

Publication number
WO2024016974A1
WO2024016974A1 (PCT/CN2023/103288; CN2023103288W)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
wireless frame
transmission information
information
wireless
Prior art date
Application number
PCT/CN2023/103288
Other languages
English (en)
Chinese (zh)
Inventor
刘鹏
郭子阳
罗嘉俊
杨讯
颜敏
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024016974A1


Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
            • H04L 41/14: Network analysis or design
              • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
        • H04W: WIRELESS COMMUNICATION NETWORKS
          • H04W 24/00: Supervisory, monitoring or testing arrangements
            • H04W 24/02: Arrangements for optimising operational condition
            • H04W 24/10: Scheduling measurement reports; Arrangements for measurement reports

Definitions

  • the present application relates to the field of Wi-Fi technology, and more specifically, to an information transmission method and communication device.
  • the existing design and management of AI-based wireless networks cannot be applied to more complex wireless network environments. This is because the wireless network environment has extremely high variability, and it is difficult for an existing set of neural network parameters to cope with all wireless network environments. For example, after the terminal sleeps for a period of time or switches to another cell, the wireless network environment in which it is located may have changed. Therefore, neural network parameters need to be continuously updated as the wireless network environment changes.
  • This application provides an information transmission method and communication device.
  • the AP can obtain the transmission information of the wireless frame sent by the station before the first moment.
  • the AP can learn based on the transmission information of the wireless frame to obtain neural network data (which can be understood as neural network parameters), and can deliver the neural network data.
  • the AP can help the first station reduce the frequency of wireless frame loss through the neural network data.
  • the AP can also obtain richer data for neural network training after reducing the frequency of wireless frame loss, and better complete the learning and updating of neural network data (including neural network parameters).
  • a method of information transmission, including: a first station sends a first wireless frame at a first moment, where the first wireless frame includes the transmission information of a wireless frame sent by the first station before the first moment; and the first station receives neural network data from an access point, where the neural network data is related to the transmission information of the wireless frame.
  • the above-mentioned wireless frames sent by the first station before the first moment can be understood as wireless frames that the first station failed to send, or wireless frames that were lost due to collisions by the first station.
  • in this way, the AP can obtain the transmission information of the wireless frame sent by the first station before the first moment, can learn based on the transmission information of the wireless frame to obtain neural network data (which can be understood as neural network parameters), and can deliver the neural network data.
  • the station can reduce the frequency of wireless frame loss based on the neural network data delivered by the AP, and, after reducing the frequency of wireless frame loss, can also report richer data for neural network training to the AP, to better help the AP complete the learning and updating of neural network data (including neural network parameters).
  • the transmission information includes at least one of the following: action information, time information, or action characteristic information.
  • the AP can learn and update the neural network data based on the reported information, and deliver the learned and updated neural network data to the station.
  • the station can adjust or maintain its actions based on the neural network data delivered by the AP, thereby reducing the frequency of wireless frame loss and reporting richer data for neural network training to the AP.
  • the action includes at least one of the following: channel access, rate adaptation, channel bonding, or channel aggregation.
  • when the action includes channel access, the characteristic information includes an access duration.
  • the AP can determine more appropriate neural network data based on the information reported by the station.
  • the station can adjust or maintain channel access actions based on the neural network data sent by the AP, which can reduce the frequency of wireless frame loss caused by unreasonable channel access, and can then report richer data for neural network training to the AP.
  • the neural network data includes at least one of the following:
  • neural network parameters or neural network training data.
  • the wireless frames sent by the first station before the first moment include multiple wireless frames sent continuously by the first station.
  • STA#A can indicate the actions implied by the remaining multiple wireless frames by indicating, in wireless frame S1, only the action implied by one wireless frame. In this way, signaling overhead can be saved.
  • the neural network training data is determined by the access point based on the transmission information of the wireless frame and the transmission information of the wireless frame sent by the second station.
  • the embodiments of the present application can enable the AP to determine more reasonable neural network data for STA#A by integrating the transmission information of wireless frames reported by other STAs, thereby guiding or coordinating the actions of STA#A and reducing the frequency of wireless frame loss. Furthermore, after the frequency of wireless frame loss is reduced, richer training data can be obtained, and the learning and updating of neural network data (including neural network parameters) can be better completed.
  • an information transmission method, including: an access point receives a first wireless frame sent by a first station at a first moment, where the first wireless frame includes the transmission information of a wireless frame sent by the first station before the first moment; and the access point sends neural network data to the first station, where the neural network data is related to the transmission information of the wireless frame.
  • in this way, the AP can obtain the transmission information of the wireless frame sent by the station before the first moment, can learn based on the transmission information of the wireless frame to obtain neural network data, and can deliver the neural network data. The AP can thus use the neural network data to help the first station reduce the frequency of wireless frame loss, and can also obtain richer data for neural network training after reducing the frequency of wireless frame loss, to better complete the learning and updating of neural network data.
  • the transmission information includes at least one of the following: action information, time information, or action characteristic information.
  • the action includes at least one of the following: channel access, rate adaptation, channel bonding, or channel aggregation.
  • when the action includes channel access, the characteristic information includes an access duration.
  • the neural network data includes at least one of the following: neural network parameters or neural network training data.
  • the wireless frames sent by the first station before the first moment include multiple wireless frames sent continuously by the first station.
  • STA#A can indicate the actions implied by the remaining multiple wireless frames by indicating, in wireless frame S1, only the action implied by one wireless frame. In this way, signaling overhead can be saved.
  • the neural network training data is determined by the access point based on the transmission information of the wireless frame and the transmission information of the wireless frame sent by the second station.
  • the embodiments of the present application can enable the AP to determine more reasonable neural network data for STA#A by integrating the transmission information of wireless frames reported by other STAs, thereby guiding or coordinating the actions of STA#A and reducing the frequency of wireless frame loss. Furthermore, after the frequency of wireless frame loss is reduced, richer training data can be obtained, and the learning and updating of neural network data (including neural network parameters) can be better completed.
  • a communication device, including: a transceiver unit configured to send a first wireless frame at a first moment, where the first wireless frame includes the transmission information of a wireless frame sent by the communication device before the first moment; the transceiver unit is further configured to receive neural network data from an access point, where the neural network data is related to the transmission information of the wireless frame.
  • the transmission information includes at least one of the following: action information, time information, or action characteristic information.
  • the action includes at least one of the following: channel access, rate adaptation, channel bonding, or channel aggregation.
  • when the action includes channel access, the characteristic information includes an access duration.
  • the neural network data includes at least one of the following: neural network parameters or neural network training data.
  • the wireless frames sent by the communication device before the first moment include multiple wireless frames sent continuously by the communication device.
  • the neural network training data is determined by the access point based on the transmission information of the wireless frame and the transmission information of the wireless frame sent by the second station.
  • a communication device, including: a transceiver unit configured to receive a first wireless frame sent by a first station at a first moment, where the first wireless frame includes the transmission information of a wireless frame sent by the first station before the first moment; the transceiver unit is further configured to send neural network data to the first station, where the neural network data is related to the transmission information of the wireless frame.
  • the transmission information includes at least one of the following: action information, time information, or action characteristic information.
  • the action includes at least one of the following: channel access, rate adaptation, channel bonding, or channel aggregation.
  • when the action includes channel access, the characteristic information includes an access duration.
  • the neural network data includes at least one of the following: neural network parameters or neural network training data.
  • the wireless frames sent by the first station before the first moment include multiple wireless frames sent continuously by the first station.
  • the neural network training data is determined by the communication device based on the transmission information of the wireless frame and the transmission information of the wireless frame sent by the second station.
  • a communication device, including a processor coupled to a memory, where the processor is configured to execute a computer program or instructions, so that the communication device performs the method in the first aspect and any possible implementation of the first aspect.
  • a communication device including a logic circuit and an input-output interface.
  • the logic circuit is used to execute a computer program or instructions, so that the communication device performs the method in the first aspect and any possible implementation of the first aspect.
  • a computer-readable storage medium including a computer program or instructions.
  • when the computer program or instructions are run on a computer, the computer is caused to perform the method in the first aspect and any possible implementation of the first aspect.
  • a computer program product, which includes instructions; when the instructions are run on a computer, they cause the computer to perform the method described in the first aspect and any possible implementation of the first aspect, or cause the computer to perform the method described in the second aspect and any possible implementation of the second aspect.
  • Figure 1 is a schematic diagram of an application scenario 100 according to an embodiment of the present application.
  • Figure 2 is a schematic diagram of the deep neural network model.
  • Figure 3 is a schematic diagram of a neuron calculating output based on input.
  • Figure 4 is a schematic diagram of deep reinforcement learning.
  • Figure 5 is a schematic diagram of a learning method 500 for neural network parameters.
  • Figure 6 is a schematic diagram of wireless frame collision according to an embodiment of the present application.
  • Figure 7 is an interactive flow chart of the information transmission method 700 according to the embodiment of the present application.
  • Figure 8 is an interactive flow chart of the information transmission method 800 according to the embodiment of the present application.
  • Figure 9 is a schematic block diagram of a communication device 900 according to an embodiment of the present application.
  • Figure 10 is a schematic block diagram of a communication device 1000 according to an embodiment of the present application.
  • Figure 11 is a schematic block diagram of a communication device 1100 according to an embodiment of the present application.
  • WLAN wireless local area network
  • IEEE 802.11 system standards such as 802.11a/b/g standards, 802.11n standards, 802.11ac standards, and 802.11ax standard
  • its next generation such as the 802.11be standard or its next generation standard.
  • GSM global system for mobile communication
  • CDMA code division multiple access
  • WCDMA wideband code division multiple access
  • GPRS general packet radio service
  • LTE long term evolution
  • FDD frequency division duplex
  • TDD time division duplex
  • UMTS universal mobile telecommunication system
  • WiMAX global interoperability for microwave access
  • the terminal in the embodiments of this application may refer to user equipment (UE), an access terminal, a user unit, a user station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user device.
  • the terminal may also be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal in a 5G network, a terminal in a future 6G network, or a terminal in a public land mobile network (PLMN), etc., which is not limited by the embodiments of this application.
  • the network device in the embodiment of this application may be a device used to communicate with a terminal.
  • the network device may be a base transceiver station (BTS) in a global system for mobile communication (GSM) system or a code division multiple access (CDMA) system, a NodeB (NB) in a wideband code division multiple access (WCDMA) system, an evolved NodeB (eNB or eNodeB) in an LTE system, or a wireless controller in a cloud radio access network (CRAN) scenario; alternatively, the network device may be a relay station, an access point, a vehicle-mounted device, a wearable device, a network device in a 5G network, a network device in a future 6G network, or a network device in a PLMN network, which is not limited by the embodiments of this application.
  • FIG 1 is a schematic diagram of an application scenario 100 according to an embodiment of the present application.
  • an access point can be a communication server, a router, a switch, or any of the above network devices.
  • the station may be a mobile phone, a computer, or any of the above-mentioned terminals, which are not limited in the embodiments of this application.
  • the technical solutions of the embodiments of the present application are not only applicable to communication between an AP and one or more STAs, but also to mutual communication between APs, and also to mutual communication between STAs.
  • the embodiment of the present application only takes the communication between an AP and one or more STAs as an example for description.
  • this description method does not have any limiting effect on the actual application scope of the embodiment of the present application. This is explained in a unified manner and will not be repeated in the following paragraphs.
  • the access point can be an access point for a terminal (such as a mobile phone) to access a wired (or wireless) network. It is mainly deployed inside homes, buildings, and campuses, with a typical coverage radius of tens of meters to hundreds of meters; of course, it can also be deployed outdoors.
  • the access point is equivalent to a bridge connecting the wired network and the wireless network. Its main function is to connect various wireless network clients together, and then connect the wireless network to the Ethernet.
  • the access point can be a terminal with a Wi-Fi chip (such as a mobile phone) or a network device (such as a router).
  • the access point can be a device that supports the 802.11be standard.
  • the access point can also be a device that supports multiple WLAN standards of the 802.11 family such as 802.11ax, 802.11ac, 802.11n, 802.11g, 802.11b, 802.11a, and 802.11be next generation.
  • the access point in this application can be a HE AP or an EHT AP, or an access point suitable for a certain future generation of Wi-Fi standards.
  • the station can be a wireless communication chip, a wireless sensor, or a wireless communication terminal, etc., and can also be called a user.
  • the station can be a mobile phone that supports the Wi-Fi communication function, a tablet computer that supports the Wi-Fi communication function, a set-top box that supports the Wi-Fi communication function, or a smart TV that supports the Wi-Fi communication function, etc.
  • the station can support the 802.11be standard.
  • the station can also support 802.11 family WLAN standards such as 802.11ax, 802.11ac, 802.11n, 802.11g, 802.11b, 802.11a, and the next generation of 802.11be.
  • access points and stations can be devices used in the Internet of Vehicles, Internet of Things (IoT) nodes and sensors in the IoT, smart cameras, smart remote controls, smart water meters and sensors in smart cities, etc.
  • Neural network is composed of neurons.
  • a neuron can refer to an arithmetic unit that takes x_s as its inputs, and the output of this arithmetic unit can be: h_{W,b}(x) = f(W^T x) = f(∑_{s=1}^{n} W_s x_s + b), where:
  • s = 1, 2, ..., n, and n is a natural number greater than 1;
  • W_s is the weight of x_s;
  • b is the bias of the neuron;
  • f is the activation function of the neuron, which is used to introduce nonlinear characteristics into the neural network to transform the input signal in the neuron into an output signal. The output signal of this activation function can be used as the input of the next layer.
  • NN is a network formed by connecting multiple single neurons mentioned above, that is, the output of one neuron can be the input of another neuron.
  • the input of each neuron can be connected to the local receptive field of the previous layer to extract the features of the local receptive field.
  • the local receptive field can be an area composed of several neurons.
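  • As an illustration only (not taken from the patent), the following Python sketch computes the neuron output described above, f(∑_s W_s·x_s + b), using ReLU as an example activation function f; the input and weight values are arbitrary.

```python
# Minimal sketch: one neuron computing output = f(sum_s W_s * x_s + b).
# ReLU is chosen as an illustrative activation function f.
def neuron_output(x, w, b):
    """x and w are equal-length sequences of inputs and weights; b is the bias."""
    z = sum(w_s * x_s for w_s, x_s in zip(w, x)) + b
    return max(0.0, z)  # ReLU activation

# Example: a neuron with three inputs.
print(neuron_output(x=[0.5, -1.0, 2.0], w=[0.1, 0.4, -0.3], b=0.2))
```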
  • A deep neural network (DNN) is also known as a multi-layer neural network.
  • DNN can be understood as a neural network with multiple hidden layers.
  • Figure 2 is a schematic diagram of the deep neural network model.
  • When a DNN is divided according to the positions of its layers, the layers inside the DNN can be divided into three categories: the input layer, intermediate layers, and the output layer.
  • the first layer is the input layer
  • the last layer is the output layer
  • the layers in between are all intermediate layers (can also be understood as hidden layers).
  • the layers are fully connected, that is to say, any neuron in the i-th layer must be connected to any neuron in the i+1-th layer.
  • each neuron may have multiple input connections, and each neuron can calculate an output based on the input. See Figure 3 for details.
  • each neuron may have multiple output connections, and the output of one neuron serves as the input of the next neuron.
  • the input layer only has output connections.
  • Each neuron in the input layer holds one value of the neural network input.
  • the output value of each neuron can be directly used as the input of all output connections.
  • the output layer only has input connections, and the output of the output layer can be calculated using formula (2): y = f_n(w_n · f_{n-1}(... f_1(w_1 · x + b_1) ...) + b_n), where:
  • x represents the input of the neural network;
  • y represents the output of the neural network;
  • w_i represents the weight of the i-th layer of the neural network;
  • b_i represents the bias of the i-th layer of the neural network;
  • f_i represents the activation function of the i-th layer of the neural network.
  • the operation of each layer of a DNN can be expressed through the linear relationship y = α(W·x + b), where x is the input vector, y is the output vector, b is the offset (bias) vector, W is the weight matrix (also called the coefficients), and α() is the activation function. Since a DNN has multiple layers, there are multiple coefficient matrices W and offset vectors b.
  • the definitions of these parameters in a DNN are as follows, taking the coefficient W as an example: assume that in a three-layer DNN, the linear coefficient from the 4th neuron in the second layer to the 2nd neuron in the third layer is defined as W^3_{24}, where the superscript 3 represents the layer in which the coefficient W is located, and the subscripts correspond to the output index 2 of the third layer and the input index 4 of the second layer.
  • in summary, the coefficient from the k-th neuron in layer L-1 to the j-th neuron in layer L is defined as W^L_{jk}.
  • Training DNN is the process of learning the weight matrix. The ultimate goal is to obtain the weight matrix of all layers of the trained DNN (a weight matrix formed by the vector W of many layers).
  • the input layer has no W parameter.
  • In a DNN, more hidden layers make the network more capable of depicting complex situations in the real world. Theoretically, a model with more parameters has higher complexity and greater "capacity", which means it can complete more complex learning tasks.
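  • As a hedged illustration (not the patent's implementation), the following sketch applies the per-layer relationship y = α(W·x + b) repeatedly to compute a DNN forward pass; the layer sizes, weights, and the tanh activation are arbitrary choices.

```python
import numpy as np

def dnn_forward(x, weights, biases, activation=np.tanh):
    """Apply y = activation(W @ x + b) layer by layer.

    weights[i] has shape (n_out, n_in); biases[i] has shape (n_out,).
    """
    h = x
    for W, b in zip(weights, biases):
        h = activation(W @ h + b)  # linear map of the i-th layer plus activation
    return h

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]  # 3 -> 4 -> 2
biases = [np.zeros(4), np.zeros(2)]
print(dnn_forward(np.array([1.0, 0.5, -0.2]), weights, biases))
```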
  • examples of AI application in wireless network design and management can include: channel access, rate adaptation, channel aggregation, and channel prediction, etc.
  • f(·) is no longer based on rules, but is based on a neural network structure and neural network parameters, and can be expressed as f(·, θ), where θ represents the neural network parameters. Therefore, designers can achieve design goals by designing the neural network structure and training the neural network parameters.
  • AI applied in the design and management of wireless networks can be used to perform prediction task types and decision-making task types.
  • prediction tasks may include: traffic prediction, channel quality prediction, etc.
  • Decision task types may include: channel access, rate adaptation, power control, channel bonding, etc.
  • AI applied in wireless networks can interact with the wireless network environment and accumulate experience through deep reinforcement learning (DRL) algorithms, and then complete the training and update of neural network parameters.
  • DRL deep reinforcement learning
  • Figure 4 is a schematic diagram of deep reinforcement learning.
  • the network node determines actions based on environmental observations obtained from observing the wireless network environment, and can assign corresponding reward values to each action. Specifically, network nodes make decision-making actions based on environmental observations.
  • a behavior is related to the neural network parameters, that is, the neural network parameters determine the mapping from environmental observation S_t to action A_t; in other words, the network node makes the decision A_t from S_t according to the neural network parameters. For example, for channel access, the network node can decide whether to perform channel access at the current moment based on environmental observations such as received signal energy and historical access success.
  • the network node can evaluate each action, that is, calculate the reward value R_t (reward) of the action.
  • R t reward value
  • the learning of neural network parameters is the process of obtaining experience from sample parameters (for example, S_t, A_t, R_t, S_{t+1}, A_{t+1}, ...) in a series of environments.
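  • The following sketch (an assumption for illustration, not a mechanism defined by the patent) shows how such experience samples (S_t, A_t, R_t, S_{t+1}) could be accumulated by a network node for later training; the observation fields and action labels are hypothetical.

```python
from collections import deque, namedtuple

Experience = namedtuple("Experience", ["s_t", "a_t", "r_t", "s_next"])

replay_buffer = deque(maxlen=10000)  # stores interaction samples with the environment

def record_step(s_t, a_t, r_t, s_next):
    """Accumulate one interaction with the wireless network environment."""
    replay_buffer.append(Experience(s_t, a_t, r_t, s_next))

# Example: observe the channel, decide to access it, and record the reward received.
record_step(s_t={"rx_energy_dbm": -70, "last_access_ok": True},
            a_t="channel_access",
            r_t=10,
            s_next={"rx_energy_dbm": -72, "last_access_ok": True})
```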
  • neural network parameters need to be continuously updated as the wireless network environment changes. This can be achieved by training nodes to obtain information reported by other nodes. Please refer to Figure 5 for details.
  • FIG. 5 is a schematic diagram of a learning method 500 for neural network parameters.
  • the AP serves as a training node and completes the learning and updating of neural network parameters by obtaining information reported by STA#A and STA#B respectively.
  • the information includes the S_t and A_t of the STA.
  • the AP learns and updates the neural network parameters based on the information reported by STA#A and STA#B respectively, and sends the neural network parameters obtained after training to the corresponding STA.
  • the AP delivers NN_A (which can be understood as neural network parameters) to STA#A, and delivers NN_B (which can be understood as neural network parameters) to STA#B.
  • in this way, STA#A can make a more reasonable selection of A_{t,A} in the environment S_{t,A} based on the NN_A delivered by the AP, and STA#B can make a more reasonable selection of A_{t,B} in the environment S_{t,B} based on the NN_B delivered by the AP.
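  • As a schematic sketch only (the patent does not specify the training algorithm), the AP-side loop of Figure 5 could look as follows: the AP collects the (S_t, A_t) reports of each STA, updates per-STA neural network parameters, and delivers them; train_parameters() is a hypothetical placeholder for whatever DRL update is used.

```python
def train_parameters(old_params, reports):
    """Hypothetical placeholder: a real DRL update would run here."""
    return {"version": old_params.get("version", 0) + 1,
            "num_samples": len(reports)}

def training_round(per_sta_reports, per_sta_params):
    delivered = {}
    for sta, reports in per_sta_reports.items():        # e.g. "STA#A", "STA#B"
        old = per_sta_params.get(sta, {})
        new = train_parameters(old, reports)             # learn/update NN_A, NN_B, ...
        per_sta_params[sta] = new
        delivered[sta] = new                             # parameters the AP delivers
    return delivered

reports = {"STA#A": [("S_t,A", "A_t,A")], "STA#B": [("S_t,B", "A_t,B")]}
print(training_round(reports, {}))
```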
  • FIG. 6 is a schematic diagram of wireless frame collision according to an embodiment of the present application.
  • the wireless frame A1 sent by STA#A to the AP collides with the wireless frame B1 sent by STA#B to the AP.
  • similarly, the wireless frame A2 sent by STA#A to the AP collides with the wireless frame B2 sent by STA#B to the AP. Therefore, STA#A cannot successfully send wireless frame A1 and wireless frame A2 to the AP, and STA#B cannot successfully send wireless frame B1 and wireless frame B2 to the AP.
  • the AP cannot obtain the respective transmission information of wireless frame A1, wireless frame A2, wireless frame B1, and wireless frame B2.
  • STA#A successfully sends wireless frame A3 to the AP, and the AP can obtain the transmission information of wireless frame A3.
  • wireless frames may be lost due to conflicts between STA#A and STA#B, and the AP cannot obtain the transmission information of the lost wireless frames.
  • the transmission information of these lost wireless frames is very important for the learning of neural network parameters: by learning from the transmission information of these lost wireless frames, different STAs can be better coordinated or guided, thereby reducing how often the stations lose wireless frames.
  • in addition, after the frequency of wireless frame loss at the stations is reduced, richer data for neural network training can be obtained, and the learning and updating of neural network parameters can be better completed. Therefore, how to enable the AP to obtain the transmission information of the lost wireless frames is a technical problem that needs to be solved urgently.
  • this application provides an information transmission method and communication device.
  • the AP can obtain the transmission information of the wireless frame sent by the station before the first moment.
  • the AP can learn based on the transmission information of the wireless frame, obtain the neural network data, and deliver the neural network data.
  • the AP can use the neural network data to help the station reduce the frequency of wireless frame loss, and, after reducing the frequency of wireless frame loss, can also obtain richer data for neural network training and better complete the learning and updating of neural network data (including neural network parameters).
  • Figure 7 is an interactive flow chart of the information transmission method 700 according to the embodiment of the present application.
  • the method flow in Figure 7 can be executed by the STA/AP, or by modules and/or devices (for example, chips or integrated circuits) with corresponding functions installed in the STA/AP, which are not limited by the embodiments of this application.
  • the following uses STA/AP as an example for explanation.
  • method 700 includes:
  • STA#A sends the wireless frame S1 at the first time.
  • the wireless frame S1 includes the transmission information of the wireless frame sent by STA#A before the first time.
  • the AP receives the wireless frame S1 sent by STA#A at the first moment.
  • STA#A can carry, in the wireless frame S1 sent at the first moment, the transmission information of the wireless frame sent by STA#A before the first moment, so that the AP can obtain, based on the wireless frame S1, the transmission information of the wireless frame sent by STA#A before the first moment.
  • the transmission information may include at least one of the following:
  • action information, time information, or action characteristic information.
  • the AP can learn and update the neural network data based on the reported information, and deliver the learned and updated neural network data to the station.
  • the station can adjust or maintain its actions based on the neural network data delivered by the AP, thereby reducing the frequency of wireless frame loss and reporting richer data for neural network training to the AP.
  • actions may include at least one of the following: channel access, rate adaptation, channel bonding, or channel aggregation.
  • the embodiment of the present application takes the actions of channel access and rate adaptation as an example for description, but does not limit the application of the technical solutions disclosed in the embodiment of the present application to other actions.
  • the action information is used to indicate the action implicit in the wireless frame.
  • the action information may indicate one action implicit in the wireless frame, or may indicate multiple actions implicit in the wireless frame.
  • the actions implicit in the wireless frame Y1 may include channel access, may include channel access and rate adaptation, may also include channel access, rate selection, power control, etc., which are not limited by the embodiments of this application.
  • Time information is used to indicate when an action occurred.
  • action information and time information may be correlated with each other.
  • the action information is used to indicate the specific action that occurs at the time indicated by the time information.
  • the time information is used to indicate the occurrence time of the action.
  • the characteristic information of the action is used to indicate the characteristics of the action.
  • for example, when the action is channel access, its characteristic information includes the access duration, which can be indicated using the transmission opportunity (TXOP) in the IEEE 802.11 protocol.
  • TXOP transmission opportunity
  • when the action is rate adaptation, its characteristic information includes a specific rate value.
  • when the action is channel aggregation, its characteristic information includes a specific bandwidth value, etc.
  • the AP can determine more appropriate neural network data based on the information reported by the station.
  • the station can adjust or maintain its actions based on the neural network data sent by the AP, thereby reducing the frequency of wireless frame loss, and can then report richer data for neural network training to the AP.
  • taking channel access as an example, the AP can determine more appropriate neural network data based on the information reported by the station, and the station can adjust or maintain the channel access action based on the neural network data sent by the AP. This can reduce the frequency of wireless frame loss caused by unreasonable channel access, and richer data for neural network training can then be reported to the AP.
  • the transmission information may be indicated in the form of a table.
  • For details, please refer to Table 1 to Table 4.
  • a_{A,1} represents the action m1 of STA#A.
  • a_{A,2} represents the action m2 of STA#A.
  • "..." indicates other information not shown.
  • the transmission information includes action information.
  • Table 2 indicates that at time t-1 the action of STA#A is m1, together with the relative time and the absolute time T(t-1) at which action m1 occurs, and that at time t-2 the action of STA#A is m2, together with the relative time and the absolute time T(t-2) at which action m2 occurs. "..." indicates other information not shown.
  • the transmission information includes action information and time information.
  • the absolute time refers to the sending time of the wireless frame that implies action m1
  • the relative time refers to the interval between the sending time of the wireless frame that implies action m1 and the first moment.
  • T(t-1) indicates the absolute time at which a channel access occurs, and 5 s refers to the access duration of that channel access.
  • T(t-2) indicates the absolute time at which a channel access occurs, and 1 s refers to the access duration of that channel access.
  • T(t-3) indicates the absolute time at which a rate adaptation occurs, and 30000 b/s refers to the rate value of that rate adaptation. "..." indicates other information not shown.
  • the transmission information includes action information, time information and action characteristic information.
  • the transmission information may also include the reward value. Please refer to Table 4 for details.
  • Table 4 indicates the reward value of STA#A's action m1 and the reward value of STA#A's action m2. The reward values are determined by STA#A itself and are used to assist the AP in obtaining the neural network data.
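  • For illustration only, the following sketch represents one entry of the transmission information discussed around Table 1 to Table 4 (action information, time information, action characteristic information, and an optional reward value); the field names and example numbers are assumptions, since the patent does not fix a concrete encoding here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransmissionInfoEntry:
    action: str                              # e.g. "channel_access", "rate_adaptation"
    absolute_time: Optional[float] = None    # e.g. the moment denoted T(t-1)
    relative_time: Optional[float] = None    # interval between that moment and the first moment
    characteristic: Optional[float] = None   # e.g. access duration in s, or rate in b/s
    reward: Optional[float] = None           # optional, determined by the STA itself

# Example entries loosely mirroring the values discussed around Table 3.
entries = [
    TransmissionInfoEntry("channel_access", absolute_time=101.0, characteristic=5.0),
    TransmissionInfoEntry("channel_access", absolute_time=102.0, characteristic=1.0),
    TransmissionInfoEntry("rate_adaptation", absolute_time=103.0, characteristic=30000.0),
]
print(entries[0])
```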
  • the AP sends neural network data to STA#A.
  • the neural network data is related to the transmission information of the wireless frame.
  • STA#A receives the neural network data from the AP.
  • after the AP obtains, based on the wireless frame S1 sent by STA#A, the transmission information of the wireless frame sent by STA#A before the first moment, it can learn based on the transmission information of the wireless frame to obtain the neural network data.
  • the neural network data includes at least one of the following:
  • neural network parameters or neural network training data.
  • STA#A can adjust or maintain the original action based on the neural network parameters. Please refer to Table 5 for details.
  • STA#A can realize the mapping from environmental observation S_t to action A_t according to the neural network parameters.
  • for example, before receiving the neural network parameters, when the environmental observation is S_t1, the decision-making action A_t is to initiate channel access; when the environmental observation is S_t2, the decision-making action A_t is to initiate channel access; and when the environmental observation is S_t3, the decision-making action A_t is rate adaptation A. However, the actions decided under these environmental observations cause the wireless frames sent by STA#A before the first moment to fail to be received by the AP.
  • after receiving the neural network parameters sent by the AP, STA#A can implement the mapping from environmental observation S_t to decision-making action A_t according to those parameters. For example, when the same or similar environmental observation is S_t1, the decision-making action A_t is to initiate channel access; when the same or similar environmental observation is S_t2, the decision-making action A_t is not to initiate channel access; and when the same or similar environmental observation is S_t3, the decision-making action A_t is rate adaptation B.
  • STA#A can adjust or maintain actions according to the neural network parameters issued by the AP, thereby reducing the frequency of wireless frame loss, and then reporting richer data for neural network training to the AP to help the AP complete the task better. Learning and updating of neural network data (including neural network parameters).
  • STA#A can also perform training based on the neural network training data and the transmission information in the wireless frame S1 to obtain neural network parameters, and can maintain or adjust its actions based on the learned neural network parameters. Please refer to Table 6 for details.
  • the neural network training data can be understood as a reward value.
  • the embodiment of the present application takes the reward value as an example for description.
  • the AP also needs to train based on the transmission information in the wireless frame S1 reported by STA#A and the reward values to obtain the neural network parameters. In other words, the AP first determines the corresponding reward values based on the transmission information reported by STA#A, and then performs training based on the reward values and the transmission information to obtain the neural network parameters.
  • a unified explanation is given here and will not be repeated in the following paragraphs.
  • the AP assigns different reward values to each action in the transmission information of the wireless frame reported by STA#A.
  • the reward value for channel access at time T1 is -100; the reward value for channel access at time T2 is 10; the reward value for channel access at time T3 is 500, and the reward value for rate adaptation is -200.
  • a negative reward value represents the AP's punishment for the action, and a positive reward value represents the AP's reward or incentive for the action.
  • T1 to T3 are before the first time or may include the first time.
  • STA#A can perform training based on the transmission information and the reward value sent by the AP, and obtain the corresponding neural network parameters.
  • STA#A can adjust or maintain the original action based on the neural network parameters learned by itself. Please refer to Table 7 for details.
  • the action decided by the above-mentioned environmental observation makes the wireless frame sent by STA#A before the first moment unable to be received by the AP.
  • after receiving the reward values sent by the AP, STA#A can train according to the reward values sent by the AP, obtain the neural network parameters, and implement the mapping from environmental observation S_t to decision-making action A_t based on the neural network parameters. For example, when the same or similar environmental observation is S_t1, the decision-making action A_t is not to initiate channel access; when the same or similar environmental observation is S_t2, the decision-making action A_t is to initiate channel access; and when the same or similar environmental observation is S_t3, the decision-making action A_t is to initiate channel access and rate adaptation B.
  • STA#A trains based on the reward values sent by the AP, obtains the neural network parameters, and adjusts or maintains its actions accordingly, thereby reducing the frequency of wireless frame loss. Furthermore, STA#A can report richer data for neural network training to the AP, which helps the AP better complete the learning and updating of neural network data (including neural network parameters).
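  • As a minimal sketch (an assumption, not the training procedure defined by the patent), the STA-side learning from AP-delivered reward values could be as simple as a tabular update that prefers, for each observation, the action with the highest accumulated reward; the observation labels reuse the S_t1/S_t3 notation above and the numeric rewards follow the example values mentioned earlier.

```python
from collections import defaultdict

q_table = defaultdict(float)  # (observation, action) -> learned value

def train_from_rewards(samples, lr=0.1):
    """samples: iterable of (observation, action, reward) delivered by the AP."""
    for obs, action, reward in samples:
        key = (obs, action)
        q_table[key] += lr * (reward - q_table[key])

def decide(obs, candidate_actions):
    """Pick the action with the highest learned value for this observation."""
    return max(candidate_actions, key=lambda a: q_table[(obs, a)])

train_from_rewards([("S_t1", "channel_access", -100),
                    ("S_t2", "channel_access", 10),
                    ("S_t3", "channel_access", 500),
                    ("S_t3", "rate_adaptation", -200)])
print(decide("S_t3", ["channel_access", "rate_adaptation"]))
```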
  • in this way, the AP can obtain the transmission information of the wireless frame sent by the station before the first moment, can learn based on the transmission information of the wireless frame to obtain neural network data, and can deliver the neural network data. The AP can thus use the neural network data to help the first station reduce the frequency of wireless frame loss, and can also obtain richer data for neural network training after reducing the frequency of wireless frame loss, to better complete the learning and updating of neural network data (including neural network parameters).
  • the wireless frames sent by STA#A before the first moment include multiple wireless frames sent continuously by STA#A.
  • STA#A can indicate the actions implied by the remaining multiple wireless frames by indicating, in wireless frame S1, only the action implied by one wireless frame. In this way, signaling overhead can be saved.
  • the neural network data sent by the AP to STA#A is determined by the AP based on the above transmission information of the wireless frame and the transmission information of the wireless frame sent by STA#B.
  • the AP also receives the wireless frame S2 sent from STA#B.
  • the wireless frame S2 includes the transmission information of the wireless frame sent by STA#B.
  • the time of the wireless frame S2 sent by STA#B may be the first time, may be earlier than the first time, or may be later than the first time, which is not limited by the embodiment of the present application.
  • the wireless frames sent by STA#B include wireless frames sent by STA#B before the first moment.
  • the wireless frame sent by STA#B before the first moment can be understood as the wireless frame that STA#B failed to send before the first moment, or it can also be understood as the wireless frame that STA#B successfully sent before the first moment.
  • the embodiments of this application are not limited. For the convenience of description, the embodiment of this application is described by taking the radio frame that STA #B failed to send before the first moment as an example. See Figure 8 for details.
  • Figure 8 is an interactive flow chart of the information transmission method 800 according to the embodiment of the present application.
  • the method flow in Figure 8 can be executed by the STA/AP, or by modules and/or devices (for example, chips or integrated circuits) with corresponding functions installed in the STA/AP, which are not limited by the embodiments of this application.
  • the following uses STA/AP as an example for explanation.
  • method 800 includes:
  • STA#A sends the wireless frame S1 at the first moment.
  • the wireless frame S1 includes the transmission information of the wireless frame sent by STA#A before the first moment.
  • STA#B sends the wireless frame S2.
  • the wireless frame S2 includes the transmission information of the wireless frame sent by STA#B.
  • the AP receives the wireless frame S1 from STA#A and the wireless frame S2 from STA#B.
  • the AP determines neural network data based on the transmission information of the wireless frame included in the wireless frame S1 and the transmission information of the wireless frame included in the wireless frame S2.
  • the AP may also determine the neural network data to be trained for STA#B based on the transmission information of the wireless frame included in the wireless frame S1 and the transmission information of the wireless frame included in the wireless frame S2.
  • the general process for the AP to determine the neural network data based on the transmission information of the wireless frame included in the wireless frame S1 and the transmission information of the wireless frame included in the wireless frame S2 can be seen in Table 8.
  • at time T1, both STA#A and STA#B initiate channel access; at time T2, both STA#A and STA#B initiate channel access; at time T3, STA#A initiates channel access and rate adaptation; and at time T4, both STA#A and STA#B initiate rate adaptation.
  • the AP can combine Table 8 to assign different values to the reward values of STA#A and STA#B's respective actions. Please refer to Table 9 for details.
  • the AP assigns -100 to the reward value of STA#A's channel access at time T1, and the AP assigns -500 to the reward value of STA#B's channel access at time T1.
  • the AP assigns a value of -100 to the reward value of STA#A's channel access at time T2, and the AP assigns a value of 10 to the reward value of STA#B's channel access at time T2.
  • the AP assigns a value of 100 to the reward value of STA#A's channel access at time T3, and assigns a value of -100 to the reward value of STA#A's rate adaptation at time T3.
  • the AP assigns a value of 400 to the reward value of STA#A's rate adaptation at time T4, and the AP assigns a value of -500 to the reward value of STA#B's rate adaptation at time T4.
  • the reason why the AP assigns a positive value of 100 to the reward value of STA#A's channel access at time T3 is that STA#B did not initiate channel access at time T3.
  • the AP can therefore consider that STA#A's wireless frame loss at time T3 was caused by unstable link/channel conditions rather than by a channel access collision. Accordingly, the AP can assign a positive value to the reward value of STA#A's channel access at time T3 and a negative value to the reward value of its rate adaptation at time T3.
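  • The following sketch illustrates, under stated assumptions, how the AP might cross-reference the per-time actions reported by STA#A and STA#B when assigning reward values: a channel access that collides with the other STA's access is punished, while a lone failed access is attributed to link conditions instead. The reward values are illustrative and only loosely follow Table 9.

```python
def assign_rewards_for_sta_a(sta_a_actions, sta_b_actions):
    """Each argument maps a time label (e.g. 'T1') to the set of actions at that time."""
    rewards = {}
    for t, actions_a in sta_a_actions.items():
        actions_b = sta_b_actions.get(t, set())
        for action in actions_a:
            if action == "channel_access" and "channel_access" in actions_b:
                rewards[(t, action)] = -100   # collision with STA#B: punish the access
            elif action == "channel_access":
                rewards[(t, action)] = 100    # no collision: likely a link-quality issue
            else:
                rewards[(t, action)] = -100   # e.g. blame the rate adaptation instead
    return rewards

sta_a = {"T1": {"channel_access"}, "T3": {"channel_access", "rate_adaptation"}}
sta_b = {"T1": {"channel_access"}}
print(assign_rewards_for_sta_a(sta_a, sta_b))
```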
  • the AP sends neural network data to STA#A.
  • STA#A receives the neural network data sent from the AP.
  • the embodiments of the present application can enable the AP to determine more reasonable neural network data for STA#A by integrating the transmission information of wireless frames reported by other STAs, thereby guiding or coordinating the actions of STA#A and reducing the frequency of wireless frame loss. Furthermore, after the frequency of wireless frame loss is reduced, richer training data can be obtained, and the learning and updating of neural network data (including neural network parameters) can be better completed.
  • FIG 9 is a schematic diagram of a communication device 900 according to an embodiment of the present application.
  • the communication device 900 includes a processor 901 and a communication interface 903.
  • the communication device 900 may also include a memory 902 and a bus 904.
  • the processor 901, the memory 902 and the communication interface 903 are connected to each other through the bus 904.
  • the communication device 900 shown in Figure 9 may be an access point or a station.
  • Memory 902 includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM). The memory 902 is used to store related instructions and data.
  • RAM random access memory
  • ROM read-only memory
  • EPROM erasable programmable read only memory
  • CD-ROM Compact disc read-only memory
  • the processor 901 may be one or more central processing units (CPUs).
  • CPUs central processing units
  • the processor 901 is a CPU
  • the CPU may be a single-core CPU or a multi-core CPU.
  • when the communication device 900 is an access point, the processor 901 in the communication device is used to read the program code stored in the memory 902 and perform the following operations:
  • receive, through the communication interface, the wireless frame S1 sent by the station, where the wireless frame S1 includes the transmission information of the wireless frame sent by the station before the first moment;
  • Neural network data is sent to the station through the communication interface, and the neural network data is related to the transmission information of the wireless frame.
  • when the communication device 900 is a station, the processor 901 in the communication device is used to read the program code stored in the memory 902 and perform the following operations:
  • send the wireless frame S1 to the access point at the first moment through the communication interface, where the wireless frame S1 includes the transmission information of the wireless frame sent before the first moment; and receive, through the communication interface, the neural network data sent by the access point, where the neural network data is related to the transmission information of the wireless frame.
  • FIG 10 is a schematic diagram of a communication device 1000 according to an embodiment of the present application.
  • the communication device 1000 can be applied to an access point and can also be applied to a station, and can be used to implement the above method embodiments.
  • the communication device 1000 includes a transceiver unit 1001.
  • the transceiver unit 1001 is introduced below.
  • when the communication device 1000 is an access point, the transceiver unit 1001 is configured to receive the wireless frame S1 sent by the station.
  • the wireless frame S1 includes the transmission information of the wireless frame sent by the station before the first moment; the transceiver unit 1001 is further configured to send, to the station, neural network data related to the transmission information of the wireless frame.
  • the communication device 1000 may also include a processing unit 1002, which is configured to perform actions related to decision-making, judgment, etc. in the above method embodiments.
  • for example, the access point determines the neural network data, etc.
  • when the communication device 1000 is a station, the transceiver unit 1001 is used to send the wireless frame S1 to the access point.
  • the wireless frame S1 includes the transmission information of the wireless frame sent by the communication device before the first moment; the transceiver unit 1001 is further used to receive neural network data from the access point, where the neural network data is related to the transmission information of the wireless frame.
  • the communication device 1000 may also include a processing unit 1002, which is configured to perform actions related to decision-making, judgment, etc. in the above method embodiments.
  • the station determines the neural network parameters based on the neural network training data delivered by the access point.
  • FIG 11 is a schematic diagram of a communication device 1100 according to an embodiment of the present application.
  • the communication device 1100 can be applied to an access point and can also be applied to a station, and can be used to implement the above method embodiments.
  • the communication device 1100 includes a central processor, a media access control (MAC) unit, a transceiver, an antenna, and a neural network processing unit (NPU).
  • MAC media access control
  • NPU neural network processing unit
  • NPU includes inference module.
  • the NPU may also include a training module.
  • the input of the training module is the aforementioned neural network training data
  • the output of the training module is the neural network parameters.
  • the training module will feed back its trained neural network parameters to the inference module.
  • the NPU can act on various other modules of the network node, including the central processor, the MAC unit, the transceiver, and the antenna, and can be responsible for the decision-making tasks of each module.
  • for example, the NPU can interact with the transceiver to decide when to switch the transceiver on or off to save energy, interact with the antenna to control the orientation of the antenna, and interact with the MAC unit to control channel access, channel selection, and spatial reuse decisions, etc.
  • the training module is optional.
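  • As a small illustrative sketch (class and method names are assumptions, not part of the patent), the relationship between the NPU's training module and inference module described above can be pictured as follows: the training module learns neural network parameters from training data and feeds them back to the inference module, which then performs decision-making tasks.

```python
class TrainingModule:
    def train(self, training_data):
        """Hypothetical placeholder: derive neural network parameters from training data."""
        return {"weights": [len(training_data)]}  # stand-in for real parameters

class InferenceModule:
    def __init__(self):
        self.params = None

    def load(self, params):
        """Receive the parameters fed back by the training module."""
        self.params = params

    def decide(self, observation):
        """A decision-making task, e.g. whether to perform channel access."""
        return "channel_access" if self.params else "defer"

trainer, inferencer = TrainingModule(), InferenceModule()
inferencer.load(trainer.train(training_data=[("S_t", "A_t", 10)]))
print(inferencer.decide({"rx_energy_dbm": -70}))
```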
  • An embodiment of the present application also provides a chip, including a processor, configured to call from a memory and run instructions stored in the memory, so that the communication device installed with the chip executes the methods in each of the above examples.
  • An embodiment of the present application also provides another chip, including: an input interface, an output interface, a processor, and a memory.
  • the input interface, the output interface, the processor, and the memory are connected through an internal connection path, and the processor is configured to execute the code in the memory.
  • the processor is configured to execute the methods in each of the above examples.
  • Embodiments of the present application also provide a processor, coupled to a memory, for executing the methods and functions involving the access point or the station in any of the above embodiments.
  • A computer program product is provided.
  • When the computer program product is run on a computer, the method of the aforementioned embodiments is implemented.
  • a computer-readable storage medium stores a computer program.
  • When the computer program is executed by a computer, the method described in the aforementioned embodiments is implemented.
  • plural means two or more than two.
  • "At least one of the following" or similar expressions refers to any combination of these items, including any combination of a single item or a plurality of items.
  • at least one of a, b, or c can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can be single or multiple.
  • words such as "first" and "second" are used to distinguish identical or similar items with basically the same functions and effects. Those skilled in the art can understand that words such as "first" and "second" do not limit the number or the execution order.
  • words such as “exemplary” or “for example” are used to represent examples, illustrations or explanations.
  • A/B can represent A or B. "And/or" in this application merely describes an association relationship between associated objects, indicating that three relationships can exist.
  • for example, A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B can be singular or plural.
  • the appearances of "in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily referring to the same embodiment.
  • the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
  • the size of the sequence numbers of the above-mentioned processes does not imply an order of execution.
  • the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separate.
  • a component shown as a unit may or may not be a physical unit, that is, it may be located in one place, or it may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods of the various embodiments of the present application.
  • the aforementioned storage media include: U disk, mobile hard disk, ROM, RAM, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application provides an information transmission method and a communication apparatus. The method includes the following steps: a first station sends a first wireless frame at a first moment, the first wireless frame including transmission information of a wireless frame sent by the first station before the first moment; and the first station receives neural network data from an access point (AP), the neural network data being related to the transmission information of the wireless frame. Transmission information of a wireless frame sent before a first moment is carried in a wireless frame sent at the first moment, so that an AP can acquire the transmission information of the wireless frame sent by a station before the first moment, can perform learning on the basis of the transmission information of the wireless frame to obtain neural network data, and can deliver the neural network data. The AP can, by means of the neural network data, help the first station reduce the frequency of wireless frame loss, and can also acquire richer data for neural network training after the frequency of wireless frame loss has been reduced, thereby better completing the learning and updating of neural network data.
PCT/CN2023/103288 2022-07-22 2023-06-28 Procédé de transmission d'informations et appareil de communication WO2024016974A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210867601.7A CN117479182A (zh) 2022-07-22 2022-07-22 信息传输的方法与通信装置
CN202210867601.7 2022-07-22

Publications (1)

Publication Number Publication Date
WO2024016974A1 true WO2024016974A1 (fr) 2024-01-25

Family

ID=89617021

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/103288 WO2024016974A1 (fr) 2022-07-22 2023-06-28 Procédé de transmission d'informations et appareil de communication

Country Status (2)

Country Link
CN (1) CN117479182A (fr)
WO (1) WO2024016974A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106376093A (zh) * 2015-07-24 2017-02-01 中兴通讯股份有限公司 一种避免数据碰撞的传输控制方法及装置
EP3883149A1 (fr) * 2020-03-20 2021-09-22 Volkswagen Ag Procédé, appareil et programme informatique permettant de prédire une qualité de service future d'un lien de communication sans fil
US11290977B1 (en) * 2020-07-21 2022-03-29 Amazon Technolgies, Inc. System for localizing wireless transmitters with an autonomous mobile device
CN114679355A (zh) * 2020-12-24 2022-06-28 华为技术有限公司 通信方法和装置
CN114764610A (zh) * 2021-01-15 2022-07-19 华为技术有限公司 一种基于神经网络的信道估计方法及通信装置


Also Published As

Publication number Publication date
CN117479182A (zh) 2024-01-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23842040

Country of ref document: EP

Kind code of ref document: A1