WO2024022007A1 - Method and apparatus for communication in a wireless local area network - Google Patents
Method and apparatus for communication in a wireless local area network
- Publication number
- WO2024022007A1 (PCT/CN2023/104158)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- information
- site
- request
- manufacturer
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/10—Scheduling measurement reports ; Arrangements for measurement reports
Definitions
- the present application relates to the field of communication technology, and more specifically, to a method and device for communication in a wireless local area network.
- in a wireless local area network, because most sites are highly mobile, the WLAN environment in which they are located often changes. For example, if a site has been dormant for a period of time, the wireless network environment in which it is located may have changed. For another example, when a non-access-point station switches from its current access point to a new access point, the wireless network environment in which it is located may also change. It is difficult for a single set of neural networks to apply to all scenarios. In a changing wireless network environment, using an out-of-date or otherwise inappropriate neural network will affect the communication decisions of the site, for example causing the site to select an inappropriate channel or transmission rate, which in turn degrades the communication performance of the site.
- This application provides a method and device for communication in a wireless local area network, which associates neural network information with manufacturer information, so that a site can obtain appropriate neural network information for communication decisions and maintain its communication performance in a changing wireless network environment.
- in a first aspect, a communication method in a wireless local area network is provided, and the method is executed by a station (STA).
- the site may be a terminal, or a chip, circuit or module configured in the terminal, which is not limited in this application.
- the site could be the requesting site.
- the method includes: the requesting site sends a request, the request is used to request neural network information; the requesting site receives a response from the responding site, the response includes the requested neural network information, and the neural network information is associated with the manufacturer information.
- the requesting site can request neural network information from the responding site through the request, and the responding site can then send the requested neural network information to the requesting site, where the neural network information is associated with the manufacturer information. In this way, the site can obtain appropriate neural network information for communication decisions, ensuring the communication performance of the site.
- this method prevents the site from obtaining inappropriate neural network information from the cloud or a server, and also saves the site from spending a long time training a neural network, which helps reduce communication delays.
- this method avoids the site continuously training the neural network, which helps reduce the site's power consumption and thereby save energy.
- the requesting site may be an access point (AP) or a non-AP site (non-AP STA).
- the responding site can be a non-AP site or an AP.
- the request includes manufacturer information, or includes identification information of the neural network.
- the requesting site can obtain the corresponding neural network information from the responding site based on the manufacturer information or the identification information of the neural network, which has higher communication efficiency.
- the response includes manufacturer information.
- the responding site can send the neural network information and the manufacturer information associated with the neural network information to the requesting site, which helps the requesting site make communication decisions based on the manufacturer information.
- the response also includes identification information of the neural network.
- the information of the neural network may include parameters of the neural network and may also include the structure of the neural network.
- the manufacturer information includes information of multiple manufacturers.
- the request may include information of multiple manufacturers, and the multiple manufacturers may include the manufacturer to which the requesting site belongs, or manufacturers supported by the requesting site.
- the response may include information of multiple manufacturers, and the multiple manufacturers may include the manufacturer to which the responding site belongs, or manufacturers supported by the responding site. In this way, devices from manufacturers that support the same neural network can quickly exchange neural network information, achieving higher communication efficiency.
- the manufacturer information is the information of the manufacturer corresponding to the equipment manufacturer.
- the device manufacturer of the requesting site may be included in the request or the device manufacturer of the responding site may be included in the response.
- the request includes identification information of a basic service set (BSS), and the neural network information in the response is associated with the identification information of the BSS.
- the site can obtain the neural network information of the target BSS more accurately.
- the identification information of the basic service set BSS included in the request is used to identify the BSS to which the requesting site belongs.
- the request includes a preset condition of the requested neural network, and the request is used to request information of a neural network that satisfies the preset condition.
- the responding site can send neural network information that meets the preset condition to the requesting site.
- in this way, the requesting site can obtain more suitable neural network information, which helps to achieve better communication decisions.
- the preset condition includes at least one of the following: generation time of the neural network, accuracy of the neural network, and model size of the neural network.
- the generation time of the neural network and “the generation time of the information of the neural network” indicate the same meaning, and they can be replaced with each other without limitation.
- "the accuracy of the neural network" and "the accuracy of the information of the neural network" indicate the same meaning, and they can be replaced with each other without limitation.
- the response includes information from multiple neural networks.
- the requesting site can select the information of one neural network from the information of multiple neural networks.
- multiple neural networks with the same structure but different parameters can be understood as multiple information of one neural network, multiple "information of neural networks", or multiple neural networks.
- the responding site can send information of multiple neural networks to the requesting site. This allows the requesting site to select more suitable neural network information, which helps to achieve better communication decisions.
- the response also includes attribute information of the multiple neural networks, and the attribute information includes the generation time of the multiple neural networks, or the accuracy of the multiple neural networks, or the model sizes of the multiple neural networks.
- the requesting site can select information of one neural network from information of multiple neural networks based on the above attribute information.
- the attribute information includes the generation time and accuracy of the neural network, etc., which helps the requesting site select the information of the neural network with a closer generation time and better accuracy, and helps to achieve better communication decisions.
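- a minimal sketch of this selection step is shown below; the field names, constraints and tie-breaking rule are illustrative assumptions for this document only, not part of the claimed method.
```python
# Hypothetical sketch: a requesting site choosing one neural network from a
# response that lists several, using their attribute information
# (generation time, accuracy, model size). Names are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NeuralNetworkInfo:
    model_id: int          # identification information of the neural network
    vendor_id: int         # manufacturer information associated with the model
    generated_at: float    # generation time (seconds since an agreed reference)
    accuracy: float        # reported accuracy in [0, 1]
    model_size: int        # model size in bytes
    payload: bytes         # parameters and/or structure of the neural network

def select_model(candidates: List[NeuralNetworkInfo],
                 max_size: int,
                 min_accuracy: float) -> Optional[NeuralNetworkInfo]:
    """Keep candidates that fit the site's constraints, then prefer the most
    recently generated one, breaking ties by higher accuracy."""
    feasible = [c for c in candidates
                if c.model_size <= max_size and c.accuracy >= min_accuracy]
    if not feasible:
        return None
    return max(feasible, key=lambda c: (c.generated_at, c.accuracy))
```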
- the triggering conditions for the requesting site to send the request include: the neural network information stored by the requesting site has not been updated for more than a preset time; or, the accuracy of the neural network stored by the requesting site is less than a threshold; or, the requesting site does not store the neural network information or does not store any neural network information; or, the neural network information associated with the manufacturer information stored by the requesting site has not been updated for more than a preset time; or, the requesting site does not store neural network information associated with the manufacturer information.
- the requesting site can send a request under any of the above trigger conditions to obtain the required information of the neural network.
- the triggering conditions for the requesting station to send the request include: the requesting station wakes up after sleeping; or the network environment of the wireless local area network of the requesting station changes.
- the requesting site can send a request under any of the above trigger conditions to obtain the required information of the neural network.
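- a minimal sketch combining the trigger conditions above is shown below; the function name, thresholds and default values are illustrative assumptions.
```python
# Hypothetical sketch of the trigger conditions listed above; the exact policy
# and numeric thresholds are illustrative assumptions, not part of the method.
import time

def should_send_request(stored_model,          # None if no model is stored
                        last_update: float,    # timestamp of the last update
                        accuracy: float,
                        woke_from_sleep: bool,
                        environment_changed: bool,
                        max_age_s: float = 3600.0,
                        min_accuracy: float = 0.9) -> bool:
    if stored_model is None:
        return True                            # no neural network information stored
    if time.time() - last_update > max_age_s:
        return True                            # not updated for more than a preset time
    if accuracy < min_accuracy:
        return True                            # stored accuracy below threshold
    if woke_from_sleep or environment_changed:
        return True                            # wake-up or network environment change
    return False
```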
- the second aspect provides a communication method in a wireless local area network.
- the method can be executed by a station.
- the station can be a terminal, or a chip, circuit or module configured in the terminal. This application is not limited to this.
- the site may be the responding site.
- the method includes: the responding site receives a request from the requesting site, where the request is used to request neural network information; the responding site sends a response to the requesting site according to the request, where the response includes the requested neural network information, and the neural network information is associated with the manufacturer information.
- the information of the neural network includes parameters of the neural network and/or the structure of the neural network.
- the manufacturer information includes information of multiple manufacturers.
- the manufacturer information is the information of the manufacturer corresponding to the equipment manufacturer.
- the request includes manufacturer information or identification information of the neural network.
- the response also includes identification information of the neural network.
- the request includes identification information of the basic service set BSS
- the neural network information in the response is associated with the identification information of the BSS.
- the request includes a preset condition of the requested neural network, and the request is used to request information of a neural network that satisfies the preset condition.
- the method may further include: the access point selects information of one neural network from information of multiple neural networks according to preset conditions.
- the preset condition includes at least one of the following: generation time of the neural network, accuracy of the neural network, and model size of the neural network.
- the response includes manufacturer information.
- the response includes information from multiple neural networks.
- the response also includes attribute information of one or more neural networks, and the attribute information includes the generation time of the multiple neural networks, or the accuracy of the multiple neural networks, or the model size of the multiple neural networks.
- the triggering conditions for the requesting site to send the request include: the neural network information stored by the requesting site has not been updated for more than a preset time; or, the accuracy of the neural network stored by the requesting site is less than the threshold; or, the requesting site does not store the neural network information or does not store any neural network information; or, the neural network information associated with the manufacturer information stored by the requesting site has not been updated for more than a preset time; or, the requesting site does not store neural network information associated with the manufacturer information.
- the triggering conditions for the requesting station to send the request include: the requesting station wakes up after sleeping; or the network environment of the wireless local area network of the requesting station changes.
- a third aspect provides a communication device, which has the function of implementing the method of any possible implementation of the first aspect and the second aspect.
- the functions described can be implemented by hardware, or can be implemented by hardware executing corresponding software.
- the hardware or software includes one or more units corresponding to the above functions.
- a communication device including a processor and a memory.
- a transceiver may also be included.
- the memory is used to store computer programs
- the processor is used to call and run the computer programs stored in the memory and control the transceiver to send and receive signals, so that the communication device performs the method in any possible implementation of the above first aspect and second aspect.
- a communication device including a processor and a communication interface.
- the communication interface is used to receive data and/or information and transmit the received data and/or information to the processor.
- the processor processes the data and/or information, and the communication interface is also used to output the data and/or information processed by the processor, so that the method in any possible implementation of the above first aspect and second aspect is executed.
- a computer-readable storage medium is provided.
- Computer instructions are stored in the computer-readable storage medium.
- when the computer instructions are run, the method in any possible implementation of the above first aspect and second aspect is executed.
- a computer program product is provided, and the computer program product includes computer program code.
- An eighth aspect provides a wireless communication system, including the requesting site in the first aspect and the responding site in the second aspect.
- FIG. 1 is a schematic diagram of a system architecture 100 and a schematic diagram of the structure of a device provided by an embodiment of the present application.
- Figure 2 is a schematic diagram of the structure of a neural network.
- Figure 3 is a schematic diagram of a neuron calculating output based on input.
- Figure 4 shows a schematic diagram of changes in the wireless network environment of the site.
- Figure 5 shows a schematic diagram of a method for updating neural network parameters.
- Figure 6 is a schematic flow chart of a communication method 200 in a wireless local area network provided by an embodiment of the present application.
- Figure 7 is a schematic flow chart of a communication method 300 in a wireless local area network provided by an embodiment of the present application.
- Figure 8 is a schematic flow chart of a communication method 400 in a wireless local area network provided by an embodiment of the present application.
- Figure 9 is a schematic flow chart of a communication method 500 in a wireless local area network provided by an embodiment of the present application.
- FIG. 10 is a schematic diagram of a communication device 600 provided by an embodiment of the present application.
- Figure 11 is a schematic structural diagram of a communication device 700 provided by an embodiment of the present application.
- Figure 12 is a schematic structural diagram of a communication device 800 provided by an embodiment of the present application.
- the embodiments of this application can be applied to 802.11-related standards, such as the 802.11a/b/g, 802.11n, 802.11ac and 802.11ax standards, next-generation Wi-Fi protocols beyond IEEE 802.11ax such as 802.11be, Wi-Fi 7 or extremely high throughput (EHT), 802.11ad, 802.11ay or 802.11bf, and subsequent generations such as the next generation of 802.11be, Wi-Fi 8, etc.
- 802.11bf includes two major categories of standards: low frequency (sub7GHz) and high frequency (60GHz).
- sub7GHz mainly relies on standards such as 802.11ac, 802.11ax, 802.11be and the next generation.
- 60GHz mainly relies on standards such as 802.11ad, 802.11ay and the next generation.
- the technical solutions of this application can be applied to wireless local area network (WLAN) communication systems such as wireless fidelity (Wi-Fi) systems, as well as to long term evolution (LTE) systems, LTE frequency division duplex (FDD) systems, LTE time division duplex (TDD) systems, the universal mobile telecommunications system (UMTS), worldwide interoperability for microwave access (WiMAX) communication systems, fifth generation (5G) or new radio (NR) systems, future sixth generation (6G) systems, Internet of things (IoT) networks, vehicle-to-everything (V2X) systems and other wireless systems.
- FIG. 1 is a schematic diagram of a system architecture 100 and a schematic diagram of the structure of a device provided by an embodiment of the present application.
- (a) of FIG. 1 is an example of the system architecture 100 suitable for the embodiment of the present application.
- the system 100 includes multiple stations (STAs). The stations may be access points, for example access point (AP) 110 and access point AP 120, or they may be non-AP stations associated with the access points, for example non-AP STA 111, non-AP STA 112 and non-AP STA 113 associated with access point AP 110, and non-AP STA 121, non-AP STA 122 and non-AP STA 123 associated with access point AP 120.
- AP 110, non-AP STA 111, non-AP STA 112 and non-AP STA 113 constitute basic service set (BSS) 1, and AP 120, non-AP STA 121, non-AP STA 122 and non-AP STA 123 constitute BSS 2.
- in this application, a site refers to a station in the broad sense, including both APs and non-AP STAs.
- the system architecture shown in (a) of Figure 1 can be applied to the Internet of Things industry, the Internet of Vehicles industry, the banking industry, corporate offices, sports venues and exhibition halls, concert halls, hotel rooms, dormitories, wards, classrooms, shopping malls and supermarkets, squares, streets, production workshops, warehouses, etc.
- the access point can be an access point through which a terminal (for example, a mobile phone) enters a wired (or wireless) network. It is mainly deployed inside homes, buildings and campuses, with a typical coverage radius of tens to hundreds of meters; of course, it can also be deployed outdoors.
- the access point is equivalent to a bridge connecting the wired network and the wireless network. Its main function is to connect various wireless network clients together, and then connect the wireless network to the Ethernet.
- the access point can be a terminal or network device with a Wi-Fi chip.
- the network device can be a router, a relay station, a vehicle-mounted device, a wearable device, a network device in a 5G network, a network device in a future 6G network, or network equipment in a public land mobile network (PLMN), etc., which is not limited by the embodiments of this application.
- the access point can be a device that supports the 802.11be standard.
- the access point can also be a device that supports multiple WLAN standards of the 802.11 family such as 802.11ax, 802.11ac, 802.11n, 802.11g, 802.11b, 802.11a, and 802.11be next generation.
- the access point in this application can be a high efficiency (HE) AP or an extremely high throughput (EHT) AP, or it can be an access point suitable for a certain future generation of Wi-Fi standards.
- Non-AP sites can be wireless communication chips, wireless sensors, wireless communication terminals, etc., and can also be called a user, user equipment (UE), access terminal, user unit, user station, mobile station, remote station, remote terminal, mobile device, user terminal, terminal, wireless communication device, user agent or user apparatus.
- Non-AP sites can be cellular phones, cordless phones, session initiation protocol (SIP) phones, wireless local loop (WLL) stations, personal digital assistants (PDAs), handheld devices with wireless communication functions, computing devices or other processing devices connected to wireless modems, vehicle-mounted devices, Internet of Things devices, wearable devices, terminal devices in 5G networks, terminal devices in future 6G networks or terminal devices in a PLMN, etc.; the embodiments of the present application are not limited to this.
- Non-AP sites can support the 802.11be standard.
- Non-AP sites can also support multiple WLAN standards of the 802.11 family such as 802.11ax, 802.11ac, 802.11n, 802.11g, 802.11b, 802.11a, and 802.11be next generation.
- the access points or non-AP sites in this application can be sensor nodes in smart cities, such as smart water meters, smart electricity meters and smart air detection nodes; smart devices in smart homes, such as smart cameras, projectors, display screens, televisions, stereos, refrigerators and washing machines; entertainment terminals, such as wearable devices for virtual reality (VR) and augmented reality (AR); smart devices in smart offices, such as printers, projectors, loudspeakers and speakers; infrastructure in daily life scenarios, such as vending machines, self-service navigation stations in supermarkets, self-service checkout equipment and self-service ordering machines; Internet of Vehicles equipment in the Internet of Vehicles; nodes in the Internet of Things; and equipment in large sports and music venues, etc.
- access points and non-AP sites have certain artificial intelligence (AI) capabilities and can use neural networks for reasoning and decision-making.
- Non-AP sites and/or access points can also perform neural network training.
- Figure 1(b) is a schematic structural diagram of a device provided by an embodiment of the present application.
- the device can be an access point or a non-AP site.
- the internal functional modules of the device include a central processor, a media access control (MAC) processing module, a transceiver, an antenna, and a neural network processing unit (NPU).
- the transceiver includes a physical layer (PHY) processing module
- the NPU includes an inference module
- the NPU also includes a training module.
- the training module is optional.
- the training module is used to train the neural network and output the neural network parameters.
- the trained neural network parameters will be fed back to the inference module.
- the NPU can act on various other modules of the device, including the central processor, MAC processing module, transceiver and antenna.
- the NPU can be responsible for decision-making tasks of the other modules. For example, it interacts with the transceiver and decides the on/off switching of the transceiver to save energy; it interacts with the antenna and controls the orientation of the antenna; and it interacts with the MAC processing module and controls decisions on channel access, channel selection, spatial multiplexing, etc.
- the solution of this application can obtain appropriate neural network information for communication decision-making.
- the neural network information can be applied to the communication decision-making of the MAC processing module or the communication decision-making of the transceiver.
- the communication decision of the transceiver includes the communication decision of the PHY processing module. It can be understood that the schematic diagram of the device provided in (b) of Figure 1 is an example and does not constitute a limitation on the device of the present application.
- AI can be applied to channel access, rate adaptation, channel aggregation or channel prediction, etc.
- the operations of a traditional wireless network, such as channel prediction, are determined based on rules, for example predicting the channel through an algorithm or function, expressed as f(·).
- the calculation from input x to output y follows a clear rule and applies to all wireless network environments.
- after AI is introduced, f(·) is no longer rule-based, but is described by a neural network (NN), expressed as f(·, θ), where θ represents the neural network parameters.
- A neural network is a machine learning technology that simulates the neural network of the human brain in order to achieve artificial intelligence.
- a neural network can include three layers, namely an input layer, at least one intermediate layer (also called a hidden layer) and an output layer, or it can include more layers; deeper neural networks contain more hidden layers between the input layer and the output layer. The following takes one neural network as an example for illustration.
- FIG. 2 is a schematic diagram of the structure of a neural network.
- the neural network is a fully connected neural network.
- the neural network includes 3 layers, namely the input layer, the hidden layer and the output layer.
- the input layer has 3 neurons and the hidden layer has 4 neurons.
- the output layer has 2 neurons, and the neurons in each layer are fully connected to the neurons in the next layer.
- Each connection between neurons corresponds to a weight, and each neuron in the hidden layer and the output layer can also correspond to a bias.
- A neural network includes the structure of the neural network and the parameters of the neural network. The structure of the neural network refers to the number of neurons contained in each layer and how the outputs of previous neurons are input to subsequent neurons, that is, the connection relationships between neurons.
- the parameters of the neural network indicate the weights and biases.
- each neuron may have multiple input connections, and each neuron calculates an output based on the input.
- Each neuron may have multiple output connections, and the output of one neuron serves as the input of the next neuron.
- the input layer only has output connections, each neuron of the input layer is the value input to the neural network, and the output value of each neuron is directly used as the input of all output connections.
- the output layer only has input connections, and the output is calculated using the calculation method of the above formula (1-1).
- x represents the input of the neural network
- y represents the output of the neural network
- w_i represents the weight of the i-th layer of the neural network
- b_i represents the bias of the i-th layer of the neural network
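- formula (1-1) referenced above presumably has the standard fully connected form; a plausible reconstruction, consistent with the surrounding description and the symbols defined above, is given below (the exact form in the original filing may differ).
```latex
% Plausible reconstruction of formula (1-1): each neuron applies an activation
% function f to the weighted sum of its inputs plus its bias.
\begin{equation}
  \text{output} \;=\; f\Big(\sum_{j} w_{j}\, x_{j} + b\Big) \tag{1-1}
\end{equation}
% Applied layer by layer with the symbols defined above, the i-th layer computes
\begin{equation}
  y_{i} \;=\; f\big(w_{i}\, y_{i-1} + b_{i}\big), \qquad y_{0} = x,
\end{equation}
% and the output of the last layer is the output y of the neural network.
```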
- in a wireless local area network, since most sites are highly mobile, the wireless LAN environment in which they are located often changes. For example, if a site is dormant for a period of time, the wireless network connected to the site may have changed. For another example, if a non-AP station switches from the current access point to a new access point, the wireless network it is connected to may also change.
- the wireless network to which the non-AP site is connected is the wireless network environment in which the non-AP site is located. It is difficult for a single set of neural networks, or an inappropriate neural network, to apply to all scenarios. In response to the changing wireless network environment, the neural network information needs to be updated.
- Figure 4 shows a schematic diagram of changes in the wireless network environment of a non-AP site.
- non-AP site 1 moves and switches from AP1 to AP2, and its wireless network environment changes. Since the neural network used by non-AP site 1 does not have information about the wireless network environment where AP2 is located, it cannot achieve optimal communication performance. Therefore, the neural network needs to be updated, for example, the neural network parameters are updated or the neural network is replaced.
- Figure 5 shows a schematic diagram of a neural network update method.
- non-AP STA can send an update request to the cloud or server through the AP.
- the cloud or server obtains the updated neural network or neural network parameters, and then sends it to the non-AP STA through the AP.
- this method requires wireless network access to the Internet.
- in many cases, however, it is not guaranteed that the wireless network can access the Internet.
- sending neural network parameters through the cloud or server may cause a large delay, thereby affecting the communication performance of the site.
- this method is difficult to achieve refined optimization and configuration. For example, it is difficult for a neural network stored in the cloud to adapt to a refined wireless environment, that is, a certain BSS.
- in another approach, non-AP STAs train the neural network in real time, so network-side training and distribution are not required.
- however, training will incur a large overhead.
- Some non-AP STAs have insufficient power and are inconvenient for training.
- Some non-AP STAs have limited computing power and cannot be trained. This method cannot be universally applicable.
- in addition, if the site has been dormant, it will take some time to learn a neural network with better performance. In other words, this method will bring a large delay and affect the communication performance of the site.
- this application provides a communication method in a wireless LAN, which associates neural network information with manufacturer information, so that a site in the wireless LAN (the site can be an AP or a non-AP site) can obtain appropriate neural network information for communication decisions, thereby ensuring the communication performance of the site in a changing wireless network environment.
- Figure 6 is a schematic flowchart of a communication method 200 in a wireless local area network provided by an embodiment of the present application.
- the method 200 may include the following steps.
- the requesting site sends a request, which is used to request information about the neural network.
- the requesting site may be a non-access point site (non-AP STA) or an AP, which is not limited in this application.
- the responding site receives the request from the requesting site.
- the request may include one or more of the following: manufacturer information, identification information of the neural network, identification information of the basic service set, generation time of the neural network, accuracy of the neural network, or model size of the neural network, which is used to obtain information that better meets the needs of the requesting site or to obtain a more appropriate neural network.
- it can be understood that the request may also include other information related to the requested neural network information, which is not limited in the embodiments of the present application.
- for the content and triggering conditions of the request sent by the requesting site, please refer to the description of the embodiments shown in Figures 7 to 9 below; details are not repeated here.
- the responding site may be a non-AP site or an AP, which is not limited in this application.
- the responding site sends a response to the requesting site according to the request.
- the response includes the requested neural network information, and the neural network information is associated with the manufacturer information.
- the requesting site receives the response from the responding site.
- the response may also include one or more of the following related to the neural network information: manufacturer information, identification information, identification information of the basic service set, generation time, accuracy, model size, etc., so that the requesting site can further determine the appropriate neural network based on this information. It can be understood that the response may also include other information related to the neural network information, which is not limited in the embodiments of the present application.
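- as an illustration of how the fields just listed might be grouped, a minimal sketch follows; the structure and field names are assumptions made for this document only and do not correspond to any defined frame format.
```python
# Hypothetical grouping of the request and response exchanged in method 200.
# All field names and types are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelRequest:
    vendor_ids: List[int]                      # manufacturer information (one or more vendors)
    model_id: Optional[int] = None             # identification information of the neural network
    bssid: Optional[str] = None                # identification information of the basic service set
    min_generated_at: Optional[float] = None   # preset condition: generation time
    min_accuracy: Optional[float] = None       # preset condition: accuracy
    max_model_size: Optional[int] = None       # preset condition: model size

@dataclass
class ModelResponse:
    vendor_id: int                             # manufacturer information associated with the model(s)
    models: List[bytes]                        # information of one or more neural networks
    model_ids: List[int] = field(default_factory=list)
    generated_at: List[float] = field(default_factory=list)   # attribute information
    accuracy: List[float] = field(default_factory=list)
    model_size: List[int] = field(default_factory=list)
```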
- for the content of the response sent by the responding site, please refer to the description of the following embodiments; details are not repeated here.
- the communication method in the wireless LAN provided by this application can be applied to communication between a non-AP site and an AP, communication between non-AP sites, and communication between APs, which is not limited in this application.
- Figures 7 to 9 illustrate some specific embodiments of the communication method 200 in the wireless local area network provided by this application. The relevant contents of the following embodiments can be applied to the communication methods in the wireless LAN of this application and will not be described again here.
- FIG 7 is a schematic flow chart of a communication method 300 in a wireless local area network provided by an embodiment of the present application.
- in the method 300, as an example, the requesting site is a non-AP site, referred to as the first site, and the responding site is an access point.
- the method 300 may include the following steps.
- S310 The first station sends a first request to the access point, and accordingly, the access point receives the first request.
- the first request is used to request information about the neural network.
- the first request may be a model request, used to request neural network-related information required by the first site.
- neural network and “neural network model” are interchangeable and have consistent meanings in this application.
- the first request may be a management frame, for example, a probe request (probe request) or an association request (Association Request).
- the first request may also be a control frame, for example, a request to send (RTS) frame or a block acknowledgement request (BlockAckReq) frame.
- the first request can also be carried in the header of any message.
- the first request may also be other management frames or control frames. The embodiments of the present application do not limit this.
- the information of the neural network can also be called the information of the neural network model, or the model information of the neural network.
- the neural network required by the first site may be called the first neural network.
- the information of the first neural network can be understood as the information of the neural network required by the first site, and can also be called the target information of the neural network, or the target neural network information.
- the information of the first neural network may include parameters of the first neural network and/or the structure of the first neural network.
- the parameters of the first neural network include weights and/or biases of the first neural network.
- the structure of the first neural network may include one or more items of information among the number of neurons of the first neural network, the number of layers of the neural network, the number of neurons in each layer, the number of hidden layers, and the connection relationships between the neurons.
- the "information of the first neural network" in this application can also be other forms of information related to the first neural network, or other forms of information used to reflect the calculation method of the first neural network. This application does not limit this.
- the first station can obtain the first neural network to make communication decisions based on the information of the first neural network.
- the first request may include manufacturer information, where "include” may be an explicit inclusion or an implicit inclusion, such as an implicit indication through the default relationship between other information carried in the first request and the manufacturer information.
- the manufacturer information includes the manufacturer's identification information, and the manufacturer's identification information can be used to distinguish different manufacturers.
- the identification information of the manufacturer may be the manufacturer ID, for example, 1, 2, 3, 4, 5, etc., or it may be the name of the manufacturer.
- the manufacturer information can also be carried in other interaction frames/messages. For example, the interaction information before the first site sends the request already contains the manufacturer information.
- the manufacturer information in the first request is the manufacturer associated with the first site, which can be called the first manufacturer.
- Vendors can also be called AI suppliers.
- the relationship between manufacturers and neural networks may include at least the following situations. It can be understood that the association between manufacturers and neural networks described here can be applied to other embodiments of the present application, and other embodiments will not be described in detail.
- Case 1: the manufacturer corresponds to the equipment manufacturer, and the neural network is the neural network provided by the equipment manufacturer.
- the first manufacturer is equipment manufacturer #1
- the first neural network is the neural network provided by equipment manufacturer #1 corresponding to the first site.
- the manufacturer's identification can be indicated by the organization ID (organization identifier) in the standard.
- the organization ID is the globally unique identity information in the IEEE Registration Authority (IEEE Registration Authority) and is used to identify the manufacturer.
- Case 2: the manufacturer corresponds to the chip manufacturer, and the neural network is the neural network provided by the chip manufacturer.
- the first manufacturer is chip manufacturer #1
- the first neural network is the neural network provided by chip manufacturer #1.
- Case 3: the manufacturer corresponds to the AI operator, and the neural network is provided by the AI operator.
- the first manufacturer is a telecommunications operator
- the first neural network is a neural network provided by the telecommunications operator.
- the AI operator in this application generally refers to an operator that can provide AI-related services. It can be a telecommunications operator, such as China Mobile, China Unicom or China Telecom, or it can be another AI-related operator, such as an operator engaged in AI-related services, for example services such as interoperability authentication of neural networks.
- the association between the manufacturer and the site may at least include: the vendor refers to the vendor to which the site belongs, or the vendor refers to the vendor supported by the site.
- the manufacturer refers to the manufacturer to which the site belongs.
- the manufacturer may refer to the equipment manufacturer, chip manufacturer or AI operator corresponding to the site.
- the manufacturer refers to the manufacturer supported by the site, that is, the manufacturer corresponding to the neural network supported by the site.
- different vendors can use the same neural network structure. For example, some vendors certify each other's neural network structures in the form of an alliance. In this case, the first request can carry information of the manufacturers supported by the first site.
- for example, the first site is a device of manufacturer #1 and supports the neural network of manufacturer #1, and it also supports the neural network of manufacturer #2.
- in this case, the manufacturer information carried in the first request can be the identification of manufacturer #1, or the identification of manufacturer #2, or the identifications of both manufacturer #1 and manufacturer #2.
- the first site may support one or more vendors' neural networks.
- the first request may include a plurality of vendor information, and one or more of the plurality of vendor information is associated with the first neural network.
- the first request may further include identification information of the first neural network, and the identification information of the first neural network may be used to distinguish different neural network models, or to distinguish different types of neural networks.
- the identification information of the first neural network may include a model index (model index), a model identification (model ID), a model name, etc.
- one or more of the model index, model ID, or model name can uniquely identify any neural network.
- one or more of the model index, model ID, or model name may identify the category of the neural network.
- a certain vendor has multiple neural network models, which are used for different tasks or functions. The tasks can be rate selection, channel access, channel state information compression, etc.
- model indexes can be used to distinguish these different categories of neural network models.
- the model indexes of these neural networks are 1, 2, and 3 respectively.
- Model names can also be used to distinguish these neural networks with different tasks or functional categories.
- the model names of these neural networks are rate selection neural network, channel access neural network, and channel state information compression neural network.
- the identification information of the first neural network may also include the version number (version) of the first neural network.
- the neural network can be identified by the model index and version number.
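- a minimal sketch of such an identification is shown below; the tuple layout and the example values are illustrative assumptions only.
```python
# Hypothetical identification tuple: a model index distinguishes task categories
# (e.g. rate selection, channel access, CSI compression) and a version number
# distinguishes updates of the same model; all values are placeholders.
from typing import NamedTuple

class ModelIdentifier(NamedTuple):
    vendor_id: int      # manufacturer information, e.g. an organization identifier
    model_index: int    # 1 = rate selection, 2 = channel access, 3 = CSI compression
    version: int        # version number of the neural network

# Placeholder vendor identifier; not a real organization ID.
rate_selection_v2 = ModelIdentifier(vendor_id=0x123456, model_index=1, version=2)
```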
- the timing or triggering conditions for the first station to send the first request to the access point are not limited. For example, the first station may send the first request to the access point when it moves into the BSS covered by the access point and a network switch occurs; or when it ends a dormant state; or when it needs a new neural network, such as an updated neural network or a neural network with better accuracy; or when it learns that the access point has a new or more suitable neural network; or periodically.
- the first station sends the first request, which by default obtains the neural network corresponding to the BSS.
- the BSS identifier does not need to be carried.
- the first site can also include a relevant identification of the target BSS in the first request, so that the access point corresponding to the target BSS can further confirm the corresponding neural network and send it to the first site, or so that access points of other non-target BSSs do not respond, or forward the request to the target access point, etc.
- S320 The access point sends a first response to the first station, and accordingly, the first station receives the first response.
- the first response includes information of the first neural network.
- the information of the first neural network is associated with the information of the first manufacturer.
- the information of the first neural network may include parameters of the first neural network and/or the structure of the first neural network.
- the access point can send part of the information of the first neural network to the first site, for example, the parameters of the first neural network, and the first site can obtain the first neural network through the parameters of the first neural network; the access point can also take all the information of the first neural network as a whole and send all the information of the first neural network to the first site.
- the information of the first neural network may also be other forms of information related to the first neural network, or other forms of information used to reflect the calculation method of the first neural network, which is not limited in this application.
- the access point may send a first response to the first station according to the first request.
- the access point may store a correspondence between the first manufacturer's information and the first neural network's information. For example, the access point searches for the neural network corresponding to the first manufacturer based on the identification information of the first manufacturer carried in the first request, and determines the information of the first neural network. Optionally, the access point may also search for the neural network based on both the identification information of the first manufacturer and the identification information of the neural network included in the first request.
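- a minimal sketch of such a correspondence and lookup is shown below; the mapping keys and function names are illustrative assumptions, not part of the claimed method.
```python
# Hypothetical correspondence the access point may maintain between manufacturer
# information and neural network information, and the lookup performed when the
# first request is received. Keys and names are assumptions.
from typing import Dict, Optional, Tuple

# (vendor_id, model_id) -> serialized neural network information
model_store: Dict[Tuple[int, int], bytes] = {}

def lookup(vendor_id: int, model_id: Optional[int] = None) -> Optional[bytes]:
    """Return the stored neural network information for the requested vendor,
    optionally narrowed down by the neural network's identification."""
    if model_id is not None:
        return model_store.get((vendor_id, model_id))
    # No model identification given: return any stored model of that vendor.
    for (v, _m), info in model_store.items():
        if v == vendor_id:
            return info
    return None
```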
- the first site can request neural network information from the access point through the first request, and the access point can then search for the neural network information requested by the first site according to the first request, for example the information of the first neural network, and send the information of the first neural network to the first site. Since the neural network information is associated with the manufacturer information, the first site can obtain the neural network information from the access point based on the manufacturer information. In this way, the site can obtain appropriate neural network information for communication decisions, ensuring the communication performance of the site.
- this method prevents the site from obtaining inappropriate neural network information from the cloud or a server, and also saves the site from spending a long time training a neural network, which helps reduce communication delays.
- this method avoids the site continuously training the neural network, which helps reduce the site's power consumption and thereby save energy.
- the method 300 further includes: the first station uses the information of the first neural network to make communication decisions.
- the first station can update the first neural network according to the parameters of the first neural network, and use the updated information of the first neural network to make communication decisions, such as decisions on channel access, rate adaptation, channel aggregation, channel prediction and other communication tasks.
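- for illustration, a minimal sketch of loading received parameters into the 3-4-2 fully connected network of Figure 2 and running inference is shown below; the use of numpy, the tanh activation and the zero-valued example parameters are assumptions made here, not details from the filing.
```python
# Hypothetical sketch: the first site loads the parameters received in the first
# response into a 3-4-2 fully connected network and runs inference to support a
# communication decision. Shapes and the activation are illustrative assumptions.
import numpy as np

def forward(x, params):
    """params is a list of (weight_matrix, bias_vector) pairs, one per layer."""
    a = np.asarray(x, dtype=float)
    for w, b in params:
        a = np.tanh(w @ a + b)          # activation applied layer by layer
    return a

# Example placeholder parameters for the 3-4-2 network of Figure 2.
received_params = [
    (np.zeros((4, 3)), np.zeros(4)),    # input layer  -> hidden layer
    (np.zeros((2, 4)), np.zeros(2)),    # hidden layer -> output layer
]
decision_scores = forward([0.1, 0.5, -0.2], received_params)
```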
- the first request in S310 may also include other information, which is used to further request the information of the first neural network that meets the requirements of these other information.
- This information may also be called first preset conditions, or matching conditions.
- the first preset condition may be the generation time of the information of the first neural network, the accuracy of the first neural network, the model size of the first neural network, etc.
- the first preset condition may be the generation time of the information of the first neural network, or the generation time of the first neural network.
- the first preset condition indicates that the generation time of the information of the first neural network should be after time point #A; for another example, the first preset condition indicates that the time difference between the generation time of the information of the first neural network and time point #B should be less than the preset value #A.
- the first request is used to request information of the first neural network that satisfies this generation time.
- the first preset condition may be the accuracy of the first neural network.
- the first preset condition indicates that the accuracy of the first neural network should be greater than the preset value #B.
- the first request is used to request information of the first neural network that meets this accuracy.
- the first preset condition may be the model size of the first neural network.
- the first preset condition indicates that the model size of the first neural network should be smaller than the preset value #C, and the first request is used to request information of the first neural network whose model size satisfies this condition.
- the method 300 further includes: the access point selects the information of the first neural network from the information of the multiple neural networks.
- the access point may maintain information for multiple neural networks.
- the access point can store the correspondence between multiple neural network information, manufacturer information, model index, etc.
- it can also include generation time, accuracy, model size, etc.
- the access point may select the information of a neural network that satisfies the first preset condition from the information of multiple neural networks, for example, select the information of a neural network whose generation time, accuracy or model size satisfies the request.
- “information of a neural network” is a set of information including the parameters and/or structure of the neural network.
- the information of multiple neural networks can be understood as an information library or information set of information of neural networks.
- the access point can select the information of the first neural network required by the first site, that is, the target information of the first neural network.
- the access point can, from the parameters of multiple neural networks corresponding to the first manufacturer (or to the identifications of the first manufacturer and the first neural network), which can also be called candidate parameters, select the parameters that meet the first preset condition as the parameters of the neural network requested by the first site, and carry these parameters, that is, the information of the first neural network, in the first response.
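- a minimal sketch of this candidate-filtering step is shown below; the dictionary keys and default behaviour are illustrative assumptions.
```python
# Hypothetical filtering of the access point's candidate parameters against the
# first preset condition carried in the first request (generation time,
# accuracy, model size). Keys and policy are assumptions.
from typing import Any, Dict, List, Optional

def matches_preset(candidate: Dict[str, Any],
                   after_time: Optional[float] = None,
                   min_accuracy: Optional[float] = None,
                   max_size: Optional[int] = None) -> bool:
    if after_time is not None and candidate["generated_at"] < after_time:
        return False
    if min_accuracy is not None and candidate["accuracy"] < min_accuracy:
        return False
    if max_size is not None and candidate["model_size"] > max_size:
        return False
    return True

def pick_for_request(candidates: List[Dict[str, Any]], **preset) -> Optional[Dict[str, Any]]:
    """Return the first candidate satisfying the preset condition, if any."""
    matching = [c for c in candidates if matches_preset(c, **preset)]
    return matching[0] if matching else None
```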
- the access point can send the first neural network information that meets the preset conditions to the first station.
- the first station can obtain more suitable neural network information, which helps to achieve better communication decisions.
- the information of the first neural network in the first response in S320 may include information of one or more neural networks.
- the first response may also include attribute information corresponding to the one or more neural networks, such as one or more of generation time, accuracy, model size, etc.
- the generation time can be an absolute time, for example, time point #C; or the generation time can be a relative time, for example, time difference #A, which represents the time difference between the generation time and time point #D, where time point #D can be a default time reference point of the transmitter and the receiver.
- the method 300 may further include: the first station selects the information of one neural network from the information of multiple neural networks.
- the first site can select one piece of information that satisfies its needs from the information of multiple neural networks and based on the attribute information of the multiple neural networks. For example, the first site selects information from multiple neural networks whose generation time or accuracy meets its own needs. Another example is that the first site selects information from multiple neural networks whose model size meets its own needs.
- the method 300 may also include: the access point obtains information of multiple neural networks.
- the information of the plurality of neural networks obtained by the access point may include information of the first neural network sent to the first site.
- the access point can obtain the neural network information in advance or in real time.
- when the first site requests a neural network from the access point, the access point can provide a neural network that meets the needs of the first site, or provide a more appropriate neural network to the first site.
- this provides support for better communication decisions at the first site.
- the access point can obtain the information of multiple neural networks when receiving a request from the first site, or it can obtain it according to certain preset conditions, for example at a certain time interval, when there is a new demand, or when new types of sites are added, etc.; this is not limited in the embodiments of this application.
- the access point can obtain the information of the neural network in various ways, for example from the second site, or from the cloud or a server. Specifically:
- Method 1 The access point obtains neural network information from one or more second sites.
- one or more second stations associated with the access point may send neural network information to the access point.
- One or more stations can send neural network information to the access point one or more times.
- the one or more second sites may be access points or non-AP sites.
- one or more second stations may, after receiving the second request from the access point, respond to the second request from the access point and send a second response to the access point, where the second response includes the information of the neural network.
- one or more second sites can also actively send neural network information to the access point, and one or more second sites can also send neural network information to the access point based on a certain time or predetermined rules, etc.
- the embodiments of the present application do not limit this.
- for the method in which the access point sends a request to one or more second sites to obtain information about the neural network, please refer to the implementation shown in Figure 8 and its corresponding introduction, which will not be described again here.
- the neural network transmitted by one or more second stations to the access point is called a "second neural network". It should be understood that "first" and "second" in the embodiments of this application are only descriptive distinctions for ease of understanding and do not impose any technical limitation.
- Method 2 The access point obtains neural network information from the cloud or server.
- the cloud or server stores neural network information, such as parameters of the neural network, the structure of the neural network, and other information, and the corresponding manufacturer information.
- the access point can obtain the neural network information from the cloud or server.
- the access point can obtain multiple neural network information from the cloud or server.
- the neural network information obtained in this way is relatively comprehensive and repeated acquisitions can be avoided, so the communication overhead can be reduced and suitable neural networks can be provided for different non-AP sites or APs.
- the method 300 may further include: the access point may store information on multiple neural networks that satisfy the first site request, which may also be referred to as multiple candidate information.
- the access point may send multiple candidate information to the first station for selection by the first station, or may select one neural network information from the multiple candidate information and send it to the first station.
- alternatively, the access point directly sends the stored information of the neural network, and no selection is needed.
- Figure 8 is a schematic flow chart of a communication method 400 in a wireless local area network provided by an embodiment of the present application. Here, the requesting site is taken to be an access point and the responding site is taken to be a non-AP site as an example; the responding site in method 400 is called the second site. It should be noted that the relevant solutions in this embodiment can also be applied to the embodiment shown in Figure 7 or Figure 9, and content that has already been described in detail in other embodiments will not be repeated here.
- the method 400 may include the following steps.
- S410 The access point sends a second request to the second station, and accordingly, the second station receives the second request.
- the second request is used to request information about the neural network.
- the second request sent by the access point may include one or more of manufacturer information, identification information of the neural network, identification information of the basic service set, generation time, accuracy, model size, etc.
- alternatively, the second request sent by the access point may include none of the above information, in order to obtain information about all the neural networks of the second site, information about the negotiated neural networks, etc.
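- For illustration only, the second request can be modelled as a container of optional fields; when every field is absent it requests all neural network information of the second site. The dataclass and field names below are assumptions, not a defined frame format.

```python
# Illustrative sketch of the second request; field names are assumptions, not a specified element.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecondRequest:
    vendor: Optional[str] = None              # manufacturer information
    model_id: Optional[str] = None            # identification information of the neural network
    bss_id: Optional[str] = None              # identification information of the basic service set
    min_generation_time: Optional[float] = None
    min_accuracy: Optional[float] = None
    max_model_size: Optional[int] = None

# A request with no fields set asks for all neural network information of the second site.
request_all = SecondRequest()
request_vendor_a = SecondRequest(vendor="vendor#A", model_id="model#1")
```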
- the neural network requested by the access point from the site, or the neural network sent by the site to the access point is called the second neural network
- the manufacturer associated with the second neural network is called the second manufacturer.
- the access point can send a second request to the second site when receiving a neural network request from a non-AP site, or it can send a second request to the second site based on predetermined rules or other requirements.
- the embodiments of the present application do not limit this.
- the triggering conditions for the access point to send the second request to the second station may at least include the following situations:
- Case 1 If the neural network information stored by the access point has not been updated for a predetermined time, the access point may send a second request to the second site to obtain updated neural network information.
- the neural network information stored by the access point may refer to the stored information of a certain neural network, such as the information of the second neural network; if the information of the second neural network has not been updated for a long time, the access point sends a second request to the second site to obtain updated information about that neural network.
- the second request sent by the access point to the second site carries the identification information of the neural network, or the second request sent by the access point to the second site carries the identification information of the neural network and the corresponding manufacturer information.
- for example, the neural network information of a certain manufacturer may not have been updated, and the access point may send a second request to a second site of the corresponding manufacturer to obtain that manufacturer's latest neural network information.
- the second request sent by the access point to the second site carries manufacturer information.
- whether the neural network information stored by the access point has gone without an update for more than the predetermined time can be judged from the generation time of the neural network information, or from how long the access point has stored the neural network information without receiving an update.
- when the access point sends the second request for neural network information to the second site, the request may also carry time or accuracy requirements for the neural network, so as to further obtain appropriate neural network information.
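- A sketch of the staleness check behind Case 1, assuming the access point records either the generation time of the stored information or the time at which it stored it; the function and argument names are assumptions.

```python
# Hedged sketch: decide whether to send a second request because stored information is stale.
import time
from typing import Optional

def needs_refresh(stored_generation_time: float,
                  stored_at: float,
                  max_age_seconds: float,
                  now: Optional[float] = None) -> bool:
    """True if the stored neural network information has gone without update for more than the predetermined time."""
    now = time.time() if now is None else now
    # Either criterion from the description may be used: the age of the information itself,
    # or how long the access point has held it without an update.
    return (now - stored_generation_time > max_age_seconds) or (now - stored_at > max_age_seconds)
```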
- Case 2 If the accuracy of the stored neural network information is low, for example, less than a threshold, the access point sends a second request to the second station to obtain higher-precision neural network information.
- for example, if the access point finds that the accuracy of the parameters of a certain manufacturer's neural network is lower than a threshold, the access point sends a second request to a second site of that manufacturer to request higher-accuracy neural network parameters from the manufacturer.
- as another example, if the access point finds that the accuracy of the parameters of a specific neural network is lower than a threshold, the access point sends a second request to the second site to request higher-precision parameters of that specific neural network.
- Case 3 When the access point does not store information about a certain manufacturer's neural network, the access point sends a second request to the second site to request information about the neural network associated with the manufacturer.
- the second request includes information about the manufacturer and is used to request information about the neural network of the manufacturer.
- for example, the access point can send a second request to the non-AP sites of that manufacturer to obtain the manufacturer's neural network information.
- alternatively, the access point sends a second request to all or part of the non-AP stations in the BSS to which it belongs. If a non-AP station that receives the second request belongs to the manufacturer, supports the manufacturer's neural network, or stores the manufacturer's neural network, it sends a second response to the access point, and the second response includes the information of that manufacturer's neural network. If a non-AP station that receives the second request does not belong to the manufacturer, does not support the manufacturer's neural network, and does not store the manufacturer's neural network, it does not send a second response to the access point.
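- The non-AP station's decision in this case can be sketched as a simple predicate; the function and argument names are assumptions made for illustration.

```python
# Illustrative sketch: a non-AP station decides whether to answer a second request tied to a manufacturer.
def should_send_second_response(station_vendor: str,
                                supported_vendors: set,
                                stored_vendors: set,
                                requested_vendor: str) -> bool:
    """Respond only if the station belongs to, supports, or stores a neural network of the requested manufacturer."""
    return (station_vendor == requested_vendor
            or requested_vendor in supported_vendors
            or requested_vendor in stored_vendors)
```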
- Case 4 When the access point does not store information about a specific neural network, the access point sends a second request to the second site to request information about the neural network.
- the second request includes identification information of the second neural network requested by the access point.
- the identification information of the second neural network can be used to identify the type of the neural network, or to identify the tasks that the neural network can perform, etc.
- for example, the access point can send a second request to the second access point to request information about the neural network used to perform task #A; in this case, the second request includes identification information of the neural network used to perform task #A.
- Case 5 When the access point does not store the information of the neural network, the access point sends a second request to the second site to request the information of the neural network.
- the second request is used to trigger the second site to report the information of the neural network it has trained.
- the second request may not include any of the manufacturer information, the time and/or accuracy requirements, or the identification of the neural network; that is to say, the second request does not need to request information about a specific neural network.
- the second site can report information about all the neural networks it has trained according to the second request, and carry in the second response one or more of the manufacturer information, the neural network information, the attribute information, and the identification of the neural network.
- for example, the access point sends a second request to all or part of the non-AP stations in the BSS to which it belongs, and each non-AP station that receives the second request sends a second response to the access point that includes the information of its trained neural network and the manufacturer corresponding to that neural network, for example the manufacturer of the second station.
- the access point sends a second request to the second site to request information about the neural network associated with vendor #A.
- the second request includes identification information of the BSS to which the access point belongs. That is, the second request may be used to request information about the neural network generated in the BSS to which the access point belongs.
- the second station may send the information of the neural network generated by the second station in the BSS to which the access point belongs to the access point according to the second request.
- the identification information of the BSS may be the BSS ID.
- the neural network information in the second response can be understood as the neural network information associated with the identification information of the BSS.
- S420 The second station sends a second response to the access point, and accordingly, the access point receives the second response.
- the second response includes the information of the second neural network, and the information of the second neural network is associated with the manufacturer information.
- one or more second stations may respond to the access point's request and send a second response to the access point, where the second response includes neural network information.
- one or more second sites can also actively send neural network information to the access point, and one or more second sites can also send neural network information to the access point based on a certain time or predetermined rules, etc.
- the second station actively sends the information of the second neural network to the access point.
- the information of the second neural network includes the parameters of the second neural network.
- when the second station sends the information of the second neural network to the access point, it may also include manufacturer information, and the manufacturer information is associated with the second neural network.
- the second vendor may be a vendor to which the second site belongs, or the second vendor may be a vendor supported by the second site.
- the manufacturer information in the second response includes the manufacturer's identification information.
- the manufacturer information may include one or more manufacturer information.
- the information of the second neural network sent by the second station to the access point can be used as the information of the first neural network sent by the access point to the first station in the first response of S320.
- that is, the information of the second neural network can be the same as the information of the first neural network, and the second manufacturer information can be the same as the first manufacturer information.
- the second response also includes attribute information corresponding to the second neural network, such as one or more of generation time, accuracy, model size, etc.
- the second response also includes identification information of the second neural network, which is used to identify a specific neural network or identify the type of neural network.
- the information of the neural network in the second response may be the information of one or more neural networks.
- the second response includes the information of the plurality of neural networks and the identification information of the BSS to which the second site belongs when generating the information of the plurality of neural networks.
- the access point can save all the neural network information of the second response, or choose to save the neural network information related to the BSS to which it belongs.
- the second response includes attribute information of one or more neural networks, for example, attribute information such as generation time, accuracy, model size, etc. of one or more neural networks.
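- Mirroring the request sketch above, a minimal and purely assumed shape for the second response could bundle one or more neural networks with their manufacturer, identification, attribute information and the BSS in which each was generated:

```python
# Illustrative sketch of the second response; field names are assumptions, not a specified element.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NeuralNetworkEntry:
    vendor: str                        # manufacturer associated with this neural network
    model_id: str                      # identification information of the neural network
    parameters: dict                   # parameters and/or structure of the neural network
    bss_id: Optional[str] = None       # BSS in which these parameters were generated
    generation_time: Optional[float] = None
    accuracy: Optional[float] = None
    model_size: Optional[int] = None

@dataclass
class SecondResponse:
    entries: List[NeuralNetworkEntry] = field(default_factory=list)
```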
- the access point can obtain the information of the neural network during information interaction with the second site; since the access point and the second site can exchange information in real time, this approach offers greater flexibility.
- the method 400 further includes: the access point storing the correspondence between the information of the second neural network and the manufacturer information.
- when acquiring the information of the second neural network, the access point can acquire the manufacturer information associated with that information, and the access point can store the correspondence between the neural network and the manufacturer information.
- the corresponding relationship between the neural network and the manufacturer information can also be called the neural network-manufacturer table.
- the access point stores the corresponding relationship between the neural network and the manufacturer information, that is, the access point maintains the neural network-manufacturer table.
- the neural network-manufacturer table may also include identification information, attribute information, etc. of the above-mentioned neural network.
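- One plausible in-memory form of such a neural network-manufacturer table (purely illustrative; the key and field names are assumptions) is a mapping from (manufacturer, model identifier) to the stored parameter entries and their attribute information:

```python
# Hedged sketch of the neural network-manufacturer table maintained by the access point.
from collections import defaultdict

nn_vendor_table = defaultdict(list)   # (vendor, model_id) -> list of stored parameter entries

def store_entry(vendor: str, model_id: str, parameters: dict,
                generation_time: float, accuracy: float) -> None:
    """Record the correspondence between a neural network's parameters and its manufacturer."""
    nn_vendor_table[(vendor, model_id)].append({
        "parameters": parameters,
        "generation_time": generation_time,
        "accuracy": accuracy,
    })
```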
- the information of the neural network may be the parameters and/or structure of the neural network, or other forms of related information of the neural network that can be obtained.
- the access point can obtain the information of the second neural network from the second site.
- the information of the second neural network is associated with the manufacturer information, so the access point can maintain the information of the neural network together with the manufacturer information, thereby providing support for other non-AP sites or APs to obtain more suitable neural network information, that is, providing support for better communication decisions at those sites.
- Figure 9 is a schematic flow chart of a communication method 500 in a wireless local area network provided by an embodiment of the present application.
- Method 500 may be a specific implementation based on the above-mentioned method 200, method 300, and method 400.
- the contents of the above-mentioned embodiments are applicable to method 500 and will not be described again here.
- the information of the neural network is explained by taking the parameters of the neural network as an example.
- S501 STA#1 (an example of the second site) and STA#2 (another example of the second site) train neural network model #1 (an example of the second neural network).
- STA#1 trains neural network model #1 in real time and obtains parameter #0 and parameter #1 of neural network model #1 successively.
- parameter #0 is trained in BSS#0
- parameter #1 is trained in BSS#1
- BSS#0 is the BSS to which STA#1 belongs before moving to BSS#1.
- STA#2 trains neural network model #1 in real time and obtains parameter #2 of neural network model #1.
- parameter #2 is trained in BSS#1.
- Parameter #0, parameter #1, and parameter #2 are examples of parameters of the neural network respectively.
- the value of parameter #0 is weights#0
- the value of parameter #1 is weights#1
- the value of parameter #2 is weights#2.
- the equipment manufacturer of STA#1 and STA#2 is manufacturer #A (an example of a manufacturer).
- AP#1 (an example of the access point in method 300 and also an example of the access point in method 400) sends request #1 (an example of the second request) to STA#1 and STA#2 to request parameters of the neural network model.
- AP#1 belongs to BSS#1.
- when AP#1 determines that it does not have the model parameters of vendor #A (an example of a trigger condition for sending a request), AP#1 sends request #1 to the associated STAs belonging to vendor #A within its BSS#1.
- AP#1 sends request #1 to STA#1 and STA#2 respectively.
- request #1 includes information about the manufacturer, that is, manufacturer #A, indicating that information about the neural network associated with manufacturer #A is requested.
- request #1 includes identification information of the BSS where AP#1 is located, that is, BSS#1.
- STA#1 sends response #1 (an example of the second response) to AP#1.
- response #1 includes the information of neural network model #1 generated in BSS#1, that is, parameter #1 of neural network model #1.
- Response #1 also includes the information of the manufacturer to which STA#1 belongs, that is, manufacturer #A.
- response #1 may also include an identification of neural network model #1.
- the identification of neural network model #1 is model #1.
- response #1 may also include the generation time of parameter #1 (an example of attribute information).
- parameter #1 is generated at time #1.
- response #1 may also include the accuracy of parameter #1 (yet another example of attribute information).
- the precision of parameter #1 is value #1.
- STA#2 sends response #2 (another example of the second response) to AP#1.
- response #2 includes the information of neural network model #1 generated in BSS#1, that is, parameter #2 of neural network model #1.
- Response #2 also includes the information of the manufacturer to which STA#2 belongs, that is, manufacturer #A.
- response #2 may also include the identification of neural network model #1.
- the identification of neural network model #1 is model #1.
- response #2 may also include the generation time of parameter #2.
- parameter #2 is generated at time #2.
- response #2 may also include the accuracy of parameter #2.
- the precision of parameter #2 is value #2.
- AP#1 stores the correspondence between the manufacturer information and the parameters of the neural network model #1.
- Table 1 includes the corresponding attribute information of the neural network parameters, such as generation time and accuracy.
- the same model identifier in Table 1 can correspond to the parameters of multiple neural networks.
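- The content of Table 1 is not reproduced here; based on the running example it plausibly corresponds to entries like the following, written as a Python mapping. The values are only the placeholders used in the example (weights#1, time #1, value #1, and so on), not real data.

```python
# Plausible, purely illustrative reconstruction of Table 1 from the surrounding description.
table_1 = {
    ("manufacturer#A", "model#1"): [
        {"parameters": "weights#1", "generation_time": "time#1", "accuracy": "value#1"},
        {"parameters": "weights#2", "generation_time": "time#2", "accuracy": "value#2"},
    ],
    # Entries for model #2, previously stored by AP#1, could appear under further keys.
}
```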
- S506 STA#3 (example of the first site) sends request #2 (example of the first request) to AP#1 to request parameters of the neural network model #1.
- request #2 may include the identification of model #1.
- request #2 may include information about manufacturer #A.
- the device manufacturer of STA #3 is manufacturer #A (example of manufacturer), so request #2 includes information of manufacturer #A (example of manufacturer information).
- in another example, the equipment manufacturer of STA#3 is manufacturer #B but STA#3 supports the neural network structure of manufacturer #A; in this case, request #2 includes the information of manufacturer #A and manufacturer #B.
- request #2 includes time information (an example of the first preset condition) for indicating the time requirement of STA #3 on the parameters of neural network model #1.
- request #2 includes accuracy information (yet another example of the first preset condition) for indicating STA #3's accuracy requirements for the parameters of neural network model #1.
- AP#1 determines parameters (example of parameters of the first neural network).
- AP#1 searches for the corresponding parameters in Table 1 based on the information of vendor #A and the identification of model #1 included in request #2. For example, AP#1 finds parameter #1 and parameter #2 based on vendor #A and model #1.
- AP#1 selects parameter #1, which meets the time requirement, from parameter #1 and parameter #2 based on the time information in request #2. At this time, weights#1 is the parameter selected by AP#1.
- alternatively, AP#1 selects parameter #2, which meets the accuracy requirement, from parameter #1 and parameter #2 based on the accuracy information in request #2. At this time, weights#2 is the parameter selected by AP#1.
- AP#1 sends response #3 (an example of the first response) to STA#3.
- response #3 includes parameters selected by AP#1.
- in another implementation, request #1 in S502 of method 500 does not include BSS#1; in that case, response #1 includes BSS#0 with parameter #0 as well as BSS#1 with parameter #1.
- Response #2 includes BSS#1 with parameter #2.
- the AP chooses to store parameter #1, parameter #2 and their attribute information according to BSS#1 to which it belongs, while discarding parameter #0 generated in BSS#0.
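- That BSS-based storage decision can be sketched as follows; the dictionary keys and the function name are assumptions used only for illustration.

```python
# Illustrative sketch: keep only parameters generated in the BSS to which the access point belongs.
def filter_by_bss(received_entries: list, own_bss_id: str) -> list:
    """Each entry is assumed to carry a 'bss_id' key; entries from other BSSs are discarded."""
    return [e for e in received_entries if e.get("bss_id") == own_bss_id]

kept = filter_by_bss(
    [{"bss_id": "BSS#0", "parameters": "weights#0"},
     {"bss_id": "BSS#1", "parameters": "weights#1"},
     {"bss_id": "BSS#1", "parameters": "weights#2"}],
    own_bss_id="BSS#1",
)  # parameter #1 and parameter #2 are stored; parameter #0 from BSS#0 is dropped
```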
- the parameters determined by AP#1 include multiple parameters.
- AP#1 can send multiple parameters to STA#3, and STA#3 chooses which one to use for subsequent decisions. Or, AP#1 selects one of them and sends it to STA#3. For example, AP#1 can select one of the parameters in the stored order, or AP#1 can also select a parameter randomly.
- the correspondence between the manufacturer information stored in AP#1 and the parameters of the neural network model #1 can be as shown in Table 2.
- the attribute information corresponding to the parameters of the neural network in Table 2 does not include time and accuracy information. Based on this, the above-mentioned requests and responses of the neural network may not include the generation time and accuracy.
- AP#1 can select the parameter as weights#1 according to the stored order, or AP#1 can randomly select one from parameter #1 and parameter #2.
- the correspondence between the manufacturer information stored in AP#1 and the parameters of the neural network model #1 can be as shown in Table 3.
- AP#1 can only store the parameters of one neural network.
- the stored parameters of a neural network may be the latest received parameters of the neural network, or the parameters of the most accurate neural network, etc. This is not limited in the embodiment of the present invention.
- model #2 in Table 1, Table 2, and Table 3 may be information of other neural networks that AP#1 has previously stored.
- the correspondence between the manufacturer information stored in AP#1 and the parameters of the neural network model #1 can be as shown in Table 4.
- Table 4 shows examples of neural network information from multiple manufacturers.
- BSS#1 may also include other non-AP stations, such as STA#4 and STA#5, and STA#4 and STA#5 are associated with AP#1.
- AP#1 may send request #1 to part or all of all STAs associated with BSS#1.
- AP#1 will also receive information from STA#4 and STA#5, including parameters of the neural network models trained by STA#4 and STA#5.
- the equipment manufacturer of STA#4 is manufacturer #B in Table 4
- the equipment manufacturer of STA#5 is manufacturer #C in Table 4.
- Vendor #B and Vendor #C also support Model #1.
- STA#4 trains neural network model #1 and obtains parameter #3 of neural network model #1; the value of parameter #3 is weights#3, the generation time of parameter #3 is time #3, and the accuracy of parameter #3 is value #3.
- STA#5 trains neural network model #1 and obtains parameter #4 of neural network model #1. The value of parameter #4 is weights#4, the generation time of parameter #4 is time #4, and the accuracy of parameter #4 is value #4.
- weights#1, weights#2, weights#3, and weights#4 in this application all represent specific values of the parameters of the neural network; they can be the specific values of the weights of the neural network, or the specific values of both the weights and the biases of the neural network.
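- As a purely illustrative example of what such concrete parameter values could look like for a tiny fully connected layer (the layer names and numbers are invented for the sketch):

```python
# Hedged sketch: "weights" stands for concrete numeric values of weights (and possibly biases).
weights_example = {
    "layer1/weights": [[0.12, -0.34], [0.56, 0.78]],  # 2x2 weight matrix
    "layer1/biases": [0.01, -0.02],
}
```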
- Tables 1 to 4 are some examples of the correspondence between the manufacturer information stored by AP#1 and the parameters of neural network model #1; the correspondence is not limited to the contents of Tables 1 to 4 and can also take other forms, which is not limited in the embodiments of this application.
- FIG 10 is a schematic diagram of a communication device provided by an embodiment of the present application.
- the device 600 may include a transceiver unit 610 and/or a processing unit 620.
- the transceiver unit 610 can communicate with the outside, and the processing unit 620 is used to process data/information.
- the transceiver unit 610 may also be called a communication interface or a communication unit.
- the device 600 may be the requesting site in the above method 200, the first site in method 300, or the access point in method 400, or may be used to implement the functions of the requesting site in the above methods.
- the device 600 can implement a process corresponding to the process performed by the requesting site in the above method 200, method 300 or method 400, wherein the transceiving unit 610 is used to perform operations related to the sending and receiving of the requesting site in the above method process.
- the device 600 further includes a processing unit 620, which is configured to perform operations related to processing of the requesting site in the above method flow.
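- At the level of this description, the split between the two units can be sketched as follows; the class and method names are assumptions, and the sketch is not a definitive implementation of device 600.

```python
# Illustrative sketch of the functional split in device 600: transceiving versus processing.
class TransceiverUnit:
    """Unit 610: performs the sending and receiving operations."""
    def send(self, frame: dict) -> None: ...
    def receive(self) -> dict: ...

class ProcessingUnit:
    """Unit 620: performs the processing-related operations."""
    def select_neural_network(self, candidates: list, condition: dict) -> dict:
        # Choose one neural network's information according to a preset condition (placeholder policy).
        return candidates[0] if candidates else {}

class Device600:
    def __init__(self) -> None:
        self.transceiver_unit = TransceiverUnit()
        self.processing_unit = ProcessingUnit()
```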
- the transceiver unit 610 is used to send a request, where the request is used to request the information of the neural network; the transceiver unit 610 is also used to receive a response from the responding site, where the response includes the information of the neural network, and the information of the neural network is associated with the manufacturer information.
- the information of the neural network may include parameters of the neural network and/or the structure of the neural network.
- the manufacturer information may include multiple manufacturer information.
- the manufacturer information is the information of the manufacturer corresponding to the equipment manufacturer.
- the request may include vendor information or identification information of the neural network.
- the request may also include the identification information of the BSS, and the neural network information in the response is associated with the identification information of the BSS.
- the request may also include a preset condition of the requested neural network, and the request is used to request information about a neural network that satisfies the preset condition.
- the preset condition includes at least one of the following: generation time of the neural network, accuracy of the neural network, and model size of the neural network.
- the response may include vendor information or identification information of the neural network.
- the response may also include at least one of the following: generation time of the neural network, accuracy of the neural network, and model size of the neural network.
- the response may also include information from multiple neural networks.
- the response may also include attribute information of multiple neural networks.
- the attribute information includes generation times of multiple neural networks, or accuracies of multiple neural networks, or model sizes of multiple neural networks.
- the processing unit 620 may be configured to: select information of one neural network from information of multiple neural networks according to the attribute information.
- the trigger conditions for the transceiver unit 610 to send a request include: the neural network information stored by the device 600 has not been updated for more than a preset time; or the accuracy of the neural network stored by the device 600 is less than a threshold; or the device 600 does not store the information of a specific neural network, or does not store any neural network information; or the neural network information related to the manufacturer information stored by the device 600 has not been updated for more than a preset time; or the device 600 does not store neural network information related to the manufacturer information.
- the triggering conditions for the requesting station to send the request further include: the device 600 wakes up after sleeping; or the network environment of the wireless local area network where the device 600 is located changes.
- the above content is only taken as an example, and the device 600 can also implement other steps, actions or methods related to requesting the site in the above method 200, 300 or 400, which will not be described again here.
- the device 600 may also be the responding site in the above method 200, the access point in method 300, or the second site in method 400, or may be used to implement the functions of the responding site in the above methods.
- the device 600 can implement a process corresponding to the process performed by the responding site in the above method 200, method 300 or method 400.
- the device 600 further includes a processing unit 620, which is configured to perform operations related to processing of the response site in the above method flow.
- the transceiver unit 610 is configured to receive a request from the requesting site, where the request is used to request information about the neural network; the transceiving unit 610 is also configured to send a response to the requesting site according to the request, where the response includes the information about the neural network.
- the information of the neural network is associated with the manufacturer's information.
- the information of the neural network may include parameters of the neural network and/or the structure of the neural network.
- the manufacturer information may include one or more manufacturer information.
- the manufacturer information is the information of the manufacturer corresponding to the equipment manufacturer.
- the request may include vendor information or identification information of the neural network.
- the request may also include the identification information of the BSS, and the neural network information in the response is associated with the identification information of the BSS.
- the request may also include a preset condition of the requested neural network, and the request is used to request information of a neural network that satisfies the preset condition.
- the preset condition includes at least one of the following: generation time of the neural network, accuracy of the neural network, and model size of the neural network.
- the processing unit 620 may be used to select information of one neural network from information of multiple neural networks according to preset conditions.
- the response may include vendor information or identification information of the neural network.
- the response may also include at least one of the following: generation time of the neural network, accuracy of the neural network, and model size of the neural network.
- the response may also include information from multiple neural networks.
- the response may also include attribute information of multiple neural networks.
- the attribute information includes the generation time of multiple neural networks, or the accuracy of multiple neural networks, or the model sizes of multiple neural networks.
- the trigger conditions for the transceiver unit 610 to send a request include: the neural network information stored by the device 600 has not been updated for more than a preset time; or the accuracy of the neural network stored by the device 600 is less than a threshold; or the device 600 does not store the information of a specific neural network, or does not store any neural network information; or the neural network information related to the manufacturer information stored by the device 600 has not been updated for more than a preset time; or the device 600 does not store neural network information related to the manufacturer information.
- the triggering conditions for the transceiver unit 610 to send a request include: the device 600 wakes up after sleeping; or the network environment of the wireless local area network of the device 600 changes.
- the device 600 can also implement other steps, actions or methods related to responding to the site in the above method 200, 300 or 400, which will not be described again here.
- the device 600 here is embodied in the form of a functional unit.
- the term "unit" as used herein may refer to an application specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (such as a shared processor, a dedicated processor, or a group of processors) and a memory, merged logic circuitry, and/or other suitable components that support the described functionality.
- the above device 600 has the function of realizing the corresponding steps performed by the requesting site in the above method 200, 300 or 400, or the above device 600 has the function of realizing the corresponding steps performed by the responding site in the above method 200, 300 or 400.
- the functions described can be implemented by hardware, or can be implemented by hardware executing corresponding software.
- the hardware or software includes one or more modules corresponding to the above functions; for example, the transceiver unit can be replaced by a transceiver (for example, the sending unit in the transceiver unit can be replaced by a transmitter, and the receiving unit in the transceiver unit can be replaced by a receiver), and other units, such as the processing unit, can be replaced by a processor to respectively perform the sending and receiving operations and the related processing operations in each method embodiment.
- the above-mentioned transceiver unit may also be a transceiver circuit (for example, it may include a receiving circuit and a transmitting circuit), and the processing unit may be a processing circuit.
- the device in Figure 10 can be the request site or the response site in the previous embodiment, or it can be a chip or a chip system, such as a system on chip (SoC).
- the transceiver unit may be an input-output circuit or a communication interface.
- the processing unit is a processor or microprocessor or integrated circuit integrated on the chip. No limitation is made here.
- Figure 11 is another schematic structural diagram of a communication device provided by an embodiment of the present application.
- the communication device 700 includes: at least one processor 710 and a transceiver 720 .
- the processor 710 is coupled to the memory and is used to execute instructions stored in the memory to control the transceiver 720 to send signals and/or receive signals.
- the communication device 700 further includes a memory 730 for storing instructions.
- processor 710 and the memory 730 can be combined into one processing device, and the processor 710 is used to execute the program code stored in the memory 730 to implement the above functions.
- the memory 730 may also be integrated in the processor 710 or independent of the processor 710 .
- the transceiver 720 may include a receiver and a transmitter.
- the transceiver 720 may further include an antenna, and the number of antennas may be one or more.
- the transceiver 720 may be a communication interface or an interface circuit.
- when the communication device 700 is a chip, the chip includes a transceiver unit and a processing unit.
- the transceiver unit may be an input-output circuit or a communication interface;
- the processing unit may be a processor, microprocessor, or integrated circuit integrated on the chip.
- An embodiment of the present application also provides a processing device, including a processor and an interface.
- the processor may be used to execute the method in the above method embodiment.
- the above processing device may be a chip.
- the processing device may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or a system on chip (SoC); it may also be a central processing unit (CPU), a network processor (NP), a digital signal processing circuit (DSP), or a microcontroller unit (MCU); it may also be a programmable logic device (PLD) or another integrated chip.
- each step of the above method can be completed by instructions in the form of hardware integrated logic circuits or software in the processor.
- the steps of the methods disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware processor for execution, or can be executed by a combination of hardware and software modules in the processor.
- the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
- the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
- Figure 12 is another schematic structural diagram of a communication device provided by an embodiment of the present application.
- the device 800 includes a processing circuit 810 and a transceiver circuit 820.
- the processing circuit 810 and the transceiver circuit 820 communicate with each other through internal connection paths.
- the processing circuit 810 is used to execute instructions to control the transceiver circuit 820 to send signals and/or receive signals.
- the device 800 may also include a storage medium 830, which communicates with the processing circuit 810 and the transceiver circuit 820 through internal connection paths.
- the storage medium 830 is used to store instructions, and the processing circuit 810 can execute the instructions stored in the storage medium 830 .
- the device 800 is configured to implement the process corresponding to the requesting site in the above method embodiment.
- the device 800 is configured to implement the process corresponding to the response site in the above method embodiment.
- the present application also provides a computer program product.
- the computer program product includes: computer program code.
- when the computer program code is run on a computer, it causes the computer to execute the method in the embodiment shown in Figure 3.
- the present application also provides a computer-readable medium.
- the computer-readable medium stores program code.
- when the program code is run on a computer, it causes the computer to execute the method in the above method embodiments.
- this application also provides a system, which includes the aforementioned request site and response site.
- "at least one of" as used herein means all or any combination of the listed items; for example, "at least one of A, B and C" can mean: A exists alone, B exists alone, C exists alone, A and B exist simultaneously, B and C exist simultaneously, or A, B and C exist simultaneously. "At least one" in this document means one or more, and "multiple" means two or more.
- B corresponding to A means that B is associated with A, and B can be determined based on A.
- determining B based on A does not mean determining B only based on A.
- B can also be determined based on A and/or other information.
- the terms “including,” “includes,” “having,” and variations thereof all mean “including but not limited to,” unless otherwise specifically emphasized.
- the disclosed systems, devices and methods can be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical function division. In actual implementation, there may be other division methods.
- multiple units or components may be combined or can be integrated into another system, or some features can be ignored, or not implemented.
- the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
- if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
- the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
Abstract
The present application relates to the field of communications, and in particular to a communication method and apparatus in a WLAN. The solution can be applied to WLAN systems supporting next-generation Wi-Fi protocols of the IEEE 802.11ax standard, such as the 802.11be series, Wi-Fi 7 or EHT, other protocols of the 802.11 and Wi-Fi series of standards, and the next generation of the 802.11be standard; the solution can also be applied to UWB-based wireless personal area network systems, such as the 802.15 series of standards, or to sensing systems such as the 802.11bf series of standards. The method comprises the following steps: a requesting site requests information of a neural network from a responding site by means of a request, so that the responding site can send the requested information of the neural network to the requesting site according to the request, the information of the neural network being associated with manufacturer information. In this way, a site can obtain suitable information of a neural network for implementing a communication decision, and the communication performance of the site is ensured.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210885655.6A CN117499981A (zh) | 2022-07-26 | 2022-07-26 | 一种无线局域网中通信的方法和装置 |
CN202210885655.6 | 2022-07-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024022007A1 true WO2024022007A1 (fr) | 2024-02-01 |
Family
ID=89683501
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/104158 WO2024022007A1 (fr) | 2022-07-26 | 2023-06-29 | Procédé et appareil de communication dans un réseau local sans fil |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN117499981A (fr) |
TW (1) | TW202406402A (fr) |
WO (1) | WO2024022007A1 (fr) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101730107A (zh) * | 2010-01-29 | 2010-06-09 | 北京新岸线无线技术有限公司 | 一种无线局域网的接入方法及系统 |
EP3683733A1 (fr) * | 2019-01-10 | 2020-07-22 | Nokia Technologies Oy | Procédé, appareil et produit-programme d'ordinateur pour réseaux neuronaux |
US20220046385A1 (en) * | 2020-08-04 | 2022-02-10 | Qualcomm Incorporated | Selective triggering of neural network functions for positioning measurement feature processing at a user equipment |
CN114492784A (zh) * | 2020-10-27 | 2022-05-13 | 华为技术有限公司 | 神经网络的测试方法和装置 |
US20220182263A1 (en) * | 2020-12-03 | 2022-06-09 | Qualcomm Incorporated | Model discovery and selection for cooperative machine learning in cellular networks |
- 2022-07-26: CN application CN202210885655.6A (published as CN117499981A), status: active, pending
- 2023-06-29: PCT application PCT/CN2023/104158 (published as WO2024022007A1)
- 2023-07-26: TW application TW112127931A (published as TW202406402A)
Also Published As
Publication number | Publication date |
---|---|
TW202406402A (zh) | 2024-02-01 |
CN117499981A (zh) | 2024-02-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23845220 Country of ref document: EP Kind code of ref document: A1 |