WO2024012331A1 - Method and apparatus for determining an artificial intelligence (AI) model - Google Patents


Info

Publication number
WO2024012331A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal device
model
scene
identifier
network
Application number
PCT/CN2023/105923
Other languages
English (en)
Chinese (zh)
Inventor
秦城
李�远
王四海
杨锐
Original Assignee
华为技术有限公司
Priority claimed from CN202210970366.6A external-priority patent/CN117459409A/zh
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024012331A1


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 — Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16 — Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using machine learning or artificial intelligence

Definitions

  • the present application relates to the field of communications, and in particular, to a method and device for determining an artificial intelligence (AI) model.
  • whether an AI model can adapt to different scenarios is an important indicator of the model's performance.
  • to obtain a generalized AI model, a large amount of data from different scenarios is needed to train the AI model; the resulting AI model is therefore also more complex.
  • because the trained AI model considers as many scenarios as possible, the performance of the AI model may not achieve good results for each scenario.
  • if scenarios are distinguished from the perspective of data sets, it is necessary to know what the corresponding data sets are in advance before selecting the corresponding model; in practice, it is impossible to know the data set information used during training.
  • this application provides a method and device for determining an artificial intelligence (AI) model: a scene identifier corresponding to a scene is constructed for that scene, and the AI model corresponding to the scene identifier is then obtained.
  • this application can adopt the following technical solutions:
  • the first aspect provides a method for determining an artificial intelligence (AI) model.
  • the method includes: the terminal device obtains a first identifier, where the first identifier is used to indicate a first scene; the terminal device obtains a first AI model, where the first AI model corresponds to the first scene. Based on the method provided in the first aspect, the first identifier corresponds to the first scene, and the first scene corresponds to the first AI model, so the terminal device can accurately and quickly obtain the first AI model corresponding to the first identifier.
  • the terminal device obtaining the first identifier includes: the terminal device receives the first identifier from the network device; or the terminal device determines the first identifier according to the first scene.
  • the terminal device may receive the first identification from the network device.
  • the terminal device may also determine the first identifier based on its own scene information (first scene).
  • the terminal device receiving the first identifier from the network device includes: the terminal device receiving a first preset message from the network device, and the first preset message carries the first identifier.
  • the first preset message includes: a system information block (SIB) and/or dedicated signaling between the network device and the terminal device.
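The two ways of obtaining the first identifier described above — receiving it from the network device, or deriving it from the terminal's own scene information — can be sketched roughly as follows. This is an illustrative sketch only; the function name, the fallback order, the derivation rule, and the string format of the identifier are assumptions, not part of the application.

```python
def obtain_first_identifier(network_identifier=None, scene_info=None):
    """Prefer the identifier signalled by the network device; otherwise
    derive one from the terminal device's own scene information."""
    if network_identifier is not None:
        return network_identifier  # received from the network device (e.g. SIB)
    if scene_info is not None:
        # hypothetical derivation: fold observable scene features into a stable name
        return "scene-" + "-".join(sorted(scene_info))
    raise ValueError("no first identifier available")

print(obtain_first_identifier(network_identifier="scene-urban-macro"))  # scene-urban-macro
print(obtain_first_identifier(scene_info={"dense", "indoor"}))          # scene-dense-indoor
```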
  • the terminal device obtaining the first AI model includes: the terminal device obtains the first AI model according to the first identifier and training data; or the terminal device receives the first AI model from the network device; or the terminal device obtains the first AI model from locally stored AI models.
  • when the terminal device acquires the first AI model according to the first identifier and training data, the training data is a training data set.
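The three acquisition paths listed above — a locally stored model, a model received from the network device, or a model trained from identifier-selected data — could be sketched as below. All names and the order in which the paths are tried are illustrative assumptions, not part of the application.

```python
def get_first_ai_model(identifier, local_models=None, network_model=None,
                       train_fn=None, training_data=None):
    """Try the three paths in turn: locally stored model, model received
    from the network device, or training on identifier-selected data."""
    if local_models and identifier in local_models:
        return local_models[identifier]            # locally stored AI model
    if network_model is not None:
        return network_model                       # received from the network device
    if train_fn is not None and training_data is not None:
        return train_fn(training_data)             # train from scene-specific data
    raise LookupError(f"no AI model available for {identifier}")

print(get_first_ai_model("scene-1", local_models={"scene-1": "model-A2"}))  # model-A2
```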
  • the terminal device receiving the first identifier from the network device further includes: the terminal device receiving a neighbor cell measurement configuration message from the network device, where the neighbor cell measurement configuration message carries the first identifier.
  • the terminal device receiving the first identifier from the network device further includes: the terminal device receives a handover command from the network device; the handover command carries the first identifier, or the handover command indicates whether the scene information of the first network device is the same as the scene information of the second network device after the handover.
  • the terminal device receiving the first AI model from the network device further includes: the terminal device receives the first AI model from a server, where the server can communicate with the terminal device.
  • the terminal device obtaining the first AI model according to the first identifier and training data includes: the terminal device determines the training data of the first AI model according to the first identifier, where the training data corresponds to the first scene; the terminal device trains with the training data to obtain the first AI model.
  • the terminal device obtaining the first AI model according to the first identifier and training data further includes: the server and/or the network device trains according to the first identifier and the training data to obtain the first AI model.
  • the method further includes: the terminal device generates a correspondence between the first identifier and the first AI model. In this way, the terminal device can obtain the corresponding AI model according to the first identifier.
  • the same first identifier corresponds to the same first AI model.
  • there are multiple first identifiers; different first identifiers may correspond to the same network configuration parameters, and the network configuration parameters include at least antenna port configuration parameters and beam configuration parameters.
  • the terminal device receives multiple first identifiers from the network device; the terminal device determines one first identifier based on the correspondence between the first identifiers and the network configuration parameters.
  • the first message includes medium access control control element (MAC-CE) signaling.
  • the first identifier includes: at least one of a scene type identifier, a first PMI identifier, and a scene mark identifier.
  • the granularity of the scene type identifier is greater than that of the first PMI identifier.
  • the first identifier indicates a first network information set.
  • the first network information set includes at least one of: a cell identifier of the first cell, a public land mobile network identifier (PLMN), a tracking area code (TAC), an access network area identifier (RAN ID), a cell frequency point, and a cell band; the first cell is a neighboring cell of the second cell, and the second cell is the cell where the terminal equipment is located.
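The first network information set enumerated above can be pictured as a simple record type. The container below is purely illustrative — the class and field names are assumptions, not from the application; only the listed fields come from the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NetworkInfoSet:
    """Illustrative container for the first network information set;
    any field may be absent, since the set includes at least one of them."""
    cell_id: Optional[str] = None      # cell identifier of the first cell
    plmn: Optional[str] = None         # public land mobile network identifier
    tac: Optional[str] = None          # tracking area code
    ran_id: Optional[str] = None       # access network area identifier
    frequency: Optional[float] = None  # cell frequency point
    band: Optional[str] = None         # cell band

info = NetworkInfoSet(cell_id="cell-1", plmn="46000", tac="0x2A")
print(info)
```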
  • when the terminal device does not receive the first identifier, the method further includes: the terminal device sends a first message to the network device, where the first message is used to instruct the network device to send the first identifier.
  • the method further includes: the terminal device sends a first message to the network device, where the first message is used to instruct the network device to send a second identifier, and the second identifier is different from the first identifier.
  • after the terminal device receives the first identifier, the terminal device obtains the first AI model corresponding to the first identifier.
  • the first message carries at least one of: one or more first identifiers supported by the terminal device, scene information of the first scene of the terminal device, and request information with which the terminal device requests the network device to send the first identifier.
  • the first identifier corresponds to the current scene (first scene) of the terminal device.
  • the current scene can be expressed as multiple scene identifiers, and the first identifier is the collective name of these scene identifiers.
  • the method further includes: the terminal device receives configuration information of a first random access resource and/or a second random access resource from the network device. In the case where the terminal device obtains the first AI model corresponding to the first identifier, the terminal device initiates random access through the first random access resource according to the configuration information of the first random access resource; in the case where the terminal device does not obtain the first AI model corresponding to the first identifier, the terminal device initiates random access through the second random access resource according to the configuration information of the second random access resource. In this way, the terminal device can quickly notify the network device whether the terminal device supports the first identifier sent by the network device.
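The resource selection described above — where the choice of random access resource itself signals whether the terminal obtained the model — can be reduced to a one-branch sketch. The function name and the shape of the resource configurations are assumptions for illustration only.

```python
def select_random_access_resource(obtained_first_model, first_resource, second_resource):
    """The resource chosen for random access implicitly tells the network
    device whether the terminal obtained the AI model for the first identifier."""
    return first_resource if obtained_first_model else second_resource

# hypothetical resource configurations received from the network device
first_cfg = {"preamble_index": 4, "occasion": 0}    # "model obtained" resource
second_cfg = {"preamble_index": 12, "occasion": 1}  # "model not obtained" resource
print(select_random_access_resource(True, first_cfg, second_cfg))
```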
  • the second aspect provides a method for determining an artificial intelligence (AI) model.
  • the method includes: a network device determines a first identifier; the network device sends the first identifier to a terminal device, where the first identifier is used to indicate a first scene. Based on the method provided in the second aspect, the network device determines the first identifier corresponding to the first scene and sends it to the terminal device, indicating the first identifier corresponding to the current scene (first scene) of the terminal device; the terminal device can then obtain the first AI model corresponding to the first identifier.
  • the method further includes: the network device receives a first message from the terminal device; and the network device sends the first identifier to the terminal device according to the first message.
  • the method further includes: the network device receives a first message from the terminal device; and the network device sends a second identifier to the terminal device according to the first message.
  • the method further includes: the first network device sending one or more first identifiers supported by the terminal device to the second network device; the first network device is a network device that the terminal device accessed before the handover; The second network device is the network device to which the terminal device is connected after the handover.
  • a third aspect provides a communication device, including: a processor, where the processor is coupled to a memory, and the memory is used to store a computer program; the processor is used to execute the computer program stored in the memory, so that the method of any one of the first aspect and the second aspect is executed.
  • a fourth aspect provides a chip system including a logic circuit and an input/output port.
  • the logic circuit is used to implement the processing functions involved in the first aspect and the second aspect
  • the input/output port is used to implement the transceiving functions involved in the first aspect and the second aspect.
  • the input port can be used to implement the receiving functions involved in the first aspect and the second aspect, and the output port can be used to implement the sending functions involved in the first aspect and the second aspect.
  • the chip system further includes a memory, which is used to store program instructions and data for implementing the functions involved in the first and second aspects.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • a computer-readable storage medium stores computer programs or instructions; when the computer program or instructions are run on the computer, the method of any one of the first aspect and the second aspect is executed.
  • a computer program product is provided, which includes a computer program or instructions; when the computer program or instructions are run on a computer, the method of any one of the first aspect and the second aspect is executed.
  • Figure 1 is a schematic architectural diagram of a communication system provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of the correspondence between a scenario and an AI model provided by an embodiment of the present application;
  • Figure 3 is a schematic diagram of the correspondence between AI models in a scenario provided by an embodiment of the present application;
  • Figure 4 is a schematic diagram of the correspondence between AI models in another scenario provided by an embodiment of the present application;
  • Figure 5 is a schematic diagram of the correspondence between another scenario and an AI model provided by an embodiment of the present application;
  • Figure 6A is a schematic diagram of the correspondence between another scenario and an AI model provided by an embodiment of the present application;
  • Figure 6B is a schematic diagram of a scene and a scene relationship provided by an embodiment of the present application;
  • Figure 7 is a schematic diagram of the correspondence between another scenario and an AI model provided by an embodiment of the present application;
  • Figure 8 is a schematic diagram of the correspondence between another scenario and an AI model provided by an embodiment of the present application;
  • Figure 9 is a schematic diagram of the correspondence between another scenario and an AI model provided by an embodiment of the present application;
  • Figure 10A is a schematic flowchart of a method for determining an AI model provided by an embodiment of the present application
  • Figure 10B is a schematic flowchart of another method for determining an AI model provided by an embodiment of the present application.
  • Figure 11 is a schematic flow chart of another method for determining an AI model provided by an embodiment of the present application.
  • Figure 12 is a schematic diagram of another scenario and the relationship between scenarios provided by the embodiment of the present application.
  • Figure 13 is a schematic flowchart of another method for determining an AI model provided by an embodiment of the present application.
  • Figure 14 is a schematic flowchart of another method for determining an AI model provided by an embodiment of the present application.
  • Figure 15 is a schematic flowchart of a method for determining an AI model provided by an embodiment of the present application.
  • Figure 16 is a schematic structural diagram of a communication device provided by an embodiment of the present application.
  • the embodiments of the present application are applicable to various communication systems, for example: a universal mobile telecommunications system (UMTS), a wireless local area network (WLAN), a wireless fidelity (Wi-Fi) system, a wired network, a vehicle-to-everything (V2X) communication system, a device-to-device (D2D) communication system, an Internet of Vehicles communication system, 4th generation (4G) mobile communication systems such as a long term evolution (LTE) system or a worldwide interoperability for microwave access (WiMAX) communication system, 5th generation (5G) mobile communication systems such as a new radio (NR) system, and future communication systems such as the 6th generation (6G) mobile communication system.
  • the communication system includes terminal equipment and network equipment.
  • the above-mentioned terminal device is a terminal that is connected to the above-mentioned communication system and has a wireless transceiver function, or a chip or chip system that can be installed on the terminal.
  • the terminal equipment may also be called user equipment (UE), user device, access terminal, user unit, user station, mobile station (MS), remote station, remote terminal, mobile device, user terminal, terminal, terminal unit, terminal station, terminal device, wireless communication device, user agent, or user device.
  • the terminal device in the embodiment of the present application may be a mobile phone, a wireless data card, a personal digital assistant (PDA), a laptop computer, a tablet computer (Pad), a drone, a computer with wireless transceiver functions, a machine type communication (MTC) terminal, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, an Internet of Things (IoT) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in smart city, a wireless terminal in smart home (such as game consoles, smart TVs, smart speakers, smart refrigerators, and fitness equipment), a vehicle-mounted terminal, or a road side unit (RSU) with terminal functions.
  • the access terminal may be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device (handset) with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, a wearable device, etc.
  • the terminal device in the embodiment of the present application may be an express delivery terminal in smart logistics (such as a device that can monitor the location of cargo vehicles or a device that can monitor the temperature and humidity of cargo), a wireless terminal in smart agriculture (such as a wearable device that can collect data related to poultry and livestock), a wireless terminal in smart buildings (such as smart elevators, fire monitoring equipment, and smart meters), a wireless terminal in smart medical care (such as a wearable device that can monitor the physiological status of people or animals), a wireless terminal in smart transportation (such as smart buses, smart vehicles, shared bicycles, charging pile monitoring equipment, smart traffic lights, smart monitoring, and smart parking equipment), or a wireless terminal in smart retail (such as vending machines, self-service checkout machines, and unmanned convenience stores).
  • the terminal device of this application may be a vehicle-mounted module, vehicle-mounted module group, vehicle-mounted component, vehicle-mounted chip, or vehicle-mounted unit built into the vehicle as one or more components or units; the vehicle can implement the method provided by this application through the built-in vehicle-mounted module, vehicle-mounted module group, vehicle-mounted component, vehicle-mounted chip, or vehicle-mounted unit.
  • the above network device may be one of an access network device and a core network element, or the network device may be an integrated device of one or more devices in a core network element and an access network device.
  • the above-mentioned access network device is a device located on the network side of the above-mentioned communication system and has a wireless transceiver function, or a chip or chip system that can be installed on the device.
  • the access network equipment includes but is not limited to: an access point (AP) in a wireless fidelity (Wi-Fi) system, such as a home gateway, router, server, switch, or bridge; an evolved Node B (eNB); a radio network controller (RNC); a Node B (NB); a base station controller (BSC); a base transceiver station (BTS); a home base station (for example, a home evolved NodeB or home Node B, HNB); a baseband unit (BBU); a wireless relay node; a wireless backhaul node; a transmission and reception point (TRP) or transmission point (TP); a gNB in a 5G system such as the new radio (NR) system; one antenna panel or a group of antenna panels (including multiple antenna panels) of a base station; or a network node that constitutes a gNB or a transmission point.
  • the above-mentioned core network elements may include but are not limited to one or more of the following: user plane network elements, authentication servers, mobility management network elements, session management network elements, unified data management network elements, policy control network elements, storage function network elements, application network elements, and network open network elements.
  • the user plane network element, as the interface with the data network, completes functions such as user plane data forwarding, session/flow-level charging statistics, and bandwidth limitation, that is, packet routing and forwarding and quality of service (QoS) processing of user plane data.
  • the user plane network element may be a user plane function (UPF) network element.
  • the authentication server is used to perform user security authentication.
  • the authentication server may be an authentication server function (AUSF) network element.
  • Mobility management network elements are mainly used for mobility management and access management.
  • the access management network element can be an access and mobility management function (AMF) network element, which mainly performs functions such as mobility management and access authentication/authorization.
  • the mobility management network element is also responsible for transmitting user policies between the terminal and the policy control function (PCF) network element.
  • the session management network element is mainly used for session management (such as creation and deletion), maintenance of session context and user plane forwarding pipeline information, internet protocol (IP) address allocation and management for user equipment, selection of a manageable user plane function termination point, policy control and charging function interfaces, and downlink data notification.
  • the session management network element can be a session management function (SMF) network element, which completes terminal IP address allocation, UPF selection, accounting and QoS policy control, etc.
  • the unified data management network element is responsible for the management of user identification, subscription data, and authentication data, and for user service network element registration management.
  • the unified data management network element may be a unified data management (unified data management, UDM) network element.
  • the policy control network element includes user subscription data management functions, policy control functions, charging policy control functions, quality of service (QoS) control, etc.; it is a unified policy framework used to guide network behavior and provides policy rule information to control plane functional network elements (such as AMF and SMF network elements).
  • the policy control network element may be the PCF.
  • the storage function network element provides storage and selection functions of network function entity information for other core network elements.
  • the storage function network element may be a network function repository function (NRF) network element.
  • application network elements can be used to provide various business services, can interact with the core network through network exposure function (NEF) network elements, and can interact with the policy management framework for policy management.
  • the application network element can be an application function (AF) network element, which represents the application function of a third party or an operator; it is the interface for the 5G network to obtain external application data, and is mainly used to transmit the requirements of the application side on the network side.
  • the network open network element can be used to provide frameworks, authentication, and interfaces related to network capability exposure, and to transfer information between 5G system network functions and other network functions.
  • the network open network element can be a network exposure function (NEF) network element, which is mainly used to expose the services and capabilities of 3GPP network functions to the AF, and also allows the AF to provide information to 3GPP network functions.
  • the communication system shown in Figure 1 may be applicable to the communication network currently being discussed, and may also be applicable to other networks in the future, etc. This is not specifically limited in the embodiments of the present application.
  • the method for determining the AI model provided by the embodiment of the present application can be applied between the terminal device and the network device shown in Figure 1.
  • the method for determining the AI model provided by the embodiment of the present application can also be applied to the terminal device or network device shown in Figure 1.
  • FIG. 1 is only a simplified schematic diagram for ease of understanding.
  • the communication system may also include other network devices and/or other terminal devices, which are not shown in FIG. 1 .
  • AI models have powerful learning capabilities. Therefore, AI models can be applied to more and more scenarios, such as channel state information (CSI) feedback scenarios, beam management scenarios, positioning scenarios, etc.
  • a generalized AI model can be designed for multiple scenarios and applied to all of them. Since the training of AI models depends on training data, improving the generalization of the AI model requires a large amount of training data from different scenarios. In addition, in order to adapt to various scenarios, generalized AI models are usually more complex in design. Moreover, because the AI model's feature extraction considers as many scenarios as possible, the performance of the AI model for each individual scenario may not necessarily be good.
  • different AI models can be designed for different scenarios, which will reduce the difficulty in collecting training data and designing AI models.
  • the AI model is only trained for a specific scenario, so the feature extraction of the AI model can be more adapted to the current scenario, so that the performance of the AI model for this specific scenario can be better.
  • the autoencoder architecture can be used for CSI feedback.
  • the autoencoder architecture generally includes an AI encoder and an AI decoder.
  • the AI encoder can be deployed on terminal equipment, and the AI decoder can be deployed on network equipment.
  • CSI feedback based on the AI model can reduce the feedback overhead of the air interface and the computational complexity of the terminal device under the same CSI feedback performance, and has greater application prospects.
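The encoder/decoder split described above can be pictured with a deliberately tiny stand-in: the terminal-side "AI encoder" compresses a CSI vector, only the compressed code crosses the air interface, and the network-side "AI decoder" reconstructs it. A real system would use trained neural networks at both ends; the block-average/repeat pair below is a toy assumption chosen only to keep the sketch self-contained.

```python
def ai_encoder(csi):
    """Toy terminal-side encoder: compress CSI by averaging 4-sample blocks."""
    return [sum(csi[i:i + 4]) / 4 for i in range(0, len(csi), 4)]

def ai_decoder(code):
    """Toy network-side decoder: reconstruct CSI by repeating each code value."""
    return [v for v in code for _ in range(4)]

csi = [1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 3.0]
code = ai_encoder(csi)   # only this 2-value code is fed back over the air interface
print(code)              # [1.0, 3.0]
print(ai_decoder(code))  # [1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 3.0]
```

The point of the sketch is the overhead reduction: 2 values are fed back instead of 8, at the cost of reconstruction error for CSI that varies within a block.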
  • the terminal device or network device can use the AI model to efficiently and accurately identify the best beam.
  • This AI model can be located only in the end device or only in the network device.
  • the terminal device can train or use the AI model based on training data sent by the network device or training data perceived by the terminal device itself.
  • three-point positioning can be used for positioning.
  • the terminal device obtains the location information of three surrounding network devices and inputs it into the corresponding AI model; the location of the terminal device is then obtained based on the distance, direction, channel, and other information from the terminal device to the three network devices.
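As a rough illustration of the three-point positioning step (independent of any AI model), a position can be computed from three anchor positions and measured distances by linearizing the circle equations into a 2x2 system. The function name and the 2-D simplification are assumptions for illustration; the application itself does not specify this computation.

```python
def trilaterate(anchors, distances):
    """Position fix from three anchors (2-D): subtracting the first circle
    equation from the other two yields a linear 2x2 system, solved directly."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # assumes the anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

x, y = trilaterate([(0, 0), (4, 0), (0, 4)], [2 ** 0.5, 10 ** 0.5, 10 ** 0.5])
print(round(x, 6), round(y, 6))  # 1.0 1.0
```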
  • the scenario-based AI model has better performance; that is, different scenario-based AI models can be determined based on different data sets, and the different data sets can be considered as different scenarios. However, to distinguish scenarios from the perspective of data sets, the corresponding data sets need to be known in advance before the corresponding AI model can be chosen; in practice, it is impossible to know the training data set information used during training.
  • the collected training data also needs to be divided in advance. If it is not known in advance which training data set the training data should be divided into, the corresponding AI model cannot be trained and generated, or the divided training data set will not achieve the best match with the current scenario, reducing the efficiency and performance of the AI model.
  • the terminal device can determine whether the AI model to be used is suitable for the current scenario based on the distribution characteristics of the training data set sent by the network device.
  • the distribution characteristics of the training data set are diverse and the amount of data is huge, so it is difficult for the terminal device to judge whether the AI model is suitable for the current scenario, and the judgment is not necessarily accurate.
  • embodiments of the present application propose a method for determining an AI model, constructing a scene identifier corresponding to the scene for the scene, and then obtaining the AI model corresponding to the scene identifier.
  • AI models are deployed and applied in different ways. For example: When applying an AI model in a CSI feedback scenario, the terminal device deploys an AI encoder and the network device deploys an AI decoder. The AI models of terminal devices and network devices form a whole. The AI models of terminal devices and network devices use the same training data set and can be trained or used together. When applying the AI model in beam management or positioning scenarios, the AI model can be located only in the terminal device or only in the network device. The terminal device trains or uses the AI model based on the data sent by the network device or the data perceived by the terminal device itself.
  • when the AI model is deployed in both the network device and the terminal device, neither device can complete the use of the overall AI network through its own AI model alone; the network device and the terminal device need to cooperate through the AI model, for example by transmitting the inference results of the AI model over the air interface. Such a model is defined as a double-ended model.
  • if the AI model only exists on one side, that is, if the network device or terminal device can directly obtain inference results based on the AI model deployed on itself, without relying on the AI model at the other end to complete the inference, it can be defined as a single-ended model.
  • the AI model needs to be loaded into the network device and the terminal device.
  • One way is to load the AI model offline from the network device or terminal device. At this time, there is no need to transmit the AI model over the air interface.
  • Another way is that the network device or terminal device loads the AI model online. In this case, the AI model needs to be transmitted over the air interface to load it.
  • based on AI model A2, it is determined that the current scenario to be adapted is scenario 1; based on AI model C2, it is determined that the current scenario to be adapted is scenario 2.
  • both scene 1 and scene 2 are preset scenes.
  • the training data collected in the scene corresponding to the AI model can be used to train the AI model.
  • the training data in scenario 1 trains AI model A2, making AI model A2 perform better in scenario 1.
  • the AI model selection at both ends may not match, or the training data sets of the AI models at both ends may not match, affecting the performance of the AI model.
  • the scenarios corresponding to the AI model A1 in the network device and the AI model A2 in the terminal device are both scenario 1; the scenarios corresponding to the AI model B1 in the network device and the AI model C2 in the terminal device are both scenario 2.
  • Scene 1 and Scene 2 may be preset scenes.
  • AI model A includes two parts, AI model A1 and AI model A2.
  • AI model A1 is deployed in network device 1, and AI model A2 is deployed in terminal device 1.
  • the AI models in network device 1 and terminal device 1 match.
  • Network device 2 and terminal device 2 were trained in scenario 2 to obtain a set of AI models, AI model B.
  • AI model B includes two parts, AI model B1 and AI model B2.
  • AI model B1 is deployed in network device 2, and AI model B2 is deployed in terminal device 2.
  • the AI models in network device 2 and terminal device 2 match.
  • because Scenario 2 and Scenario 1 have a similar relationship, the AI model of each network device and the AI model of each terminal device can match each other.
  • the training data in Scenario 1 and Scenario 2 can train all AI models in Scenario 1 and Scenario 2.
  • when terminal device 1 switches from scene 1 to scene 2, if scene 2 has a similar relationship with scene 1, terminal device 1 can use AI model A2 in scene 2, and the AI model A2 of terminal device 1 can be used in conjunction with the AI model B1 of network device 2 in scene 2. Similarly, terminal device 2 can also use AI model B2 in scene 1, and the AI model B2 of terminal device 2 can be matched with the AI model A1 of network device 1 in scene 1.
  • training data 1 in scene 1 can be collected.
  • terminal device 1 switches from scene 1 to scene 2, terminal device 1 can collect training data 2 in scene 2.
  • Scenario 2 has a similar relationship with Scenario 1.
  • training data 1 and training data 2 can form a training data set, and the training data set can be used to train the AI model of the terminal device in scenario 1 or scenario 2.
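The pooling described above can be sketched as follows. This is a minimal illustration, assuming hypothetical names such as `TrainingDataPool` and `scene_id` (scenes with a similar relationship share one identifier); none of these names are defined by the embodiment.

```python
# Hypothetical sketch: training data collected in scenes that have a
# "similar relationship" (keyed here by a shared scene identifier) is
# merged into one training data set for the corresponding AI model.

from collections import defaultdict

class TrainingDataPool:
    """Groups collected training samples by scene identifier."""
    def __init__(self):
        self._sets = defaultdict(list)

    def add(self, scene_id, samples):
        # Data collected in any scene carrying the same identifier is
        # appended to the same training data set.
        self._sets[scene_id].extend(samples)

    def training_set(self, scene_id):
        return list(self._sets[scene_id])

pool = TrainingDataPool()
pool.add("scene-1", ["data-1a", "data-1b"])  # training data 1, collected in scene 1
pool.add("scene-1", ["data-2a"])             # training data 2; scene 2 is similar to scene 1
print(pool.training_set("scene-1"))
```

The merged set can then train the AI model used in either scene, as the text describes.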
  • for the single-ended model, take the AI model deployed in the terminal device as an example.
  • the terminal device itself can determine the scene it is in, or the network device can indicate the current scene, and the terminal device selects the corresponding AI model to use according to the current scene.
  • the terminal device attributes the collected training data to the training data set used to train the AI model in the current scenario.
  • the network device indicates the current scene of the terminal device through the scene identifier, and the terminal device selects the corresponding AI model according to the scene identifier.
  • the scene in which the terminal device is located may constantly change. Therefore, when the scene changes, it is necessary to determine the new scene where the terminal device is located, switch to the AI model corresponding to the new scene, and collect training data from the new scene to form a training data set for training the AI model corresponding to the new scene.
  • the embodiments of this application provide multiple possible implementation forms of scene identification.
  • the scene identifier may be a scene type identifier, indicating the type of a scene with relatively obvious characteristics, for example, the channel type of the channel in the scene, or the type of the environment in which the scene is located.
  • the scene type may be an urban macrocell scene type (urban area macrocell, Uma), an urban area microcell scene type (urban area microcell, Umi), an indoor scene type (indoor), an outdoor scene type (outdoor), etc.
  • the embodiments of this application do not limit this.
  • indoor scene types include indoor factory scene types, indoor office scene types, indoor playground scene types, etc., which are not limited in the embodiments of the present application.
  • the outdoor scene type includes an outdoor factory scene type, an outdoor office scene type, an outdoor playground scene type, etc., and the embodiments of the present application are not limited thereto.
  • a certain scene can have multiple scene type identifiers.
  • a certain scene can be described as both an urban macro-cell scene type and an outdoor scene type, so it can have identifiers for these two scene types.
  • the channel characteristics in different scene types may have large differences.
  • the channel characteristics of an outdoor scene type channel and an indoor scene type channel differ greatly. Therefore, network equipment and/or terminal equipment can collect data under different scene types and train scene-based AI models for the different scene types. At this time, the AI model has better performance in the specific scene.
  • each scene type corresponds to its own AI model.
  • the terminal device uses the AI model corresponding to the urban macro cell scenario type.
  • each scene type corresponds to a training data set of its own AI model.
  • the terminal device can collect training data of the urban macro cell scene type.
  • the training data can be used as the training data set for the AI model corresponding to the urban macro cell scene type, or as part of such a training data set.
  • scene types with a smaller scope can use AI models corresponding to scene types with a larger scope.
  • the AI model corresponding to the indoor scene type is A
  • the AI model corresponding to the indoor office scene type is B.
  • the terminal device can use AI model B or AI model A.
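The fallback from a smaller-scope scene type to a larger-scope one can be sketched as below. The hierarchy, the model names, and the `select_model` helper are illustrative assumptions, not part of the embodiment.

```python
# Illustrative sketch: a scene type with a smaller scope (e.g. "indoor
# office") may fall back to the AI model of the enclosing, larger-scope
# scene type (e.g. "indoor") when no model exists for the exact type.

SCENE_PARENT = {
    "indoor office": "indoor",
    "indoor factory": "indoor",
    "outdoor factory": "outdoor",
}

AI_MODELS = {
    "indoor": "AI model A",
    "indoor office": "AI model B",
}

def select_model(scene_type):
    # Prefer the model trained for the exact scene type; otherwise walk
    # up to the larger-scope scene type.
    while scene_type is not None:
        if scene_type in AI_MODELS:
            return AI_MODELS[scene_type]
        scene_type = SCENE_PARENT.get(scene_type)
    return None

print(select_model("indoor office"))   # exact-type model is available
print(select_model("indoor factory"))  # falls back to the indoor model
```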
  • the AI model trained or used in network device 1 can be trained or used in network device 2.
  • the AI model trained or used in terminal device 1 can be trained or used in terminal device 2.
  • the training data collected in the scene of network device 1 can be used to train the AI model in network device 2.
  • the training data collected in the scene of terminal device 1 can be used to train the AI model of terminal device 2.
  • scene 1 is indoor room 1, which belongs to the indoor scene type
  • scene 2 is the outdoor scene type
  • scene 3 is indoor room 2, which belongs to the indoor scene type.
  • the terminal device has trained or used AI model 1 in scenario 1, and trained or used AI model 2 in scenario 2.
  • when the terminal device is located in scene 3, since scene 3 is also an indoor scene type, scene 3 has a similar relationship with scene 1. Therefore, the terminal device can train or use AI model 1 in scene 3.
  • the terminal device can collect training data 1 in scenario 1 and training data 2 in scenario 2.
  • when the terminal device is in scene 3, the collected training data 3 and training data 1 can jointly form a training data set. This training data set can be used to train the AI model in scene 1 or scene 3.
  • the scene identifier may be the first network information set.
  • the first network information set serves as a scene identifier shared by geographically adjacent cells.
  • the first network information set includes identities of multiple neighboring cells.
  • multiple cells may be directly adjacent or not directly adjacent.
  • they may be neighboring cells to each other.
  • the first network information set includes the cell identity of the first cell, the public land mobile network identity (public land mobile network, PLMN), the tracking area code (TAC), the radio access network identifier (RAN ID), the cell frequency point, and the cell band; the first cell is a neighboring cell of the second cell, and the second cell is the cell where the terminal equipment is located.
  • the cell identity of the first cell may include a global cell identity and/or a physical cell identity.
  • cells generally adopt a continuous coverage deployment method.
  • when the frequency of cells is higher, the coverage of each cell is relatively small. Therefore, the channel characteristics of multiple geographically adjacent cells are relatively similar.
  • three cells are deployed within an outdoor area of 1 square kilometer. The outdoor area of 1 square kilometer is not large, so the channel characteristics of the three cells may be similar. Therefore, the scenarios of these three cells are similar.
  • the cell information of these three cells may form a first network information set.
  • the first network information set may include a plurality of adjacent cell network information sets, may also include an adjacent radio access network (radio access network, RAN) network information set, and may also include a tracking area (TA) network information set.
  • multiple cells in the first network information set may train or use the same AI model.
  • the training data of multiple cells in the first network information set may form a training data set, and the training data set may be used to train the AI models in the multiple cells.
  • the terminal device has trained or used AI model 1 within the coverage of cell 1, and trained or used AI model 2 within the coverage of cell 7.
  • when the terminal device is located in cell 3, AI model 1 can be trained or used.
  • the terminal device collects training data 1 within the coverage of cell 1, and collects training data 2 within the coverage of cell 7.
  • the scenes of cell 3 and cell 1 are similar. Therefore, when the terminal device is located in cell 3, the collected training data 3 can be combined with the training data 1 to form a training data set.
  • This training data set can be used to train the AI model of the terminal device within the coverage of cells {1, 2, 3, 4, 5}.
  • the tracking area identifier of cell 1 is 0 and the cell frequency point is 1; the tracking area identifier of cell 2 is 0 and the cell frequency point is 1; the tracking area identifier of cell 3 is 0 and the cell frequency point is 2; the tracking area identifier of cell 4 is 0 and the cell frequency point is 1. When the tracking area identifier and cell frequency point of cells are the same, there is a similar relationship between the cells.
  • the tracking area identifier of the cell is 0, and there is a similar relationship between cells 1, 2, and 4 with the cell frequency point 1.
  • cells 1, 2, and 4 all correspond to training or using AI model 1.
  • the tracking area identifier of cell 3 is 0 and the cell frequency point is 2; being different from the other cells, cell 3 corresponds to training or using AI model 2.
  • the training data collected in cells 1, 2, and 4 can form a training data set 1.
  • the tracking area identifier of cell 3 is 0 and the cell frequency point is 2; being different from the other cells, the training data collected in cell 3 forms training data set 2.
  • optionally, a cell whose tracking area identifier is 0 and cell frequency point is 1 and a cell whose tracking area identifier is 1 and cell frequency point is 1 may also be cells with a similar relationship.
  • the embodiments of this application do not limit this.
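The grouping by tracking area identifier and cell frequency point in the example above can be sketched as follows. The cell table mirrors the example in the text, while the `similarity_key` function and the grouping structure are assumed illustrations.

```python
# Sketch: cells sharing the same tracking area identifier and cell
# frequency point are treated as having a similar relationship, so they
# map to the same AI model and the same training data set.

cells = {
    1: {"tac": 0, "freq": 1},
    2: {"tac": 0, "freq": 1},
    3: {"tac": 0, "freq": 2},
    4: {"tac": 0, "freq": 1},
}

def similarity_key(cell_id):
    c = cells[cell_id]
    return (c["tac"], c["freq"])

groups = {}
for cell_id in cells:
    groups.setdefault(similarity_key(cell_id), []).append(cell_id)

# Cells 1, 2 and 4 land in one group (AI model 1 / training data set 1);
# cell 3 forms its own group (AI model 2 / training data set 2).
print(sorted(groups[(0, 1)]), groups[(0, 2)])
```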
  • the scene identifier may be used to indicate channel characteristics.
  • the channel characteristics may include: impulse response characteristics of the channel, time-frequency domain response characteristics of the channel, response characteristics of the channel in the transform domain, etc.
  • the impulse response characteristics of the channel mainly represent the impulse response of the channel in the delay domain.
  • the time-frequency domain response characteristics of the channel mainly represent the channel response of the channel in the time domain and frequency domain.
  • the response characteristics of the channel in the transform domain mainly represent the response of the channel in the angular delay domain or the Doppler domain.
  • the response of the channel in the transform domain may be sparser than the response in the time-frequency domain.
  • the corresponding power spectrum can generally be used to describe it. Since the channel may be sparsely distributed in some domains, these sparse distributions can be used as indicators of channel characteristics to indicate, but describing a distribution in this way is difficult to implement.
  • precoding matrix indicator (PMI)
  • the embodiment of the present application is based on codebook-related technology and uses PMI identifiers to represent channel characteristics, that is, the scene identifier is a PMI identifier.
  • the way PMI is used is that the terminal equipment first measures the reference signal of the downlink channel, such as the channel state information-reference signal (CSI-RS), and then performs eigendecomposition on the channel or channel covariance matrix on each subband to obtain one or more eigenvectors. Different numbers of eigenvectors correspond to different channel ranks. Taking a rank of 1 as an example, the terminal device obtains one feature vector, that is, the main feature vector. Then, the main eigenvectors of all subbands are converted into the angular delay domain using the two-dimensional discrete Fourier transform, and the PMI closest to the converted eigenvector is selected from the codebook and fed back to the network device. Different PMIs can represent eigenvectors in different angular delay domains. That is to say, several typical PMI values can be selected to represent channel characteristics with obvious features.
  • This codebook has been defined by the protocol and currently has 8 parameter configuration combinations for the same number of CSI-RS ports. Under the same number of subbands, the number of feedback bits gradually increases under the eight configurations, and the representation of the channel feature vector is also more accurate. For example, in CSI feedback with a port number of 32, under 13 subbands, the number of bits required for PMI is between 60 and 360 bits.
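The final codebook-selection step of the PMI procedure can be illustrated with a toy numeric sketch. The two-entry codebook and the correlation-based nearest-codeword rule here are simplifying assumptions for illustration and do not reproduce the protocol-defined codebook.

```python
# Toy sketch: after the main eigenvector has been transformed to the
# angular delay domain, the codeword closest to it (largest correlation
# magnitude) is selected and its index is fed back as the PMI.

def inner(a, b):
    # complex inner product <a, b>
    return sum(x * y.conjugate() for x, y in zip(a, b))

def select_pmi(eigvec, codebook):
    # Pick the codeword with the largest |<v, c>|, i.e. the one closest
    # in direction to the transformed eigenvector.
    return max(range(len(codebook)),
               key=lambda i: abs(inner(eigvec, codebook[i])))

codebook = [
    [1 + 0j, 1 + 0j],   # codeword 0
    [1 + 0j, -1 + 0j],  # codeword 1
]
v = [0.9 + 0j, 1.1 + 0j]  # transformed main eigenvector (toy values)
print(select_pmi(v, codebook))  # codeword 0 is closer in direction
```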
  • scenario 1 corresponds to PMI1.
  • information such as the first configuration, 13 subbands, a number of bits between 60 and 360 bits, and the frequency domain position can be predefined.
  • multiple scenarios with the same PMI identification can use the same AI model.
  • multiple training data can be collected for multiple scenarios with the same PMI identifier.
  • the training data set corresponding to the PMI identifier can include the multiple training data.
  • the training data set corresponding to the PMI identifier can be used for training the AI model.
  • For example, as shown in Figure 7, the channel characteristics of scenario 1 are PMI1, the channel characteristics of scenario 2 are PMI2, and the channel characteristics of scenario 3 are PMI1.
  • the terminal device has trained or used AI model 1 in scenario 1, and trained or used AI model 2 in scenario 2.
  • the terminal device is located in scenario 3, the channel characteristics of scenario 3 and scenario 1 are the same. Therefore, when the terminal device is in scenario 3, it can train or use AI model 1.
  • the terminal device collects training data 1 in scenario 1 and collects training data 2 in scenario 2.
  • the channel characteristics of scenario 3 and scenario 1 are the same. Therefore, when the terminal device is in scene 3, the collected training data 3 can be combined with the training data 1 to form a training data set, and the training data set can train the AI models corresponding to scene 1 and scene 3.
  • a PMI identifier includes one or more PMIs.
  • a PMI identifier includes a PMI.
  • the terminal device trains or uses AI model 1 in the scenario corresponding to PMI1, trains or uses AI model 2 in the scenario corresponding to PMI2, and trains or uses AI model 3 in the scenario corresponding to PMI3.
  • the training data collected by the terminal device in the scene corresponding to PMI1 is used as training data set 1
  • the training data collected in the scene corresponding to PMI2 is used as training data set 2
  • the training data collected in the scene corresponding to PMI3 is used as training data set 3.
  • For example, one PMI identifier includes multiple PMIs. For example, when the difference between multiple adjacent PMIs in space is small, the channel characteristics corresponding to the multiple adjacent PMIs are similar. In this case, the AI models trained or used for the channel characteristics indicated by the multiple PMIs are the same.
  • the terminal device trains or uses AI model 1 in the scenario corresponding to PMI1 and the scenario corresponding to PMI2; the channel characteristics indicated by PMI1 and PMI2 are quite different from those indicated by PMI3, so the terminal device trains or uses AI model 3 in the scenario corresponding to PMI3.
  • the training data collected by the terminal device in the scenarios corresponding to PMI1 and PMI2 is used as training data set 1; the channel characteristics indicated by PMI1 and PMI2 are quite different from the channel characteristics indicated by PMI3, so the training data collected in the scenario corresponding to PMI3 is used as training data set 2.
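The mapping from multiple PMIs to one PMI identifier, and hence to one AI model, can be sketched as follows. The `PMI_TO_IDENTIFIER` grouping is an assumed encoding of the example above, not defined by the embodiment.

```python
# Sketch: spatially adjacent PMIs with similar channel characteristics
# share one PMI identifier, so they resolve to the same AI model (and,
# by the same mapping, the same training data set).

PMI_TO_IDENTIFIER = {"PMI1": "id-A", "PMI2": "id-A", "PMI3": "id-B"}
IDENTIFIER_TO_MODEL = {"id-A": "AI model 1", "id-B": "AI model 3"}

def model_for_pmi(pmi):
    return IDENTIFIER_TO_MODEL[PMI_TO_IDENTIFIER[pmi]]

print(model_for_pmi("PMI2"))  # shares AI model 1 with PMI1
```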
  • the scene identifier may be a scene mark identifier.
  • the scene tag may not have actual physical meaning, but may be a series of abstract tags agreed upon by the terminal device and the network device in advance.
  • the scene mark can be a numeric number, such as 1, 2, 3,..., or a letter number, such as A, B, C,....
  • each scene is marked, and different scenes have different marks.
  • Each AI model is also marked. Different AI models have different marks.
  • there is a mapping relationship between the mark of the scene and the mark of the AI model corresponding to the scene.
  • each scene is marked, and different scenes have different marks.
  • the training data sets corresponding to each AI model are also marked.
  • the marks of the training data sets corresponding to different AI models are also different.
  • there is a mapping relationship between the marks of the scenes and the marks of the training data sets of the AI models corresponding to the scenes.
  • the mapping relationship can be agreed in advance between the terminal device and the network device and recorded in a preset list, which is stored in the network device or the terminal device.
  • scenes labeled with the same scene use the same AI model.
  • the training data collected under the same scene label can belong to the same training data set and be used to train the same AI model.
  • network equipment manufacturer A and terminal equipment manufacturer B have collected and formed different training data sets in different scenarios in advance, trained the AI models, and formed a preset list, which can record the mapping relationships between the scenarios and the AI models.
  • the network equipment A’ of network equipment manufacturer A and the terminal equipment B’ of terminal equipment manufacturer B can use the same set of preset lists to represent the mapping relationship between different scenarios and the corresponding AI models.
  • the tag of a certain scene 1 is "1"
  • the tag of the AI model 1 corresponding to this scene 1 can be "1'"
  • the tags and the mapping relationship between scene 1 and AI model 1 can be recorded in the preset list.
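The preset list in this example can be sketched as a simple lookup shared by both sides. The second entry ("2" → "2'") and the helper name `model_mark_for_scene` are assumptions for illustration.

```python
# Sketch of the preset list: an abstract scene mark agreed between
# terminal and network maps to an AI model mark. Both sides hold the
# same list, so the same scene mark always resolves to the same model.

PRESET_LIST = {
    "1": "1'",  # scene 1 -> AI model 1 (from the example in the text)
    "2": "2'",  # scene 2 -> AI model 2 (assumed additional entry)
}

def model_mark_for_scene(scene_mark):
    # Returns None when the scene mark is not in the preset list.
    return PRESET_LIST.get(scene_mark)

print(model_mark_for_scene("1"))
```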
  • different scene markers correspond to different AI models, or to training data sets of different AI models.
  • Network devices and terminal devices can distinguish different training data sets based on different scene tags to train different AI models.
  • different scene markers correspond to the same AI model, or to the training data set of the same AI model.
  • the network device and the terminal device can combine the training data collected in different scenarios into a common training data set, thereby training the same AI model.
  • scene 1 corresponds to mark 1
  • scene 2 corresponds to mark 2
  • scene 3 corresponds to mark 1.
  • the terminal device has trained or used AI model 1 in scenario 1, and trained or used AI model 2 in scenario 2.
  • when the terminal device is located in scene 3, since scene 3 also corresponds to mark 1, scene 3 has a similar relationship with scene 1. Therefore, the terminal device can train or use AI model 1 in scene 3.
  • the terminal device can collect training data 1 in scenario 1 and training data 2 in scenario 2.
  • when the terminal device is in scene 3, the collected training data 3 and training data 1 can jointly form a training data set. This training data set can be used to train the AI model in scene 1 or scene 3.
  • different scene identifiers correspond to the same network configuration parameters, such as network port parameters and beam configuration parameters.
  • the terminal cannot distinguish, through different network configuration parameters alone, whether it is appropriate to divide the data into different data sets, so it needs different identifiers to classify the data sets. Alternatively, the terminal itself can classify the data collected under different network configuration parameters and provide identification information; when the data types under multiple configuration parameters are considered the same, the collected data can be merged into one training data set.
  • scene identifiers are only exemplary. There may also be other scene identifier types, and this embodiment of the present application is not limited thereto.
  • the above multiple scene identification types can be used in combination.
  • when the scene type identifiers of two scenes indicate a similar relationship, it can be further determined whether the other scene identifiers of the two scenes also indicate a similar relationship.
  • the PMI identifier can describe channel characteristics in more detail.
  • the scene type identifier and the PMI identifier can be used in combination.
  • the present application can determine the AI model according to the corresponding relationship between the constructed scene identifier and the AI model.
  • the method for determining an AI model provided by the embodiment of the present application includes the following steps:
  • the terminal device obtains a first identifier, where the first identifier is used to indicate the first scene.
  • the first identifier may include at least one of a scene type identifier, a first PMI identifier, and a scene mark identifier.
  • the first identification may include a scene type identification, and the scene type identification is used to indicate the first scene.
  • the first scene may include an urban macro cell scene type, an urban micro cell scene type, an indoor scene type, an outdoor scene type, etc. The embodiments of this application do not limit this.
  • the first identifier may also include a first PMI identifier, and the first PMI identifier is used to indicate the first scenario.
  • the first scenario may include multiple scenarios with different channel characteristics.
  • the first identifier may also include a scene mark identifier, and the scene mark identifier is used to indicate the first scene.
  • the first scene may include multiple scenes with different scene tags.
  • the scene mark may not have actual physical meaning.
  • the scene markers may be numerically numbered.
  • the first identifier may indicate a first network information set, and the first network information set includes at least one of: a cell identifier of the first cell, a public land mobile network identifier (PLMN), a tracking area code (TAC), a radio access network identifier (RAN ID), a cell frequency point, and a cell band; the first cell is a neighboring cell of the second cell, and the second cell is the cell where the terminal device is located.
  • the first cell and the second cell may or may not be directly adjacent.
  • the first scenario is a scenario corresponding to a cell in the first network information set.
  • the network device sends the first identification to the terminal device.
  • the network device sends a broadcast message to the terminal device, where the broadcast message carries the first identifier.
  • the broadcast message of the network device can be sent to all terminal devices located within the coverage of the network device.
  • the network device sends broadcast messages to all terminal devices located within its signal coverage, so all terminal devices should be able to understand the broadcast message, and a terminal device can receive the broadcast message regardless of whether it enters the connected state. Therefore, the broadcast method is suitable for the cases where the first identifier is the scene type identifier, the first network information set, or the first PMI identifier. Since these identifiers have clear meanings, they can be understood by terminal devices of different manufacturers; therefore, both the terminal device and the network device can understand the meaning of the first identifier.
  • when network device 1 determines that the terminal device is located indoors, network device 1 broadcasts the indoor scene type identifier through a broadcast message; terminal device 1 is located within the coverage of network device 1 and receives the broadcast message, so terminal device 1 can learn that it is in an indoor scene type at this time.
  • the network device sends dedicated signaling to the terminal device, where the dedicated signaling carries the first identifier.
  • the dedicated signaling may be dedicated signaling between the terminal device and the network device, and may transmit the scene mark identifiers agreed between some terminal devices and the network device.
  • the dedicated signaling may be RRC configuration signaling.
  • dedicated signaling is applicable to the case where the first identifier is a scene type identifier, a first network information set, a first PMI identifier, and a scene mark identifier.
  • the terminal device determines the first identification according to the first scene of the terminal device.
  • the terminal device can obtain the surrounding scene information, that is, the scene information of the first scene, and determine the first identifier corresponding to the scene (first scene) based on the scene information.
  • take the terminal device as a mobile phone.
  • based on the location information of the mobile phone, it is known that the mobile phone is located in shopping mall A. Therefore, the mobile phone can determine that the scene is an indoor scene at this time, and the corresponding first identifier is the indoor scene identifier.
  • the mobile phone can also determine the characteristics of the channel in the current scene based on the characteristics of the wireless signal. For example, based on channel measurement, the characteristics of the channel in the time domain and angle domain can be obtained to determine whether the channel is in a dense or open scene.
  • the terminal device acquires a first AI model, and the first AI model corresponds to the first scene.
  • each scene type has a corresponding first AI model.
  • for example, when the first scene is the urban macro cell scene type, the corresponding first AI model is the first AI model corresponding to the urban macro cell scene type.
  • each scene with channel characteristics has a corresponding first AI model.
  • each scene marked by a scene has a corresponding first AI model.
  • each scenario corresponding to a cell in the first network information set has a corresponding first AI model.
  • the same first AI model can be used in scenarios with the same first identifier, or training data collected in scenarios with the same first identifier can be used as a training data set for training the first AI model.
  • the same first AI model can be used for scenes with the same scene tag identifiers, or the training data collected in scenes with the same scene tag identifiers can be used as a training data set for training the first AI model.
  • multiple cells in the first network information set may use the same first AI model.
  • the terminal device obtains the first AI model based on the first identification and collected training data.
  • the terminal device determines the first scene of the terminal device according to the first identifier; the terminal device trains according to the training data to obtain the first AI model, and the training data corresponds to the first scene.
  • the terminal device sends the training data to the network device, and the network device trains the first AI model according to the training data and sends the first AI model to the terminal device.
  • the terminal device determines a training data set for training the first AI model based on the training data; the training data set corresponds to the first scenario.
  • the terminal device may associate each collected training data with the first identification.
  • the terminal device can train a new first AI model based on the training data, can also optimize and train an existing first AI model, or can continue to train a part of the first AI model that has been trained.
  • the terminal device can obtain surrounding scene information data for training.
  • the scene information data includes channel measurement results, beam measurement results, position measurement results, etc. in the scene.
  • the terminal device may determine the corresponding first AI model according to the first identification sent by the network device.
  • the terminal device receives the first identification sent from the network device, and the terminal device obtains training data around the terminal device, that is, collected training data. Then, there is a mapping relationship between the first AI model trained by the terminal device based on the training data and the first identifier sent by the network device.
  • the first AI model can be trained at the training node.
  • the training node may be an application server, such as a terminal manufacturer's cloud server, a third-party server, etc.
• different first identifiers may correspond to the same network configuration parameters, where the network configuration parameters include antenna ports, beam parameters, etc. This is because the terminal device cannot distinguish different training data through network configuration parameters alone, nor determine whether certain training data can be classified into one category; different first identifiers are therefore needed to classify the training data by scenario.
• the terminal device can determine the first identifier corresponding to the surrounding scene information, and train on the training data of the surrounding scene information to obtain the first AI model corresponding to the first identifier.
  • the terminal device receives the first AI model from a network device;
  • the network device may be a network-side device, such as a base station, or an application server, such as a cloud server of a terminal manufacturer.
  • the terminal device can send the training data and scene information to the cloud server.
  • the cloud server forms a training data set based on the scene information obtained by multiple terminal devices and the training data collected in the corresponding scene.
• the training data is collected in correspondence with the first identifier.
• the first AI model is a model trained using the training data set generated from the training data and the scene information.
  • the terminal device obtains the first AI model from a locally stored AI model.
  • the terminal device locally stores an AI model library, which includes multiple AI models, and the terminal device obtains the first AI model from the AI model library.
  • the terminal device generates a corresponding relationship between the first identifier and the first AI model.
  • the first identification corresponds to the first scenario
  • the first scenario corresponds to training or using the first AI model.
  • the terminal device may generate and save the corresponding relationship between the first identifier and the first AI model, or the network device may save the corresponding relationship.
• when the terminal device needs to determine the AI model, it obtains the corresponding relationship and determines the first AI model based on the corresponding relationship.
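Resolving the first AI model from the saved corresponding relationship amounts to a table lookup. A sketch (table contents and names are invented for illustration):

```python
def select_model(first_identifier, model_table, default=None):
    """Resolve the first AI model from the saved correspondence (illustrative)."""
    return model_table.get(first_identifier, default)

# Hypothetical saved correspondence between first identifiers and models.
table = {"Uma": "model_uma", "indoor": "model_indoor"}

print(select_model("indoor", table))  # model_indoor
print(select_model("Umi", table))     # None -> no matching first AI model stored
```

When the lookup fails, the terminal would fall back to the behavior described elsewhere in this document (collecting training data, or reporting that the identifier is unsupported).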
• the following Embodiments 1 to 4 respectively introduce using different kinds of first identifiers to indicate similar scenarios, determining the AI model to be used or the training data set corresponding to the AI model, and sending the first identifier through different message types.
  • Embodiment 1 uses a scene type identifier to indicate a scene, and the network device uses a broadcast message to send the first identifier.
  • the network device and the terminal device can understand multiple scene type identifiers.
• multiple scene type identifiers can be predefined by the standard.
  • the scene type identifier represents a scene type with relatively obvious characteristics, which may be a specific environmental scene type name.
• the scene type identifier may be an urban macrocell scene type (urban macrocell, UMa) identifier, an urban microcell scene type (urban microcell, UMi) identifier, an indoor scene type (indoor) identifier, an outdoor scene type (outdoor) identifier, etc.
  • the embodiments of this application do not limit this.
• take the case where the network device is a cell as an example.
  • each cell can know the scene type characteristics of its own cell, and thereby learn the scene type identifier of its own cell based on the scene type characteristics.
  • each cell can determine the scene type characteristics of its own cell according to the deployment location during deployment. For example, if cell 1 is deployed indoors, then the scene type identifier of cell 1 is the indoor scene type identifier.
  • cell 1 can collect channel characteristic information within its own coverage, and determine the corresponding scene type identifier based on the channel characteristic information.
• after cell 1 obtains its own scene type identifier, it can provide the scene type identifier to terminal devices that enter its coverage area.
  • cell 1 is an indoor scene
  • the scene type identifier of cell 1 is an indoor scene identifier.
• when terminal device 1 enters the coverage area of cell 1, terminal device 1 is also located in the indoor scene.
  • Cell 1 may send a broadcast message to terminal device 1, and the first identifier carried in the broadcast message is an indoor scene identifier.
  • the terminal device 1 trains or uses the corresponding indoor scene first AI model according to the indoor scene identifier carried in the broadcast message.
• the terminal device collects training data under the scene types indicated by different scene type identifiers, forming training data sets of different scene types for training different first AI models.
  • the terminal device can establish an association between different scene type identifiers and different first AI models.
  • the terminal device can also establish an association between different scene type identifiers and training data sets of different first AI models.
  • the terminal device obtains different scene-based first AI models based on different scene type identifiers.
• the terminal device may, under the indoor scene type, train a first AI model of the indoor scene type based on the training data collected in the indoor scene type; under the outdoor scene type, it likewise trains a first AI model of the outdoor scene type based on the training data collected in the outdoor scene type.
  • multiple scenario-based first AI models can be stored in the terminal device or in the cloud server of the terminal device manufacturer.
  • multiple scenario-based first AI models can be trained by the terminal device or by the cloud server of the terminal device manufacturer.
  • the terminal device can send the collected training data and corresponding scene identifiers to the cloud server of the terminal device manufacturer.
  • multiple scenario-based first AI models are stored in the cloud server of the terminal equipment manufacturer.
  • the terminal device requests the required first AI model from the cloud server, and the cloud server sends the required first AI model to the terminal device.
  • the flow chart for determining the AI model includes steps S101-S105:
  • the cell sends the scene type identifier corresponding to the current cell to the terminal device through a broadcast message.
• the broadcast message may include system information block 1 (SIB1), and SIB1 carries the scene type identifier corresponding to the current cell.
  • the scene type identifier corresponding to the current cell is an indoor scene identifier.
  • the position where SIB1 carries the scene type identifier can be in some fields of SIB1, for example, it is carried in the PRACH related configuration of SIB1; or it is carried in the serving cell information of SIB1; or it is a separately established field in SIB1, etc.
  • the embodiments of this application do not limit this.
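How a terminal might extract the scene type identifier from the possible carrying positions mentioned above can be sketched as follows. The dictionary layout and field names are invented for illustration; a real SIB1 is an ASN.1-encoded message, not a Python dictionary:

```python
def scene_type_from_sib1(sib1_msg):
    """Look for the scene type identifier in the possible carrying positions (illustrative)."""
    # A separately established field takes precedence.
    if "sceneTypeIdentifier" in sib1_msg:
        return sib1_msg["sceneTypeIdentifier"]
    # Otherwise look inside the PRACH-related configuration or serving-cell info.
    for field in ("prach_Config", "servingCellConfigCommon"):
        value = sib1_msg.get(field, {}).get("sceneTypeIdentifier")
        if value is not None:
            return value
    return None

sib1 = {
    "servingCellConfigCommon": {"pci": 101},
    "prach_Config": {"prach_RootSequenceIndex": 1},
    "sceneTypeIdentifier": "indoor",  # hypothetical separately established field
}
print(scene_type_from_sib1(sib1))  # indoor
```

The same lookup order would apply if the identifier were carried in another SIB instead.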
  • the broadcast message may also include other SIB broadcast messages, which is not limited in this embodiment of the present application.
  • the categories identified by the scene type can be predefined by the standard.
  • the terminal device or cell may support some scene type identifiers or support all scene type identifiers.
• it can be predefined in the standard that all scene type identification categories need to be supported by the terminal equipment and the cell, that is, terminal equipment and cells need to know the meaning of all scene type identifiers.
• it can also be predefined in the standard that the terminal equipment or cell only needs to support part of the scene type identifiers.
  • the terminal equipment of a certain terminal equipment manufacturer may only support two scene type identifications: indoor scene type identification and outdoor scene type identification.
  • the cell needs to obtain the terminal device's support for the scene type identifier type.
  • S102 The cell sends the PRACH resource grouping result to the terminal device.
  • the cell groups PRACH resources in the PRACH configuration, and different resource groups correspond to the support capabilities of the scene type identification of the cell.
  • PRACH resource grouping rules can be predefined in the standard.
• take the grouping of 64 random access preamble sequences as an example.
• random access preamble sequences 0 to 31 are specified as group A, indicating that the terminal device supports the scene type identifier broadcast by the cell, that is, the terminal device has the first AI model corresponding to the cell scene type identifier.
• random access preamble sequences 32 to 63 are group B, indicating that the terminal device does not support the scene type identifier broadcast by the cell, that is, the terminal device does not have the first AI model corresponding to the cell scene type identifier.
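The preamble-group selection can be sketched as follows, assuming 64 preambles split evenly into group A (support) and group B (no support); function and variable names are hypothetical:

```python
import random

GROUP_A = range(0, 32)   # terminal supports the broadcast scene type identifier
GROUP_B = range(32, 64)  # terminal does not support it

def choose_preamble(supported_scene_ids, broadcast_scene_id, rng=random):
    """Pick a random access preamble from the group matching the terminal's support (illustrative)."""
    group = GROUP_A if broadcast_scene_id in supported_scene_ids else GROUP_B
    return rng.choice(list(group))

preamble = choose_preamble({"indoor", "outdoor"}, "indoor")
print(preamble in GROUP_A)  # True: terminal supports the broadcast identifier
```

The cell, seeing which group the preamble came from, infers the terminal's support without any extra signaling.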
  • S103 The terminal device receives the broadcast message of the cell and initiates a random access process.
• after the terminal device obtains the scenario type identifier sent by the current cell, it selects a PRACH resource group to initiate random access according to its own support for the cell scenario type identifier.
• for example, the scene type identifier broadcast in the cell broadcast message is an indoor scene type identifier. If the terminal device has a scene-based first AI model corresponding to the indoor scene type identifier, the terminal device selects the corresponding PRACH resource group for access; for example, it selects a random access sequence in PRACH resource group A to initiate random access. Otherwise, it selects a random access sequence in PRACH resource group B to initiate random access.
  • the cell may determine whether the terminal device supports the scenario type identifier based on the received PRACH resource access situation sent by the terminal device.
  • the cell can further activate or configure related AI applications for the terminal device.
  • AI-based CSI feedback reporting of the terminal device may be configured in a subsequent RRC configuration message sent to the terminal device.
• the CSI feedback has a dual-end model structure, that is, the first AI models of the network device and the terminal device need to be used together. Therefore, the network device (i.e., the cell) broadcasts its own scene type identifier. After activating the CSI feedback function, the cell can use the scenario-based first AI model corresponding to its own scene type identifier, and the terminal device likewise uses the corresponding first AI model to perform CSI feedback according to the scene type identifier broadcast by the cell.
• the terminal device can also determine the current scene based on the scene type identifier provided by the cell. This is because the terminal device's own judgment may not be completely accurate, whereas the cell has complete information within its own coverage.
  • the terminal can determine the corresponding first AI model based on the scene type identifier provided by the cell.
  • the terminal device selects the first AI model corresponding to the scene type identifier and reports the CSI.
  • the terminal device can use the corresponding first AI model according to the corresponding scene type identifier in the broadcast message to perform CSI encoding and reporting.
  • the terminal device can also establish an association between the training data collected in the cell and the scene identifier.
  • the terminal device can attribute the collected training data to the training data set used for AI model training of the corresponding scenario.
• after receiving the scene type mark indicated by the network device, the terminal device marks the collected training data as belonging to the corresponding scene type, and can use the collected training data as part of the training data set of the corresponding scene type for training the first AI model of the corresponding scenario.
  • the network device sends the scene type identifier of the network device by sending a broadcast message, which is relatively simple to implement.
• the terminal device or network device can generate the corresponding scene-based first AI models in advance for multiple predefined scene types, so that using the scene type identifier as an index, a consistent scene understanding between the network device and the terminal device can be established and the appropriate model selected.
• the scene-based first AI model can be generated by the network device or the terminal device.
  • Embodiment 2 is to use the first network information set to indicate the scene, and the network device sends the first identifier through a broadcast message.
• take the case where the network device is a cell as an example.
• the first network information set includes a plurality of neighboring cells, and the scenarios of the multiple adjacent cells in the first network information set have a similar relationship.
  • Figure 11 is a flow chart for determining an AI model, which includes steps S201-S202:
  • the cell sends the first network information set corresponding to the current cell to the terminal device through a broadcast message.
  • the broadcast message may include SIB3 or SIB4, where SIB3 or SIB4 carries the first network information set corresponding to the current cell.
  • the first network information set corresponding to the current cell may be a cell information list including the current cell.
  • the cell information list includes multiple cell information, and the scenarios of multiple cells in the cell information list have a similar relationship.
• the cell list may include at least one of: a global cell identity, a physical cell identity (PCI), a public land mobile network identity (PLMN), a tracking area code (TAC), an access network area identifier (RAN ID), a cell frequency point, and a cell band.
  • the broadcast message may also include other SIB broadcast messages, which is not limited in this embodiment of the present application.
  • the cell information list may include specific information of one or more cells, or may include one or more large-scale cell information.
  • the cell information list may include tracking area identifiers, cell frequency points, etc.
• the tracking area identifier of cell 1 is 0 and its frequency point is 1; the tracking area identifier of cell 2 is 0 and its frequency point is 1; the tracking area identifier of cell 3 is 0 and its frequency point is 2; the tracking area identifier of cell 4 is 0 and its frequency point is 1. Cell 1 sends the cell information list corresponding to the current cell to the terminal device through a broadcast message.
  • the cell list includes cell 1, cell 2, and cell 4.
  • the tracking area identifier of the cell is 0, and there is a similar relationship between cells 1, 2, and 4 whose cell frequency point is 1.
  • Cells 1, 2, and 4 all correspond to training or using the first AI model 1.
• the tracking area identifier of cell 3 is 0 and its cell frequency point is 2, which differs from the other cells, so a second AI model should be trained or used.
  • the scenarios between the cells are similar.
  • the tracking area identifier of the cell is 0, and there is a similar relationship between cells 1, 2, and 4 with the cell frequency point 1.
  • the training data collected in cells 1, 2, and 4 can be used as the training data set of the first AI model.
• the tracking area identifier of cell 3 is 0, and its cell frequency point is 2. Unlike the other cells, the training data collected in cell 3 can be used as a training data set for the second AI model.
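Grouping cells into scene-similar sets by tracking area identifier and frequency point, as in the example above, might look like the following sketch (the cell records and grouping key are illustrative):

```python
from collections import defaultdict

cells = [
    {"id": 1, "tac": 0, "freq": 1},
    {"id": 2, "tac": 0, "freq": 1},
    {"id": 3, "tac": 0, "freq": 2},
    {"id": 4, "tac": 0, "freq": 1},
]

def group_by_scene(cell_list):
    """Cells sharing (tracking area, frequency point) are treated as scene-similar (illustrative)."""
    groups = defaultdict(list)
    for cell in cell_list:
        groups[(cell["tac"], cell["freq"])].append(cell["id"])
    return dict(groups)

print(group_by_scene(cells))  # {(0, 1): [1, 2, 4], (0, 2): [3]}
```

Each resulting group would then share one first AI model, or pool its collected data into one training data set.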
  • the terminal device receives the broadcast message from the cell and obtains the first network information set corresponding to the cell.
  • the terminal device trains or uses the corresponding first AI model according to the first network information set in the broadcast message.
  • the measurement configuration message includes a neighbor cell measurement list.
• besides sending broadcast messages, the cell can also send the first network information set of the cell in other messages; the first network information set includes a cell information list.
  • the neighbor cell measurement list includes PCI and frequency information of adjacent cells to be measured.
• when the cell sends the measurement configuration message to the terminal device, it carries the first network information set of the cell.
  • the first network information set of the cell is written into the measurement configuration MEAS-CONFIG and other fields in the RRC configuration delivered by the cell.
• list information may be added to indicate that the neighboring cell corresponding to a PCI, or the cell corresponding to a frequency point, has the same cell scene type as the current cell.
• a set of rules can also be predefined in the standard, indicating that the neighboring cells and the current cell have the same cell scene type.
• the terminal device may consider that the neighbor cells included in the cell information list in the first network information set have the same first network information set as the current cell. Therefore, if the terminal device switches to a neighboring cell in the cell information list, it does not need to replace the first AI model, and it can also combine the training data collected in the neighboring cells with the training data collected in the source cell to form the same training data set.
• when a terminal device performs inter-cell handover, the cell carries a scene identifier in the handover command, indicating whether the scene identifier of the target cell and the scene identifier of the source cell have a similar relationship.
  • This identifier can be used by the terminal device to determine whether to maintain the original first AI model in the target cell after completing the cell handover.
  • This identifier can also be used to determine whether the training data collected in the target cell and the training data collected in the source cell belong to the same training data set after the terminal device completes cell switching.
• the tracking area identifier of cell 1 is 0 and its frequency point is 1; the tracking area identifier of cell 2 is 0 and its frequency point is 1; the tracking area identifier of cell 3 is 0 and its frequency point is 2; the tracking area identifier of cell 4 is 0 and its frequency point is 1. When the terminal device moves from cell 1 to cell 2, the handover command sent by cell 1 carries the scene identifier.
• the scene identifier may be only a yes-or-no indication of whether the scene identifier of the handover destination cell is the same as the scene identifier of the source cell.
  • the terminal device moves from cell 1 to cell 2, and the scene identifier is "yes", which means that the scene identifier of cell 2 after the handover is the same as the scene identifier of cell 1.
• the handover command carries a cell list, which includes information on one or more cells, indicating that the scene identifier of the destination cell and the scene identifiers of the cells in the list have a similar relationship.
  • the terminal device can determine whether the first AI model needs to be replaced when switching to the destination cell.
  • the terminal device may also determine whether the training data collected in the destination cell needs to belong to a different training data set from the training data collected in the source cell. For example, when the terminal device moves from cell 1 to cell 2, the cell list in the handover command includes cell 1, cell 2, and cell 4.
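The handover decision of whether to keep the current first AI model and training data set, based on the cell list carried in the handover command, can be sketched as (function name and list shape are hypothetical):

```python
def keep_model_after_handover(source_cell, target_cell, similar_cell_list):
    """Return True if source and target cells share a scene, so the terminal
    can keep its first AI model and training data set after handover (illustrative)."""
    return source_cell in similar_cell_list and target_cell in similar_cell_list

# Handover command from cell 1 carries the similar-scene list [1, 2, 4].
print(keep_model_after_handover(1, 2, [1, 2, 4]))  # True: keep the model
print(keep_model_after_handover(1, 3, [1, 2, 4]))  # False: switch model / data set
```

A True result also means the data collected in the target cell can join the source cell's training data set.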
  • the cell can also obtain the terminal device's support for the cell's own scene identifier through interaction of context information of the terminal device.
  • the source cell can send the context of the terminal to the destination cell.
  • the source cell sends the scene identification support status of the terminal device in the source cell to the destination cell.
• the destination cell can directly obtain the scene identification support status of the terminal device without re-querying it.
• if the terminal device does not receive the first network information set of the cell, the terminal device may consider that there is no corresponding first AI model in the current cell and cannot use a first AI model; however, it may still collect training data in the current cell and associate the collected training data with the first network information set, as a training data set for the AI model corresponding to the cells in the first network information set.
• if the cell does not send the first network information set of the cell, it can be agreed in a predefined manner that the terminal device does not support the first identifier of the cell, or that the terminal device does not have a corresponding first AI model.
• if the terminal cannot obtain the corresponding first AI model, for example because its own model library does not contain a model for this type of cell, the terminal cannot use the corresponding scene model in this cell; at this time, the terminal can inform the cell of its support status for the cell's first identifier or first AI model.
  • the terminal device can still collect data in the cell for training the first AI model.
  • the training data set of the first AI model is determined by the terminal device itself.
• the corresponding first network information set can be indicated to the terminal device by defining the scene similarity relationship between geographically adjacent cells, without defining specific scene type categories, thereby reducing dependence on scene category definitions.
  • the scenarios of geographically adjacent cells are similar. Therefore, by defining similar relationships in geographical locations, the effect of indicating scene identification can be achieved.
• Embodiment 3 uses a first PMI identifier to indicate similar scenarios, and the network device uses a broadcast message to send the first identifier.
  • the first PMI identifier is used to indicate coarse-grained characteristics of the current channel.
  • the coarse-grained characteristics of the channel may include: delay distribution characteristics of the channel, angular distribution characteristics of the channel, Doppler distribution characteristics of the channel, etc.
  • the flow chart for determining the AI model includes steps S301-S302:
  • the cell sends the first PMI identifier corresponding to the current cell to the terminal device through a broadcast message.
  • the broadcast message may include SIB1, where SIB1 carries the first PMI identifier corresponding to the current cell.
  • the network device sends a first PMI identifier according to the codebook information in the current protocol.
  • the current protocol is the R16 codebook.
  • the terminal device and the network device agree on the configuration used by the indicated PMI and the number of bits to be transmitted.
  • the network device can use the PMI to characterize a type of characteristics of the channel of the current cell, such as characteristics in the channel angle and delay domain.
  • the terminal device receives the first PMI identifier, and the terminal device determines the corresponding first AI model according to the first PMI identifier.
  • the terminal device may select the first AI model corresponding to the PMI from its own stored models.
  • the terminal device can select the first AI model corresponding to the PMI from the models stored in the terminal manufacturer's cloud server.
• the terminal device can select, from the first AI models corresponding to existing PMIs, the model whose channel feature vector is closest to that of the indicated PMI, for example the one with the largest cosine similarity between the channel feature vectors corresponding to the two PMIs.
• PMIs corresponding to channel feature vectors with large channel feature distinctions can be selected as markers of channel features. For example, among the channel feature vectors corresponding to the PMIs, PMIs with a large difference in delay distribution, a large difference in angle domain distribution, or a large difference in the combined distribution of the two are used as first PMI identifiers.
• the specific PMI used may not be limited in advance and is decided by implementation. For example, if, based on the CSI feedback configuration type defined by the standard, 100 PMIs can be used, then the network device can indicate the current channel characteristics through any one of these 100 PMIs.
• when the terminal device trains the first AI model, it can train one first AI model for each of the 100 PMIs, or it can classify the PMIs corresponding to possibly similar channel feature vectors into one category based on its own prior information, thereby reducing training costs. For example, the terminal device may have trained only 50 first AI models, and each first AI model may use one or more PMIs as its scene identifiers.
• the terminal device may not have trained models for certain channel characteristic scenarios corresponding to a PMI, for example because no data has been collected in the cell corresponding to the PMI indication. Since the eigenvectors corresponding to the PMIs defined by the standard are a series of vectors approximately uniformly distributed in a high-dimensional space, the channel characteristics of PMIs that are close in cosine similarity generally tend to be similar. Therefore, if the terminal device cannot obtain the first AI model corresponding to the PMI indicated by the network device, it can determine a first AI model by comparing the cosine similarity of the channel feature vectors against the indicated PMI, taking the most similar AI model as the corresponding first AI model.
• it can be predefined that the terminal device selects the PMI corresponding to the closest feature vector.
• a threshold can be predefined; for example, if the difference measured by cosine similarity is within the threshold, a similar corresponding first AI model can be selected.
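Selecting a first AI model by cosine similarity between channel feature vectors, with a predefined threshold, might be sketched as follows (the vectors, PMI names, and 0.8 threshold are invented for illustration):

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_model(indicated_vec, pmi_vectors, threshold=0.8):
    """Pick the stored PMI whose channel feature vector is closest in cosine
    similarity to the indicated one; return None if nothing is similar enough."""
    best_pmi, best_sim = None, -1.0
    for pmi, vec in pmi_vectors.items():
        sim = cosine_similarity(indicated_vec, vec)
        if sim > best_sim:
            best_pmi, best_sim = pmi, sim
    return best_pmi if best_sim >= threshold else None

stored = {"pmi_7": [1.0, 0.0], "pmi_12": [0.6, 0.8]}
print(nearest_model([0.9, 0.1], stored))   # pmi_7
print(nearest_model([0.0, -1.0], stored))  # None: no stored PMI is similar enough
```

Real channel feature vectors would be high-dimensional codebook vectors rather than 2-D toy values.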
• the terminal device can also calculate the current PMI by itself; if there is a large gap from the PMI in the first PMI identifier sent by the network device, the PMI sent by the network device prevails.
• the terminal device can measure the downlink channel through CSI-RS and obtain the corresponding PMI. There may therefore be a certain difference between the first PMI identifier used by the network device to represent channel characteristics and the PMI obtained by the terminal device when measuring the downlink channel. This is because the first PMI identifier used to identify channel characteristics is generally a statistic of the overall channel within a certain coverage of the network device, representing the overall channel characteristics of a large number of terminal devices in the area, with a single PMI describing these statistical characteristics. In contrast, the PMI measured and calculated by the terminal is an instantaneous channel characteristic, which may differ somewhat from the overall statistical channel characteristics due to multipath effects at the measurement time.
• after receiving the first PMI identifier sent by the network device, the terminal device performs channel measurement for a period of time and calculates several PMIs. If it finds that the calculated PMIs differ from the PMI sent by the base station, it notifies the network device that the terminal device does not support this scene representation.
• the terminal device can also obtain channel characteristics by measuring the channel, such as the delay domain power spectrum and the angle domain power spectrum, and compare them directly with the delay and angle distribution corresponding to the PMI indicated by the network device. If the difference is large, the network device can likewise be informed that the terminal device does not support this scenario.
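A sketch of this consistency check, where the terminal compares its own measured PMIs over a period with the indicated first PMI identifier and decides whether to notify the network that the scene representation is unsupported (the majority rule and exact-match tolerance are assumptions, not specified in this application):

```python
def supports_scene_representation(indicated_pmi, measured_pmis, tolerance=0):
    """After measuring the channel for a period and computing several PMIs,
    decide whether the indicated first PMI identifier matches local observations."""
    matches = sum(1 for p in measured_pmis if abs(p - indicated_pmi) <= tolerance)
    # If most measured PMIs differ from the indicated one, the terminal would
    # notify the network that this scene representation is not supported.
    return matches >= len(measured_pmis) / 2

print(supports_scene_representation(7, [7, 7, 8, 7]))   # True: keep the indicated scene
print(supports_scene_representation(7, [3, 2, 15, 1]))  # False: notify the network
```

A real implementation would compare feature-vector distances rather than raw PMI indices, as described above.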
  • the terminal device may consider that the first AI model is not used in the current cell.
  • the terminal device can collect training data in the current cell and associate the training data with the PMI identifier determined by the terminal device itself as a training data set for the AI model corresponding to the PMI identifier determined by the terminal device.
• Embodiment 3 uses channel characteristics as scene identifiers; compared with scene types, this describes the channel characteristics of the corresponding scene more accurately. At the same time, compared with indicating directly through characteristics such as the delay domain and angle domain distributions, using the existing PMI as an indicator can describe the distribution characteristics of the channel to a certain extent while avoiding work such as redefining in the standard how those distribution characteristics are described.
• Embodiment 4 uses scene mark identifiers to indicate similar scenarios, and the network device sends the first identifier through a broadcast message.
• in the foregoing embodiments, the first identifier used is applicable to the terminal equipment and network equipment of multiple manufacturers.
  • the scene type identifier and the identifier indicated by the network device can be understood by terminal devices of multiple different manufacturers.
  • the scene type identifier is "UMA”
  • no matter which manufacturer generates the terminal device they can know that the scene corresponds to "UMA”.
  • UMA s first AI model.
  • different terminal equipment manufacturers may implement different first AI models in UMA scenarios. In dual-end model scenarios, the first AI models used by network devices to match different terminal equipment manufacturers may also be different.
  • the indexes have clear physical meanings.
• an index without physical meaning, such as a mark, may also be used directly as the index of the scene for the terminal device to obtain the corresponding first AI model.
  • the terminal device and the network device need to agree on the corresponding relationship between the marker and the corresponding first AI model.
• marks need to be unique whenever possible.
• each scene is marked, and different scenes have different marks; each AI model is also marked, and different AI models have different marks. There is a mapping relationship between the mark of a scene and the mark of the AI model corresponding to that scene.
• each scene is marked, and different scenes correspond to different marks; the training data set of each AI model is also marked, and the marks of the training data sets corresponding to different AI models are also different. There is a mapping relationship between the marks of the scenes and the marks of the training data sets of the corresponding AI models.
  • the mapping relationship can be predetermined between manufacturers and sent to the network equipment and/or terminal equipment produced by the manufacturers.
  • the network device and/or the terminal device are in the same scene, and the scene tag of the scene is registered in a list known to both parties.
  • the mapping relationship can be a preset list, stored in the network device or terminal device.
• network equipment manufacturer A and terminal equipment manufacturer B have agreed in advance and trained the dual-end AI model, forming a match between manufacturer A's network device A' and manufacturer B's terminal device B'.
• the network device A' and the terminal device B' can use the same set of preset lists to represent the mapping relationship between different scenes and the corresponding first AI models. For example, if the mark of a certain scene 1 is "1", the mark of the first AI model 1 corresponding to scene 1 can be "1'", and the marks of scene 1 and the first AI model 1 are recorded in the preset list.
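The preset-list mapping above can be sketched as a simple lookup table; the marks and the helper name below are illustrative only, following the "1" to "1'" example:

```python
# Hypothetical preset list shared by network device A' and terminal
# device B': each scene mark maps to the mark of the corresponding
# first AI model (values follow the "1" -> "1'" example above).
PRESET_LIST = {
    "1": "1'",  # scene 1 -> first AI model 1
    "2": "2'",  # scene 2 -> first AI model 2
}

def model_mark_for_scene(scene_mark):
    """Look up the first-AI-model mark recorded for a scene mark."""
    if scene_mark not in PRESET_LIST:
        raise KeyError("scene mark %r is not in the preset list" % scene_mark)
    return PRESET_LIST[scene_mark]
```

Because both sides hold the same list, indicating the short scene mark is enough for the terminal to resolve the model to train or use.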
• FIG. 14 is a flow chart for determining the AI model, including steps S401-S403.
• S401: The terminal device and the network device determine the scene mark identifiers in different scenarios through a pre-agreed method.
  • both parties agree on a common scene mark identifier.
• the terminal device registers on the network device side and obtains the corresponding scene mark identifier, so that the terminal device itself can establish a mapping relationship between the scene mark identifier and the specific scene.
  • the mapping relationship can be a preset list, stored in the network device or terminal device.
  • the terminal device performs offline registration on the network device side for different scenarios.
• the terminal equipment manufacturer completes the registration of scene tags for different scenarios in advance, associates them with the corresponding scenes, and sends the correspondence between the tags and the scenes to the network equipment manufacturer to complete the registration. This allows the terminal device to use the scene mark directly after accessing the network.
  • the terminal device registers online on the network device side for different scenarios.
• when the terminal device is in a new scene, that is, the terminal device considers that the new scene is not stored in the preset list, the terminal device initiates a registration request to the network device, and the network device allocates a corresponding scene tag to the terminal device in this scenario, so that the network device and the terminal device can establish a mapping relationship between the scene and the scene tag.
  • the terminal device and the network device are registered in a preset list managed by a third party for different scenarios. For example, before the terminal device and the network device are put on the market, they jointly agree on different scene mark identifiers for different scenarios, and register the scene mark identifier and corresponding scene information in a list managed by a third party. After the terminal device accesses the network, the network device determines a scene mark identifier common to both parties corresponding to the current scene based on the current scene, and indicates it to the terminal device. This allows network equipment and terminal equipment to establish a mapping relationship between scenes and scene tags.
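The online-registration variant can be sketched as follows; the class and scene strings are hypothetical, intended only to show the network allocating a new scene mark for an unregistered scene and returning the already-assigned mark otherwise:

```python
# Sketch of online registration: when the terminal reports a scene that
# is not yet in the preset list, the network allocates a new scene mark;
# both sides then record the scene -> mark mapping.  All names are
# illustrative, not taken from the specification.
class SceneRegistry:
    def __init__(self):
        self.scene_to_mark = {}
        self._next_mark = 0

    def register(self, scene_info):
        """Return the existing mark, or allocate one for a new scene."""
        if scene_info not in self.scene_to_mark:
            self.scene_to_mark[scene_info] = self._next_mark
            self._next_mark += 1
        return self.scene_to_mark[scene_info]

network = SceneRegistry()
mark = network.register("urban-macro, 32 ports")  # new scene: mark allocated
same = network.register("urban-macro, 32 ports")  # known scene: same mark
```

The same structure serves the offline and third-party-list variants; only where the registry lives and when `register` is called differ.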
  • the network device sends the scene mark identifier corresponding to the current network device to the terminal device through dedicated signaling.
  • the dedicated signaling may be dedicated signaling between the network device and the terminal device.
• the dedicated signaling may be an RRC configuration message.
  • the scene mark identifier is carried through dedicated signaling.
  • both the network device and the terminal device store a list of scene tags corresponding to the current scene.
  • the terminal device receives the scene mark identifier from the network device, and determines the corresponding first AI model according to the scene mark identifier.
  • the terminal device obtains the first AI model corresponding to the identifier in the preset list according to the scene mark identifier.
  • the terminal device determines a training data set corresponding to the scene according to the scene mark identifier and the collected training data, and the training data set is used to train the first AI model.
  • the terminal device and the network device may determine in a pre-agreed manner that the first AI models corresponding to different scene mark identifiers are different.
  • the terminal device and the network device may determine in a pre-agreed manner that the training data sets of the first AI model corresponding to different scene mark identifiers are different.
  • the terminal device and the network device may determine in a pre-agreed manner that the first AI models corresponding to different scene mark identifiers are the same.
  • the terminal device and the network device may determine in a pre-agreed manner that the training data sets of the first AI models corresponding to different scene mark identifiers are the same.
• different scene tags may correspond to the same or similar network configuration parameters, such as network port parameters and beam configuration parameters, because the network side considers that the same or similar configuration is currently in use. Since the terminal cannot directly judge from the configuration parameters alone whether data should be divided into different data sets, it needs identifiers to classify the data sets. For example, the terminal may initially classify data collected under different network configuration parameters separately; after the network device provides identification information indicating that the data types under multiple configuration parameters are the same, the collected data can be merged into one training data set.
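The merging behaviour described above can be illustrated with a short sketch: the terminal groups collected samples by the identifier the network provides rather than by the raw configuration parameters, so samples gathered under different configurations but the same identifier end up in one training data set (all names and sample values here are illustrative):

```python
from collections import defaultdict

def build_training_sets(samples):
    """Group collected samples into training data sets by scene mark.

    samples: iterable of (scene_mark, data) pairs, where scene_mark is
    the identifier the network device provided for that collection.
    """
    training_sets = defaultdict(list)
    for scene_mark, data in samples:
        training_sets[scene_mark].append(data)
    return dict(training_sets)

sets_ = build_training_sets([
    (1, "csi-a"),  # collected under configuration parameters A
    (1, "csi-b"),  # configuration B, same identifier -> merged with A's data
    (2, "csi-c"),  # different identifier -> separate training data set
])
```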
  • different terminal devices may register different scene tags in the same scene.
  • the network device can clearly indicate the first AI model that the terminal device should use, which is more efficient.
• when the terminal device does not receive the first identifier, the terminal device sends a first message to the network device, and the first message is used to instruct the network device to send the first identifier.
• the first message carries at least one of: the first identifiers supported by the terminal device, the current scene information of the terminal device, and request information with which the terminal device requests the network device to send the first identifier.
  • the current scene can be expressed as multiple scene identifiers, and the first identifier is the general name of these scene identifiers.
  • the terminal device sends the first identification to the network device through the first message for obtaining the first AI model
• the terminal device reports to the network device, through the first message, all scenarios supported by the terminal device itself.
  • the network device further determines the first identifier of the terminal device based on the scene information reported by the terminal device and the scene information of the network device.
  • the terminal device after receiving the terminal device capability query information of the network device, the terminal device sends the first message to the network device.
  • the terminal device sends the current scene information through the first message.
  • the current scene information may be the scene information or scene identifier that the terminal device itself determines is currently in.
• the network device further determines the first identifier of the terminal device based on the scene information reported by the terminal device and the scene information of the network device.
• when the terminal device receives the first identifier, the terminal device sends a second message to the network device, and the second message is used to instruct the network device to send the second identifier.
• the second message carries at least one of: the first identifiers supported by the terminal device, the current scene information of the terminal device, and request information with which the terminal device requests the network device to send the second identifier.
  • the second identification is different from the first identification.
  • the terminal device cannot obtain the corresponding AI model.
  • the terminal device can report the first identifier supported by the terminal device to the network device by sending a second message.
  • the terminal device considers that the scene corresponding to the first identifier is inconsistent with the scene in which the terminal device is currently located, and sends a request message to the network device to request the network device to send the second identifier.
  • the second identification is different from the first identification.
  • the terminal device can obtain the current scene information and report the scene information to the network device through the second message.
  • the network device further determines the second identity of the terminal device based on the scene information reported by the terminal device and the scene information of the network device.
  • the network device sends the second identification to the terminal device.
  • the second identification is different from the first identification.
• the terminal device can also report, through the second message, all scenarios supported by the terminal device itself, that is, the scenes corresponding to all first AI models supported by the terminal device.
  • the network device further determines the second identification of the terminal device based on the scene information reported by the terminal device and the scene information of the network device.
  • the network device sends the second identification to the terminal device.
  • the second identification is different from the first identification.
  • the embodiments of this application do not limit the specific content reported by the terminal device.
  • the terminal device after receiving the terminal device capability query information of the network device, the terminal device sends the second message to the network device.
• the network device determines a first identifier based on the multiple scenarios reported by the terminal device and the multiple scenarios supported by the network device. For example, the coverage of the network device is 80% indoor scenes and 20% outdoor scenes; the network device then sends the indoor scene identifier through the broadcast message, and the terminal device receives the indoor scene identifier from the network device. However, based on the surrounding scene information, the terminal device considers the scene around it to be an outdoor scene. If the terminal device believes that the first identifier sent by the network device is inconsistent with the scene in which the terminal device is currently located, it can report the scene information around the terminal device and instruct the network device to send the outdoor scene identifier.
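As an illustrative sketch of this exchange (function names and scene labels are hypothetical), the network broadcasts the identifier of its dominant coverage scene, and a terminal that observes a different scene requests the identifier matching its own observation:

```python
def broadcast_identifier(coverage):
    """Pick the broadcast scene identifier from coverage fractions,
    e.g. {'indoor': 0.8, 'outdoor': 0.2} -> 'indoor'."""
    return max(coverage, key=coverage.get)

def identifier_for_terminal(broadcast, observed_scene):
    """Keep the broadcast identifier only if it matches the scene the
    terminal observes; otherwise request the observed scene's identifier."""
    return broadcast if broadcast == observed_scene else observed_scene

bcast = broadcast_identifier({"indoor": 0.8, "outdoor": 0.2})
chosen = identifier_for_terminal(bcast, "outdoor")
```

This mirrors the 80%/20% example: the broadcast carries "indoor", but the outdoor terminal ends up using the outdoor identifier after its report.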
  • the terminal device may report the scene information of the terminal device through the first message.
  • the terminal device is located in an outdoor scene.
  • the terminal device may report the scenario information supported by the terminal device through the first message.
  • the terminal device supports outdoor scenarios and urban macro cell scenarios.
  • the scene where the network device is located includes indoor scenes and outdoor scenes. Therefore, the network device indicates the outdoor scene identifier of the terminal device.
  • the first message is medium access control-control element (MAC-CE) signaling.
  • the first message is a UE capability reporting message.
  • the terminal device may also send scene information around the terminal device or the first message before the network device sends the first identification.
  • the terminal device considers that the surrounding scene has changed, and may also send scene information or the first message surrounding the terminal device to the network device.
• the terminal device can collect training data in the current scene for training the first AI model, and the first AI model obtained by training corresponds to the first identifier sent by the network device.
  • the second identification corresponds to the second AI model.
  • the network device and/or the terminal device when the scene in which the network device and/or the terminal device is located changes, the network device and/or the terminal device performs switching of the first AI model.
  • the embodiment of this application takes the scene change of the terminal device as an example.
• for example, the terminal device switches from the coverage of network device 1 to the coverage of network device 2. Afterwards, the terminal device switches the first AI model and determines that the training data set to which the collected training data belongs has changed.
• the network device 2 directly indicates, to the terminal device, the first identifier corresponding to the network device 2.
  • the terminal device uses the corresponding first AI model according to the first identification corresponding to the network device 2, or determines that the training data collected in the cell where the network device 2 is located belongs to the training data set used to train the first AI model.
  • the network device 2 can indicate the terminal device through a scene mark identifier.
  • the terminal device trains or uses the corresponding first AI model according to the first identifier corresponding to the network device 2 .
  • the scene mark identifier of network device 1 is 1, and the scene mark identifier of network device 2 is 2.
• the terminal device trains or uses the AI model corresponding to scene mark identifier 2 according to the preset list, or determines that the training data collected in the cell where network device 2 is located belongs to the training data set corresponding to scene mark identifier 2.
• the network device 2 may indicate the terminal device through a scene type identifier; in this case, the scene type identifier is the first identifier of the network device 2.
  • the terminal device trains or uses the corresponding first AI model according to the first identification corresponding to the network device 2 .
  • the scene type identifier of network device 1 is an indoor scene identifier
  • the scene type identifier of network device 2 is an outdoor scene identifier.
  • the terminal device learns the scene type of network device 2 according to the broadcast message or dedicated signaling message sent by network device 2.
  • the scene type identifier is an outdoor scene identifier
• the terminal device uses the first AI model corresponding to the outdoor scene identifier, or attributes the training data collected in the cell where the network device 2 is located to the training data set corresponding to the outdoor scene.
• the terminal device learns, based on the broadcast message or dedicated signaling message sent by network device 2, that the scene type identifier of network device 2 is an indoor office scene identifier.
  • the terminal device can train or use the first AI model corresponding to the indoor scene identifier, or can train or use the first AI model corresponding to the indoor office scene identifier.
  • the network device sends scene indication information, and the terminal device trains or uses the first AI model according to the scene indication information.
  • the scene indication information may be a neighbor cell list.
• the terminal device may receive the first identifier of neighboring cell 2. If the first identifier of cell 2 is the same as the first identifier of cell 1, then when the terminal device moves to cell 2 adjacent to cell 1, the terminal device does not need to switch the first AI model; in other words, the terminal device can combine the training data collected in cell 2 and the training data collected in cell 1 into a common training data set. If the first identifier of cell 2 is different from the first identifier of cell 1, then when the terminal device moves to cell 2 adjacent to cell 1, the terminal device switches the first AI model according to the first identifier of cell 2; in other words, the terminal device attributes the training data collected in cell 2 and the training data collected in cell 1 to different training data sets.
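The neighbour-cell rule above can be condensed into a small decision helper (names and identifier strings are illustrative): the terminal switches the first AI model, and opens a separate training data set, only when the target cell advertises a different first identifier:

```python
def on_handover(current_id, target_id):
    """Decide what to do with the first AI model at cell handover.

    Returns 'keep' when the target cell's first identifier matches the
    serving cell's (reuse the model, merge training data into one set),
    or 'switch' when it differs (change model, start a separate set).
    """
    return "keep" if current_id == target_id else "switch"
```

Distributing identifiers via the neighbour cell list lets the terminal evaluate this rule before the handover completes.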
  • the terminal device when the scene in which the terminal device is located changes, the terminal device reports new scene information to the network device, and the network device indicates the second identifier according to the scene information sent by the terminal device.
  • the terminal device when the scene in which the terminal device is located changes, the terminal device does not change the first AI model, or in other words, the terminal device can attribute the training data collected in the new scene to the same training data set. For example, the coverage of network equipment is 80% of indoor factory scenes and 20% of indoor office scenes.
  • the network device sends the indoor factory scene identification through broadcast messages.
  • the terminal device receives the indoor factory scene identifier from the network device.
• the terminal device may select the first AI model corresponding to the indoor factory scene identifier, or may select the first AI model corresponding to the indoor scene identifier.
• if the terminal device uses the first AI model corresponding to the indoor scene identifier, then when the terminal device moves from the indoor factory scene to the indoor office scene, the indoor office scene also corresponds to that indoor first AI model, and the terminal device does not need to switch the first AI model.
  • the terminal device can also attribute the training data collected in the indoor office scene to the training data set corresponding to the indoor scene.
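The coarse/fine scene behaviour above can be sketched with a hypothetical two-level hierarchy: if the terminal operates on the parent scene type (for example "indoor"), moving between its sub-scenes (factory to office) requires no model switch, while operating on the fine-grained type would:

```python
# Illustrative parent mapping; the scene names are examples only.
PARENT = {
    "indoor-factory": "indoor",
    "indoor-office": "indoor",
    "outdoor": "outdoor",
}

def needs_switch(active_scene, new_scene, use_parent):
    """True if moving to new_scene requires switching the first AI model.

    use_parent=True models a terminal that selected the model of the
    coarse (parent) scene type rather than the fine-grained one.
    """
    if use_parent:
        return PARENT[active_scene] != PARENT[new_scene]
    return active_scene != new_scene
```

The same comparison decides whether training data collected in the new scene joins the existing training data set or starts a new one.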
• when the network device uses a broadcast message to send the first identifier of the network device, it can determine whether the terminal device supports the first identifier broadcast by the network device by pre-dividing the random access preamble sequences, and further determine whether to activate subsequent AI air interface applications.
• for example, the 64 random access preamble sequences can be divided into two groups: random access preamble sequences 0 to 31 are specified as group A, indicating that the terminal device supports the first identifier broadcast by the network device, that is, the terminal device has the first AI model corresponding to the first identifier; random access preamble sequences 32 to 63 are group B, indicating that the terminal device does not support the first identifier broadcast by the network device, that is, there is no first AI model corresponding to the first identifier.
  • the classification of random access preamble sequences can also be in other forms.
• for example, random access preamble sequences 32 to 48 may indicate that the terminal device does not support the first identifier broadcast by the network device, but supports the urban micro cell scene type. The embodiments of this application do not limit this.
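As a sketch of the preamble-based capability indication (the 0-31/32-63 split is only one example; as noted above, other groupings are possible):

```python
def preamble_group(preamble_index):
    """Map a random access preamble index to its capability group.

    Indices 0-31 (group A) signal that the terminal holds the first AI
    model for the broadcast first identifier; 32-63 (group B) signal
    that it does not.  The split point is an illustrative assumption.
    """
    if not 0 <= preamble_index <= 63:
        raise ValueError("preamble index must be in 0..63")
    return "A-supported" if preamble_index < 32 else "B-unsupported"
```

By observing which group the chosen preamble falls in, the network can decide whether to activate subsequent AI air interface applications without extra signalling.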
  • Embodiment 5 The terminal device reports the scenario information it supports, and the network device determines the scenarios that can be indicated.
  • the network device actively sends the first identifier to the terminal device.
  • the terminal device can further request the network device to send the first identifier, and the terminal device reports the scenario that it supports.
  • a flow chart for a terminal device to report scene information includes steps S501-S504.
• the terminal device receives a scenario support capability query message from the network device.
• before receiving the capability query message, the terminal device may request the network device to send the first identifier.
• the terminal device sends a reporting message according to the scene support capability query message, reporting the scene types it supports.
  • the reporting message may include: the terminal device's support capabilities for various scenarios.
• if the scene type identifier has been defined, the terminal device can report the subset of scene types in the scene type identifier that it supports, indicating its support capability for those scene types.
  • the terminal device can also request the network device to send specific scene information.
  • the terminal may send a request for specific scenario information through a MAC-CE.
  • the MAC-CE may also carry the scenario types supported by the terminal device.
  • the network device sends a broadcast message or dedicated signaling to the terminal device, and the broadcast message or dedicated signaling carries the first identifier.
• the terminal device confirms the first identifier sent by the network device and determines the corresponding AI model.
  • step S502 can be performed again, or the scene information of the terminal device can be used to train the first AI model.
• this allows the terminal device to handle the situation where the network device does not send the first identifier, or sends a first identifier that the terminal device does not support. Based on the scenario support reported by the terminal device, the network device may further determine the first identifier or the second identifier available to the terminal device.
  • FIG. 16 is a schematic structural diagram of a communication device that can be used to execute the algorithm model acquisition method provided by the embodiment of the present application.
  • the communication device 500 may be a network device or a terminal device, or may be a chip or other component with corresponding functions in the network device or terminal device.
  • the communication device 500 may include a processor 501 .
  • the communication device 500 may also include one or more of a memory 502 and a transceiver 503.
  • the processor 501 may be coupled to one or more of the memory 502 and the transceiver 503, for example, through a communication bus, or the processor 501 may be used alone.
  • the processor 501 is the control center of the communication device 500, and may be a processor or a collective name for multiple processing elements.
• the processor 501 is one or more central processing units (CPU), may also be an application specific integrated circuit (ASIC), or may be one or more integrated circuits configured to implement the embodiments of the present application, such as one or more microprocessors (digital signal processor, DSP), or one or more field programmable gate arrays (FPGA).
  • the processor 501 can perform various functions of the communication device 500 by running or executing software programs stored in the memory 502 and calling data stored in the memory 502 .
  • the processor 501 may include one or more CPUs, such as CPU0 and CPU1 shown in FIG. 16 .
  • the communication device 500 may also include multiple processors, such as the processor 501 and the processor 504 shown in FIG. 16 .
  • processors can be a single-core processor (single-CPU) or a multi-core processor (multi-CPU).
  • a processor here may refer to one or more communications devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
• the memory 502 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compressed optical discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, without limitation.
  • the memory 502 may be integrated with the processor 501 or may exist independently and be coupled to the processor 501 through the input/output port (not shown in Figure 16) of the communication device 500. This is not specifically limited in the embodiment of the present application.
  • the input port can be used to implement the receiving function performed by the network device or the terminal device in any of the above method embodiments
  • the output port can be used to implement the sending function performed by the network device or the terminal device in any of the above method embodiments.
  • the memory 502 can be used to store software programs for executing the solution of the present application, and the processor 501 controls the execution.
  • the processor 501 controls the execution.
  • the transceiver 503 is used for communication with other communication devices.
  • the transceiver 503 may include a receiver and a transmitter (not shown separately in FIG. 16). Among them, the receiver is used to implement the receiving function, and the transmitter is used to implement the sending function.
• the transceiver 503 may be integrated with the processor 501, or may exist independently and be coupled to the processor 501 through the input/output port (not shown in Figure 16) of the communication device 500. This is not specifically limited in the embodiment of the present application.
  • the structure of the communication device 500 shown in FIG. 16 does not constitute a limitation on the communication device.
• the actual communication device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
• the above-mentioned actions of the network device in FIGS. 1-15 can be executed by the processor 501 in the communication device 500 shown in FIG. 16 calling the application program code stored in the memory 502 to instruct the network device to execute.
  • the above-mentioned actions of the terminal device in Figures 2 to 15 can be executed by the processor 501 in the communication device 500 shown in Figure 16 by calling the application code stored in the memory 502 to instruct the terminal device to execute.
• when the communication device is a network device, the communication device 500 can execute any one or more implementations related to the network device in the above method embodiments; when the communication device is a terminal device, the communication device 500 can execute any one or more embodiments related to the terminal device in the above method embodiments.
  • An embodiment of the present application provides a communication system.
  • the communication system includes: terminal equipment and network equipment.
  • the terminal device is used to perform the actions of the terminal device in the above method embodiments.
  • the network device is used to perform the actions of the network device in the above method embodiments.
  • Embodiments of the present application provide a chip system, which includes a logic circuit and an input/output port.
  • the logic circuit can be used to implement the processing functions involved in the methods provided by the embodiments of the present application, and the input/output ports can be used for the transceiver functions involved in the methods provided by the embodiments of the present application.
  • the input port can be used to implement the receiving function involved in the method provided by the embodiment of the present application
  • the output port can be used to implement the sending function involved in the method provided by the embodiment of the present application.
  • the processor in the communication device 500 may be used to perform, for example, but not limited to, baseband related processing, and the transceiver in the communication device 500 may be used to perform, for example, but not limited to, radio frequency transceiver.
  • the above-mentioned devices may be arranged on separate chips, or at least part or all of them may be arranged on the same chip.
• processors can be further divided into analog baseband processors and digital baseband processors. The analog baseband processor can be integrated with the transceiver on the same chip, while the digital baseband processor can be set on an independent chip. With the continuous development of integrated circuit technology, more and more devices can be integrated on the same chip.
  • a digital baseband processor can be integrated with a variety of application processors (such as but not limited to graphics processors, multimedia processors, etc.) on the same chip.
• such a chip can be called a system on chip (SoC). Whether each device is independently installed on different chips or integrated on one or more chips often depends on the specific needs of product design. The embodiments of the present application do not limit the specific implementation forms of the above devices.
  • the chip system also includes a memory, which is used to store program instructions and data for implementing the functions involved in the methods provided by the embodiments of this application.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • Embodiments of the present application provide a computer-readable storage medium.
  • the computer-readable storage medium stores computer programs or instructions. When the computer program or instructions are run on a computer, the method provided by the embodiments of the present application is executed.
  • An embodiment of the present application provides a computer program product.
  • the computer program product includes: a computer program or instructions. When the computer program or instructions are run on a computer, the method provided by the embodiment of the present application is executed.
  • the processor in the embodiment of the present application can be a central processing unit (CPU).
• the processor can also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
• the memory may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • non-volatile memory can be read-only memory (ROM), programmable ROM (PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically removable memory. Erase electrically programmable read-only memory (EPROM, EEPROM) or flash memory.
  • Volatile memory can be random access memory (RAM), which is used as an external cache.
  • By way of example but not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus RAM (DR RAM).
  • "At least one" refers to one or more, and "plurality" refers to two or more.
  • "At least one of the following" or similar expressions refer to any combination of these items, including any combination of a single item or a plurality of items.
  • For example, at least one of a, b, or c can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can each be single or multiple.
  • the size of the sequence numbers of the above-mentioned processes does not imply their order of execution.
  • the execution order of each process should be determined by its functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical functional division. In actual implementation, there may be other divisions.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place, or they may be distributed across multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks and other media that can store program code.
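The "at least one of a, b, or c" convention defined above enumerates exactly the non-empty subsets of {a, b, c}. As an illustrative check (not part of the patent text), the seven listed combinations can be generated as follows:

```python
# Enumerate the non-empty subsets of {a, b, c}, matching the seven
# combinations listed in the definition above. Illustrative only.
from itertools import combinations

items = ["a", "b", "c"]
subsets = [
    "-".join(combo)
    for r in range(1, len(items) + 1)
    for combo in combinations(items, r)
]
print(subsets)
# ['a', 'b', 'c', 'a-b', 'a-c', 'b-c', 'a-b-c']
```

There are 2^3 - 1 = 7 such combinations, in agreement with the text.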

Landscapes

  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application relates to a method and apparatus for determining an artificial intelligence (AI) model. By means of the present application, a scene identifier corresponding to a scene is constructed for that scene, so that the AI model corresponding to the scene identifier can be acquired quickly and accurately. The method comprises the following steps: a terminal device acquires a first identifier, the first identifier being used to indicate a first scene; and the terminal device acquires a first AI model, the first AI model corresponding to the first scene.
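The workflow the abstract describes — a terminal device obtains a scene identifier, then retrieves the AI model associated with that scene — can be sketched as a simple identifier-to-model lookup. This is a minimal, hypothetical illustration; the class and method names (`SceneModelRegistry`, `register`, `get_model`) and the example identifiers are assumptions, not taken from the patent:

```python
# Hypothetical sketch of the scene-identifier -> AI-model lookup
# described in the abstract; names and identifiers are illustrative only.
class SceneModelRegistry:
    """Maps scene identifiers to the AI models trained for those scenes."""

    def __init__(self):
        self._models = {}

    def register(self, scene_id, model):
        # Associate an AI model with the identifier of the scene it was
        # trained for (e.g. a particular radio environment or area).
        self._models[scene_id] = model

    def get_model(self, scene_id):
        # The terminal device first acquires the scene identifier, then
        # uses it to acquire the matching AI model.
        return self._models[scene_id]


registry = SceneModelRegistry()
registry.register("scene-1", "model-for-scene-1")
print(registry.get_model("scene-1"))  # -> model-for-scene-1
```

Keying models by a per-scene identifier avoids training one complex model across all scenes, which is the motivation stated in the background.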
PCT/CN2023/105923 2022-07-13 2023-07-05 Procédé et appareil de détermination de modèle d'intelligence artificielle (ia) WO2024012331A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202210823771 2022-07-13
CN202210823771.5 2022-07-13
CN202210970366.6 2022-08-12
CN202210970366.6A CN117459409A (zh) 2022-07-13 2022-08-12 一种确定人工智能ai模型的方法及装置

Publications (1)

Publication Number Publication Date
WO2024012331A1 true WO2024012331A1 (fr) 2024-01-18

Family

ID=89535548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/105923 WO2024012331A1 (fr) 2022-07-13 2023-07-05 Procédé et appareil de détermination de modèle d'intelligence artificielle (ia)

Country Status (1)

Country Link
WO (1) WO2024012331A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117676716A (zh) * 2024-02-01 2024-03-08 荣耀终端有限公司 通信方法、系统及相关设备
CN117676716B (zh) * 2024-02-01 2024-06-11 荣耀终端有限公司 通信方法、系统及相关设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111479314A (zh) * 2020-06-02 2020-07-31 三星电子(中国)研发中心 一种终端功耗调节方法及终端设备
CN112751648A (zh) * 2020-04-03 2021-05-04 腾讯科技(深圳)有限公司 一种丢包数据恢复方法和相关装置
WO2021142609A1 (fr) * 2020-01-14 2021-07-22 Oppo广东移动通信有限公司 Procédé, appareil et dispositif de rapport d'informations, et support d'enregistrement
CN114071484A (zh) * 2020-07-30 2022-02-18 华为技术有限公司 基于人工智能的通信方法和通信装置
US20220103211A1 (en) * 2020-09-27 2022-03-31 Samsung Electronics Co., Ltd. Method and device for switching transmission methods in massive mimo system
CN114696925A (zh) * 2020-12-31 2022-07-01 华为技术有限公司 一种信道质量评估方法以及相关装置


Similar Documents

Publication Publication Date Title
EP4160995A1 (fr) Procédé et dispositif de traitement de données
US20230090022A1 (en) Method and device for selecting service in wireless communication system
WO2020042081A1 (fr) Procédé et appareil pour des services de localisation
EP4132066A1 (fr) Procédé, appareil et système de communication
WO2020147681A1 (fr) Procédé et appareil de gestion d'étiquette pour dispositif terminal
EP4203542A1 (fr) Procédé et appareil de transmission de données
US20220007275A1 (en) Method and network device for terminal device positioning with integrated access backhaul
US20200229055A1 (en) Base station and user equipment for mobile communication system
US20230189057A1 (en) Service traffic steering method and apparatus
Jain et al. User association and resource allocation in 5G (AURA-5G): A joint optimization framework
CN116325686A (zh) 一种通信方法和装置
WO2022038760A1 (fr) Dispositif, procédé et programme de prédiction de qualité de communication
WO2024012331A1 (fr) Procédé et appareil de détermination de modèle d'intelligence artificielle (ia)
WO2021134682A1 (fr) Procédé et dispositif de mesure directionnelle
Kaur et al. OCTRA‐5G: osmotic computing based task scheduling and resource allocation framework for 5G
WO2022165721A1 (fr) Procédé et appareil de partage de modèle dans un domaine ran
CN117459961A (zh) 一种通信方法、装置及系统
CN117459409A (zh) 一种确定人工智能ai模型的方法及装置
WO2021114192A1 (fr) Procédé de réglage de paramètre de réseau et dispositif de gestion de réseau
WO2018121220A1 (fr) Procédé de transmission d'informations système, terminal d'utilisateur et nœud de transmission
WO2024067248A1 (fr) Procédé et appareil d'acquisition d'ensemble de données d'entraînement
WO2023078183A1 (fr) Procédé de collecte de données et appareil de communication
WO2023236774A1 (fr) Procédé et appareil de gestion d'intention
WO2024032202A1 (fr) Procédé et appareil d'indication de point d'émission/réception coordonnée
WO2023208043A1 (fr) Dispositif électronique et procédé pour système de communication sans fil, et support de stockage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23838823

Country of ref document: EP

Kind code of ref document: A1