WO2024092660A1 - Model selection method and device

Model selection method and device

Info

Publication number: WO2024092660A1
Application number: PCT/CN2022/129668
Authority: WO (WIPO, PCT)
Prior art keywords: node, model, model selection, information, present disclosure
Other languages: English (en), French (fr)
Inventors: 牟勤, 李小龙
Original Assignee: 北京小米移动软件有限公司
Application filed by 北京小米移动软件有限公司
Priority to CN202280004229.5A (published as CN118302772A)
Priority to PCT/CN2022/129668 (published as WO2024092660A1)
Publication of WO2024092660A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions

Definitions

  • the present disclosure relates to the field of communication technology, and in particular to a model selection method, device, equipment and storage medium.
  • In communication systems, the widespread application of mobile communication technology has brought great changes to all aspects of people's lives. Among other things, the continuous development of model technology not only brings a variety of rich applications to smart terminal devices, but also promotes industrial upgrading in various industries. During the operation of a model there can be multiple trained models, and when using a model, one of them can be selected for model inference. However, since different models have different inference functions, when the nodes that execute inference are different, the model selection time increases, resulting in low accuracy and efficiency of model selection.
  • the present disclosure proposes a model selection method, device, equipment and storage medium to select a model based on information used for model selection, thereby reducing the model selection time and eliminating the need for multiple nodes to participate in the selection. This can reduce the situation where inaccurate model selection is caused by different inference nodes, and can improve the efficiency and accuracy of model selection.
  • An embodiment of the present disclosure provides a model selection method, which is executed by a first node and includes:
  • information for model selection is determined, and a model is selected according to the information for model selection.
  • Another aspect of the present disclosure provides a model selection method, which is performed by a second node and includes:
  • Information for model selection is sent to a first node, wherein the information for model selection is used to instruct the first node to select a model.
  • Another aspect of the present disclosure provides a model selection method, which is performed by a third node and includes:
  • a model selection result sent by the first node is received; and
  • a relevant operation is performed according to the model selection result.
  • an embodiment provides a model selection device, which is arranged at a first node side and includes:
  • a determination module for determining information for model selection
  • the selection module is used to select a model according to the information used for model selection.
  • an embodiment provides a model selection device, which is arranged at the second node side, and includes:
  • a sending module is used to send information for model selection to a first node, wherein the information for model selection is used to instruct the first node to select a model.
  • an embodiment provides a model selection device, which is arranged on a third node side and includes:
  • a receiving module used for receiving the model selection result sent by the first node
  • the execution module is used to perform relevant operations according to the model selection result.
  • Another aspect of the present disclosure provides a first node, wherein the device includes a processor and a memory, wherein the memory stores a computer program, and the processor executes the computer program stored in the memory so that the device performs the method as provided in the above aspect.
  • a second node is proposed in yet another embodiment of the present disclosure, wherein the device includes a processor and a memory, wherein the memory stores a computer program, and the processor executes the computer program stored in the memory so that the device executes the method proposed in the above embodiment.
  • a third node is proposed in yet another embodiment of the present disclosure, wherein the device includes a processor and a memory, wherein the memory stores a computer program, and the processor executes the computer program stored in the memory so that the device executes the method proposed in the above embodiment.
  • a communication device provided in another aspect of the present disclosure includes: a processor and an interface circuit
  • the interface circuit is used to receive code instructions and transmit them to the processor
  • the processor is used to run the code instructions to execute the method proposed in an embodiment of one aspect.
  • a computer-readable storage medium provided in yet another aspect of the present disclosure is used to store instructions, and when the instructions are executed, the method provided in the embodiment of the first aspect is implemented.
  • a model selection system is provided in another embodiment of the present disclosure, the system comprising:
  • a second node configured to send information for model selection to the first node
  • the first node is used to receive the information for model selection sent by the second node;
  • the first node is further used to select a model according to the information for model selection.
  • a model selection system is provided in another embodiment of the present disclosure, the system comprising:
  • a first node is used to determine information for model selection
  • the first node is further used to select a model according to the information for model selection;
  • the first node is further used to send the model selection result to the third node;
  • the third node is used to receive the model selection result sent by the first node
  • the third node is used to perform relevant operations according to the model selection result.
  • information for model selection is determined; and a model is selected based on the information for model selection.
  • a model selection mechanism can be provided, information for model selection can be determined, situations where inaccurate model selection is reduced, and model selection efficiency can be improved.
  • the present disclosure provides a processing method for a "model selection" scenario, so that model selection is performed based on information for model selection, the model selection time is reduced, and there is no need for multiple nodes to participate in the selection, which can reduce situations where inaccurate model selection is caused by different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG1 is a schematic diagram showing an example of an artificial intelligence framework in a wireless air interface provided by an embodiment of the present disclosure
  • FIG2 is a schematic diagram of a separation architecture of a wireless network provided by an embodiment of the present disclosure.
  • FIG3 is a flow chart of a model selection method provided by an embodiment of the present disclosure.
  • FIG4 is a schematic flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG5 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG6 is a schematic flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG7 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG8 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG9 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG10 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG11 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG12 is a schematic flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG13 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG14 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG15 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG16 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG17 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG18 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG19 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG20 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG21 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG22 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG23 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG24 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG25 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG26 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG27 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG28 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG29 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG30 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG31 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG32 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG33 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG34 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG35 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG36 is an interactive schematic diagram of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG37 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG38 is a flow chart of a model selection method provided by yet another embodiment of the present disclosure.
  • FIG39 is a schematic diagram of the structure of a model selection system provided by an embodiment of the present disclosure.
  • FIG40 is a schematic diagram showing the structure of a model selection system provided by yet another embodiment of the present disclosure.
  • FIG41 is a schematic diagram of the structure of a model selection device provided by an embodiment of the present disclosure.
  • FIG42 is a schematic diagram of the structure of a model selection device provided by another embodiment of the present disclosure.
  • FIG43 is a schematic diagram of the structure of a model selection device provided by another embodiment of the present disclosure.
  • FIG44 is a block diagram of a terminal device provided by an embodiment of the present disclosure.
  • FIG45 is a block diagram of a network side device provided by an embodiment of the present disclosure.
  • the terms first, second, third, etc. may be used to describe various information in the disclosed embodiments, but such information should not be limited by these terms; these terms are only used to distinguish information of the same type from each other.
  • first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • the word "if" as used herein may be interpreted as "at the time of", "when", or "in response to determining".
  • the network elements or network functions involved in the embodiments of the present disclosure may be implemented by independent hardware devices or by software in the hardware devices, and this is not limited in the embodiments of the present disclosure.
  • the widespread application of 5G technology has brought great changes to all aspects of people's lives.
  • the fifth generation of mobile communication technology (5th Generation Mobile Communication Technology, 5G) will penetrate into all areas of the future society and build a comprehensive information ecosystem with users as the center.
  • the user experience rate of 5G can reach 100Mbit/s to 1Gbit/s, which can support the ultimate business experience such as mobile virtual reality (VR).
  • the peak rate of 5G can reach 10 Gbit/s to 20 Gbit/s;
  • the traffic density can reach 10 Mbit/s/m², which can support more than a thousand-fold growth of mobile business traffic in the future;
  • the number of 5G connections can reach 1 million per square kilometer, which can effectively support a large number of IoT devices;
  • the transmission delay of 5G can reach the millisecond level, which can meet the stringent requirements of the Internet of Vehicles and industrial control.
  • 5G can support a mobile speed of 500km/h, which can provide a good user experience in the high-speed rail environment. It can be seen that 5G, as a representative of new infrastructure, will rebuild the future information society.
  • model technology has made continuous breakthroughs in many fields.
  • the continuous development of intelligent voice, computer vision and other fields has not only brought a variety of rich and colorful applications to smart terminals, but also has been widely used in education, transportation, home, medical care, retail, security and other fields, bringing convenience to people's lives while promoting industrial upgrading in various industries.
  • Model technology is also accelerating its cross-penetration with other disciplines. While its development integrates knowledge from different disciplines, it also provides new directions and methods for the development of different disciplines.
  • CSI Channel State Information
  • A research project on artificial intelligence technology in the wireless air interface was established in the radio access network working group RAN1. The project aims to study how to introduce artificial intelligence technology into the wireless air interface and explore how artificial intelligence technology can assist in improving wireless air interface transmission technology.
  • RAN1's discussion on the model includes that, after model training is completed, there may be multiple trained models for the same function, and the best model can be selected from these models for the terminal device (UE) and/or the base station to perform model inference.
  • FIG1 is a schematic diagram of an example of an artificial intelligence framework in a wireless air interface provided by an embodiment of the present disclosure.
  • the process may include, for example, data collection; training data; model training; model deployment or update; inference data; model inference; output; model performance feedback; an actor (Actor); and feedback.
  • AI artificial intelligence
  • the collection of training data refers to data collected from network nodes, management entities or terminals, which serves as the basis for AI/ML model training, data analysis and reasoning.
  • An AI/ML model is a data-driven algorithm that applies machine learning techniques to generate a set of outputs consisting of prediction information and/or decision parameters based on a set of inputs.
  • AI/ML training refers to the online or offline process of training AI/ML models by learning the features and patterns that best represent the data, and obtaining the trained AI/ML models for inference.
  • AI/ML inference refers to the process of using a trained AI/ML model to make predictions or guide decisions based on the collected data and the AI/ML model.
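  • As a reading aid for the workflow described above, the following minimal Python sketch strings together data collection, model training, model inference and performance feedback in a single loop. It is only an illustration; the names AirInterfaceModel and collect_training_data are hypothetical and are not defined by the present disclosure or by 3GPP.

```python
# Illustrative sketch of the AI framework of FIG1; names are hypothetical, not from the disclosure.
from typing import List, Optional


class AirInterfaceModel:
    """A data-driven algorithm that maps a set of inputs to predictions or decision parameters."""

    def __init__(self, weights: Optional[List[float]] = None):
        self.weights = weights or []

    def train(self, training_data: List[float]) -> None:
        # Model training: learn the features/patterns that best represent the collected data.
        self.weights = list(training_data)  # placeholder "learning"

    def infer(self, inference_data: float) -> float:
        # Model inference: produce an output (prediction or decision parameter) from inference data.
        return sum(self.weights) * inference_data if self.weights else 0.0


def collect_training_data() -> List[float]:
    # Data collection from network nodes, management entities or terminals (placeholder samples).
    return [0.1, 0.2, 0.3]


def run_pipeline() -> None:
    model = AirInterfaceModel()
    model.train(collect_training_data())        # model training and deployment/update
    output = model.infer(inference_data=1.0)    # model inference
    feedback = abs(output - 0.6)                # model performance feedback returned to the actor
    print(f"output={output:.2f}, feedback={feedback:.2f}")


run_pipeline()
```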
  • Figure 2 is a separation architecture of a wireless network provided by an embodiment of the present disclosure.
  • as shown in FIG2, the next generation base station (next Generation Node B, gNB) may be separated into a central unit (Central Unit, CU) and a distributed unit (Distributed Unit, DU), and the CU may be further separated into a control plane and a user plane;
  • gNB-CU-CP is the control plane of the central unit;
  • gNB-CU-UP is the user plane of the central unit;
  • E1 is used for the interface connection between gNB-CU-CP and gNB-CU-UP
  • F1-C is used for the control plane connection between gNB-CU and gNB-DU
  • F1-U is used for the user plane connection between gNB-CU and gNB-DU.
  • gNB-CU-CP is responsible for the functions of RRC and PDCP control planes
  • gNB-CU-UP is responsible for the functions of GTP-U, Service Data Adaptation Protocol (SDAP) and Packet Data Convergence Protocol (PDCP) user planes
  • gNB-DU is responsible for the functions of Radio Link Control (RLC), Medium Access Control (MAC) and the Physical Layer (PHY).
  • RLC Radio Link Control
  • MAC Medium Access Control
  • PHY Physical Layer
  • the inference of the AI model may be performed at the physical layer, MAC layer, RLC layer, PDCP layer, RRC layer or a new AI layer; under a wireless network separation architecture or in a multi-connection scenario, the nodes performing inference are different, so the accuracy of model selection will be low.
  • FIG3 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG3 , the method may include the following steps:
  • Step 301 Determine information for model selection
  • Step 302 Select a model based on the information used for model selection.
  • the technical solution of the embodiment of the present disclosure can be applied to different network architectures, including but not limited to separation architecture and multi-connection scenarios.
  • when the first node selects a model according to the information for model selection, the first node may, for example, select a suitable model according to the information for model selection.
  • when the first node selects a model according to the information for model selection, the first node may, for example, select a model corresponding to the information according to the information for model selection.
  • the information used for model selection includes at least one of the following:
  • Priority level information wherein the priority level is used to indicate the model selection priority and/or fallback priority
  • the regional range information selected by the model wherein the regional range information is used to indicate the regional range in which the model is available;
  • Use time information wherein the use time information is used to indicate the model available time information
  • Terminal device status information wherein the terminal status information is used to indicate the status of the terminal device when the model is available
  • Function type information wherein the function type information is used to indicate the function targeted by the model
  • Event criteria where event criteria are used to indicate specific events to be used by the model
  • a wireless environment related threshold criterion wherein the wireless environment related threshold criterion is used to indicate a wireless environment in which the model is available;
  • Business-related criteria wherein the business-related criteria are used to indicate specific business and/or business experience situations where the model is applicable;
  • Model performance related criteria wherein the model performance related criteria are used to indicate the performance indicators available for the model
  • a terminal device moving speed criterion wherein the terminal device moving speed criterion is used to indicate a terminal device speed and/or a specific moving speed threshold available to the model;
  • Terminal computing power criteria and/or power consumption criteria wherein the terminal computing power criteria and/or power consumption criteria are used to indicate the terminal device capability requirements available for the model;
  • Model application scenario where the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
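  • As a non-normative illustration of how the items listed above could be grouped together, the sketch below collects them into a single Python data structure. All field names and value types are assumptions made for readability; any real encoding of this information would be defined by the relevant signalling specifications rather than by this sketch.

```python
# Hypothetical container for "information for model selection"; field names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class ModelSelectionInfo:
    priority_level: Optional[int] = None                         # 1..X, selection and/or fallback priority
    area_plmn_list: List[str] = field(default_factory=list)      # network identifiers (PLMN list)
    area_tac_list: List[str] = field(default_factory=list)       # tracking area codes
    area_lat_long: Optional[Tuple[float, float, float]] = None   # (lat, lon, radius_km) where the model is available
    usable_interval: Optional[Tuple[float, float]] = None        # start/end of the time the model may be used
    terminal_states: List[str] = field(default_factory=list)     # e.g. "RRC_IDLE"
    function_type: Optional[str] = None                          # e.g. "positioning", "CSI compression", "beam management"
    event_criteria: List[str] = field(default_factory=list)      # e.g. "A3"
    rsrp_min_dbm: Optional[float] = None                         # wireless-environment threshold criterion
    qoe_min_mos: Optional[float] = None                          # business-related criterion (QoE)
    accuracy_min: Optional[float] = None                         # model-performance criterion
    ue_speed_max_kmh: Optional[float] = None                     # terminal moving-speed criterion
    cpu_usage_max: Optional[float] = None                        # terminal computing-power criterion
    battery_min_pct: Optional[float] = None                      # power-consumption criterion
    scenario: Optional[str] = None                                # e.g. "dense urban", "indoor"


info = ModelSelectionInfo(priority_level=1, function_type="beam management", scenario="dense urban")
print(info.priority_level, info.function_type)
```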
  • the priority level information may be represented by an integer type (INTEGER), where the integer type may be, for example, a positive integer in the range (1..X), where X is an integer greater than 1.
  • the priority levels may be arranged in descending order, i.e., 1 is the highest priority and X is the lowest priority, or they may be arranged in ascending order, i.e., 1 is the lowest priority and X is the highest priority.
  • in an embodiment, the information used for model selection includes priority level information, wherein:
  • in response to the first model being used abnormally or no longer meeting the use condition, the first node falls back, according to the priority information, to a second model meeting the use condition, wherein the second model is a second lowest priority model meeting the use condition; or the first node falls back, according to the priority information, to a third model, wherein the third model is the lowest priority model or the default model.
  • the use condition does not specifically refer to a fixed use condition.
  • the use condition may also change accordingly.
  • the first model may be, for example, a currently used model, and the first model does not specifically refer to a fixed model.
  • the first in the first model is only used to distinguish it from other models.
  • the second model may be a second-lowest priority model that meets the usage conditions, and the second model does not specifically refer to a fixed model. For example, when the priority information of each model in the model set changes, the second model may also change accordingly.
  • the third model is the lowest priority model or the default model.
  • in an embodiment, the information used for model selection includes priority level information, wherein:
  • when at least one model in the model set satisfies the model selection information, a model with the highest priority level is selected from the at least one model according to the priority level information, wherein the model selection information includes at least one item of information for model selection other than the priority level information.
  • a model set refers to a group formed by at least one model.
  • the model set does not specifically refer to a fixed set. For example, when the number of models included in the model set changes, the model set may also change accordingly. For example, when the type of models included in the model set changes, the model set may also change accordingly.
  • the model selection information includes at least one item of information for model selection in addition to the priority level information. Since the information for model selection includes multiple information, the model selection information does not specifically refer to a fixed information. For example, when the amount of information corresponding to the model selection information changes, the model selection information may also change accordingly. For example, when the specific information corresponding to the model selection information changes, the model selection information may also change accordingly.
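  • The sketch below shows one possible reading of the priority behaviour described above: the highest-priority model is selected among the models whose other selection information is satisfied, and on abnormal use the node falls back to the next model in priority order that still meets the use condition, or otherwise to the lowest-priority or default model. The helper names and the convention that a smaller number means a higher priority are assumptions for illustration only.

```python
# Hypothetical priority-based selection and fallback (assumes a lower number = higher priority).
from typing import Callable, List, Optional


class Model:
    def __init__(self, model_id: str, priority: int):
        self.model_id = model_id
        self.priority = priority          # 1 = highest priority in this sketch


def select_highest_priority(models: List[Model],
                            meets_selection_info: Callable[[Model], bool]) -> Optional[Model]:
    """Pick the highest-priority model among those satisfying the other selection information."""
    candidates = [m for m in models if meets_selection_info(m)]
    return min(candidates, key=lambda m: m.priority) if candidates else None


def fall_back(models: List[Model], current: Model,
              meets_use_condition: Callable[[Model], bool],
              default: Optional[Model] = None) -> Optional[Model]:
    """On abnormal use, fall back to the next model in priority order that still meets the
    use condition, otherwise to the lowest-priority model or a default model."""
    lower = sorted((m for m in models if m.priority > current.priority),
                   key=lambda m: m.priority)
    for m in lower:
        if meets_use_condition(m):
            return m
    return default or (max(models, key=lambda m: m.priority) if models else None)


models = [Model("A", 1), Model("B", 2), Model("C", 3)]
best = select_highest_priority(models, lambda m: m.model_id != "A")   # e.g. model A fails another criterion
print(best.model_id if best else None)                                # "B"
backup = fall_back(models, current=models[1], meets_use_condition=lambda m: True)
print(backup.model_id if backup else None)                            # "C"
```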
  • the area range information selected by the model includes a network identifier.
  • the network identification includes at least one of the following:
  • PLMN list Public Land Mobile Network list
  • TAC Tracking Area Code
  • a RAN-based Notification Area (RAN Notification Area, RNA);
  • when the network where the terminal device is located meets a condition, the model corresponding to the condition can be used; when the network where the terminal device is located does not meet the condition, the model corresponding to the condition cannot be used.
  • the area range information may be, for example, an actual geographical location area.
  • when the geographical location of the terminal device is within the geographical location area, the model corresponding to the geographical location area may be used; when the geographical location of the terminal device is not within the geographical location area, the model corresponding to the geographical location area may not be used.
  • the area range information of model selection may be, for example, longitude and latitude information.
  • the longitude and latitude information of the terminal device determined by the first node may be, for example, 100°E, 40°N.
  • in response to this position falling within the area range corresponding to model A, the first node may select model A.
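  • A minimal sketch of the area-range check described above, assuming the area range is expressed either as a list of network identifiers or as a latitude/longitude region given by a centre point and a radius; that representation, and the function names, are assumptions made for illustration.

```python
# Hypothetical area-range check: a model is selectable only when the UE's network or position matches.
import math
from typing import List, Optional, Tuple


def within_network(serving_plmn: str, allowed_plmns: List[str]) -> bool:
    # Network-identifier form of the area range; an empty list means no restriction.
    return not allowed_plmns or serving_plmn in allowed_plmns


def within_region(ue_lat: float, ue_lon: float,
                  region: Optional[Tuple[float, float, float]]) -> bool:
    """region = (lat, lon, radius_km); None means no geographic restriction."""
    if region is None:
        return True
    lat0, lon0, radius_km = region
    # Equirectangular approximation, adequate for a small area range.
    dx = math.radians(ue_lon - lon0) * math.cos(math.radians((ue_lat + lat0) / 2))
    dy = math.radians(ue_lat - lat0)
    return 6371.0 * math.hypot(dx, dy) <= radius_km


# Example mirroring the text: a UE at 100°E, 40°N falls inside model A's area, so model A may be selected.
model_a_region = (40.0, 100.0, 50.0)           # assumed 50 km radius around 40°N, 100°E
print(within_region(40.0, 100.0, model_a_region))   # True
```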
  • the usage time information is used to indicate the model available time information
  • the usage time information may be a specific time interval, that is, the corresponding model can only be used within a specified time.
  • the usage time information may also be a specific duration, that is, the model is stopped from being used when the usage model meets the specific duration.
  • the state of the terminal device includes at least one of the following:
  • an RRC idle state (RRC_IDLE).
  • the terminal device may use a model corresponding to a specific state when the terminal device is in that state, or different models may correspond to different states of the terminal device.
  • the functional types include but are not limited to positioning, CSI compression, beam management, etc.
  • the event criterion may be, for example, an event related to the mobility of the terminal device, such as the A1, A2 or A3 measurement event being satisfied, or the terminal device sending signaling related to handover.
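  • As an illustration of the event criterion, the snippet below treats the model tied to an event as usable once a configured mobility event (for example A1, A2 or A3) is reported or handover-related signalling has been sent; the function itself is a hypothetical sketch rather than behaviour mandated by the disclosure.

```python
# Hypothetical event-criterion check: the model tied to an event becomes usable when that event occurs.
from typing import List


def model_usable_on_event(reported_events: List[str],
                          required_events: List[str],
                          handover_signalling_sent: bool = False) -> bool:
    """required_events e.g. ["A1", "A2", "A3"]; any match (or handover signalling) enables the model."""
    if handover_signalling_sent:
        return True
    return any(evt in reported_events for evt in required_events)


# Example: an A3 report satisfies a criterion of {A1, A2, A3}.
print(model_usable_on_event(["A3"], ["A1", "A2", "A3"]))   # True
```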
  • the threshold criterion related to the wireless environment includes at least one of the following:
  • the signal strength measured by the terminal device;
  • the signal interference measured by the terminal device;
  • the uplink signal interference measured by the base station.
  • the signal strength measured by the terminal device may be greater than a certain signal strength threshold, and a corresponding model may be used, or for example, the signal strength measured by the terminal device may be less than a certain signal strength threshold, and a corresponding model may be used.
  • the signal strength may be, for example, a reference signal received power (RSRP).
  • the signal interference measured by the terminal device may be, for example, the reference signal received quality (RSRQ).
  • RSRQ Reference Signal Received Quality
  • the corresponding model may be used, or, for example, when the RSRQ is less than a certain RSRQ threshold, the corresponding model may be used.
  • the uplink signal interference measured by the base station may be greater than a certain uplink signal interference threshold, and the corresponding model may be used, or, for example, the uplink signal interference measured by the base station may be less than a certain uplink signal interference threshold, and the corresponding model may be used.
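  • The wireless-environment threshold criteria described above could be evaluated as in the following sketch, where the model is usable when the measured RSRP, RSRQ or base-station uplink interference lies on the configured side of its threshold. The threshold values and the above/below direction are per-model assumptions, not values taken from the disclosure.

```python
# Hypothetical radio-environment check; thresholds and directions are illustrative only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RadioThresholds:
    rsrp_dbm: Optional[float] = None              # UE-measured signal strength threshold
    rsrq_db: Optional[float] = None               # UE-measured signal quality/interference threshold
    ul_interference_dbm: Optional[float] = None   # base-station-measured uplink interference threshold
    above: bool = True                            # True: measurement must exceed threshold; False: stay below


def radio_criterion_met(meas_rsrp: float, meas_rsrq: float, meas_ul_if: float,
                        th: RadioThresholds) -> bool:
    def ok(measured: float, threshold: Optional[float]) -> bool:
        if threshold is None:
            return True
        return measured > threshold if th.above else measured < threshold

    return (ok(meas_rsrp, th.rsrp_dbm) and ok(meas_rsrq, th.rsrq_db)
            and ok(meas_ul_if, th.ul_interference_dbm))


# Example: require RSRP above -100 dBm for the corresponding model to be used.
print(radio_criterion_met(-95.0, -12.0, -110.0, RadioThresholds(rsrp_dbm=-100.0)))  # True
```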
  • the business-related criteria include at least one of the following:
  • QoS Quality of Service
  • QoE Quality of Experience
  • the corresponding model can be used only when the terminal device uses the corresponding PDU session and/or network slice.
  • the corresponding model can be used only when the QoE of the terminal device is lower than a certain QoE threshold or higher than a certain QoE threshold.
  • the QoE threshold refers to a value of at least one QoE metric measured in the QoE measurement, or the QoE threshold refers to an overall QoE value measured and calculated from the QoE measurement, representing the overall QoE experience.
  • the QoE value measured and calculated by QoE can be, for example, a Mean Opinion Score (MOS).
  • the corresponding model can be used only when the QoS of the terminal device is lower than a certain QoS threshold or higher than a certain QoS threshold.
  • the QoS threshold may refer to, for example, a value of throughput, delay and/or packet loss corresponding to the bearer.
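  • A small sketch of the business-related criteria: the model applies only to configured PDU sessions or network slices, and only when the measured QoE (for example a MOS value) or bearer QoS (for example delay) crosses its threshold in the configured direction. All parameter names and example thresholds are assumptions made for illustration.

```python
# Hypothetical business-related (QoS/QoE) criterion check; names and values are illustrative.
from typing import List, Optional


def business_criterion_met(active_slices: List[str],
                           allowed_slices: List[str],
                           qoe_mos: Optional[float] = None,
                           qoe_mos_min: Optional[float] = None,
                           bearer_delay_ms: Optional[float] = None,
                           delay_max_ms: Optional[float] = None) -> bool:
    # Slice / PDU-session restriction: the model applies only to configured slices (if any are configured).
    if allowed_slices and not any(s in allowed_slices for s in active_slices):
        return False
    # QoE restriction, e.g. only use the model when experience drops below a MOS threshold.
    if qoe_mos is not None and qoe_mos_min is not None and qoe_mos >= qoe_mos_min:
        return False
    # QoS restriction, e.g. only use the model when bearer delay exceeds a threshold.
    if bearer_delay_ms is not None and delay_max_ms is not None and bearer_delay_ms <= delay_max_ms:
        return False
    return True


# Example: poor QoE (MOS 2.5 < 3.5) on an allowed slice enables the corresponding model.
print(business_criterion_met(["eMBB-1"], ["eMBB-1"], qoe_mos=2.5, qoe_mos_min=3.5))  # True
```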
  • the model performance-related criterion may be a specific inference accuracy threshold, and in response to the accuracy being lower than a certain accuracy threshold or higher than a certain accuracy threshold, the corresponding model may be used.
  • the terminal device moving speed criterion may be, for example, a specific rate threshold, and in response to the rate of the terminal device being lower than a certain rate threshold or higher than a certain rate threshold, a corresponding model may be used.
  • the PDU session information may be, for example, a Protocol Data Unit (PDU) session list.
  • protocol data unit Protocol Data Unit
  • the QoS flow information may be, for example, a quality of service flow identification QoS flow ID list.
  • the wireless bearer information may be, for example, a data radio bearer (DRB) list.
  • the network slice information may be, for example, a Single Network Slice Selection Assistance Information (S-NSSAI) list or network slice group information (network slice group).
  • S-NSSAI Single Network Slice Selection Assistance Information
  • network slice group information (network slice group)
  • the terminal computing power criterion includes a usage threshold of a central processing unit (CPU).
  • CPU central processing unit
  • in response to the current CPU usage of the terminal device being lower than a certain usage threshold or higher than a certain usage threshold, the corresponding model may be used.
  • the power consumption criterion may be, for example, a remaining power threshold.
  • in response to the remaining power of the terminal device being lower than a certain remaining power threshold or higher than a certain remaining power threshold, a corresponding model may be used.
  • the geographic coverage scenarios include but are not limited to dense urban, urban, suburban, rural, indoor, etc.
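  • The remaining per-model criteria (inference accuracy, terminal moving speed, CPU usage, remaining power and deployment scenario) can be checked in the same style; the sketch below combines them into one gating function with hypothetical field names and example thresholds.

```python
# Hypothetical check of model-performance, speed, computing-power, power-consumption and scenario criteria.
from dataclasses import dataclass


@dataclass
class UeStatus:
    inference_accuracy: float      # observed accuracy of the model
    speed_kmh: float               # terminal moving speed
    cpu_usage_pct: float           # current CPU usage
    battery_pct: float             # remaining power
    scenario: str                  # e.g. "dense urban", "indoor"


def capability_criteria_met(ue: UeStatus,
                            accuracy_min: float = 0.9,
                            speed_max_kmh: float = 120.0,
                            cpu_max_pct: float = 80.0,
                            battery_min_pct: float = 20.0,
                            allowed_scenarios: tuple = ("urban", "dense urban")) -> bool:
    return (ue.inference_accuracy >= accuracy_min
            and ue.speed_kmh <= speed_max_kmh
            and ue.cpu_usage_pct <= cpu_max_pct
            and ue.battery_pct >= battery_min_pct
            and ue.scenario in allowed_scenarios)


# Example: a slow-moving urban UE with spare CPU and battery satisfies the criteria.
print(capability_criteria_met(UeStatus(0.95, 3.0, 35.0, 60.0, "urban")))   # True
```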
  • the information used for model selection is for each specific model (per model) and/or for each specific model identifier (per model ID).
  • the model identifier is used to uniquely identify the model, that is, one model corresponds to only one model identifier.
  • the information used for model selection may be for each specific model.
  • the information used for model selection may be for model A.
  • the information used for model selection may be for each specific model identifier.
  • the identifier of model A is 123456.
  • the information used for model selection may be for 123456.
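  • Because the information for model selection may be configured per model and/or per model identifier, a node might keep it in a mapping keyed by the model identifier, as in the sketch below. The identifier 123456 is taken from the example in the text; the associated field values are arbitrary illustrations.

```python
# Hypothetical per-model-ID registry of selection information (a model ID uniquely identifies one model).
from typing import Any, Dict

selection_info_by_model_id: Dict[int, Dict[str, Any]] = {
    # Model A, identifier 123456 (identifier taken from the example in the text).
    123456: {"priority_level": 1, "function_type": "CSI compression", "scenario": "urban"},
}


def info_for_model(model_id: int) -> Dict[str, Any]:
    # One model corresponds to exactly one model identifier, so a plain lookup suffices.
    return selection_info_by_model_id[model_id]


print(info_for_model(123456)["function_type"])   # "CSI compression"
```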
  • determining information for model selection includes:
  • the information for model selection sent by the second node is received.
  • the first node and the second node are selected from at least one of the following combinations:
  • the first node is a terminal device, and the second node is a base station;
  • the first node is a terminal device, and the second node is a core network node;
  • the first node is a base station, and the second node is a core network node;
  • the first node is a base station, and the second node is an operations, administration, maintenance (OAM) node;
  • OAM operations, administration, maintenance
  • the first node is a destination base station in the handover process
  • the second node is a source base station in the handover process
  • the first node is the master node (MN) in a multi-connection scenario
  • the second node is the secondary node (SN) in a multi-connection scenario
  • the first node is the new serving gNB, and the second node is the last serving gNB.
  • the first node is a centralized unit CU under the separation architecture
  • the second node is a distributed unit DU under the separation architecture.
  • receiving information for model selection sent by the second node includes at least one of the following:
  • when the first node is a terminal device and the second node is a base station, an RRC message sent by the second node is received, wherein the RRC message includes information for model selection;
  • when the first node is a terminal device and the second node is a core network node, a non-access stratum (NAS) message sent by the second node is received, wherein the NAS message includes information for model selection;
  • NAS non-access stratum
  • when the first node is a base station and the second node is a core network node, a Next Generation Application Protocol (NGAP) message sent by the second node is received, wherein the NGAP message includes information for model selection;
  • NGAP Next Generation Application Protocol
  • when the first node is a base station and the second node is an OAM node, information for model selection sent by the second node is received;
  • when the first node is a destination base station in a handover process and the second node is a source base station in the handover process, an Xn Application Protocol (XnAP) message sent by the second node is received, wherein the XnAP message includes information for model selection;
  • XnAP Xn Application Protocol
  • when the first node is an MN in a multi-connection scenario and the second node is an SN in the multi-connection scenario, an XnAP message sent by the second node is received, wherein the XnAP message includes information for model selection;
  • when the first node is a new serving gNB and the second node is a last serving gNB, an XnAP message sent by the second node is received, wherein the XnAP message includes information for model selection;
  • when the first node is a centralized unit (CU) under a separation architecture and the second node is a distributed unit (DU) under the separation architecture, an F1 Application Protocol (F1AP) message sent by the second node is received, wherein the F1AP message includes information for model selection.
  • a multi-connection scenario may include, for example, a dual connectivity (DC) scenario.
  • DC dual connectivity
  • when the first node is a base station and the second node is an operation, maintenance and management (OAM) node, the first node may be, for example, a node on the base station, and the nodes on the base station include but are not limited to gNB-CU, gNB-DU, gNB-CU-UP, etc.
  • OAM operation, maintenance and management
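  • The node combinations and carrier messages described above can be summarised as a lookup table. The following sketch encodes that summary purely as a reading aid; the messages themselves (RRC, NAS, NGAP, XnAP, F1AP) are defined by 3GPP, and the table keys are informal labels rather than protocol identifiers.

```python
# Informal summary of which message carries the information for model selection
# from the second node to the first node, per the combinations described above.
RECEIVE_PATH = {
    ("terminal device", "base station"): "RRC message",
    ("terminal device", "core network node"): "NAS message",
    ("base station", "core network node"): "NGAP message",
    ("base station", "OAM node"): "O&M interface (message type not restricted here)",
    ("target base station", "source base station"): "XnAP message",   # handover
    ("MN", "SN"): "XnAP message",                                     # multi-connection
    ("new serving gNB", "last serving gNB"): "XnAP message",
    ("gNB-CU", "gNB-DU"): "F1AP message",                             # separation architecture
}


def carrier_message(first_node: str, second_node: str) -> str:
    return RECEIVE_PATH.get((first_node, second_node), "unspecified")


print(carrier_message("terminal device", "base station"))   # "RRC message"
```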
  • the method further includes:
  • the model selection result is sent to the third node.
  • the first node and the third node are selected from at least one of the following combinations:
  • the first node is a terminal device, and the third node is a base station;
  • the first node is a terminal device, and the third node is a core network node;
  • the first node is a base station, and the third node is a terminal device;
  • the first node is a CU under the split architecture
  • the third node is a DU under the split architecture
  • the first node is a DU under the separation architecture
  • the third node is a CU under the separation architecture
  • the first node is an MN in a multi-connection scenario
  • the third node is an SN in a multi-connection scenario
  • the first node is a SN in a multi-connection scenario
  • the third node is a MN in a multi-connection scenario.
  • sending the model selection result to the third node includes at least one of the following:
  • when the first node is a terminal device and the third node is a base station, the model selection result is sent to the third node through RRC signaling and/or lower layer signaling;
  • when the first node is a terminal device and the third node is a core network node, the model selection result is sent to the third node through NAS signaling;
  • when the first node is a base station and the third node is a terminal device, the model selection result is sent to the third node through RRC signaling and/or lower layer signaling;
  • when the first node is a CU under the separation architecture and the third node is a DU under the separation architecture, an F1AP message is sent to the third node, wherein the F1AP message includes the model selection result;
  • when the first node is a DU under the separation architecture and the third node is a CU under the separation architecture, an F1AP message is sent to the third node, wherein the F1AP message includes the model selection result;
  • when the first node is an MN in a multi-connection scenario and the third node is an SN in the multi-connection scenario, an XnAP message is sent to the third node, wherein the XnAP message includes the model selection result;
  • when the first node is an SN in a multi-connection scenario and the third node is an MN in the multi-connection scenario, an XnAP message is sent to the third node, wherein the XnAP message includes the model selection result.
  • the result of the model selection includes identification ID information for identifying the model.
  • the lower layer signaling may be PDCP layer signaling, RLC layer signaling, MAC layer signaling or physical layer signaling.
  • optionally, the PDCP layer signaling may be a PDCP control protocol data unit (Protocol Data Unit, PDU); optionally, the RLC layer signaling may be an RLC control PDU; optionally, the MAC layer signaling may be a media access control layer control element (Media Access Control-Control Element, MAC-CE), a downlink control message (Downlink Control Information, DCI), an uplink control message (Uplink Control Information, UCI), a random access request, or a random access feedback; optionally, the RRC layer signaling may be an RRC message.
  • PDU Protocol Data Unit
  • the second node and the third node are the same or different.
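  • Similarly, the signalling used to report the model selection result (which carries the model identification information) from the first node to the third node can be summarised as a table; the sketch below is only a restatement of the combinations above, with lower-layer signalling standing for the PDCP, RLC, MAC or physical-layer signalling just described.

```python
# Informal summary of how the first node reports the model selection result (model ID) to the third node.
SEND_PATH = {
    ("terminal device", "base station"): "RRC and/or lower-layer signalling",
    ("terminal device", "core network node"): "NAS signalling",
    ("base station", "terminal device"): "RRC and/or lower-layer signalling",
    ("gNB-CU", "gNB-DU"): "F1AP message",
    ("gNB-DU", "gNB-CU"): "F1AP message",
    ("MN", "SN"): "XnAP message",
    ("SN", "MN"): "XnAP message",
}


def report_model_selection(first_node: str, third_node: str, model_id: int) -> str:
    signalling = SEND_PATH.get((first_node, third_node), "unspecified signalling")
    # The model selection result carries identification (ID) information of the selected model.
    return f"send model ID {model_id} to {third_node} via {signalling}"


print(report_model_selection("gNB-DU", "gNB-CU", 123456))
```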
  • information for model selection is determined; and a model is selected based on the information for model selection.
  • a model selection mechanism can be provided, information for model selection can be determined, situations where inaccurate model selection is reduced, and model selection efficiency can be improved.
  • the present disclosure provides a processing method for a "model selection" scenario, so that model selection is performed based on information for model selection, the model selection time is reduced, and there is no need for multiple nodes to participate in the selection, which can reduce situations where inaccurate model selection is caused by different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG4 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG4 , the method may include the following steps:
  • Step 401 In response to the first model being used abnormally or no longer meeting the use condition, fall back to a second model meeting the use condition according to the priority information, wherein the second model is a second lowest priority model meeting the use condition; or
  • Step 402 according to the priority information, fall back to the third model, wherein the third model is the lowest priority model or the default model.
  • the information used for model selection includes priority level information, wherein the priority level is used to indicate the model selection priority and/or fallback priority.
  • the information used for model selection is for each specific model and/or for each specific model identifier.
  • step 401 and step 402 may be executed selectively, for example, when the first node executes step 401, the first node may not execute step 402; or when the first node executes step 402, step 401 may not be executed.
  • the use condition does not specifically refer to a fixed use condition.
  • the use condition may also change accordingly.
  • the first model may be, for example, a currently used model, and the first model does not specifically refer to a fixed model.
  • the first in the first model is only used to distinguish it from other models.
  • the second model may be a second-lowest priority model that meets the usage conditions, and the second model does not specifically refer to a fixed model. For example, when the priority information of each model in the model set changes, the second model may also change accordingly.
  • the third model is the lowest priority model or the default model.
  • a model selection mechanism can be provided, which can determine the information used for model selection, reduce the situation of inaccurate model selection, and improve the efficiency of model selection.
  • the embodiments of the present disclosure specifically disclose a solution for how to select a model when the first model is used abnormally or no longer meets the use conditions.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model according to the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG5 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG5 , the method may include the following steps:
  • Step 501 When at least one model in the model set satisfies the model selection information, a model with the highest priority level is selected from at least one model according to the priority level information, wherein the model selection information includes at least one item of information for model selection in addition to the priority level information.
  • the information used for model selection includes priority level information.
  • the information used for model selection is for each specific model and/or for each specific model identifier.
  • a model set refers to a group formed by at least one model.
  • the model set does not specifically refer to a fixed set. For example, when the number of models included in the model set changes, the model set may also change accordingly. For example, when the type of models included in the model set changes, the model set may also change accordingly.
  • the model selection information includes at least one item of information for model selection in addition to the priority level information. Since the information for model selection includes multiple information, the model selection information does not specifically refer to a fixed information. For example, when the amount of information corresponding to the model selection information changes, the model selection information may also change accordingly. For example, when the specific information corresponding to the model selection information changes, the model selection information may also change accordingly.
  • the model with the highest priority level is selected from at least one model according to the priority level information, wherein the model selection information includes at least one item of information for model selection other than the priority level information.
  • a model selection mechanism can be provided, which can determine the information used for model selection, reduce the situation of inaccurate model selection, and improve the efficiency of model selection.
  • the embodiments of the present disclosure specifically disclose a scheme for selecting a model in a model set.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model according to the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG6 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG6 , the method may include the following steps:
  • Step 601 When the first node is a terminal device and the second node is a base station, receive an RRC message sent by the second node, wherein the RRC message includes information for model selection.
  • FIG7 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the terminal device can receive an RRC message sent by the base station, wherein the RRC message includes information for model selection, that is, the terminal device can determine the information for model selection.
  • the information used for model selection includes at least one of the following:
  • Priority level information wherein the priority level is used to indicate the model selection priority and/or fallback priority
  • the regional range information selected by the model wherein the regional range information is used to indicate the regional range in which the model is available;
  • Use time information wherein the use time information is used to indicate the model available time information
  • Terminal device status information wherein the terminal status information is used to indicate the status of the terminal device when the model is available
  • Function type information wherein the function type information is used to indicate the function targeted by the model
  • Event criteria where event criteria are used to indicate specific events to be used by the model
  • a wireless environment related threshold criterion wherein the wireless environment related threshold criterion is used to indicate a wireless environment in which the model is available;
  • Business-related criteria wherein the business-related criteria are used to indicate specific business and/or business experience situations where the model is applicable;
  • Model performance related criteria wherein the model performance related criteria are used to indicate the performance indicators available for the model
  • a terminal device moving speed criterion wherein the terminal device moving speed criterion is used to indicate a terminal device speed and/or a specific moving speed threshold available to the model;
  • Terminal computing power criteria and/or power consumption criteria wherein the terminal computing power criteria and/or power consumption criteria are used to indicate the terminal device capability requirements available for the model;
  • Model application scenario where the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
  • in the embodiments of the present disclosure, when the first node is a terminal device and the second node is a base station, an RRC message sent by the second node is received, wherein the RRC message includes information for model selection.
  • a model selection mechanism can be provided, and information for model selection can be determined, thereby reducing the situation where model selection is inaccurate, and improving the efficiency of model selection.
  • the embodiments of the present disclosure specifically disclose a scheme for determining information for model selection when the first node is a terminal device and the second node is a base station.
  • the present disclosure provides a processing method for a "model selection" scenario, so as to select a model based on the information for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, thereby reducing the situation where model selection is inaccurate due to different inference nodes, and improving the efficiency and accuracy of model selection.
  • FIG8 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG8 , the method may include the following steps:
  • Step 801 When the first node is a terminal device and the second node is a core network node, a non-access stratum (NAS) message sent by the second node is received, wherein the NAS message includes information for model selection.
  • FIG9 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the terminal device can receive a non-access stratum (NAS) message sent by a core network node, wherein the NAS message includes information for model selection, that is, the terminal device can determine the information for model selection.
  • the information used for model selection includes at least one of the following:
  • Priority level information wherein the priority level is used to indicate the model selection priority and/or fallback priority
  • the regional range information selected by the model wherein the regional range information is used to indicate the regional range in which the model is available;
  • Use time information wherein the use time information is used to indicate the model available time information
  • Terminal device status information wherein the terminal status information is used to indicate the status of the terminal device when the model is available
  • Function type information wherein the function type information is used to indicate the function targeted by the model
  • Event criteria where event criteria are used to indicate specific events to be used by the model
  • a wireless environment related threshold criterion wherein the wireless environment related threshold criterion is used to indicate a wireless environment in which the model is available;
  • Business-related criteria wherein the business-related criteria are used to indicate specific business and/or business experience situations where the model is applicable;
  • Model performance related criteria wherein the model performance related criteria are used to indicate the performance indicators available for the model
  • a terminal device moving speed criterion wherein the terminal device moving speed criterion is used to indicate a terminal device speed and/or a specific moving speed threshold available to the model;
  • Terminal computing power criteria and/or power consumption criteria wherein the terminal computing power criteria and/or power consumption criteria are used to indicate the terminal device capability requirements available for the model;
  • Model application scenario where the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
  • in the embodiments of the present disclosure, when the first node is a terminal device and the second node is a core network node, a non-access stratum (NAS) message sent by the second node is received, wherein the NAS message includes information for model selection.
  • a model selection mechanism can be provided, and the information used for model selection can be determined, thereby reducing the situation of inaccurate model selection and improving the efficiency of model selection.
  • the embodiments of the present disclosure specifically disclose a scheme for determining information used for model selection when the first node is a terminal device and the second node is a core network node.
  • the present disclosure provides a processing method for a "model selection" scenario, so as to select a model based on the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, thereby reducing the situation of inaccurate model selection due to different inference nodes, and improving the efficiency and accuracy of model selection.
  • FIG10 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG10 , the method may include the following steps:
  • Step 1001 When the first node is a base station and the second node is a core network node, a Next Generation Application Protocol NGAP message sent by the second node is received, wherein the NGAP message includes information for model selection.
  • FIG11 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the base station can receive a Next Generation Application Protocol (NGAP) message sent by a core network node, wherein the NGAP message includes information for model selection, that is, the base station can determine the information for model selection.
  • the information used for model selection includes at least one of the following:
  • Priority level information wherein the priority level is used to indicate the model selection priority and/or fallback priority
  • the regional range information selected by the model wherein the regional range information is used to indicate the regional range in which the model is available;
  • Use time information wherein the use time information is used to indicate the model available time information
  • Terminal device status information wherein the terminal status information is used to indicate the status of the terminal device when the model is available
  • Function type information wherein the function type information is used to indicate the function targeted by the model
  • Event criteria where event criteria are used to indicate specific events to be used by the model
  • a wireless environment related threshold criterion wherein the wireless environment related threshold criterion is used to indicate a wireless environment in which the model is available;
  • Business-related criteria wherein the business-related criteria are used to indicate specific business and/or business experience situations where the model is applicable;
  • Model performance related criteria wherein the model performance related criteria are used to indicate the performance indicators available for the model
  • a terminal device moving speed criterion wherein the terminal device moving speed criterion is used to indicate a terminal device speed and/or a specific moving speed threshold available to the model;
  • Terminal computing power criteria and/or power consumption criteria wherein the terminal computing power criteria and/or power consumption criteria are used to indicate the terminal device capability requirements available for the model;
  • Model application scenario where the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
  • in the embodiments of the present disclosure, when the first node is a base station and the second node is a core network node, a Next Generation Application Protocol (NGAP) message sent by the second node is received, wherein the NGAP message includes information for model selection.
  • a model selection mechanism can be provided, and the information used for model selection can be determined, thereby reducing the situation where the model selection is inaccurate, and improving the efficiency of model selection.
  • the embodiments of the present disclosure specifically disclose a scheme for determining the information used for model selection when the first node is a base station and the second node is a core network node.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model based on the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, thereby reducing the situation where the model selection is inaccurate due to different inference nodes, and improving the efficiency and accuracy of model selection.
  • FIG12 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG12 , the method may include the following steps:
  • Step 1201 When the first node is a base station and the second node is an operation, maintenance and management (OAM) node, receive information for model selection sent by the second node.
  • Fig. 13 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the base station may receive information for model selection sent by the OAM node.
• the information used for model selection includes at least one of the following:
• priority level information, wherein the priority level is used to indicate the model selection priority and/or fallback priority;
• regional range information for model selection, wherein the regional range information is used to indicate the regional range in which the model is available;
• usage time information, wherein the usage time information is used to indicate the time during which the model is available;
• terminal device status information, wherein the terminal device status information is used to indicate the status of the terminal device when the model is available;
• function type information, wherein the function type information is used to indicate the function targeted by the model;
• an event criterion, wherein the event criterion is used to indicate the specific event for which the model is to be used;
• a wireless environment related threshold criterion, wherein the wireless environment related threshold criterion is used to indicate the wireless environment in which the model is available;
• a service-related criterion, wherein the service-related criterion is used to indicate the specific service and/or service experience situation for which the model is applicable;
• a model performance related criterion, wherein the model performance related criterion is used to indicate the performance indicators for which the model is available;
• a terminal device moving speed criterion, wherein the terminal device moving speed criterion is used to indicate the terminal device speed and/or a specific moving speed threshold for which the model is available;
• a terminal computing power criterion and/or power consumption criterion, wherein this criterion is used to indicate the terminal device capability requirements for the model to be available;
• a model application scenario, wherein the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
• when the first node is a base station and the second node is an operation, maintenance and management (OAM) node, information for model selection sent by the second node is received.
  • a model selection mechanism can be provided, and the information used for model selection can be determined, thereby reducing the situation of inaccurate model selection and improving the efficiency of model selection.
  • the embodiments of the present disclosure specifically disclose a scheme for determining information for model selection when the first node is a base station and the second node is an operation, maintenance and management (OAM) node.
  • the present disclosure provides a processing method for a "model selection" scenario, so as to select a model based on the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, thereby reducing the situation of inaccurate model selection due to different inference nodes, and improving the efficiency and accuracy of model selection.
  • FIG14 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG14 , the method may include the following steps:
• Step 1401 When the first node is the destination base station in the handover process and the second node is the source base station in the handover process, receive an Xn application protocol XnAP message sent by the second node, wherein the XnAP message includes information for model selection.
  • Figure 15 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
• the destination base station in the handover process can receive an Xn application protocol XnAP message sent by the source base station in the handover process, wherein the XnAP message includes information for model selection, that is, the destination base station in the handover process can determine the information for model selection.
• the information used for model selection includes at least one of the following:
• priority level information, wherein the priority level is used to indicate the model selection priority and/or fallback priority;
• regional range information for model selection, wherein the regional range information is used to indicate the regional range in which the model is available;
• usage time information, wherein the usage time information is used to indicate the time during which the model is available;
• terminal device status information, wherein the terminal device status information is used to indicate the status of the terminal device when the model is available;
• function type information, wherein the function type information is used to indicate the function targeted by the model;
• an event criterion, wherein the event criterion is used to indicate the specific event for which the model is to be used;
• a wireless environment related threshold criterion, wherein the wireless environment related threshold criterion is used to indicate the wireless environment in which the model is available;
• a service-related criterion, wherein the service-related criterion is used to indicate the specific service and/or service experience situation for which the model is applicable;
• a model performance related criterion, wherein the model performance related criterion is used to indicate the performance indicators for which the model is available;
• a terminal device moving speed criterion, wherein the terminal device moving speed criterion is used to indicate the terminal device speed and/or a specific moving speed threshold for which the model is available;
• a terminal computing power criterion and/or power consumption criterion, wherein this criterion is used to indicate the terminal device capability requirements for the model to be available;
• a model application scenario, wherein the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
  • an Xn application protocol XnAP message sent by the second node is received, wherein the XnAP message includes information for model selection.
  • a model selection mechanism can be provided, and the information used for model selection can be determined, so as to reduce the situation of inaccurate model selection and improve the efficiency of model selection.
• the embodiments of the present disclosure specifically disclose a scheme for determining the information used for model selection when the first node is the destination base station in the handover process and the second node is the source base station in the handover process.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model according to the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection caused by different inference nodes, and improve the efficiency and accuracy of model selection.
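• Purely as an illustration of how a destination base station might apply such criteria after a handover, the following sketch checks a candidate model's regional range, allowed terminal state and moving-speed threshold against the current UE context; all names and thresholds are assumptions made for the example.

    def model_is_applicable(criteria: dict, ue_context: dict) -> bool:
        """Return True if a candidate model satisfies the (hypothetical) criteria.

        `criteria` may contain "region_ids", "terminal_states" and "max_speed_kmh";
        `ue_context` carries the UE's current region, RRC state and speed.
        A missing criterion is treated as "no restriction".
        """
        regions = criteria.get("region_ids")
        if regions and ue_context["region_id"] not in regions:
            return False
        states = criteria.get("terminal_states")
        if states and ue_context["rrc_state"] not in states:
            return False
        max_speed = criteria.get("max_speed_kmh")
        if max_speed is not None and ue_context["speed_kmh"] > max_speed:
            return False
        return True

    # Example: a model restricted to RRC_CONNECTED UEs below 120 km/h in region "ta-5".
    print(model_is_applicable(
        {"region_ids": ["ta-5"], "terminal_states": ["RRC_CONNECTED"], "max_speed_kmh": 120},
        {"region_id": "ta-5", "rrc_state": "RRC_CONNECTED", "speed_kmh": 80},
    ))  # True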
  • FIG16 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG16 , the method may include the following steps:
  • Step 1601 When the first node is a master node MN in a multi-connection scenario and the second node is a secondary node SN in a multi-connection scenario, receive an XnAP message sent by the second node, wherein the XnAP message includes information for model selection.
  • Figure 17 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the master node MN in a multi-connection scenario can receive an XnAP message sent by a secondary node SN in a multi-connection scenario, wherein the XnAP message includes information for model selection, that is, the master node MN in a multi-connection scenario can determine the information for model selection.
• the information used for model selection includes at least one of the following:
• priority level information, wherein the priority level is used to indicate the model selection priority and/or fallback priority;
• regional range information for model selection, wherein the regional range information is used to indicate the regional range in which the model is available;
• usage time information, wherein the usage time information is used to indicate the time during which the model is available;
• terminal device status information, wherein the terminal device status information is used to indicate the status of the terminal device when the model is available;
• function type information, wherein the function type information is used to indicate the function targeted by the model;
• an event criterion, wherein the event criterion is used to indicate the specific event for which the model is to be used;
• a wireless environment related threshold criterion, wherein the wireless environment related threshold criterion is used to indicate the wireless environment in which the model is available;
• a service-related criterion, wherein the service-related criterion is used to indicate the specific service and/or service experience situation for which the model is applicable;
• a model performance related criterion, wherein the model performance related criterion is used to indicate the performance indicators for which the model is available;
• a terminal device moving speed criterion, wherein the terminal device moving speed criterion is used to indicate the terminal device speed and/or a specific moving speed threshold for which the model is available;
• a terminal computing power criterion and/or power consumption criterion, wherein this criterion is used to indicate the terminal device capability requirements for the model to be available;
• a model application scenario, wherein the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
• when the first node is a master node MN in a multi-connection scenario and the second node is a secondary node SN in a multi-connection scenario, an XnAP message sent by the second node is received, wherein the XnAP message includes information for model selection.
  • a model selection mechanism can be provided, which can determine the information used for model selection, reduce the situation of inaccurate model selection, and improve the efficiency of model selection.
• the embodiments of the present disclosure specifically disclose a scheme for determining the information used for model selection when the first node is a master node MN in a multi-connection scenario and the second node is a secondary node SN in a multi-connection scenario.
  • the present disclosure provides a processing method for a "model selection" scenario, so as to select a model based on the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG18 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG18 , the method may include the following steps:
• Step 1801 When the first node is the new serving gNB and the second node is the last serving gNB, receive an XnAP message sent by the second node, wherein the XnAP message includes information for model selection.
  • FIG19 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
• the new serving gNB may receive an XnAP message sent by the last serving gNB, wherein the XnAP message includes information for model selection, that is, the new serving gNB may determine the information for model selection.
• the information used for model selection includes at least one of the following:
• priority level information, wherein the priority level is used to indicate the model selection priority and/or fallback priority;
• regional range information for model selection, wherein the regional range information is used to indicate the regional range in which the model is available;
• usage time information, wherein the usage time information is used to indicate the time during which the model is available;
• terminal device status information, wherein the terminal device status information is used to indicate the status of the terminal device when the model is available;
• function type information, wherein the function type information is used to indicate the function targeted by the model;
• an event criterion, wherein the event criterion is used to indicate the specific event for which the model is to be used;
• a wireless environment related threshold criterion, wherein the wireless environment related threshold criterion is used to indicate the wireless environment in which the model is available;
• a service-related criterion, wherein the service-related criterion is used to indicate the specific service and/or service experience situation for which the model is applicable;
• a model performance related criterion, wherein the model performance related criterion is used to indicate the performance indicators for which the model is available;
• a terminal device moving speed criterion, wherein the terminal device moving speed criterion is used to indicate the terminal device speed and/or a specific moving speed threshold for which the model is available;
• a terminal computing power criterion and/or power consumption criterion, wherein this criterion is used to indicate the terminal device capability requirements for the model to be available;
• a model application scenario, wherein the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
• when the first node is the new serving gNB and the second node is the last serving gNB, an XnAP message sent by the second node is received, wherein the XnAP message includes information for model selection.
  • a model selection mechanism can be provided, and information for model selection can be determined, so as to reduce the situation of inaccurate model selection and improve the efficiency of model selection.
• the embodiment of the present disclosure specifically discloses a scheme for determining information for model selection when the first node is the new serving gNB and the second node is the last serving gNB.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model according to the information for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, so as to reduce the situation of inaccurate model selection caused by different inference nodes, and improve the efficiency and accuracy of model selection.
  • FIG20 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG20 , the method may include the following steps:
  • Step 2001 When the first node is a centralized unit CU under a separation architecture and the second node is a distributed unit DU under a separation architecture, receive an F1AP message sent by the second node, wherein the F1AP message includes information for model selection.
  • Figure 21 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the centralized unit CU under the separation architecture can receive an F1AP message sent by the distributed unit DU under the separation architecture, wherein the F1AP message includes information for model selection, that is, the centralized unit CU under the separation architecture can determine the information for model selection.
• the information used for model selection includes at least one of the following:
• priority level information, wherein the priority level is used to indicate the model selection priority and/or fallback priority;
• regional range information for model selection, wherein the regional range information is used to indicate the regional range in which the model is available;
• usage time information, wherein the usage time information is used to indicate the time during which the model is available;
• terminal device status information, wherein the terminal device status information is used to indicate the status of the terminal device when the model is available;
• function type information, wherein the function type information is used to indicate the function targeted by the model;
• an event criterion, wherein the event criterion is used to indicate the specific event for which the model is to be used;
• a wireless environment related threshold criterion, wherein the wireless environment related threshold criterion is used to indicate the wireless environment in which the model is available;
• a service-related criterion, wherein the service-related criterion is used to indicate the specific service and/or service experience situation for which the model is applicable;
• a model performance related criterion, wherein the model performance related criterion is used to indicate the performance indicators for which the model is available;
• a terminal device moving speed criterion, wherein the terminal device moving speed criterion is used to indicate the terminal device speed and/or a specific moving speed threshold for which the model is available;
• a terminal computing power criterion and/or power consumption criterion, wherein this criterion is used to indicate the terminal device capability requirements for the model to be available;
• a model application scenario, wherein the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
• when the first node is a centralized unit CU under a separation architecture and the second node is a distributed unit DU under the separation architecture, an F1AP message sent by the second node is received, wherein the F1AP message includes information for model selection.
  • a model selection mechanism can be provided, which can determine the information used for model selection, reduce the situation of inaccurate model selection, and improve the efficiency of model selection.
  • the embodiments of the present disclosure specifically disclose a scheme for determining the information used for model selection when the first node is a centralized unit CU under a separation architecture and the second node is a distributed unit DU under a separation architecture.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model according to the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG22 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG22 , the method may include the following steps:
  • Step 2201 In response to completing the model selection, send the model selection result to the third node.
• the first node and the third node are selected from at least one of the following combinations:
• the first node is a terminal device, and the third node is a base station;
• the first node is a terminal device, and the third node is a core network node;
• the first node is a base station, and the third node is a terminal device;
• the first node is a CU under the separation architecture, and the third node is a DU under the separation architecture;
• the first node is a DU under the separation architecture, and the third node is a CU under the separation architecture;
• the first node is an MN in a multi-connection scenario, and the third node is an SN in a multi-connection scenario;
• the first node is an SN in a multi-connection scenario, and the third node is an MN in a multi-connection scenario.
• sending the model selection result to the third node includes at least one of the following (a dispatch sketch is given below):
• when the first node is a terminal device and the third node is a base station, sending the model selection result to the third node through RRC signaling and/or lower layer signaling;
• when the first node is a terminal device and the third node is a core network node, sending the model selection result to the third node through NAS signaling;
• when the first node is a base station and the third node is a terminal device, sending the model selection result to the third node through RRC signaling and/or lower layer signaling;
• when the first node is a CU under the separation architecture and the third node is a DU under the separation architecture, sending an F1AP message to the third node, wherein the F1AP message includes a model selection result;
• when the first node is a DU under the separation architecture and the third node is a CU under the separation architecture, sending an F1AP message to the third node, wherein the F1AP message includes a model selection result;
• when the first node is an MN in a multi-connection scenario and the third node is an SN in a multi-connection scenario, sending an XnAP message to the third node, wherein the XnAP message includes a model selection result;
• when the first node is an SN in a multi-connection scenario and the third node is an MN in a multi-connection scenario, sending an XnAP message to the third node, wherein the XnAP message includes a model selection result.
  • the result of the model selection includes identification ID information for identifying the model.
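• To make the node-combination dispatch listed above concrete, the following sketch lets the first node pick the signalling used to carry the model selection result (which includes the model ID) from its own role and the third node's role; the role labels and the function itself are illustrative assumptions, not part of the disclosed signalling.

    def result_transport(first_node: str, third_node: str) -> str:
        """Pick the signalling used to carry the model selection result.

        Node roles are hypothetical string labels; the mapping mirrors the
        combinations listed in the present disclosure.
        """
        table = {
            ("terminal", "base_station"): "RRC and/or lower layer signaling",
            ("terminal", "core_network"): "NAS signaling",
            ("base_station", "terminal"): "RRC and/or lower layer signaling",
            ("cu", "du"): "F1AP message",
            ("du", "cu"): "F1AP message",
            ("mn", "sn"): "XnAP message",
            ("sn", "mn"): "XnAP message",
        }
        return table[(first_node, third_node)]

    # Example: a CU reporting the selected model ID to its DU uses an F1AP message.
    print(result_transport("cu", "du"))  # F1AP message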
  • the model selection result is sent to the third node.
  • a model selection mechanism can be provided, the information used for model selection can be determined, the situation of inaccurate model selection can be reduced, and the efficiency of model selection can be improved.
  • the embodiments of the present disclosure specifically disclose that in response to completing the model selection, the model selection result is sent to the third node, so that the third node can perform related operations, and a solution for model selection and synchronization in a separated architecture or multi-connection scenario can be implemented.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model according to the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG23 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG23 , the method may include the following steps:
  • Step 2301 When the first node is a terminal device and the third node is a base station, the model selection result is sent to the third node via RRC signaling and/or lower layer signaling.
  • Figure 24 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the terminal device sends the model selection result to the base station through RRC signaling and/or lower layer signaling.
  • the result of the model selection includes identification ID information for identifying the model.
  • the model selection result is sent to the third node through RRC signaling and/or lower layer signaling.
  • a model selection mechanism can be provided, information used for model selection can be determined, the situation of inaccurate model selection can be reduced, and the efficiency of model selection can be improved.
  • the embodiments of the present disclosure specifically disclose a scheme for sending the model selection result to the third node when the first node is a terminal device and the third node is a base station.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model according to the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection caused by different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG25 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG25 , the method may include the following steps:
  • Step 2501 When the first node is a terminal device and the third node is a core network node, the model selection result is sent to the third node via NAS signaling.
  • Figure 26 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the terminal device sends the model selection result to the core network node through NAS signaling.
  • the result of the model selection includes identification ID information for identifying the model.
  • the model selection result is sent to the third node through NAS signaling.
  • a model selection mechanism can be provided, and the information used for model selection can be determined, thereby reducing the situation of inaccurate model selection and improving the efficiency of model selection.
  • the embodiments of the present disclosure specifically disclose a scheme for sending the model selection result to the third node when the first node is a terminal device and the third node is a core network node.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model according to the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG27 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG27 , the method may include the following steps:
  • Step 2701 When the first node is a base station and the third node is a terminal device, the model selection result is sent to the third node via RRC signaling and/or lower layer signaling.
  • Figure 28 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the base station sends the model selection result to the terminal device through RRC signaling and/or lower layer signaling.
  • the result of the model selection includes identification ID information for identifying the model.
  • the model selection result is sent to the third node through RRC signaling and/or lower layer signaling.
  • a model selection mechanism can be provided, the information used for model selection can be determined, the situation of inaccurate model selection can be reduced, and the efficiency of model selection can be improved.
  • the embodiments of the present disclosure specifically disclose a scheme for sending the model selection result to the third node when the first node is a base station and the third node is a terminal device.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model according to the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG29 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG29 , the method may include the following steps:
  • Step 2901 When the first node is a CU under a separation architecture and the third node is a DU under a separation architecture, send an F1AP message to the third node, wherein the F1AP message includes a model selection result.
  • Figure 30 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the CU under the separation architecture sends an F1AP message to the DU under the separation architecture, wherein the F1AP message includes the model selection result, that is, the CU under the separation architecture can send the model selection result to the DU under the separation architecture.
  • the result of the model selection includes identification ID information for identifying the model.
  • an F1AP message is sent to the third node, wherein the F1AP message includes a model selection result.
  • a model selection mechanism can be provided, information used for model selection can be determined, the situation of inaccurate model selection can be reduced, and the efficiency of model selection can be improved.
  • the embodiments of the present disclosure specifically disclose a scheme for sending the model selection result to the third node when the first node is a CU under a separation architecture and the third node is a DU under a separation architecture.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model based on the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG31 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG31 , the method may include the following steps:
  • Step 3101 When the first node is a DU under a separation architecture and the third node is a CU under a separation architecture, send an F1AP message to the third node, wherein the F1AP message includes a model selection result.
  • Figure 32 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the DU under the separation architecture sends an F1AP message to the CU under the separation architecture, wherein the F1AP message includes the model selection result, that is, the DU under the separation architecture can send the model selection result to the CU under the separation architecture.
  • the result of the model selection includes identification ID information for identifying the model.
  • an F1AP message is sent to the third node, wherein the F1AP message includes a model selection result.
  • a model selection mechanism can be provided, information used for model selection can be determined, situations where inaccurate model selection is reduced, and model selection efficiency can be improved.
  • the embodiments of the present disclosure specifically disclose a scheme for sending a model selection result to a third node when the first node is a DU under a separated architecture and the third node is a CU under a separated architecture.
  • the present disclosure provides a processing method for a "model selection" scenario, so as to select a model based on the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce situations where inaccurate model selection is caused by different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG33 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG33 , the method may include the following steps:
  • Step 3301 When the first node is an MN in a multi-connection scenario and the third node is an SN in a multi-connection scenario, an XnAP message is sent to the third node, wherein the XnAP message includes a model selection result.
  • Figure 34 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the MN in a multi-connection scenario sends an XnAP message to the SN in a multi-connection scenario, wherein the XnAP message includes a model selection result, that is, the MN in a multi-connection scenario can send the model selection result to the SN in the multi-connection scenario.
  • the result of the model selection includes identification ID information for identifying the model.
  • an XnAP message is sent to the third node, wherein the XnAP message includes a model selection result.
  • a model selection mechanism can be provided, information used for model selection can be determined, the situation of inaccurate model selection can be reduced, and the efficiency of model selection can be improved.
  • the embodiments of the present disclosure specifically disclose a scheme for sending the model selection result to the third node when the first node is an MN in a multi-connection scenario and the third node is an SN in a multi-connection scenario.
  • the present disclosure provides a processing method for a "model selection" scenario, so as to select a model based on the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG35 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a first node. As shown in FIG35 , the method may include the following steps:
  • Step 3501 When the first node is an SN in a multi-connection scenario and the third node is an MN in a multi-connection scenario, an XnAP message is sent to the third node, wherein the XnAP message includes a model selection result.
  • Figure 36 is an interactive schematic diagram of a model selection method provided by an embodiment of the present disclosure.
  • the SN in the multi-connection scenario sends an XnAP message to the MN in the multi-connection scenario, wherein the XnAP message includes the model selection result, that is, the SN in the multi-connection scenario can send the model selection result to the MN in the multi-connection scenario.
  • the result of the model selection includes identification ID information for identifying the model.
  • an XnAP message is sent to the third node, wherein the XnAP message includes a model selection result.
  • a model selection mechanism can be provided, information used for model selection can be determined, the situation of inaccurate model selection can be reduced, and the efficiency of model selection can be improved.
  • the embodiments of the present disclosure specifically disclose a scheme for sending the model selection result to the third node when the first node is an SN in a multi-connection scenario and the third node is an MN in a multi-connection scenario.
  • the present disclosure provides a processing method for a "model selection" scenario, so as to select a model based on the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG37 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by the second node. As shown in FIG37 , the method may include the following steps:
  • Step 3701 Send information for model selection to a first node, wherein the information for model selection is used to instruct the first node to select a model.
• the information used for model selection includes at least one of the following:
• priority level information, wherein the priority level is used to indicate the model selection priority and/or fallback priority;
• regional range information for model selection, wherein the regional range information is used to indicate the regional range in which the model is available;
• usage time information, wherein the usage time information is used to indicate the time during which the model is available;
• terminal device status information, wherein the terminal device status information is used to indicate the status of the terminal device when the model is available;
• function type information, wherein the function type information is used to indicate the function targeted by the model;
• an event criterion, wherein the event criterion is used to indicate the specific event for which the model is to be used;
• a wireless environment related threshold criterion, wherein the wireless environment related threshold criterion is used to indicate the wireless environment in which the model is available;
• a service-related criterion, wherein the service-related criterion is used to indicate the specific service and/or service experience situation for which the model is applicable;
• a model performance related criterion, wherein the model performance related criterion is used to indicate the performance indicators for which the model is available;
• a terminal device moving speed criterion, wherein the terminal device moving speed criterion is used to indicate the terminal device speed and/or a specific moving speed threshold for which the model is available;
• a terminal computing power criterion and/or power consumption criterion, wherein this criterion is used to indicate the terminal device capability requirements for the model to be available;
• a model application scenario, wherein the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
• the regional range information for model selection includes a network identifier.
• the network identifier includes at least one of the following:
• a radio access network notification area RNA.
• the state of the terminal device includes at least one of the following:
• an RRC idle state RRC_IDLE.
• the wireless environment related threshold criterion includes at least one of the following:
• uplink signal interference measured by the base station.
• the service-related criterion includes at least one of the following:
• quality of experience QoE;
• quality of service QoS.
• the terminal computing power criterion includes a utilization threshold of a central processing unit CPU.
  • the information used for model selection is for each specific model and/or for each specific model identifier.
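• Since the information may be provided per specific model and/or per specific model identifier, one simple way to picture it is a mapping keyed by model ID, as in the sketch below; the identifiers and fields are invented for the example.

    # Hypothetical per-model-ID store of information for model selection.
    selection_info_by_model_id = {
        "model-001": {"priority": 1, "function_type": "beam_management"},
        "model-002": {"priority": 2, "function_type": "beam_management"},
    }

    def info_for(model_id: str) -> dict:
        """Look up the selection information configured for one model ID."""
        return selection_info_by_model_id.get(model_id, {})

    print(info_for("model-002")["priority"])  # 2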
  • information for model selection is sent to the first node, wherein the information for model selection is used to instruct the first node to select a model.
  • a model selection mechanism can be provided, and through the interaction of information for model selection, the situation of inaccurate model selection can be reduced, and the efficiency of model selection can be improved.
  • the present disclosure provides a processing method for a "model selection" situation, in which information for model selection is sent to the first node, so that the first node can perform model selection, reducing the model selection time, and without the need for multiple nodes to participate in the selection, the situation of inaccurate model selection caused by different inference nodes can be reduced, and the efficiency and accuracy of model selection can be improved.
  • FIG38 is a flow chart of a model selection method provided by an embodiment of the present disclosure. The method is executed by a third node. As shown in FIG38 , the method may include the following steps:
• Step 3801 Receive a model selection result sent by the first node.
• Step 3802 Perform relevant operations based on the model selection result.
  • the result of the model selection includes identification ID information for identifying the model.
  • the model selection result sent by the first node is received; and relevant operations are performed according to the model selection result.
  • a model selection mechanism can be provided, and the result of the model selection is applied to the corresponding third node, and the third node can perform relevant operations.
  • the present disclosure provides a processing method for a "model selection" scenario, so as to perform relevant operations according to the model selection result sent by the first node, which can reduce the model selection time, and does not require multiple nodes to participate in the selection, which can reduce the situation where the model selection is inaccurate due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG39 is a schematic diagram of the structure of a model selection system provided by an embodiment of the present disclosure. As shown in FIG39 , the system includes:
  • a second node configured to send information for model selection to the first node
  • a first node used for receiving information for model selection sent by a second node
  • the first node is further used to select a model according to the information used for model selection.
  • the second node can send information for model selection to the first node; the first node can receive the information for model selection sent by the second node; the first node can select a model based on the information for model selection.
  • a model selection mechanism can be provided. Through the interaction of information for model selection, the situation of inaccurate model selection can be reduced, and the efficiency of model selection can be improved.
  • the present disclosure provides a processing method for a "model selection" situation, which sends information for model selection to the first node, so that the first node can perform model selection, reduce the model selection time, and do not need multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
  • FIG40 is a schematic diagram of the structure of a model selection system provided by an embodiment of the present disclosure. As shown in FIG40 , the system includes:
  • a first node is used to determine information for model selection
  • the first node is further used to select a model according to the information for model selection;
  • the first node is further used to send the model selection result to the third node;
  • a third node is used to receive the model selection result sent by the first node
  • the third node is used to perform related operations based on the model selection results.
  • the first node determines information for model selection; the first node selects a model based on the information for model selection; the first node sends the model selection result to the third node; the third node receives the model selection result sent by the first node; the third node performs relevant operations based on the model selection result.
  • a model selection mechanism can be provided, and the result of the model selection can be applied to the corresponding third node, and the third node can perform relevant operations.
  • the present disclosure provides a processing method for a "model selection" scenario, so as to perform relevant operations based on the model selection result sent by the first node, which can reduce the model selection time, and does not require multiple nodes to participate in the selection, which can reduce the situation where the model selection is inaccurate due to different inference nodes, and can improve the efficiency and accuracy of model selection.
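• The interaction of this system can be summarised with the toy sketch below, in which a second node supplies selection information, the first node selects a model, and a third node acts on the result; every class, method and priority convention here is an assumption made for the example.

    class SecondNode:
        def send_selection_info(self) -> list:
            # Candidate models with hypothetical priority information.
            return [{"id": "model-A", "priority": 2}, {"id": "model-B", "priority": 1}]

    class FirstNode:
        def select(self, candidates: list) -> str:
            # In this sketch a lower number means a higher priority.
            return min(candidates, key=lambda m: m["priority"])["id"]

    class ThirdNode:
        def apply(self, model_id: str) -> None:
            print(f"performing the related operation with {model_id}")

    info = SecondNode().send_selection_info()
    selected = FirstNode().select(info)
    ThirdNode().apply(selected)  # performing the related operation with model-B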
  • FIG41 is a schematic diagram of the structure of a model selection device provided by an embodiment of the present disclosure. As shown in FIG41 , the device 4100 may be arranged at the first node side, and the device 4100 may include:
  • a determination module 4101 used to determine information for model selection
  • the selection module 4102 is used to select a model according to the information used for model selection.
  • the information used for model selection is determined by the determination module; the selection module selects the model according to the information used for model selection.
  • a model selection mechanism can be provided, which can determine the information used for model selection, reduce the situation of inaccurate model selection, and improve the efficiency of model selection.
  • the present disclosure provides a processing method for a "model selection" situation, so as to select a model according to the information used for model selection, reduce the model selection time, and do not require multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
• the information used for model selection includes at least one of the following:
• priority level information, wherein the priority level is used to indicate the model selection priority and/or fallback priority;
• regional range information for model selection, wherein the regional range information is used to indicate the regional range in which the model is available;
• usage time information, wherein the usage time information is used to indicate the time during which the model is available;
• terminal device status information, wherein the terminal device status information is used to indicate the status of the terminal device when the model is available;
• function type information, wherein the function type information is used to indicate the function targeted by the model;
• an event criterion, wherein the event criterion is used to indicate the specific event for which the model is to be used;
• a wireless environment related threshold criterion, wherein the wireless environment related threshold criterion is used to indicate the wireless environment in which the model is available;
• a service-related criterion, wherein the service-related criterion is used to indicate the specific service and/or service experience situation for which the model is applicable;
• a model performance related criterion, wherein the model performance related criterion is used to indicate the performance indicators for which the model is available;
• a terminal device moving speed criterion, wherein the terminal device moving speed criterion is used to indicate the terminal device speed and/or a specific moving speed threshold for which the model is available;
• a terminal computing power criterion and/or power consumption criterion, wherein this criterion is used to indicate the terminal device capability requirements for the model to be available;
• a model application scenario, wherein the model application scenario is used to indicate the geographical coverage scenario in which the model can be used.
  • the information used for model selection includes priority level information, and wherein,
  • the selection module 4102 is used to select a model according to the information used for model selection, and is specifically used to:
• fall back to a third model according to the priority level information, wherein the third model is the lowest-priority model or a default model.
  • the information used for model selection includes priority level information
  • the selection module 4102 is used to select a model according to the information used for model selection, and is specifically used to:
• a model with the highest priority level is selected from at least one model according to the priority level information, wherein the information used for model selection further includes at least one other item of information in addition to the priority level information.
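• A hedged reading of the two behaviours above (select the highest-priority candidate, otherwise fall back to the lowest-priority or default model) is sketched below; the priority convention, the "usable" flag and the default model are assumptions made for the example.

    def select_by_priority(candidates: list, default_model: str) -> str:
        """Select the highest-priority model, falling back when no candidate fits.

        Each candidate is a dict with "id", "priority" (lower = higher priority
        in this sketch) and "usable" (whether its other criteria are satisfied).
        """
        usable = [m for m in candidates if m.get("usable", True)]
        if usable:
            return min(usable, key=lambda m: m["priority"])["id"]
        if candidates:
            # Fall back to the lowest-priority model among the configured ones.
            return max(candidates, key=lambda m: m["priority"])["id"]
        return default_model

    models = [
        {"id": "model-A", "priority": 1, "usable": False},
        {"id": "model-B", "priority": 2, "usable": True},
    ]
    print(select_by_priority(models, default_model="model-default"))  # model-B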
• the regional range information for model selection includes a network identifier.
• the network identifier includes at least one of the following:
• a radio access network notification area RNA.
• the state of the terminal device includes at least one of the following:
• an RRC idle state RRC_IDLE.
• the wireless environment related threshold criterion includes at least one of the following:
• uplink signal interference measured by the base station.
• the service-related criterion includes at least one of the following:
• quality of experience QoE;
• quality of service QoS.
• the terminal computing power criterion includes a utilization threshold of a central processing unit CPU.
  • the information used for model selection is for each specific model and/or for each specific model identifier.
  • the determination module 4101 when used to determine information for model selection, is specifically used to:
  • the information for model selection sent by the second node is received.
• the first node and the second node are selected from at least one of the following combinations:
• the first node is a terminal device, and the second node is a base station;
• the first node is a terminal device, and the second node is a core network node;
• the first node is a base station, and the second node is a core network node;
• the first node is a base station, and the second node is an operation, maintenance and management (OAM) node;
• the first node is a destination base station in the handover process, and the second node is a source base station in the handover process;
• the first node is a master node MN in a multi-connection scenario, and the second node is a secondary node SN in a multi-connection scenario;
• the first node is the new serving gNB, and the second node is the last serving gNB;
• the first node is a centralized unit CU under the separation architecture, and the second node is a distributed unit DU under the separation architecture.
• the determination module 4101, when used to receive the information for model selection sent by the second node, is specifically used for at least one of the following:
• when the first node is a terminal device and the second node is a base station, receiving an RRC message sent by the second node, wherein the RRC message includes information for model selection;
• when the first node is a terminal device and the second node is a core network node, receiving a non-access stratum NAS message sent by the second node, wherein the NAS message includes information for model selection;
• when the first node is a base station and the second node is a core network node, receiving a next generation application protocol NGAP message sent by the second node, wherein the NGAP message includes information for model selection;
• when the first node is a base station and the second node is an operation, maintenance and management (OAM) node, receiving information for model selection sent by the second node;
• when the first node is a destination base station in the handover process and the second node is a source base station in the handover process, receiving an Xn application protocol XnAP message sent by the second node, wherein the XnAP message includes information for model selection;
• when the first node is a master node MN in a multi-connection scenario and the second node is a secondary node SN in a multi-connection scenario, receiving an XnAP message sent by the second node, wherein the XnAP message includes information for model selection;
• when the first node is the new serving gNB and the second node is the last serving gNB, receiving an XnAP message sent by the second node, wherein the XnAP message includes information for model selection;
• when the first node is a centralized unit CU under a separation architecture and the second node is a distributed unit DU under the separation architecture, receiving an F1AP message sent by the second node, wherein the F1AP message includes information for model selection.
  • the determination module 4101 is further configured to:
• send the model selection result to the third node.
• the first node and the third node are selected from at least one of the following combinations:
• the first node is a terminal device, and the third node is a base station;
• the first node is a terminal device, and the third node is a core network node;
• the first node is a base station, and the third node is a terminal device;
• the first node is a CU under the separation architecture, and the third node is a DU under the separation architecture;
• the first node is a DU under the separation architecture, and the third node is a CU under the separation architecture;
• the first node is an MN in a multi-connection scenario, and the third node is an SN in a multi-connection scenario;
• the first node is an SN in a multi-connection scenario, and the third node is an MN in a multi-connection scenario.
• the determination module 4101, when used to send the model selection result to the third node, is specifically used for at least one of the following:
• when the first node is a terminal device and the third node is a base station, sending the model selection result to the third node through RRC signaling and/or lower layer signaling;
• when the first node is a terminal device and the third node is a core network node, sending the model selection result to the third node through NAS signaling;
• when the first node is a base station and the third node is a terminal device, sending the model selection result to the third node through RRC signaling and/or lower layer signaling;
• when the first node is a CU under the separation architecture and the third node is a DU under the separation architecture, sending an F1AP message to the third node, wherein the F1AP message includes a model selection result;
• when the first node is a DU under the separation architecture and the third node is a CU under the separation architecture, sending an F1AP message to the third node, wherein the F1AP message includes a model selection result;
• when the first node is an MN in a multi-connection scenario and the third node is an SN in a multi-connection scenario, sending an XnAP message to the third node, wherein the XnAP message includes a model selection result;
• when the first node is an SN in a multi-connection scenario and the third node is an MN in a multi-connection scenario, sending an XnAP message to the third node, wherein the XnAP message includes a model selection result.
  • the result of the model selection includes identification ID information for identifying the model.
  • FIG42 is a schematic diagram of the structure of a model selection device provided by an embodiment of the present disclosure. As shown in FIG42 , the device 4200 may be arranged at the second node side, and the device 4200 may include:
  • the sending module 4201 is used to send information for model selection to the first node, wherein the information for model selection is used to instruct the first node to select a model.
• In the model selection device of the embodiment of the present disclosure, information for model selection is sent to the first node through the sending module, wherein the information for model selection is used to instruct the first node to select a model.
  • a model selection mechanism can be provided, and through the interaction of information for model selection, the situation of inaccurate model selection can be reduced, and the efficiency of model selection can be improved.
  • the present disclosure provides a processing method for a "model selection" situation, in which information for model selection is sent to the first node, so that the first node can perform model selection, reduce the model selection time, and do not need multiple nodes to participate in the selection, which can reduce the situation of inaccurate model selection due to different inference nodes, and can improve the efficiency and accuracy of model selection.
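  • Purely as a hedged sketch of how the second node might package the information for model selection before sending it to the first node (the field names and example values below are hypothetical; the embodiments only require that the information contain at least one of the listed items and be provided per model and/or per model ID):

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class ModelSelectionInfo:
        """Information for model selection, provided per model and/or per model ID."""
        model_id: str                                        # identifies the model the criteria apply to
        priority: Optional[int] = None                       # selection and/or fallback priority level
        plmn_list: List[str] = field(default_factory=list)   # region range: allowed PLMNs
        usable_time: Optional[Tuple[str, str]] = None        # e.g. (start, end) of the usable time window
        ue_states: List[str] = field(default_factory=list)   # e.g. "RRC_CONNECTED", "RRC_IDLE"
        function_type: Optional[str] = None                  # e.g. "positioning", "CSI compression"
        rsrp_threshold_dbm: Optional[float] = None           # radio-environment related threshold
        max_ue_speed_kmh: Optional[float] = None             # UE moving speed criterion
        max_cpu_usage: Optional[float] = None                # terminal computing-power criterion (CPU usage)
        scenario: Optional[str] = None                       # e.g. "Urban", "Indoor"

    def build_info_for_model(model_id: str) -> ModelSelectionInfo:
        """The second node fills in whichever criteria it wants the first node to apply."""
        return ModelSelectionInfo(
            model_id=model_id,
            priority=1,                     # 1 taken as the highest priority in this sketch
            plmn_list=["46000", "46001"],   # hypothetical PLMN identities
            ue_states=["RRC_CONNECTED"],
            function_type="beam management",
            rsrp_threshold_dbm=-110.0,
        )

    # The record would then be encoded into the message that fits the node pair
    # (RRC, NAS, NGAP, XnAP or F1AP) and sent to the first node.
    info = build_info_for_model("model-123456")
    print(info.model_id, info.priority, info.function_type)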
  • FIG43 is a schematic diagram of the structure of a model selection device provided by an embodiment of the present disclosure. As shown in FIG43 , the device 4300 may be arranged at the third node side, and the device 4300 may include:
  • the receiving module 4301 is used to receive the model selection result sent by the first node
  • the execution module 4302 is used to perform related operations according to the model selection result.
  • In the model selection device of the embodiment of the present disclosure, the model selection result sent by the first node is received by the receiving module, and the execution module performs the related operations according to the model selection result.
  • In the embodiment of the present disclosure, a model selection mechanism can be provided, the model selection result is applied to the corresponding third node, and the third node can perform the related operations.
  • The present disclosure provides a processing method for the "model selection" situation, in which the related operations are performed according to the model selection result sent by the first node, which reduces the model selection time and does not require multiple nodes to participate in the selection; this can reduce inaccurate model selection caused by different inference nodes and can improve the efficiency and accuracy of model selection.
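  • The following is a minimal, non-authoritative sketch of the receiving side: the related operations are deliberately generic in the embodiments, and activating the model identified by the ID information in the result is only one possible interpretation:

    from typing import Any, Dict

    class ThirdNode:
        """Minimal sketch of a third node holding locally available models keyed by model ID."""

        def __init__(self, models: Dict[str, Any]):
            self.models = models
            self.active_model_id = None

        def on_model_selection_result(self, result: Dict[str, str]) -> None:
            """Handle a model selection result received from the first node.

            The result is assumed to carry at least the ID information that
            identifies the selected model, as stated in the embodiments above.
            """
            model_id = result["model_id"]
            if model_id not in self.models:
                # What to do when the identified model is absent is not specified
                # in the embodiments; this sketch simply ignores the result.
                return
            self.active_model_id = model_id  # subsequent inference uses this model

    node = ThirdNode(models={"model-123456": object()})
    node.on_model_selection_result({"model_id": "model-123456"})
    print(node.active_model_id)  # -> model-123456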
  • FIG44 is a block diagram of a terminal device UE 4400 provided by an embodiment of the present disclosure. For example, the UE 4400 may be a mobile phone, a computer, a digital broadcast terminal device, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
  • UE 4400 may include at least one of the following components: a processing component 4402 , a memory 4404 , a power component 4406 , a multimedia component 4408 , an audio component 4410 , an input/output (I/O) interface 4412 , a sensor component 4414 , and a communication component 4416 .
  • the processing component 4402 generally controls the overall operation of the UE 4400, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 4402 may include at least one processor 4420 to execute instructions to complete all or part of the steps of the above method.
  • the processing component 4402 may include at least one module to facilitate the interaction between the processing component 4402 and other components.
  • the processing component 4402 may include a multimedia module to facilitate the interaction between the multimedia component 4408 and the processing component 4402.
  • the memory 4404 is configured to store various types of data to support operations on the UE 4400. Examples of such data include instructions for any application or method operating on the UE 4400, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 4404 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • the power component 4406 provides power to various components of the UE 4400.
  • the power component 4406 may include a power management system, at least one power supply, and other components associated with generating, managing, and distributing power for the UE 4400.
  • the multimedia component 4408 includes a screen that provides an output interface between the UE4400 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes at least one touch sensor to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundaries of the touch or slide action, but also detect the wake-up time and pressure associated with the touch or slide operation.
  • the multimedia component 4408 includes a front camera and/or a rear camera.
  • when the UE 4400 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data.
  • Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
  • the audio component 4410 is configured to output and/or input audio signals.
  • the audio component 4410 includes a microphone (MIC), and when the UE 4400 is in an operating mode, such as a call mode, a recording mode, and a speech recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 4404 or sent via the communication component 4416.
  • the audio component 4410 also includes a speaker for outputting audio signals.
  • I/O interface 4412 provides an interface between processing component 4402 and peripheral interface modules, such as keyboards, click wheels, buttons, etc. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 4414 includes at least one sensor for providing various aspects of status assessment for the UE 4400.
  • the sensor component 4414 can detect the open/closed state of the UE 4400 and the relative positioning of components, such as the display and keypad of the UE 4400, and the sensor component 4414 can also detect the position change of the UE 4400 or a component of the UE 4400, the presence or absence of contact between the user and the UE 4400, the orientation or acceleration/deceleration of the UE 4400, and the temperature change of the UE 4400.
  • the sensor component 4414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 4414 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 4414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 4416 is configured to facilitate wired or wireless communication between the UE 4400 and other devices.
  • the UE 4400 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 4416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
  • the communication component 4416 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • UE4400 may be implemented by at least one application-specific integrated circuit (ASIC), digital signal processor (DSP), digital signal processing device (DSPD), programmable logic device (PLD), field programmable gate array (FPGA), controller, microcontroller, microprocessor or other electronic component to perform the above method.
  • Figure 45 is a block diagram of a base station 4500 provided in an embodiment of the present disclosure.
  • the base station 4500 can be provided as a network side device.
  • the base station 4500 includes a processing component 4522, which further includes at least one processor, and a memory resource represented by a memory 4532 for storing instructions that can be executed by the processing component 4522, such as an application.
  • the application stored in the memory 4532 may include one or more modules, each of which corresponds to a set of instructions.
  • the processing component 4522 is configured to execute instructions to perform any of the aforementioned methods applied to the base station, for example, the method shown in Figure 3.
  • the base station 4500 may also include a power supply component 4530 configured to perform power management of the base station 4500, a wired or wireless network interface 4550 configured to connect the base station 4500 to the network, and an input/output (I/O) interface 4558.
  • the base station 4500 may operate based on an operating system stored in the memory 4532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • In the above embodiments provided by the present disclosure, the methods provided by the embodiments of the present disclosure are introduced from the perspectives of the network side device and the UE, respectively.
  • In order to implement the functions in the methods provided by the embodiments of the present disclosure, the network side device and the UE may include a hardware structure and/or a software module, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module.
  • A certain one of the above functions may be executed in the form of a hardware structure, a software module, or a hardware structure plus a software module.
  • the present disclosure provides a communication device.
  • the communication device may include a transceiver module and a processing module.
  • the transceiver module may include a sending module and/or a receiving module, the sending module is used to implement a sending function, the receiving module is used to implement a receiving function, and the transceiver module may implement a sending function and/or a receiving function.
  • the communication device may be a terminal device (such as the terminal device in the aforementioned method embodiment), or a device in the terminal device, or a device that can be used in conjunction with the terminal device.
  • the communication device may be a network device, or a device in the network device, or a device that can be used in conjunction with the network device.
  • the communication device may be a network device, or a terminal device (such as the terminal device in the aforementioned method embodiment), or a chip, a chip system, or a processor that supports the network device to implement the aforementioned method, or a chip, a chip system, or a processor that supports the terminal device to implement the aforementioned method.
  • the device may be used to implement the method described in the aforementioned method embodiment, and the details may refer to the description in the aforementioned method embodiment.
  • the communication device may include one or more processors.
  • the processor may be a general-purpose processor or a dedicated processor, for example, a baseband processor or a central processing unit.
  • the baseband processor may be used to process the communication protocol and communication data, and the central processing unit may be used to control the communication device (such as a network side device, a baseband chip, a terminal device, a terminal device chip, a DU or a CU, etc.), execute a computer program, and process the data of the computer program.
  • the communication device may further include one or more memories, on which a computer program may be stored, and the processor executes the computer program so that the communication device performs the method described in the above method embodiment.
  • data may also be stored in the memory.
  • the communication device and the memory may be provided separately or integrated together.
  • the communication device may further include a transceiver and an antenna.
  • the transceiver may be referred to as a transceiver unit, a transceiver, or a transceiver circuit, etc., and is used to implement the transceiver function.
  • the transceiver may include a receiver and a transmitter, the receiver may be referred to as a receiver or a receiving circuit, etc., and is used to implement the receiving function; the transmitter may be referred to as a transmitter or a transmitting circuit, etc., and is used to implement the transmitting function.
  • the communication device may further include one or more interface circuits.
  • the interface circuit is used to receive code instructions and transmit them to the processor.
  • the processor runs the code instructions to enable the communication device to execute the method described in the above method embodiment.
  • when the communication device is a first node, the processor is used to execute any one of the methods shown in Figures 3 to 36;
  • when the communication device is a second node, the processor is used to execute the method shown in Figure 37;
  • when the communication device is a third node, the processor is used to execute the method shown in Figure 38.
  • the processor may include a transceiver for implementing receiving and sending functions.
  • the transceiver may be a transceiver circuit, or an interface, or an interface circuit.
  • the transceiver circuit, interface, or interface circuit for implementing the receiving and sending functions may be separate or integrated.
  • the above-mentioned transceiver circuit, interface, or interface circuit may be used for reading and writing code/data, or the above-mentioned transceiver circuit, interface, or interface circuit may be used for transmitting or delivering signals.
  • the processor may store a computer program, which runs on the processor and enables the communication device to perform the method described in the above method embodiment.
  • the computer program may be fixed in the processor, in which case the processor may be implemented by hardware.
  • the communication device may include a circuit that can implement the functions of sending or receiving or communicating in the aforementioned method embodiments.
  • the processor and transceiver described in the present disclosure may be implemented in an integrated circuit (IC), an analog IC, a radio frequency integrated circuit RFIC, a mixed signal IC, an application specific integrated circuit (ASIC), a printed circuit board (PCB), an electronic device, etc.
  • the processor and transceiver may also be manufactured using various IC process technologies, such as complementary metal oxide semiconductor (CMOS), N-type metal oxide semiconductor (NMOS), P-type metal oxide semiconductor (positive channel metal oxide semiconductor, PMOS), bipolar junction transistor (BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), etc.
  • the communication device described in the above embodiments may be a network device or a terminal device (such as the terminal device in the aforementioned method embodiment), but the scope of the communication device described in the present disclosure is not limited thereto, and the structure of the communication device may not be limited thereto.
  • the communication device may be an independent device or may be part of a larger device.
  • for example, the communication device may be: an IC set, where the IC set may also include a storage component for storing data and computer programs; or an ASIC, such as a modem.
  • the communication device may be a chip or a chip system
  • the chip includes a processor and an interface, wherein the number of the processors may be one or more, and the number of the interfaces may be multiple.
  • the chip also includes a memory for storing necessary computer programs and data.
  • the present disclosure also provides a readable storage medium having instructions stored thereon, which implement the functions of any of the above method embodiments when executed by a computer.
  • the present disclosure also provides a computer program product, which implements the functions of any of the above method embodiments when executed by a computer.
  • the computer program product includes one or more computer programs.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer program can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer program can be transmitted from a website, computer, server or data center to another website, computer, server or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave, etc.).
  • the computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • the available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a high-density digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
  • In the present disclosure, "at least one" may also be described as one or more, and "a plurality" may be two, three, four or more, which is not limited in the present disclosure.
  • In the present disclosure, technical features are distinguished by "first", "second", "third", "A", "B", "C", "D", etc., and there is no order of precedence or size among the technical features described by "first", "second", "third", "A", "B", "C" and "D".

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present disclosure proposes a model selection method, apparatus, device and storage medium, belonging to the field of communication technology. The method includes determining information for model selection, and selecting a model according to the information for model selection. The present disclosure provides a processing method for the "model selection" situation, so that model selection is performed according to the information for model selection, which reduces the model selection time and does not require multiple nodes to participate in the selection; this can reduce inaccurate model selection caused by different inference nodes and can improve the efficiency and accuracy of model selection.

Description

模型选择方法、装置 技术领域
本公开涉及通信技术领域,尤其涉及一种模型选择方法、装置、设备及存储介质。
背景技术
在通信系统中,移动通信技术的广泛应用为人们生活的各方面带来巨大改变。其中,模型技术的持续发展不仅为智能终端设备带来丰富多彩的各种应用,也在促进各个行业进行产业升级。在针对模型的操作中,已经训练好的模型可以是多个,在使用模型时,可以从中选择一个模型进行模型推理。但是,由于不同的模型对应的推理功能不同,执行推理的节点不一样时,增加模型选择时长,使得模型选择的准确性和效率较低。
发明内容
本公开提出的一种模型选择方法、装置、设备及存储介质,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
本公开一方面实施例提出的一种模型选择方法,所述方法由第一节点执行,所述方法包括:
确定用于模型选择的信息;
根据所述用于模型选择的信息,选择模型。
本公开另一方面实施例提出的一种模型选择方法,所述方法由第二节点执行,所述方法包括:
发送用于模型选择的信息至第一节点,其中,所述用于模型选择的信息用于指示所述第一节点选择模型。
本公开另一方面实施例提出的一种模型选择方法,所述方法由第三节点执行,所述方法包括:
接收第一节点发送的模型选择结果;
根据所述模型选择结果,执行相关操作。
本公开又一方面实施例提出的一种模型选择装置,所述装置设置于第一节点侧,所述装置包括:
确定模块,用于确定用于模型选择的信息;
选择模块,用于根据所述用于模型选择的信息,选择模型。
本公开又一方面实施例提出的一种模型选择装置,所述装置设置于第二节点侧,所述装置包括:
发送模块,用于发送用于模型选择的信息至第一节点,其中,所述用于模型选择的信息用于指示所述第一节点选择模型。
本公开又一方面实施例提出的一种模型选择装置,所述装置设置于第三节点侧,所述装置包括:
接收模块,用于接收第一节点发送的模型选择结果;
执行模块,用于根据所述模型选择结果,执行相关操作。
本公开又一方面实施例提出的一种第一节点,所述设备包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如上一方面实施例提出的方法。
本公开又一方面实施例提出的一种第二节点,所述设备包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如上一方面实施例提出的方法。
本公开又一方面实施例提出的一种第三节点,所述设备包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如上一方面实施例提出的方法。
本公开又一方面实施例提出的通信装置,包括:处理器和接口电路;
所述接口电路,用于接收代码指令并传输至所述处理器;
所述处理器,用于运行所述代码指令以执行如一方面实施例提出的方法。
本公开又一方面实施例提出的计算机可读存储介质,用于存储有指令,当所述指令被执行时,使如一方面实施例提出的方法被实现。
本公开又一方面实施例提出的一种模型选择系统,所述系统包括:
第二节点,用于发送用于模型选择的信息至第一节点;
所述第一节点,用于接收所述第二节点发送的所述用于模型选择的信息;
所述第一节点,还用于根据所述用于模型选择的信息,选择模型。
本公开又一方面实施例提出的一种模型选择系统,所述系统包括:
第一节点,用于确定用于模型选择的信息;
所述第一节点,还用于根据所述用于模型选择的信息,选择模型;
所述第一节点,还用于发送模型选择结果至第三节点;
所述第三节点,用于接收所述第一节点发送的模型选择结果;
所述第三节点,用于根据所述模型选择结果,执行相关操作。
综上所述,在本公开实施例之中,确定用于模型选择的信息;根据用于模型选择的信息,选择模型。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
附图说明
本公开上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为本公开实施例所提供的一种人工智能在无线空口中框架的举例示意图;
图2为本公开一个实施例所提供的一种无线网络的分离架构;
图3为本公开一个实施例所提供的一种模型选择方法的流程示意图;
图4为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图5为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图6为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图7为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图8为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图9为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图10为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图11为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图12为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图13为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图14为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图15为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图16为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图17为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图18为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图19为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图20为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图21为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图22为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图23为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图24为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图25为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图26为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图27为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图28为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图29为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图30为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图31为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图32为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图33为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图34为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图35为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图36为本公开又一个实施例所提供的一种模型选择方法的交互示意图;
图37为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图38为本公开又一个实施例所提供的一种模型选择方法的流程示意图;
图39为本公开一个实施例所提供的一种模型选择系统的结构示意图;
图40为本公开又一个实施例所提供的一种模型选择系统的结构示示意图;
图41为本公开一个实施例所提供的一种模型选择装置的结构示意图;
图42为本公开另一个实施例所提供的一种模型选择装置的结构示意图;
图43为本公开另一个实施例所提供的一种模型选择装置的结构示意图;
图44为本公开一个实施例所提供的一种终端设备的框图;
图45为本公开一个实施例所提供的一种网络侧设备的框图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本公开实施例相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开实施例的一些方面相一致的装置和方法的例子。
在本公开实施例使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本公开实施例。在本公开实施例和所附权利要求书中所使用的单数形式的“一种”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本文中使用的术语“和/或”是指并包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本公开实施例可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本公开实施例范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语“如果”及“若”可以被解释成为“在……时”或“当……时”或“响应于确定”。
在本公开实施例中涉及的网元或是网络功能,其既可以是独立的硬件设备实现,也可以通过硬件设备中的软件实现,本公开实施例中并不对此做出限定。
在通信系统中,5G技术的广泛应用为人们生活的方方面面带来巨大改变。根据国际电信联盟(International Telecommunication Union,ITU)的愿景,第五代移动通信技术(5th Generation Mobile Communication Technology,5G)将渗透到未来社会的各个领域,以用户为中心构建全方位的信息生态系统。其中,例如5G用户体验速率可达100Mbit/s至1Gbit/s,能够支持移动虚拟现实(Virtual Reality,VR)等极致业务体验。例如5G峰值速率可达10Gbit/s~20Gbit/s,流量密度可达10Mbit/s/m2,能够支持未来千倍以上移动业务流量的增长。例如5G联接数密度可达100万个/m2,能够有效支持海量的物联网设备。例如5G传输时延可达毫秒量级,可满足车联网和工业控制的严苛要求。例如5G能够支持500km/h的移动速度,能够在高铁环境下满足良好的用户体验。由此可见,5G作为新型基础设施代表将重新构建未来的信息化社会。
近年来,模型技术在多个领域取得不断突破。智能语音、计算机视觉等领域的持续发展不仅为智能终端带来丰富多彩的各种应用,在教育、交通、家居、医疗、零售、安防等多个领域也有广泛应用,给人们生活带来便利同时,也在促进各个行业进行产业升级。模型技术也正在加速与其他学科领域交叉渗透,其发展融合不同学科知识同时,也为不同学科的发展提供了新的方向和方法。
在第三代合作伙伴计划3GPP(3rd Generation Partnership Project,)3GPP版本(Release)18阶段,考虑的用例包括信道状态信息(Channel State Information,CSI)压缩、定位和波束管理等。在无线接入网RAN1设立了关于人工智能技术在无线空口中的研究项目。该项目旨在研究如何在无线空口中引入人工智能技术,同时探讨人工智能技术如何对无线空口的传输技术进行辅助提高。RAN1关于模型的讨论包括在模型训练完成后,针对同一种功能,可能有多个训练好的模型可以用,可以为终端设备UE和/或基站在这些模型中选择最模型进行模型推理。
图1为本公开实施例所提供的一种人工智能在无线空口中框架的举例示意图。如图1所示,该流程例如可以包括数据收集(Data collection);训练数据(Training data);模型训练(model training);模型部署或者更新(Model deployment/Update);推理数据(Inference data);模型推理(Model inference);输出(Output);模型性能反馈(Model performance feedback);(Actor)执行器和反馈(Feedback)。
以及,在本公开的一个实施例之中,在人工智能(Artificial Intelligence,AI)操作AI operation中,涉及的流程包括以下至少一种:
训练数据的收集;
模型训练;
模型传输;
模型推理性能的监测;
AI模型的微调(fine tuning);
AI模型的推理;
模型更新。
其中,训练数据的收集是指从网络节点、管理实体或终端收集的数据,作为AI/ML模型训练、数据分析和推理的基础。
AI/ML模型是指一种应用机器学习技术的数据驱动算法,可以根据一组输入生成一组由预测信息和/或决策参数组成的输出。
AI/ML训练是指通过学习最能呈现数据的特征和模式来训练AI/ML模型,并得到经过训练的AI/ML模型进行推理的在线或离线过程。
AI/ML推理是指根据收集到的数据和AI/ML模型,使用经过训练的AI/ML模型进行预测或指导决策的过程。
图2为本公开实施例所提供的一种无线网络的分离架构,如图2所示,下一代基站(the next Generation Node B,gNB),中央单元(Central Unit,CU),分布单元(Distributed Unit,DU),控制平面(control plane,CP),gNB-CU-CP为控制单元控制平面,gNB-CU-UP为控制单元用户平面,E1用于gNB-CU-CP和gNB-CU-UP之间的接口连接,F1-C用于gNB-CU和gNB-DU之间的控制平面连接connected),F1-U用于gNB-CU和gNB-DU的用户平面连接。其中,gNB-CU-CP负责RRC和PDCP控制平面的功能,gNB-CU-UP负责GTP-U、服务数据适配协议(Service Data Adaptation Protocol,SDAP)和分组数据汇聚协议(Packet Data Convergence Protocol,PDCP)用户平面的功能,gNB-DU负责无线链路层控制协议(Radio Link Control,RLC)、多址接入信道(Multiple Access Channel,MAC)和端口物理层(Physical Layer,PHY)的功能。如果将AI模型应用到空口技术当中,根据不同的用例,基站和/或终端设备(User Equipment,UE)都可以根据AI模型进行推理来提升相应的技术功能。而由于执行推理的功能不同,即AI模型的推理可能在物理层、MAC层、RLC层、PDCP层、RRC层或一种新的AI层执行,如果是在无线网络分离架构下或多连接场景下,执行推理的节点是不一样的,因此会出现模型选择的准确性较低的情况。
下面参考附图对本公开实施例所提供的一种模型选择方法、装置、设备及存储介质进行详细描述。
图3为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图3所示,该方法可以包括以下步骤:
步骤301、确定用于模型选择的信息;
步骤302、根据用于模型选择的信息,选择模型。
其中,在本公开的一个实施例之中,本公开实施例的技术方案可以应用于不同的网络架构下。其中不同的网络架构包括但不限于分离架构和多连接场景。
以及,在本公开的一个实施例之中,第一节点根据用于模型选择的信息,选择模型时,例如可以是第一节点根据用于模型选择的信息,选择合适的模型。
以及,在本公开的一个实施例之中,第一节点根据用于模型选择的信息,选择模型时,例如可以是第一节点根据用于模型选择的信息,选择与该信息对应的模型。
其中,在本公开的一个实施例之中,其中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
其中,在本公开的一个实施例之中,优先级等级信息可以由整型INTEGER表示,其中,该整型例如可以是(1..X)的正整数。其中,X为一个大于1的整数。例如,按优先级递减顺序排列,即1为最高优先级,X为最低优先级,或者按优先级递增顺序排列,即1为最低优先级,X为最高优先级。
以及,在本公开的一个实施例之中,其中,用于模型选择的信息包括优先级等级信息,并且其中,
根据用于模型选择的信息,选择模型,包括:
响应于第一模型使用异常或不再满足使用条件,根据优先等级信息,回退至满足使用条件的第二模型,其中,第二模型为满足使用条件的次低优先级等级模型;或
根据优先等级信息,回退至第三模型,其中,第三模型为最低优先级等级模型或缺省模型。
示例地,在本公开的一个实施例之中,使用条件并不特指某一固定使用条件。例如,当模型使用时的使用场景发生变化时,该使用条件也可以相应变化。
以及,在本公开的一个实施例之中,第一模型例如可以是当前使用的模型,第一模型并不特指某一固定模型。该第一模型中的第一仅用于与其余模型进行区分。
示例地,在本公开的一个实施例之中,第二模型例如可以是满足使用条件的次低优先级等级模型,该第二模型并不特指某一固定模型。例如当模型集合中各模型的优先级等级信息发生变化时,该第二模型也可以相应变化。
以及,在本公开的一个实施例之中,第三模型为最低优先级等级模型或缺省模型。
进一步地,在本公开的一个实施例之中,其中,用于模型选择的信息包括优先级等级信息,并且其中,
根据用于模型选择的信息,选择模型,包括:
在模型集合中至少一个模型满足模型选择信息时,根据优先级等级信息,在至少一个模型中选择优先级等级最高的模型,其中,模型选择信息包括除优先级等级信息之外的至少一项用于模型选择的信息。
以及,在本公开的一个实施例之中,模型集合是指由至少一个模型汇聚而成的集体。该模型集合并不特指某一固定集合。例如,当模型集合中包括的模型数量发生变化时,该模型集合也可以相应变化。例如,当模型集合中包括的模型类型发生变化时,该模型集合也可以相应变化。
以及,在本公开的一个实施例之中,模型选择信息包括除优先级等级信息之外的至少一项用于模型选择的信息。由于用于模型选择的信息包括多个信息,因此,模型选择信息并不特指某一固定信息。例如,当模型选择信息对应的信息数量发生变化时,该模型选择信息也可以相应变化。例如,当模型选择信息对应的具体信息发生变化时,该模型选择信息也可以相应变化。
示例地,在本公开的一个实施例之中,其中,模型选择的区域范围信息包括网络标识,
网络标识包括以下至少一项:
公共陆地移动网列表(Public Land Mobile Network list,PLMN list);
跟踪区代码(Tracking Area Code,TAC)列表list;
无线接入网通知区(RAN Notification Area,RNA);
下一代基站标识列表NG-RAN node ID list;
小区列表cell list;
经纬度和/或高度信息。
示例地,在本公开的一个实施例之中,终端设备所在的网络满足条件时,可以使用该条件对应的模型,但是终端设备所在的网络不满足改条件时,则不能使用该条件对应的模型。
其中,在本公开的一个实施例之中,区域范围信息例如可以是实际的地理位置区域。例如当终端设备的地理位置在该地理位置区域内时,可以使用该地理位置区域对应的模型,但是当终端设备的地理位置不在该地理位置区域内时,则不可以使用该地理位置区域对应的模型。
示例地,在本公开的一个实施例之中,模型选择的区域范围信息例如可以是经纬度信息。第一节点确定的终端设备的经纬度信息例如可以是100°E,40°N。第一节点例如确定A模型的区域范围覆盖该经纬度,确定B模型的区域范围未覆盖该经纬度时,第一节点可以选择A模型。
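As a minimal sketch of the latitude/longitude example in the preceding paragraph (the rectangular region representation, the class name and the concrete boundary values are assumptions introduced only for illustration; the embodiments merely require that the region range information indicate where a model is usable), the check could look as follows in Python:

    from dataclasses import dataclass

    @dataclass
    class LatLonBox:
        """Hypothetical rectangular region range in which a model is usable."""
        lon_min: float
        lon_max: float
        lat_min: float
        lat_max: float

        def covers(self, lon: float, lat: float) -> bool:
            return self.lon_min <= lon <= self.lon_max and self.lat_min <= lat <= self.lat_max

    # Model A's region covers 100°E, 40°N in this sketch; model B's does not.
    regions = {
        "A": LatLonBox(lon_min=95.0, lon_max=105.0, lat_min=35.0, lat_max=45.0),
        "B": LatLonBox(lon_min=110.0, lon_max=120.0, lat_min=20.0, lat_max=30.0),
    }

    ue_lon, ue_lat = 100.0, 40.0  # 100°E, 40°N, as in the example above
    usable = [model for model, box in regions.items() if box.covers(ue_lon, ue_lat)]
    print(usable)  # -> ['A'], so the first node would select model A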
示例地,在本公开的一个实施例之中,使用时间信息用于指示模型可用时间信息,该使用时间信息可以是特定的时间区间,即只能在规定的时间内可以使用对应的模型。该使用时间信息例如还可以是特定时长,即使用模型满足特定时长则停止使用该模型。
示例地,在本公开的一个实施例之中,终端设备的状态包括以下至少一项:
无线资源控制(Radio Resource Control,RRC)_连接状态RRC_CONNECTED;
RRC_不活动状态RRC_INACTIVE;
RRC_空闲状态RRC_IDLE。
其中,在本公开的一个实施例之中,终端设备可以在特定的网络状态下使用与该网络状态对应的模型。或者不同的模型对应不同的终端设备的状态。
以及,在本公开的一个实施例之中,功能类型包括但不限于定位positioning、CSI压缩、波束管理beam management等。
以及,在本公开的一个实施例之中,事件准则例如可以是与终端设备移动性相关的事件。例如,当满足A1,A2或A3事件,或者该事件可以是终端设备发送了与切换相关的信令。
示例地,在本公开的一个实施例之中,其中,无线环境相关阈值准则包括以下至少一项:
终端设备测量的信号强度;
终端设备测量的信号干扰;
基站测量的上行信号干扰。
以及,在本公开的一个实施例之中,例如可以是终端设备测量的信号强度大于一定的信号强度门限阈值,可以使用对应的模型,或者例如可以是终端设备测量的信号强度小于一定的信号强度门限阈值,可以使用对应的模型。该信号强度例如可以是参考信号接收功率(Reference Signal Receiving Power,RSRP)。
以及,在本公开的一个实施例之中,终端设备测量的信号干扰例如可以是LTE参考信号接收质量(Reference Signal Receiving Quality,RSRQ),当RSRQ大于一定的RSRQ门限阈值,可以使用对应的模型,或者例如可以是RSRQ小于一定的RSRQ门限阈值,可以使用对应的模型。
以及,在本公开的一个实施例之中,例如可以是基站测量的上行信号干扰大于一定的上行信号干扰门限阈值,可以使用对应的模型,或者例如可以是基站测量的上行信号干扰小于一定的上行信号干扰门限阈值,可以使用对应的模型。
示例地,在本公开的一个实施例之中,其中,业务相关准则包括以下至少一项:
协议数据单元(Protocol Data Unit,PDU)会话信息;
服务质量(Quality of Service,QoS)流信息;
无线承载信息;
网络切片信息;
体验质量QoE门限;
服务质量QoS门限。
以及,在本公开的一个实施例之中,只有在终端设备使用对应的PDU会话和/或网络切片时,可以使用对应的模型。
以及,在本公开的一个实施例之中,只有终端设备的QoE低于一定的QoE门限或者高于一定的QoE门限才可以使用对应的模型。
更进一步,在本公开的一个实施例中,QoE门限是指QoE中测量的至少一个QoEmetric QoE度量的值,或者QoE门限是指通过QoE测量和计算出的QoE值,代表QoE的整体体验好坏。其中,通过QoE测量和计算出的QoE值例如可以是平均意见分(Mean Opinion Score,MOS)。
以及,在本公开的一个实施例之中,只有终端设备的QoS低于一定的QoS门限或者高于一定的QoS门限才可以使用对应的模型。
更进一步,在本公开的一个实施例中,QoS门限例如可以是指承载对应的吞吐量、时延和/或丢包的值。
以及,在本公开的一个实施例之中,模型性能相关准则可以是特定推理精确度Inference accuracy门限,响应于精确度accuracy低于一定的精确度门限或高于一定的精确度门限,则可以使用对应的模型。
以及,在本公开的一个实施例之中,终端设备移动速度准则例如可以是一个特定的速率门限,响应于终端设备的速率低于一定的速率门限或高于一定的速率门限,则可以使用对应的模型。
以及,在本公开的一个实施例之中,PDU会话信息例如可以是协议数据单元(Protocol Data Unit)会话列表PDU session list。
以及,在本公开的一个实施例之中,QoS流信息例如可以是服务质量流标识QoS flow ID list。
以及,在本公开的一个实施例之中,无线承载信息例如可以是数据无线承载DRB list。
以及,在本公开的一个实施例之中,网络切片信息例如可以是Single Network Slice Selection Assistance information(S-NSSAI)列表或者是网络切片组信息(networkslicegroup)。
示例地,在本公开的一个实施例之中,其中,终端算力准则包括中央处理器CPU的使用率门限。
以及,在本公开的一个实施例之中,响应于终端设备当前的CPU使用率低于一定的使用率门限或高于一定的使用率门限,则可以使用对应的模型。
以及,在本公开的一个实施例之中,电量消耗准则例如可以是剩余电量的门限,响应于终端设备当前的剩余电量低于一定的剩余电量门限或高于一定的剩余电量门限,则可以使用对应的模型。
以及,在本公开的一个实施例之中,地理覆盖场景包括但不限于密集城市Dense Urban,城市Urban,郊区Suburban,农村Rural,室内indoor等。
进一步地,在本公开的一个实施例之中,用于模型选择的信息为针对每一个特定的模型(per mode)和/或针对每一个特定的模型标识(per model ID)。
以及,在本公开的一个实施例之中,模型标识用于唯一标识模型。即一个模型仅对应一个模型标识。
其中,在本公开的一个实施例之中,用于模型选择的信息例如可以是针对每一个特定的模型。例如,用于模型选择的信息可以是针对A模型的。
其中,在本公开的一个实施例之中,用于模型选择的信息例如还可以是针对每一个特定的模型标识。例如,A模型的标识为123456。用于模型选择的信息可以是针对123456的。
进一步地,在本公开的一个实施例之中,确定用于模型选择的信息,包括:
接收第二节点发送的用于模型选择的信息。
进一步地,在本公开的一个实施例之中,第一节点、第二节点选自以下组合中的至少一项:
第一节点为终端设备,第二节点为基站;
第一节点为终端设备,第二节点为核心网节点;
第一节点为基站,第二节点为核心网节点;
第一节点为基站,第二节点为操作维护管理(Operations,Administration,Maintenance,OAM)节点;
第一节点为切换过程中的目的基站,第二节点为切换过程中的源基站;
第一节点为多连接场景下的主节点(Master Node,MN),第二节点为多连接场景下的辅助节点(Secondary Node,SN);
第一节点为新服务gNB(new serving gNB),第二节点为上一次服务gNB(last serving gNB);
第一节点为分离架构下的集中单元CU,第二节点为分离架构下的分布单元DU。
进一步地,在本公开的一个实施例之中,接收第二节点发送的用于模型选择的信息,包括以下至少一项:
在第一节点为终端设备,第二节点为基站时,接收第二节点发送的RRC消息,其中,RRC消息包括用于模型选择的信息;
在第一节点为终端设备,第二节点为核心网节点时,接收第二节点发送的非接入(Non-access stratum,NAS)消息,其中,NAS消息包括用于模型选择的信息;
在第一节点为基站,第二节点为核心网节点时,接收第二节点发送的下一代应用协议(Next Generation Application Protocol,NGAP)消息,其中,NGAP消息包括用于模型选择的信息;
在第一节点为基站,第二节点为OAM节点时,接收第二节点发送的用于模型选择的信息;
在第一节点为切换过程中的目的基站,第二节点为切换过程中的源基站时,接收第二节点发送的Xn应用协议(Xn Application Proposal,XnAP)消息,其中,XnAP消息包括用于模型选择的信息;
在第一节点为多连接场景下的MN,第二节点为多连接场景下的SN时,接收第二节点发送的XnAP消息,其中,XnAP消息包括用于模型选择的信息;
在第一节点为新服务gNB(new serving gNB),第二节点为上一次服务gNB(last serving gNB)时,接收第二节点发送的XnAP消息,其中,XnAP消息包括用于模型选择的信息;
在第一节点为分离架构下的集中单元CU,第二节点为分离架构下的分布单元DU时,接收第二节点发送的(F1Application Proposal,F1AP)消息,其中,F1AP消息包括用于模型选择的信息。
以及,在本公开的一个实施例之中,多连接场景例如可以包双连接(Dual Connectivity,DC)场景。
在第一节点为基站,第二节点为操作维护管理OAM节点时,第一节点例如可以是基站上的节点,该基站上的节点包括但不限于gNB-CU,gNB-DU,gNB-CU-UP等。
进一步地,在本公开的一个实施例之中,该方法还包括:
响应于完成模型选择,发送模型选择结果至第三节点。
进一步地,在本公开的一个实施例之中,第一节点、第三节点选自以下组合中的至少一项:
第一节点为终端设备,第三节点为基站;
第一节点为终端设备,第三节点为核心网节点;
第一节点为基站,第三节点为终端设备;
第一节点为分离架构下的CU,第三节点为分离架构下的DU;
第一节点为分离架构下的DU,第三节点为分离架构下的CU;
第一节点为多连接场景下的MN,第三节点为多连接场景下的SN;
第一节点为多连接场景下的SN,第三节点为多连接场景下的MN。
进一步地,在本公开的一个实施例之中,发送模型选择结果至第三节点,包括以下至少一项:
在第一节点为终端设备,第三节点为基站时,通过RRC信令和/或下层lower layer信令发送模型选择结果至第三节点;
在第一节点为终端设备,第三节点为核心网节点时,通过NAS信令发送模型选择结果至第三节点;
在第一节点为基站,第三节点为终端设备时,通过RRC信令和/或lower layer信令发送模型选择结果至第三节点;
在第一节点为分离架构下的CU,第三节点为分离架构下的DU时,发送F1AP消息至第三节点,其中,F1AP消息包括模型选择结果;
在第一节点为分离架构下的DU,第三节点为分离架构下的CU时,发送F1AP消息至第三节点,其中,F1AP消息包括模型选择结果;
在第一节点为多连接场景下的MN,第三节点为多连接场景下的SN时,发送XnAP消息至第三节点,其中,XnAP消息包括模型选择结果;
在第一节点为多连接场景下的SN,第三节点为多连接场景下的MN时,发送XnAP消息至第三节点,其中,XnAP消息包括模型选择结果。
示例地,在本公开的一个实施例之中,其中,模型选择的结果包括用于标识模型的标识ID信息。
进一步地,在本公开的一个实施例之中,该方法还包括:其中,lower layer信令可以是PDCP层信令、RLC层信令、MAC层信令或物理层信令。
进一步地,在本公开的一个实施例之中,该方法还包括:其中,可选地,PDCP层信令可以是PDCP控制协议数据单元(Protocol Data Unit,PDU);可选地,RLC层信令可以是RLC控制PDU;可选地,MAC层信令可以是媒体接入控制层控制元素(Media Access Control-Control Element,MAC-CE),或者,下行控制消息(Downlink Control Information,DCI),或者,上行控制消息(Uplink Control Information,UCI),或者,随机接入请求,或者,随机接入反馈;可选地,RRC层信令可以是RRC消息。
进一步地,在本公开的一个实施例之中,第二节点和第三节点相同或者不同。
综上所述,在本公开实施例之中,确定用于模型选择的信息;根据用于模型选择的信息,选择模型。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图4为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图4所示,该方法可以包括以下步骤:
步骤401、响应于第一模型使用异常或不再满足使用条件,根据优先等级信息,回退至满足使用条件的第二模型,其中,第二模型为满足使用条件的次低优先级等级模型;或
步骤402、根据优先等级信息,回退至第三模型,其中,第三模型为最低优先级等级模型或缺省模型。
其中,在本公开的一个实施例之中,用于模型选择的信息包括优先级等级信息。其中,该优先级等级用于指示模型选择优先级和/或回退的优先级。
其中,在本公开的一个实施例之中,用于模型选择的信息为针对每一个特定的模型和/或针对每一个特定的模型标识。
以及,在本公开的一个实施例之中,步骤401和步骤402可以择一执行,例如当第一节点执行步骤401时,第一节点可以不执行步骤402;或者当第一节点执行步骤402时可以不执行步骤401。
示例地,在本公开的一个实施例之中,使用条件并不特指某一固定使用条件。例如,当模型使用时的使用场景发生变化时,该使用条件也可以相应变化。
以及,在本公开的一个实施例之中,第一模型例如可以是当前使用的模型,第一模型并不特指某一固定模型。该第一模型中的第一仅用于与其余模型进行区分。
示例地,在本公开的一个实施例之中,第二模型例如可以是满足使用条件的次低优先级等级模型,该第二模型并不特指某一固定模型。例如当模型集合中各模型的优先级等级信息发生变化时,该第二模型也可以相应变化。
以及,在本公开的一个实施例之中,第三模型为最低优先级等级模型或缺省模型。
综上所述,在本公开实施例之中,响应于第一模型使用异常或不再满足使用条件,根据优先等级信息,回退至满足使用条件的第二模型,其中,第二模型为满足使用条件的次低优先级等级模型;或根据优先等级信息,回退至第三模型,其中,第三模型为最低优先级等级模型或缺省模型。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一模型使用异常或不再满足使用条件时如何进行模型选择的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图5为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图5所示,该方法可以包括以下步骤:
步骤501、在模型集合中至少一个模型满足模型选择信息时,根据优先级等级信息,在至少一个模型中选择优先级等级最高的模型,其中,模型选择信息包括除优先级等级信息之外的至少一项用于模型选择的信息。
其中,在本公开的一个实施例之中,用于模型选择的信息包括优先级等级信息。
其中,在本公开的一个实施例之中,用于模型选择的信息为针对每一个特定的模型和/或针对每一个特定的模型标识。
以及,在本公开的一个实施例之中,模型集合是指由至少一个模型汇聚而成的集体。该模型集合并不特指某一固定集合。例如,当模型集合中包括的模型数量发生变化时,该模型集合也可以相应变化。例如,当模型集合中包括的模型类型发生变化时,该模型集合也可以相应变化。
以及,在本公开的一个实施例之中,模型选择信息包括除优先级等级信息之外的至少一项用于模型选择的信息。由于用于模型选择的信息包括多个信息,因此,模型选择信息并不特指某一固定信息。例如,当模型选择信息对应的信息数量发生变化时,该模型选择信息也可以相应变化。例如,当模型选择信息对应的具体信息发生变化时,该模型选择信息也可以相应变化。
综上所述,在本公开实施例之中,在模型集合中至少一个模型满足模型选择信息时,根据优先级等级信息,在至少一个模型中选择优先级等级最高的模型,其中,模型选择信息包括除优先级等级信息之外的至少一项用于模型选择的信息。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在模型集合中选择模型的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
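The selection and fallback behaviour described in the embodiments of FIG4 and FIG5 above can be summarised in the following hedged Python sketch. It assumes that a smaller integer denotes a higher priority (the embodiments also allow the opposite ordering) and leaves the check of the remaining model selection information as an abstract predicate, since the embodiments allow any combination of the listed criteria:

    from typing import Callable, Dict, Optional

    # Predicate: returns True if the model with this ID satisfies the model selection
    # information other than the priority level information.
    Criteria = Callable[[str], bool]

    def select_highest_priority(priorities: Dict[str, int], satisfies: Criteria) -> Optional[str]:
        """Among the models that satisfy the criteria, pick the one with the highest priority.

        Assumption for this sketch: a smaller integer means a higher priority
        (the embodiments also allow the opposite ordering).
        """
        candidates = [m for m in priorities if satisfies(m)]
        if not candidates:
            return None
        return min(candidates, key=lambda m: priorities[m])

    def fall_back(current: str, priorities: Dict[str, int], satisfies: Criteria,
                  default_model: Optional[str] = None) -> Optional[str]:
        """Fallback when the current model is abnormal or no longer meets the use conditions.

        Either step to the next usable model in priority order, or go to the
        lowest-priority / default model, matching the two alternatives above.
        """
        lower = [m for m in priorities
                 if priorities[m] > priorities[current] and satisfies(m)]
        if lower:
            return min(lower, key=lambda m: priorities[m])   # next usable priority level
        if default_model is not None:
            return default_model                              # explicit default model
        return max(priorities, key=lambda m: priorities[m])   # lowest-priority model

    priorities = {"A": 1, "B": 2, "C": 3}
    always_ok = lambda m: True
    print(select_highest_priority(priorities, always_ok))  # -> A
    print(fall_back("A", priorities, always_ok))           # -> B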
图6为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图6所示,该方法可以包括以下步骤:
步骤601、在第一节点为终端设备,第二节点为基站时,接收第二节点发送的RRC消息,其中,RRC消息包括用于模型选择的信息。
其中,在本公开的一个实施例之中,图7为本公开实施例所提供的一种模型选择方法的交互示意图。如图7所示,终端设备可以接收基站发送的RRC消息,其中,RRC消息包括用于模型选择的信息,即终端设备可以确定用于模型选择的信息。
其中,在本公开的一个实施例之中,其中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
综上所述,在本公开实施例之中,在第一节点为终端设备,第二节点为基站时,接收第二节点发送的RRC消息,其中,RRC消息包括用于模型选择的信息。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了第一节点为终端设备,第二节点为基站时确定用于模型选择的信息的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图8为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图8所示,该方法可以包括以下步骤:
步骤801、在第一节点为终端设备,第二节点为核心网节点时,接收第二节点发送的非接入NAS消息,其中,NAS消息包括用于模型选择的信息。
其中,在本公开的一个实施例之中,图9为本公开实施例所提供的一种模型选择方法的交互示意图。终端设备可以接收核心网节点发送的非接入NAS消息,其中,NAS消息包括用于模型选择的信息,即终端设备可以确定用于模型选择的信息。
其中,在本公开的一个实施例之中,其中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
综上所述,在本公开实施例之中,在第一节点为终端设备,第二节点为核心网节点时,接收第二节点发送的非接入NAS消息,其中,NAS消息包括用于模型选择的信息。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了第一节点为终端设备,第二节点为核心网节点时确定用于模型选择的信息的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图10为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图10所示,该方法可以包括以下步骤:
步骤1001、在第一节点为基站,第二节点为核心网节点时,接收第二节点发送的下一代应用协议NGAP消息,其中,NGAP消息包括用于模型选择的信息。
其中,在本公开的一个实施例之中,图11为本公开实施例所提供的一种模型选择方法的交互示意图。基站可以接收核心网节点发送的下一代应用协议NGAP消息,其中,NGAP消息包括用于模型选择的信息,即终端设备可以确定用于模型选择的信息。
其中,在本公开的一个实施例之中,其中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
综上所述,在本公开实施例之中,在第一节点为基站,第二节点为核心网节点时,接收第二节点发送的下一代应用协议NGAP消息,其中,NGAP消息包括用于模型选择的信息。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为基站,第二节点为核心网节点时确定用于模型选择的信息的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图12为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图12所示,该方法可以包括以下步骤:
步骤1201、在第一节点为基站,第二节点为操作维护管理OAM节点时,接收第二节点发送的用于模型选择的信息。
其中,在本公开的一个实施例之中,图13为本公开实施例所提供的一种模型选择方法的交互示意图。基站可以接收OAM节点发送的用于模型选择的信息。
其中,在本公开的一个实施例之中,其中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
综上所述,在本公开实施例之中,在第一节点为基站,第二节点为操作维护管理OAM节点时,接收第二节点发送的用于模型选择的信息。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为基站,第二节点为操作维护管理OAM节点时确定用于模型选择的信息的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图14为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图14所示,该方法可以包括以下步骤:
步骤1401、在第一节点为切换过程中的目的基站,第二节点为切换过程中的源基站时,接收第二节点发送的Xn应用协议XnAP消息,其中,XnAP消息包括用于模型选择的信息。
其中,在本公开的一个实施例之中,图15为本公开实施例所提供的一种模型选择方法的交互示意图。切换过程中的目的基站可以接收切换过程中的源基站发送的Xn应用协议XnAP消息,其中,XnAP消息包括用于模型选择的信息,即切换过程中的源基站可以确定用于模型选择的信息。
其中,在本公开的一个实施例之中,其中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
综上所述,在本公开实施例之中,在第一节点为切换过程中的目的基站,第二节点为切换过程中的源基站时,接收第二节点发送的Xn应用协议XnAP消息,其中,XnAP消息包括用于模型选择的信息。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为切换过程中的目的基站,第二节点为切换过程中的源基站时确定用于模型选择的信息的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图16为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图16所示,该方法可以包括以下步骤:
步骤1601、在第一节点为多连接多连接场景下的主节点MN,第二节点为多连接场景下的辅助节点SN时,接收第二节点发送的XnAP消息,其中,XnAP消息包括用于模型选择的信息。
其中,在本公开的一个实施例之中,图17为本公开实施例所提供的一种模型选择方法的交互示意图。多连接场景下的主节点MN可以接收多连接场景下的辅助节点SN发送的XnAP消息,其中,XnAP消息包括用于模型选择的信息,即多连接场景下的主节点MN可以确定用于模型选择的信息。
其中,在本公开的一个实施例之中,其中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
综上所述,在本公开实施例之中,在第一节点为多连接场景下的主节点MN,第二节点为多连接场景下的辅助节点SN时,接收第二节点发送的XnAP消息,其中,XnAP消息包括用于模型选择的信息。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为多连接场景下的主节点MN,第二节点为多连接场景下的辅助节点SN时确定用于模型选择的信息的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图18为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图18所示,该方法可以包括以下步骤:
步骤1801、在第一节点为新服务gNB new serving gNB,第二节点为上一次服务gNB last serving gNB时,接收第二节点发送的XnAP消息,其中,XnAP消息包括用于模型选择的信息。
其中,在本公开的一个实施例之中,图19为本公开实施例所提供的一种模型选择方法的交互示意图。新服务gNB new serving gNB可以接收上一次服务gNB last serving gNB发送的XnAP消息,其中,XnAP消息包括用于模型选择的信息,即新服务gNB new serving gNB可以确定用于模型选择的信息。
其中,在本公开的一个实施例之中,其中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
综上所述,在本公开实施例之中,在第一节点为新服务gNB new serving gNB,第二节点为上一次服务gNB last serving gNB时,接收第二节点发送的XnAP消息,其中,XnAP消息包括用于模型选择的信息。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为新服务gNB new serving gNB,第二节点为上一次服务gNB last serving gNB时确定用于模型选择的信息的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图20为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图20所示,该方法可以包括以下步骤:
步骤2001、在第一节点为分离架构下的集中单元CU,第二节点为分离架构下的分布单元DU时,接收第二节点发送的F1AP消息,其中,F1AP消息包括用于模型选择的信息。
其中,在本公开的一个实施例之中,图21为本公开实施例所提供的一种模型选择方法的交互示意图。分离架构下的集中单元CU可以接收分离架构下的分布单元DU发送的F1AP消息,其中,F1AP消息包括用于模型选择的信息,即分离架构下的集中单元CU可以确定用于模型选择的信息。
其中,在本公开的一个实施例之中,其中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力 要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
综上所述,在本公开实施例之中,在第一节点为分离架构下的集中单元CU,第二节点为分离架构下的分布单元DU时,接收第二节点发送的F1AP消息,其中,F1AP消息包括用于模型选择的信息。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为分离架构下的集中单元CU,第二节点为分离架构下的分布单元DU时确定用于模型选择的信息的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图22为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图22所示,该方法可以包括以下步骤:
步骤2201、响应于完成模型选择,发送模型选择结果至第三节点。
其中,在本公开的一个实施例之中,第一节点、第三节点选自以下组合中的至少一项:
第一节点为终端设备,第三节点为基站;
第一节点为终端设备,第三节点为核心网节点;
第一节点为基站,第三节点为终端设备;
第一节点为分离架构下的CU,第三节点为分离架构下的DU;
第一节点为分离架构下的DU,第三节点为分离架构下的CU;
第一节点为多连接场景下的MN,第三节点为多连接场景下的SN;
第一节点为多连接场景下的SN,第三节点为多连接场景下的MN。
其中,在本公开的一个实施例之中,发送模型选择结果至第三节点,包括以下至少一项:
在第一节点为终端设备,第三节点为基站时,通过RRC信令和/或下层lower layer信令发送模型选择结果至第三节点;
在第一节点为终端设备,第三节点为核心网节点时,通过NAS信令发送模型选择结果至第三节点;
在第一节点为基站,第三节点为终端设备时,通过RRC信令和/或lower layer信令发送模型选择结果至第三节点;
在第一节点为分离架构下的CU,第三节点为分离架构下的DU时,发送F1AP消息至第三节点,其中,F1AP消息包括模型选择结果;
在第一节点为分离架构下的DU,第三节点为分离架构下的CU时,发送F1AP消息至第三节点,其中,F1AP消息包括模型选择结果;
在第一节点为多连接场景下的MN,第三节点为多连接场景下的SN时,发送XnAP消息至第三节点,其中,XnAP消息包括模型选择结果;
在第一节点为多连接场景下的SN,第三节点为多连接场景下的MN时,发送XnAP消息至第三节点,其中,XnAP消息包括模型选择结果。
其中,在本公开的一个实施例之中,模型选择的结果包括用于标识模型的标识ID信息。
综上所述,在本公开实施例之中,响应于完成模型选择,发送模型选择结果至第三节点。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了响应于完成模型选择,发送模型选择结果至第三节点,以第三节点可以执行相关操作,可以实现分离架构或多连接场景下进行模型选择和同步的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图23为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图23所示,该方法可以包括以下步骤:
步骤2301、在第一节点为终端设备,第三节点为基站时,通过RRC信令和/或下层lower layer信令发送模型选择结果至第三节点。
其中,在本公开的一个实施例之中,图24为本公开实施例所提供的一种模型选择方法的交互示意图。终端设备通过RRC信令和/或下层lower layer信令发送模型选择结果至基站。
其中,在本公开的一个实施例之中,模型选择的结果包括用于标识模型的标识ID信息。
综上所述,在本公开实施例之中,在第一节点为终端设备,第三节点为基站时,通过RRC信令和/或下层lower  layer信令发送模型选择结果至第三节点。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为终端设备,第三节点为基站时发送模型选择结果至第三节点的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图25为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图25所示,该方法可以包括以下步骤:
步骤2501、在第一节点为终端设备,第三节点为核心网节点时,通过NAS信令发送模型选择结果至第三节点。
其中,在本公开的一个实施例之中,图26为本公开实施例所提供的一种模型选择方法的交互示意图。终端设备通过NAS信令发送模型选择结果至核心网节点。
其中,在本公开的一个实施例之中,模型选择的结果包括用于标识模型的标识ID信息。
综上所述,在本公开实施例之中,在第一节点为终端设备,第三节点为核心网节点时,通过NAS信令发送模型选择结果至第三节点。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为终端设备,第三节点为核心网节点时,发送模型选择结果至第三节点的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图27为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图27所示,该方法可以包括以下步骤:
步骤2701、在第一节点为基站,第三节点为终端设备时,通过RRC信令和/或lower layer信令发送模型选择结果至第三节点。
其中,在本公开的一个实施例之中,图28为本公开实施例所提供的一种模型选择方法的交互示意图。基站通过RRC信令和/或lower layer信令发送模型选择结果至终端设备。
其中,在本公开的一个实施例之中,模型选择的结果包括用于标识模型的标识ID信息。
综上所述,在本公开实施例之中,在第一节点为基站,第三节点为终端设备时,通过RRC信令和/或lower layer信令发送模型选择结果至第三节点。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为基站,第三节点为终端设备时发送模型选择结果至第三节点的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图29为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图29所示,该方法可以包括以下步骤:
步骤2901、在第一节点为分离架构下的CU,第三节点为分离架构下的DU时,发送F1AP消息至第三节点,其中,F1AP消息包括模型选择结果。
其中,在本公开的一个实施例之中,图30为本公开实施例所提供的一种模型选择方法的交互示意图。分离架构下的CU发送F1AP消息至分离架构下的DU,其中,F1AP消息包括模型选择结果,即分离架构下的CU可以发送模型选择结果至分离架构下的DU。
其中,在本公开的一个实施例之中,模型选择的结果包括用于标识模型的标识ID信息。
综上所述,在本公开实施例之中,在第一节点为分离架构下的CU,第三节点为分离架构下的DU时,发送F1AP消息至第三节点,其中,F1AP消息包括模型选择结果。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为分离架构下的CU,第三节点为分离架构下的DU时发送模型选择结果至第三节点的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图31为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图31所示,该方法可以包括以下步骤:
步骤3101、在第一节点为分离架构下的DU,第三节点为分离架构下的CU时,发送F1AP消息至第三节点,其中,F1AP消息包括模型选择结果。
其中,在本公开的一个实施例之中,图32为本公开实施例所提供的一种模型选择方法的交互示意图。分离架 构下的DU发送F1AP消息至分离架构下的CU,其中,F1AP消息包括模型选择结果,即分离架构下的DU可以发送模型选择结果至分离架构下的CU。
其中,在本公开的一个实施例之中,模型选择的结果包括用于标识模型的标识ID信息。
综上所述,在本公开实施例之中,在第一节点为分离架构下的DU,第三节点为分离架构下的CU时,发送F1AP消息至第三节点,其中,F1AP消息包括模型选择结果。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为分离架构下的DU,第三节点为分离架构下的CU时发送模型选择结果至第三节点的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图33为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图33所示,该方法可以包括以下步骤:
步骤3301、在第一节点为多连接场景下的MN,第三节点为多连接场景下的SN时,发送XnAP消息至第三节点,其中,XnAP消息包括模型选择结果。
其中,在本公开的一个实施例之中,图34为本公开实施例所提供的一种模型选择方法的交互示意图。多连接场景下的MN发送XnAP消息至多连接场景下的SN,其中,XnAP消息包括模型选择结果,即多连接场景下的MN可以发送模型选择结果至多连接场景下的SN。
其中,在本公开的一个实施例之中,模型选择的结果包括用于标识模型的标识ID信息。
综上所述,在本公开实施例之中,在第一节点为多连接场景下的MN,第三节点为多连接场景下的SN时,发送XnAP消息至第三节点,其中,XnAP消息包括模型选择结果。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为多连接场景下的MN,第三节点为多连接场景下的SN时发送模型选择结果至第三节点的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图35为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第一节点执行,如图35所示,该方法可以包括以下步骤:
步骤3501、在第一节点为多连接场景下的SN,第三节点为多连接场景下的MN时,发送XnAP消息至第三节点,其中,XnAP消息包括模型选择结果。
其中,在本公开的一个实施例之中,图36为本公开实施例所提供的一种模型选择方法的交互示意图。多连接场景下的SN发送XnAP消息至多连接场景下的MN,其中,XnAP消息包括模型选择结果,即多连接场景下的SN可以发送模型选择结果至多连接场景下的MN。
其中,在本公开的一个实施例之中,模型选择的结果包括用于标识模型的标识ID信息。
综上所述,在本公开实施例之中,在第一节点为多连接场景下的SN,第三节点为多连接场景下的MN时,发送XnAP消息至第三节点,其中,XnAP消息包括模型选择结果。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开实施例具体公开了在第一节点为多连接场景下的SN,第三节点为多连接场景下的MN时发送模型选择结果至第三节点的方案。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图37为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第二节点执行,如图37所示,该方法可以包括以下步骤:
步骤3701、发送用于模型选择的信息至第一节点,其中,用于模型选择的信息用于指示第一节点选择模型。
其中,在本公开的一个实施例之中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
以及,在本公开的一个实施例之中,其中,模型选择的区域范围信息包括网络标识,
网络标识包括以下至少一项:
公共陆地移动网列表PLMN list;
跟踪区代码列表TAC list;
无线接入网通知区RNA;
下一代基站标识列表NG-RAN node ID list;
小区列表cell list;
经纬度和/或高度信息。
以及,在本公开的一个实施例之中,终端设备的状态包括以下至少一项:
无线资源控制RRC_连接状态RRC_CONNECTED;
RRC_不活动状态RRC_INACTIVE;
RRC_空闲状态RRC_IDLE。
以及,在本公开的一个实施例之中,其中,无线环境相关阈值准则包括以下至少一项:
终端设备测量的信号强度;
终端设备测量的信号干扰;
基站测量的上行信号干扰。
以及,在本公开的一个实施例之中,其中,业务相关准则包括以下至少一项:
PDU会话信息;
QoS流信息;
无线承载信息;
网络切片信息;
体验质量QoE门限;
服务质量QoS门限。
以及,在本公开的一个实施例之中,其中,终端算力准则包括中央处理器CPU的使用率门限。
以及,在本公开的一个实施例之中,用于模型选择的信息为针对每一个特定的模型和/或针对每一个特定的模型标识。
综上所述,在本公开实施例之中,发送用于模型选择的信息至第一节点,其中,用于模型选择的信息用于指示第一节点选择模型。本公开实施例之中,可以提供模型选择机制,通过用于模型选择的信息的交互,可以减少模型选择不准确的情况,可以提高模型选择效率。本公开针对一种“模型选择”这一情形提供了一种处理方法,以发送用于模型选择的信息至第一节点,以第一节点可以进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图38为本公开实施例所提供的一种模型选择方法的流程示意图,该方法由第三节点执行,如图38所示,该方法可以包括以下步骤:
步骤3801、接收第一节点发送的模型选择结果;
步骤3802、根据模型选择结果,执行相关操作。
其中,在本公开的一个实施例之中,模型选择的结果包括用于标识模型的标识ID信息。
综上所述,在本公开实施例之中,接收第一节点发送的模型选择结果;根据模型选择结果,执行相关操作。本公开实施例之中,可以提供模型选择机制,并将模型选择的结果应用到对应的第三节点,第三节点可以执行相关操作。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据第一节点发送的模型选择结果执行相关操作,可以减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图39为本公开实施例所提供的一种模型选择系统的结构示意图,如图39所示,该系统包括:
第二节点,用于发送用于模型选择的信息至第一节点;
第一节点,用于接收第二节点发送的用于模型选择的信息;
第一节点,还用于根据用于模型选择的信息,选择模型。
综上所述,在本公开实施例之中,第二节点可以发送用于模型选择的信息至第一节点;第一节点可以接收第二节点发送的用于模型选择的信息;第一节点可以根据用于模型选择的信息,选择模型。本公开实施例之中,可以提供模型选择机制,通过用于模型选择的信息的交互,可以减少模型选择不准确的情况,可以提高模型选择效率。本公开针对一种“模型选择”这一情形提供了一种处理方法,以发送用于模型选择的信息至第一节点,以第一节点可以进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图40为本公开实施例所提供的一种模型选择系统的结构示意图,如图40所示,该系统包括:
第一节点,用于确定用于模型选择的信息;
第一节点,还用于根据用于模型选择的信息,选择模型;
第一节点,还用于发送模型选择结果至第三节点;
第三节点,用于接收第一节点发送的模型选择结果;
第三节点,用于根据模型选择结果,执行相关操作。
综上所述,在本公开实施例之中,第一节点确定用于模型选择的信息;第一节点根据用于模型选择的信息,选择模型;第一节点发送模型选择结果至第三节点;第三节点接收第一节点发送的模型选择结果;第三节点根据模型选择结果,执行相关操作。本公开实施例之中,可以提供模型选择机制,并将模型选择的结果应用到对应的第三节点,第三节点可以执行相关操作。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据第一节点发送的模型选择结果执行相关操作,可以减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图41为本公开实施例所提供的一种模型选择装置的结构示意图,如图41所示,该装置4100可以设置于第一节点侧,该装置4100可以包括:
确定模块4101,用于确定用于模型选择的信息;
选择模块4102,用于根据用于模型选择的信息,选择模型。
综上所述,在本公开实施例的模型选择装置之中,通过确定模块确定用于模型选择的信息;选择模块根据用于模型选择的信息,选择模型。本公开实施例之中,可以提供模型选择机制,可以确定用于模型选择的信息,减少模型选择不准确的情况,可以提高模型选择效率。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据用于模型选择的信息进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
可选地,在本公开的一个实施例之中,其中,用于模型选择的信息,包括以下至少一项:
优先级等级信息,其中,优先级等级用于指示模型选择优先级和/或回退的优先级;
模型选择的区域范围信息,其中,区域范围信息用于指示模型可用的区域范围;
使用时间信息,其中,使用时间信息用于指示模型可用时间信息;
终端设备状态信息,其中,终端状态信息用于指示模型可用时终端设备的状态;
功能类型信息,其中,功能类型信息用于指示模型针对的功能;
事件准则,其中,事件准则用于指示模型使用的特定事件;
无线环境相关阈值准则,其中,无线环境相关阈值准则用于指示模型可用的无线环境;
业务相关准则,其中,业务相关准则用于指示模型可用的特定业务和/或业务体验情况;
模型性能相关准则,其中,模型性能相关准则用于指示模型可用的性能指标;
终端设备移动速度准则,其中,终端设备移动速度准则用于指示模型可用的终端设备速度和/或特定移动速度门限;
终端算力准则和/或电量消耗准则,其中,用终端算力准则和/或电量消耗准则于指示模型可用的终端设备能力要求;
模型应用场景,其中,模型应用场景用于指示模型可用的地理覆盖场景。
可选地,在本公开的一个实施例之中,其中,用于模型选择的信息包括优先级等级信息,并且其中,
选择模块4102,用于根据用于模型选择的信息,选择模型时,具体用于:
响应于第一模型使用异常或不再满足使用条件,根据优先等级信息,回退至满足使用条件的第二模型,其中, 第二模型为满足使用条件的次低优先级等级模型;或
根据优先等级信息,回退至第三模型,其中,第三模型为最低优先级等级模型或缺省模型。
可选地,在本公开的一个实施例之中,其中,用于模型选择的信息包括优先级等级信息,并且其中,
选择模块4102,用于根据用于模型选择的信息,选择模型时,具体用于:
在模型集合中至少一个模型满足模型选择信息时,根据优先级等级信息,在至少一个模型中选择优先级等级最高的模型,其中,模型选择信息包括除优先级等级信息之外的至少一项用于模型选择的信息。
可选地,在本公开的一个实施例之中,其中,模型选择的区域范围信息包括网络标识,
网络标识包括以下至少一项:
公共陆地移动网列表PLMN list;
跟踪区代码列表TAC list;
无线接入网通知区RNA;
下一代基站标识列表NG-RAN node ID list;
小区列表cell list;
经纬度和/或高度信息。
可选地,在本公开的一个实施例之中,终端设备的状态包括以下至少一项:
无线资源控制RRC_连接状态RRC_CONNECTED;
RRC_不活动状态RRC_INACTIVE;
RRC_空闲状态RRC_IDLE。
可选地,在本公开的一个实施例之中,其中,无线环境相关阈值准则包括以下至少一项:
终端设备测量的信号强度;
终端设备测量的信号干扰;
基站测量的上行信号干扰。
可选地,在本公开的一个实施例之中,其中,业务相关准则包括以下至少一项:
PDU会话信息;
QoS流信息;
无线承载信息;
网络切片信息;
体验质量QoE门限;
服务质量QoS门限。
可选地,在本公开的一个实施例之中,其中,终端算力准则包括中央处理器CPU的使用率门限。
可选地,在本公开的一个实施例之中,用于模型选择的信息为针对每一个特定的模型和/或针对每一个特定的模型标识。
可选地,在本公开的一个实施例之中,确定模块4101,用于确定用于模型选择的信息时,具体用于:
接收第二节点发送的用于模型选择的信息。
可选地,在本公开的一个实施例之中,其中,第一节点、第二节点选自以下组合中的至少一项:
第一节点为终端设备,第二节点为基站;
第一节点为终端设备,第二节点为核心网节点;
第一节点为基站,第二节点为核心网节点;
第一节点为基站,第二节点为操作维护管理OAM节点;
第一节点为切换过程中的目的基站,第二节点为切换过程中的源基站;
第一节点为多连接场景下的主节点MN,第二节点为多连接场景下的辅助节点SN;
第一节点为新服务gNB new serving gNB,第二节点为上一次服务gNB last serving gNB;
第一节点为分离架构下的集中单元CU,第二节点为分离架构下的分布单元DU。
可选地,在本公开的一个实施例之中,确定模块4101,用于接收第二节点发送的用于模型选择的信息时,具体用于以下至少一项:
在第一节点为终端设备,第二节点为基站时,接收第二节点发送的RRC消息,其中,RRC消息包括用于模型选择的信息;
在第一节点为终端设备,第二节点为核心网节点时,接收第二节点发送的非接入NAS消息,其中,NAS消息包括用于模型选择的信息;
在第一节点为基站,第二节点为核心网节点时,接收第二节点发送的下一代应用协议NGAP消息,其中,NGAP消息包括用于模型选择的信息;
在第一节点为基站,第二节点为操作维护管理OAM节点时,接收第二节点发送的用于模型选择的信息;
在第一节点为切换过程中的目的基站,第二节点为切换过程中的源基站时,接收第二节点发送的Xn应用协议XnAP消息,其中,XnAP消息包括用于模型选择的信息;
在第一节点为多连接场景下的主节点MN,第二节点为多连接场景下的辅助节点SN时,接收第二节点发送的XnAP消息,其中,XnAP消息包括用于模型选择的信息;
在第一节点为新服务gNB new serving gNB,第二节点为上一次服务gNB last serving gNB时,接收第二节点发送的XnAP消息,其中,XnAP消息包括用于模型选择的信息;
在第一节点为分离架构下的集中单元CU,第二节点为分离架构下的分布单元DU时,接收第二节点发送的F1AP消息,其中,F1AP消息包括用于模型选择的信息。
可选地,在本公开的一个实施例之中,确定模块4101,还用于:
响应于完成模型选择,发送模型选择结果至第三节点。
可选地,在本公开的一个实施例之中,其中,第一节点、第三节点选自以下组合中的至少一项:
第一节点为终端设备,第三节点为基站;
第一节点为终端设备,第三节点为核心网节点;
第一节点为基站,第三节点为终端设备;
第一节点为分离架构下的CU,第三节点为分离架构下的DU;
第一节点为分离架构下的DU,第三节点为分离架构下的CU;
第一节点为多连接场景下的MN,第三节点为多连接场景下的SN;
第一节点为多连接场景下的SN,第三节点为多连接场景下的MN。
可选地,在本公开的一个实施例之中,确定模块4101,用于发送模型选择结果至第三节点时,具体用于以下至少一项:
在第一节点为终端设备,第三节点为基站时,通过RRC信令和/或下层lower layer信令发送模型选择结果至第三节点;
在第一节点为终端设备,第三节点为核心网节点时,通过NAS信令发送模型选择结果至第三节点;
在第一节点为基站,第三节点为终端设备时,通过RRC信令和/或lower layer信令发送模型选择结果至第三节点;
在第一节点为分离架构下的CU,第三节点为分离架构下的DU时,发送F1AP消息至第三节点,其中,F1AP消息包括模型选择结果;
在第一节点为分离架构下的DU,第三节点为分离架构下的CU时,发送F1AP消息至第三节点,其中,F1AP消息包括模型选择结果;
在第一节点为多连接场景下的MN,第三节点为多连接场景下的SN时,发送XnAP消息至第三节点,其中,XnAP消息包括模型选择结果;
在第一节点为多连接场景下的SN,第三节点为多连接场景下的MN时,发送XnAP消息至第三节点,其中,XnAP消息包括模型选择结果。
可选地,在本公开的一个实施例之中,其中,模型选择的结果包括用于标识模型的标识ID信息。
图42为本公开实施例所提供的一种模型选择装置的结构示意图,如图42所示,该装置4200可以设置于第二节点侧,该装置4200可以包括:
发送模块4201,用于发送用于模型选择的信息至第一节点,其中,用于模型选择的信息用于指示第一节点选择模型。
综上所述,在本公开实施例的模型选择装置之中,通过发送模块发送用于模型选择的信息至第一节点,其中,用于模型选择的信息用于指示第一节点选择模型。本公开实施例之中,可以提供模型选择机制,通过用于模型选择的信息的交互,可以减少模型选择不准确的情况,可以提高模型选择效率。本公开针对一种“模型选择”这一情形提供了一种处理方法,以发送用于模型选择的信息至第一节点,以第一节点可以进行模型选择,减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
图43为本公开实施例所提供的一种模型选择装置的结构示意图,如图43所示,该装置4300可以设置于第三节点侧,该装置4300可以包括:
接收模块4301,用于接收第一节点发送的模型选择结果;
执行模块4302,用于根据模型选择结果,执行相关操作。
综上所述,在本公开实施例的模型选择装置之中,通过接收模块接收第一节点发送的模型选择结果;执行模块根据模型选择结果,执行相关操作。本公开实施例之中,可以提供模型选择机制,并将模型选择的结果应用到对应的第三节点,第三节点可以执行相关操作。本公开针对一种“模型选择”这一情形提供了一种处理方法,以根据第一节点发送的模型选择结果执行相关操作,可以减少模型选择时长,且无需多个节点参与选择,可以减少推理节点不一样导致模型选择不准确的情况,可以提高模型选择的效率和准确性。
FIG. 44 is a block diagram of a terminal device UE 4400 provided by an embodiment of the present disclosure. For example, the UE 4400 may be a mobile phone, a computer, a digital broadcast terminal device, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to FIG. 44, the UE 4400 may include at least one of the following components: a processing component 4402, a memory 4404, a power component 4406, a multimedia component 4408, an audio component 4410, an input/output (I/O) interface 4412, a sensor component 4414, and a communication component 4416.
The processing component 4402 generally controls the overall operation of the UE 4400, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 4402 may include at least one processor 4420 to execute instructions to complete all or part of the steps of the methods described above. In addition, the processing component 4402 may include at least one module to facilitate interaction between the processing component 4402 and other components. For example, the processing component 4402 may include a multimedia module to facilitate interaction between the multimedia component 4408 and the processing component 4402.
The memory 4404 is configured to store various types of data to support operation of the UE 4400. Examples of such data include instructions for any application or method operated on the UE 4400, contact data, phonebook data, messages, pictures, videos, and so on. The memory 4404 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disc.
The power component 4406 provides power to the various components of the UE 4400. The power component 4406 may include a power management system, at least one power supply, and other components associated with generating, managing, and distributing power for the UE 4400.
The multimedia component 4408 includes a screen that provides an output interface between the UE 4400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes at least one touch sensor to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 4408 includes a front camera and/or a rear camera. When the UE 4400 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 4410 is configured to output and/or input audio signals. For example, the audio component 4410 includes a microphone (MIC) that is configured to receive external audio signals when the UE 4400 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 4404 or sent via the communication component 4416. In some embodiments, the audio component 4410 further includes a speaker for outputting audio signals.
The I/O interface 4412 provides an interface between the processing component 4402 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 4414 includes at least one sensor for providing status assessments of various aspects of the UE 4400. For example, the sensor component 4414 may detect the open/closed state of the UE 4400 and the relative positioning of components, for example the display and keypad of the UE 4400; the sensor component 4414 may also detect a change in position of the UE 4400 or a component of the UE 4400, the presence or absence of user contact with the UE 4400, the orientation or acceleration/deceleration of the UE 4400, and a change in temperature of the UE 4400. The sensor component 4414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 4414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 4414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 4416 is configured to facilitate wired or wireless communication between the UE 4400 and other devices. The UE 4400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 4416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 4416 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the UE 4400 may be implemented by at least one application-specific integrated circuit (ASIC), digital signal processor (DSP), digital signal processing device (DSPD), programmable logic device (PLD), field-programmable gate array (FPGA), controller, microcontroller, microprocessor, or other electronic element, for performing the above methods.
FIG. 45 is a block diagram of a base station 4500 provided by an embodiment of the present disclosure. For example, the base station 4500 may be provided as a network-side device. Referring to FIG. 45, the base station 4500 includes a processing component 4522, which further includes at least one processor, and memory resources represented by a memory 4532 for storing instructions executable by the processing component 4522, such as applications. The applications stored in the memory 4532 may include one or more modules, each of which corresponds to a set of instructions. In addition, the processing component 4522 is configured to execute the instructions to perform any of the methods described above that are applied at the base station, for example the method shown in FIG. 3.
The base station 4500 may further include a power component 4530 configured to perform power management of the base station 4500, a wired or wireless network interface 4550 configured to connect the base station 4500 to a network, and an input/output (I/O) interface 4558. The base station 4500 may operate based on an operating system stored in the memory 4532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In the embodiments of the present disclosure described above, the methods provided by the embodiments of the present disclosure have been introduced from the perspectives of the network-side device and the UE, respectively. To implement the functions in the methods provided by the above embodiments of the present disclosure, the network-side device and the UE may include a hardware structure and/or software modules, and implement the above functions in the form of a hardware structure, software modules, or a hardware structure plus software modules. A certain function among the above functions may be executed by a hardware structure, a software module, or a hardware structure plus a software module.
An embodiment of the present disclosure provides a communication apparatus. The communication apparatus may include a transceiver module and a processing module. The transceiver module may include a sending module and/or a receiving module; the sending module is used to implement a sending function, the receiving module is used to implement a receiving function, and the transceiver module may implement the sending function and/or the receiving function.
The communication apparatus may be a terminal device (such as the terminal device in the foregoing method embodiments), an apparatus in a terminal device, or an apparatus that can be used in combination with a terminal device. Alternatively, the communication apparatus may be a network device, an apparatus in a network device, or an apparatus that can be used in combination with a network device.
An embodiment of the present disclosure provides another communication apparatus. The communication apparatus may be a network device or a terminal device (such as the terminal device in the foregoing method embodiments), or may be a chip, a chip system, or a processor that supports the network device in implementing the above methods, or a chip, a chip system, or a processor that supports the terminal device in implementing the above methods. The apparatus can be used to implement the methods described in the above method embodiments; for details, refer to the descriptions in the above method embodiments.
The communication apparatus may include one or more processors. The processor may be a general-purpose processor, a dedicated processor, or the like, for example a baseband processor or a central processing unit. The baseband processor may be used to process communication protocols and communication data, and the central processing unit may be used to control the communication apparatus (for example, a network-side device, a baseband chip, a terminal device, a terminal device chip, a DU, or a CU), execute computer programs, and process data of the computer programs.
Optionally, the communication apparatus may further include one or more memories on which a computer program may be stored, and the processor executes the computer program so that the communication apparatus performs the methods described in the above method embodiments. Optionally, data may also be stored in the memory. The communication apparatus and the memory may be provided separately or may be integrated together.
Optionally, the communication apparatus may further include a transceiver and an antenna. The transceiver may be referred to as a transceiver unit, a transceiver machine, a transceiver circuit, or the like, and is used to implement transmit and receive functions. The transceiver may include a receiver and a transmitter; the receiver may be referred to as a receiving machine, a receiving circuit, or the like, and is used to implement the receiving function; the transmitter may be referred to as a transmitting machine, a transmitting circuit, or the like, and is used to implement the sending function.
Optionally, the communication apparatus may further include one or more interface circuits. The interface circuit is used to receive code instructions and transmit them to the processor. The processor runs the code instructions to cause the communication apparatus to perform the methods described in the above method embodiments.
When the communication apparatus is the first node, the processor is configured to perform the method shown in any one of FIG. 3 to FIG. 36.
When the communication apparatus is the second node, the processor is configured to perform the method shown in FIG. 37.
When the communication apparatus is the third node, the processor is configured to perform the method shown in FIG. 38.
In one implementation, the processor may include a transceiver for implementing the receiving and sending functions. For example, the transceiver may be a transceiver circuit, an interface, or an interface circuit. The transceiver circuit, interface, or interface circuit used to implement the receiving and sending functions may be separate or may be integrated together. The above transceiver circuit, interface, or interface circuit may be used for reading and writing code/data, or may be used for signal transmission or transfer.
In one implementation, the processor may store a computer program, and the computer program, when run on the processor, may cause the communication apparatus to perform the methods described in the above method embodiments. The computer program may be solidified in the processor, in which case the processor may be implemented by hardware.
In one implementation, the communication apparatus may include a circuit, and the circuit may implement the sending, receiving, or communication functions in the foregoing method embodiments. The processor and transceiver described in the present disclosure may be implemented on an integrated circuit (IC), an analog IC, a radio frequency integrated circuit (RFIC), a mixed-signal IC, an application-specific integrated circuit (ASIC), a printed circuit board (PCB), an electronic device, and so on. The processor and transceiver may also be manufactured using various IC process technologies, such as complementary metal oxide semiconductor (CMOS), N-type metal oxide semiconductor (NMOS), P-type metal oxide semiconductor (PMOS), bipolar junction transistor (BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), and gallium arsenide (GaAs).
The communication apparatus described in the above embodiments may be a network device or a terminal device (such as the terminal device in the foregoing method embodiments), but the scope of the communication apparatus described in the present disclosure is not limited thereto, and the structure of the communication apparatus is not limited thereby. The communication apparatus may be a standalone device or may be part of a larger device. For example, the communication apparatus may be:
(1) a standalone integrated circuit (IC), a chip, or a chip system or subsystem;
(2) a collection of one or more ICs, where, optionally, the IC collection may also include storage components for storing data and computer programs;
(3) an ASIC, such as a modem;
(4) a module that can be embedded in other devices;
(5) a receiver, a terminal device, an intelligent terminal device, a cellular phone, a wireless device, a handheld device, a mobile unit, an in-vehicle device, a network device, a cloud device, an artificial intelligence device, and the like;
(6) others.
For the case where the communication apparatus is a chip or a chip system, the chip includes a processor and an interface. There may be one or more processors and multiple interfaces.
Optionally, the chip further includes a memory, and the memory is used to store the necessary computer programs and data.
Those skilled in the art will also appreciate that the various illustrative logical blocks and steps listed in the embodiments of the present disclosure may be implemented by electronic hardware, computer software, or a combination of both. Whether such functions are implemented by hardware or software depends on the particular application and the design requirements of the overall system. Those skilled in the art may use various methods to implement the described functions for each particular application, but such implementations should not be understood as going beyond the protection scope of the embodiments of the present disclosure.
The present disclosure also provides a readable storage medium having instructions stored thereon, and the instructions, when executed by a computer, implement the functions of any of the above method embodiments.
The present disclosure also provides a computer program product, which, when executed by a computer, implements the functions of any of the above method embodiments.
In the above embodiments, the implementation may be wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, it may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer programs. When the computer programs are loaded and executed on a computer, the processes or functions according to the embodiments of the present disclosure are wholly or partly generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer programs may be stored in a computer-readable storage medium or transferred from one computer-readable storage medium to another computer-readable storage medium; for example, the computer programs may be transferred from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a high-density digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), among others.
Those of ordinary skill in the art can understand that the various numerical designations such as "first" and "second" involved in the present disclosure are merely distinctions made for convenience of description, are not intended to limit the scope of the embodiments of the present disclosure, and do not indicate any order of precedence.
"At least one" in the present disclosure may also be described as one or more, and "multiple" may be two, three, four, or more, which is not limited by the present disclosure. In the embodiments of the present disclosure, for a type of technical feature, the technical features of that type are distinguished by "first", "second", "third", "A", "B", "C", "D", and so on, and there is no order of precedence or magnitude between the technical features so designated.
Other embodiments of the invention will readily occur to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the invention that follow the general principles of the invention and include common general knowledge or customary technical means in the art not disclosed in the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (29)

  1. A model selection method, wherein the method is performed by a first node and comprises:
    determining information for model selection;
    selecting a model according to the information for model selection.
  2. The method according to claim 1, wherein the information for model selection comprises at least one of the following:
    priority level information, wherein the priority level is used to indicate a model selection priority and/or a fallback priority;
    area scope information for model selection, wherein the area scope information is used to indicate an area scope in which a model is usable;
    usage time information, wherein the usage time information is used to indicate time information during which a model is usable;
    terminal device state information, wherein the terminal device state information is used to indicate a state of the terminal device when a model is usable;
    function type information, wherein the function type information is used to indicate a function targeted by a model;
    an event criterion, wherein the event criterion is used to indicate a specific event for which a model is used;
    a radio-environment-related threshold criterion, wherein the radio-environment-related threshold criterion is used to indicate a radio environment in which a model is usable;
    a service-related criterion, wherein the service-related criterion is used to indicate a specific service and/or service experience condition for which a model is usable;
    a model-performance-related criterion, wherein the model-performance-related criterion is used to indicate a performance indicator for which a model is usable;
    a terminal device moving speed criterion, wherein the terminal device moving speed criterion is used to indicate a terminal device speed at which a model is usable and/or a specific moving speed threshold;
    a terminal computing-power criterion and/or a power consumption criterion, wherein the terminal computing-power criterion and/or the power consumption criterion is used to indicate a terminal device capability requirement for a model to be usable;
    a model application scenario, wherein the model application scenario is used to indicate a geographical coverage scenario in which a model is usable.
  3. The method according to claim 2, wherein the information for model selection comprises the priority level information, and wherein
    the selecting a model according to the information for model selection comprises:
    in response to a first model being used abnormally or no longer satisfying a usage condition, falling back, according to the priority level information, to a second model that satisfies the usage condition, wherein the second model is a model of the next lower priority level that satisfies the usage condition; or
    falling back, according to the priority level information, to a third model, wherein the third model is a model of the lowest priority level or a default model.
  4. The method according to claim 2, wherein the information for model selection comprises the priority level information, and wherein
    the selecting a model according to the information for model selection comprises:
    when at least one model in a model set satisfies model selection information, selecting, according to the priority level information, a model with the highest priority level from the at least one model, wherein the model selection information comprises at least one item of information for model selection other than the priority level information.
  5. The method according to claim 2, wherein the area scope information for model selection comprises a network identifier, and
    the network identifier comprises at least one of the following:
    a public land mobile network list (PLMN list);
    a tracking area code list (TAC list);
    a radio access network notification area (RNA);
    a next-generation base station identifier list (NG-RAN node ID list);
    a cell list;
    longitude/latitude and/or altitude information.
  6. The method according to claim 2, wherein the state of the terminal device comprises at least one of the following:
    a radio resource control (RRC) connected state RRC_CONNECTED;
    an RRC inactive state RRC_INACTIVE;
    an RRC idle state RRC_IDLE.
  7. The method according to claim 2, wherein the radio-environment-related threshold criterion comprises at least one of the following:
    signal strength measured by the terminal device;
    signal interference measured by the terminal device;
    uplink signal interference measured by a base station.
  8. The method according to claim 2, wherein the service-related criterion comprises at least one of the following:
    PDU session information;
    QoS flow information;
    radio bearer information;
    network slice information;
    a quality of experience (QoE) threshold;
    a quality of service (QoS) threshold.
  9. The method according to claim 2, wherein the terminal computing-power criterion comprises a central processing unit (CPU) usage threshold.
  10. The method according to claim 1, wherein the information for model selection is provided per specific model and/or per specific model identifier.
  11. The method according to claim 1, wherein the determining information for model selection comprises:
    receiving the information for model selection sent by a second node.
  12. The method according to claim 11, wherein the first node and the second node are selected from at least one of the following combinations:
    the first node is a terminal device and the second node is a base station;
    the first node is the terminal device and the second node is a core network node;
    the first node is the base station and the second node is a core network node;
    the first node is the base station and the second node is an operation, administration and maintenance (OAM) node;
    the first node is a target base station in a handover procedure and the second node is a source base station in the handover procedure;
    the first node is a master node (MN) in a multi-connectivity scenario and the second node is a secondary node (SN) in the multi-connectivity scenario;
    the first node is a new serving gNB and the second node is a last serving gNB;
    the first node is a central unit (CU) in a split architecture and the second node is a distributed unit (DU) in the split architecture.
  13. The method according to claim 12, wherein the receiving the information for model selection sent by the second node comprises at least one of the following:
    when the first node is a terminal device and the second node is a base station, receiving an RRC message sent by the second node, wherein the RRC message comprises the information for model selection;
    when the first node is the terminal device and the second node is a core network node, receiving a non-access stratum (NAS) message sent by the second node, wherein the NAS message comprises the information for model selection;
    when the first node is the base station and the second node is a core network node, receiving a next generation application protocol (NGAP) message sent by the second node, wherein the NGAP message comprises the information for model selection;
    when the first node is the base station and the second node is an operation, administration and maintenance (OAM) node, receiving the information for model selection sent by the second node;
    when the first node is a target base station in a handover procedure and the second node is a source base station in the handover procedure, receiving an Xn application protocol (XnAP) message sent by the second node, wherein the XnAP message comprises the information for model selection;
    when the first node is a master node (MN) in a multi-connectivity scenario and the second node is a secondary node (SN) in the multi-connectivity scenario, receiving an XnAP message sent by the second node, wherein the XnAP message comprises the information for model selection;
    when the first node is a new serving gNB and the second node is a last serving gNB, receiving an XnAP message sent by the second node, wherein the XnAP message comprises the information for model selection;
    when the first node is a central unit (CU) in a split architecture and the second node is a distributed unit (DU) in the split architecture, receiving an F1AP message sent by the second node, wherein the F1AP message comprises the information for model selection.
  14. The method according to claim 1, wherein the method further comprises:
    in response to completion of model selection, sending a model selection result to a third node.
  15. The method according to claim 14, wherein the first node and the third node are selected from at least one of the following combinations:
    the first node is the terminal device and the third node is a base station;
    the first node is the terminal device and the third node is a core network node;
    the first node is the base station and the third node is the terminal device;
    the first node is a CU in a split architecture and the third node is a DU in the split architecture;
    the first node is the DU in the split architecture and the third node is the CU in the split architecture;
    the first node is an MN in a multi-connectivity scenario and the third node is an SN in the multi-connectivity scenario;
    the first node is the SN in the multi-connectivity scenario and the third node is the MN in the multi-connectivity scenario.
  16. The method according to claim 14, wherein the sending a model selection result to a third node comprises at least one of the following:
    when the first node is the terminal device and the third node is a base station, sending the model selection result to the third node via RRC signaling and/or lower-layer signaling;
    when the first node is the terminal device and the third node is a core network node, sending the model selection result to the third node via NAS signaling;
    when the first node is the base station and the third node is the terminal device, sending the model selection result to the third node via the RRC signaling and/or the lower-layer signaling;
    when the first node is a CU in a split architecture and the third node is a DU in the split architecture, sending an F1AP message to the third node, wherein the F1AP message comprises the model selection result;
    when the first node is the DU in the split architecture and the third node is the CU in the split architecture, sending the F1AP message to the third node, wherein the F1AP message comprises the model selection result;
    when the first node is an MN in a multi-connectivity scenario and the third node is an SN in the multi-connectivity scenario, sending an XnAP message to the third node, wherein the XnAP message comprises the model selection result;
    when the first node is the SN in the multi-connectivity scenario and the third node is the MN in the multi-connectivity scenario, sending the XnAP message to the third node, wherein the XnAP message comprises the model selection result.
  17. The method according to any one of claims 14 to 16, wherein the model selection result comprises identifier (ID) information used to identify a model.
  18. A model selection method, wherein the method is performed by a second node and comprises:
    sending information for model selection to a first node, wherein the information for model selection is used to instruct the first node to select a model.
  19. A model selection method, wherein the method is performed by a third node and comprises:
    receiving a model selection result sent by a first node;
    performing a related operation according to the model selection result.
  20. A model selection apparatus, wherein the apparatus is arranged at a first node side and comprises:
    a determination module, configured to determine information for model selection;
    a selection module, configured to select a model according to the information for model selection.
  21. A model selection apparatus, wherein the apparatus is arranged at a second node side and comprises:
    a sending module, configured to send information for model selection to a first node, wherein the information for model selection is used to instruct the first node to select a model.
  22. A model selection apparatus, wherein the apparatus is arranged at a third node side and comprises:
    a receiving module, configured to receive a model selection result sent by a first node;
    an execution module, configured to perform a related operation according to the model selection result.
  23. A first node, wherein the first node comprises a processor and a memory, the memory stores a computer program, and the processor executes the computer program stored in the memory to cause the first node to perform the method according to any one of claims 1 to 17.
  24. A second node, wherein the second node comprises a processor and a memory, the memory stores a computer program, and the processor executes the computer program stored in the memory to cause the second node to perform the method according to claim 18.
  25. A third node, wherein the third node comprises a processor and a memory, the memory stores a computer program, and the processor executes the computer program stored in the memory to cause the third node to perform the method according to claim 19.
  26. A communication apparatus, comprising a processor and an interface circuit, wherein
    the interface circuit is configured to receive code instructions and transmit them to the processor; and
    the processor is configured to run the code instructions to perform the method according to any one of claims 1 to 17, claim 18, or claim 19.
  27. A computer-readable storage medium storing instructions which, when executed, cause the method according to any one of claims 1 to 17, claim 18, or claim 19 to be implemented.
  28. A model selection system, wherein the system comprises:
    a second node, configured to send information for model selection to a first node;
    the first node, configured to receive the information for model selection sent by the second node;
    the first node being further configured to select a model according to the information for model selection.
  29. A model selection system, wherein the system comprises:
    a first node, configured to determine information for model selection;
    the first node being further configured to select a model according to the information for model selection;
    the first node being further configured to send a model selection result to a third node; and
    the third node, configured to receive the model selection result sent by the first node, and configured to perform a related operation according to the model selection result.
PCT/CN2022/129668 2022-11-03 2022-11-03 模型选择方法、装置 WO2024092660A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280004229.5A CN118302772A (zh) 2022-11-03 2022-11-03 模型选择方法、装置
PCT/CN2022/129668 WO2024092660A1 (zh) 2022-11-03 2022-11-03 模型选择方法、装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/129668 WO2024092660A1 (zh) 2022-11-03 2022-11-03 模型选择方法、装置

Publications (1)

Publication Number Publication Date
WO2024092660A1 true WO2024092660A1 (zh) 2024-05-10

Family

ID=90929259

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129668 WO2024092660A1 (zh) 2022-11-03 2022-11-03 模型选择方法、装置

Country Status (2)

Country Link
CN (1) CN118302772A (zh)
WO (1) WO2024092660A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020107184A1 (zh) * 2018-11-26 2020-06-04 华为技术有限公司 一种模型选择方法和终端
US20210357757A1 (en) * 2020-05-15 2021-11-18 David T. Nguyen Customizing an artificial intelligence model to process a data set
CN114189889A (zh) * 2021-12-03 2022-03-15 中国信息通信研究院 一种无线通信人工智能处理方法和设备

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020107184A1 (zh) * 2018-11-26 2020-06-04 华为技术有限公司 一种模型选择方法和终端
US20210357757A1 (en) * 2020-05-15 2021-11-18 David T. Nguyen Customizing an artificial intelligence model to process a data set
CN114189889A (zh) * 2021-12-03 2022-03-15 中国信息通信研究院 一种无线通信人工智能处理方法和设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
INTERDIGITAL, INC.: "Discussion on AIML methods", 3GPP DRAFT; R2-2210436, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG2, no. Electronic; 20221010 - 20221019, 30 September 2022 (2022-09-30), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP052263755 *

Also Published As

Publication number Publication date
CN118302772A (zh) 2024-07-05

Similar Documents

Publication Publication Date Title
CN113056951B (zh) 信息传输方法、装置、通信设备和存储介质
US20240023082A1 (en) Data processing method and apparatus, communication device, and storage medium
CN113170334B (zh) 信息传输方法、装置、通信设备和存储介质
WO2023000150A1 (zh) 一种中继确定方法及装置
US20240098595A1 (en) Method and apparatus for determining handover configuration, and communication device
KR20220163411A (ko) 정보 전송 방법, 장치, 통신 기기 및 저장 매체
US20220232445A1 (en) Communications System Switching Method and Terminal Device
CN113923800A (zh) 一种通信方法及装置
CN111543094A (zh) 寻呼处理方法、装置、用户设备、基站及存储介质
CN114731566A (zh) 路径切换方法及装置
US20240064606A1 (en) Relay ue selection method and apparatus, information pro- cessing method and apparatus, and device and medium
WO2024092664A1 (zh) 会话区分方法、装置
WO2024065133A1 (zh) 定位辅助终端设备的重新选择方法、装置
WO2024092660A1 (zh) 模型选择方法、装置
WO2022021271A1 (zh) 波束切换方法及装置、网络设备、终端及存储介质
CN116846771A (zh) 业务操作方法、装置、终端及可读存储介质
WO2021227081A1 (zh) 转移业务的方法、装置、通信设备及存储介质
CN114391265A (zh) 信息传输方法、装置、通信设备和存储介质
WO2024065134A1 (zh) 终端设备状态辅助运营方法、装置
WO2024130521A1 (zh) 数据处理方法、装置
WO2023035872A1 (zh) 确定用户面路径的方法及通信装置
WO2024130522A1 (zh) 数据处理方法、装置
WO2024138375A1 (zh) 一种通信方法、装置、设备及存储介质
WO2024138758A1 (zh) 感知信息的获取方法、装置
WO2024065337A1 (zh) 服务域限制的实现方法、装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22963970

Country of ref document: EP

Kind code of ref document: A1