WO2024082274A1 - Method, communication apparatus and system for AI task indication - Google Patents

Method, communication apparatus and system for AI task indication

Info

Publication number
WO2024082274A1
Authority
WO
WIPO (PCT)
Prior art keywords
network node
task
information
orchestration
ran
Prior art date
Application number
PCT/CN2022/126752
Other languages
English (en)
French (fr)
Inventor
乔云飞
张公正
李榕
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to PCT/CN2022/126752
Publication of WO2024082274A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control

Definitions

  • the present application relates to the field of wireless communications, specifically, to wireless communications technology using intelligent networks, and more particularly to a method, communication device, and system for AI task indication.
  • AI: artificial intelligence.
  • the present application provides a method, communication device and system for AI task indication.
  • a control node determines orchestration information for an AI task and indicates the orchestration information.
  • the AI task can be executed through network nodes in a wireless network, thereby realizing the integration of AI and the wireless network.
  • a method for AI task indication is provided, which can be executed by a control node.
  • the control node can be a device, or a chip (system) or circuit for a device, which is not limited in this application.
  • the method may include: a control node determines first orchestration information for an AI task, the first orchestration information instructing a first network node to perform a first task of the AI task; and the control node sends the first orchestration information to the first network node.
  • the control node can determine the orchestration information of the network node for the AI task and send the orchestration information to the network node, and then the network node can perform corresponding operations based on the orchestration information.
  • in this way, unified orchestration can be performed on the network nodes to improve overall efficiency.
  • the method also includes: the control node determines second orchestration information for the AI task, the second orchestration information instructing the second network node to perform a second task of the AI task; the control node sends the second orchestration information to the first network node, or the control node sends the second orchestration information to the second network node.
  • the control node determines the orchestration information of multiple network nodes for the AI task, which can improve global efficiency.
  • the control node can send the orchestration information of each network node to a certain network node (such as the first network node), reducing the signaling overhead caused by the control node sending the orchestration information to each network node separately.
  • alternatively, the control node can send the orchestration information of each network node to each network node separately, which can reduce the signaling overhead caused by transmitting the orchestration information between network nodes.
  • the method also includes: the control node determines second orchestration information for the AI task, the second orchestration information instructing the second network node to perform a second task of the AI task; the control node sends the first orchestration information and the second orchestration information to the second network node; the control node sends the first orchestration information to the first network node, including: the control node sends the first orchestration information and the second orchestration information to the first network node.
  • control node determines the orchestration information of multiple network nodes for the AI task, which can improve the global efficiency.
  • the control node can send the orchestration information of all network nodes to each network node, which can reduce the overhead caused by the control node selecting the orchestration information of each network node.
  • the first network node is the first network node to participate in executing the AI task.
  • the first orchestration information includes at least one of the following information: the first task, an identifier of the first network node, resources provided by the first network node to perform the first task, and an exit condition for the first network node to perform the first task.
  • control node determines the first orchestration information for the AI task, including: the control node determines the first orchestration information for the AI task according to the AI capability of the first network node.
  • the control node can determine the orchestration information of the network node according to the AI capability of the network node, so that the orchestration information determined by the control node can match the AI capability of each network node, thereby reducing the probability that the network node cannot perform AI tasks.
  • the method further includes: the control node receiving response information from the first network node, where the response information indicates whether the first network node agrees with the first orchestration information.
  • the network node can also send a response to the control node indicating whether it agrees with the orchestration information, so that the control node can know whether the network node agrees with the orchestration information and then determine whether to issue the AI task.
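  • As a purely illustrative sketch of this orchestration/response exchange (the message classes, field names and transport helpers below are assumptions made for illustration and are not defined by this application), the control node could send the first orchestration information and issue the AI task only if the first network node agrees:

```python
# Minimal sketch of the orchestration / response exchange described above.
# All class names, field names and the send/recv helpers are hypothetical.
from dataclasses import dataclass


@dataclass
class OrchestrationInfo:
    task: str              # the first task of the AI task
    node_id: str           # identifier of the first network node
    resources: dict        # resources the node provides to perform the task
    exit_condition: str    # condition for the node to stop / hand over the task


@dataclass
class ResponseInfo:
    node_id: str
    agrees: bool           # whether the node agrees with the orchestration information


def control_node_negotiate(send, recv, info: OrchestrationInfo) -> bool:
    """Send orchestration information to a network node and decide, based on its
    response, whether to issue the AI task."""
    send(info)                          # control node -> first network node
    response: ResponseInfo = recv()     # first network node -> control node
    return response.agrees


if __name__ == "__main__":
    sent = []
    ok = control_node_negotiate(
        sent.append,
        lambda: ResponseInfo("ran-1", True),      # stub: the node agrees
        OrchestrationInfo("train local model", "ran-1", {"cpu": 4}, "loss < 0.1"),
    )
    print("issue AI task:", ok)
```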
  • a method for AI task indication is provided, which can be performed by a network node.
  • the network node can be a device, or a chip (system) or circuit for a device, which is not limited in this application.
  • the following description takes the first network node as an example.
  • the method may include: a first network node receives first orchestration information from a control node, the first orchestration information instructing the first network node to execute a first task of an AI task; and the first network node executes the first task according to the first orchestration information.
  • the first network node receives first orchestration information from a control node, including: the first network node receives the first orchestration information and second orchestration information from the control node, the second orchestration information instructing the second network node to perform a second task of the AI task; the method also includes: the first network node sends the second orchestration information to the second network node.
  • the first network node sending the second orchestration information to the second network node includes: the first network node sending the processing result of the first task and the second orchestration information to the second network node.
  • the first network node is the first network node to participate in executing the AI task.
  • the first orchestration information includes at least one of the following information: the first task, an identifier of the first network node, resources provided by the first network node to perform the first task, and an exit condition for the first network node to perform the first task.
  • the method further includes: the first network node sending the AI capability of the first network node to the control node.
  • the method further includes: the first network node sending response information to the control node, where the response information indicates whether the first network node agrees with the first orchestration information.
  • the method also includes: the first network node sends the first task or part of the first task to at least one terminal device; or, the first network node sends the first task or part of the first task to the second network node, and the second network node is at least one network node participating in executing the AI task.
  • the network node can schedule other network nodes (such as the second network node) or terminal devices to collaboratively perform AI tasks. In this way, AI tasks can be executed using idle computing power, which not only improves resource utilization but also improves flexibility.
  • At least one terminal device is in a preset state.
  • the network node can send an AI task to a terminal device in a preset state, that is, a terminal device in the preset state can participate in executing the AI task.
  • before the first network node sends the first task or part of the first task to the at least one terminal device, the method further includes: the first network node sends notification information to the at least one terminal device, where the notification information notifies the at least one terminal device to adjust to a preset state.
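  • The following toy sketch illustrates this flow (notify the terminal devices so that they adjust to the preset state, then deliver parts of the first task); the state names and helper functions are assumptions made only for illustration:

```python
# Hypothetical sketch of a network node dispatching (part of) the first task to
# terminal devices that are in a preset state. State names are illustrative only.
from enum import Enum, auto


class TerminalState(Enum):
    NORMAL = auto()
    PRESET = auto()   # the "preset state" in which a terminal may execute AI tasks


class Terminal:
    def __init__(self, terminal_id: str):
        self.terminal_id = terminal_id
        self.state = TerminalState.NORMAL

    def receive_notification(self):
        # Notification information adjusts the terminal to the preset state.
        self.state = TerminalState.PRESET

    def receive_task(self, subtask: str):
        assert self.state == TerminalState.PRESET
        print(f"{self.terminal_id} executing: {subtask}")


def dispatch_first_task(terminals, subtasks):
    """Network node behaviour: notify the terminals first, then send each one a part of the first task."""
    for terminal in terminals:
        if terminal.state != TerminalState.PRESET:
            terminal.receive_notification()
    for terminal, subtask in zip(terminals, subtasks):
        terminal.receive_task(subtask)


if __name__ == "__main__":
    dispatch_first_task([Terminal("ue-1"), Terminal("ue-2")],
                        ["compute gradients on local data", "aggregate intermediate results"])
```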
  • a method for AI task indication is provided, which can be performed by a network node.
  • the network node can be a device, or a chip (system) or circuit for a device, which is not limited in this application.
  • the following description takes the first network node as an example.
  • the method may include: the first network node sends a processing result and target state information of a first task of the AI task to the second network node, where the target state information is used to indicate a target result of the AI task.
  • the processing result and target state information of the first task may implicitly indicate that the second network node participates in executing the AI task, such as the second network node executing the second task of the AI task.
  • network nodes can collaborate to perform AI tasks, and network nodes can determine whether to participate in the execution of AI tasks based on current processing results and target status information, thereby reducing the signaling overhead caused by instructing network nodes to participate in the execution of AI tasks.
  • the first network node sends the processing result and target state information of the first task of the AI task to the second network node, including: based on the AI capability of the second network node, the first network node sends the processing result and target state information of the first task of the AI task to the second network node.
  • the first network node can determine whether to send the processing result and target state information of the first task of the AI task to the second network node based on the AI capability of the second network node, that is, determine whether the second network node participates in executing the AI task. This can reduce the probability that the second network node cannot participate in executing the AI task.
  • the method further includes: the first network node sends first request information to the control node or the second network node, the first request information requesting the AI capability of the second network node; the first network node receives response information to the first request information, the response information to the first request information indicating the AI capability of the second network node.
  • before the first network node sends the processing result and target state information of the first task of the AI task to the second network node, the method further includes: the first network node sends second request information to the second network node, where the second request information requests the second network node to collaborate in performing the AI task.
  • when the first network node determines that the second network node agrees to collaborate in executing the AI task, the first network node sends the processing result and target state information of the first task of the AI task to the second network node, thereby reducing the probability that the second network node cannot participate in executing the AI task.
  • the processing result of the first task represents current state information of the AI task.
  • the current state information and target state information of the AI task may indicate that the second network node participates in executing the AI task, such as executing the second task of the AI task.
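  • A toy sketch of this implicit indication is given below: the second network node compares the current state carried by the processing result of the first task with the target state information and decides whether to execute the second task. Modelling the state as a single numeric loss is an assumption made only for illustration:

```python
# Illustrative sketch: the processing result of the first task carries the current
# state of the AI task (here modelled as a training loss), and the target state
# information carries the target result. The second network node continues the
# task only if the target has not yet been reached. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class TaskState:
    loss: float   # toy stand-in for "current state information of the AI task"


def second_node_handle(processing_result: TaskState, target_state: TaskState) -> str:
    if processing_result.loss <= target_state.loss:
        # Target result already reached; no second task needed.
        return "AI task complete, return final result"
    # Current state has not reached the target: execute the second task,
    # starting from the state left by the first network node.
    return f"execute second task, continue from loss={processing_result.loss}"


if __name__ == "__main__":
    print(second_node_handle(TaskState(loss=0.35), TaskState(loss=0.10)))
```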
  • the method also includes: the first network node also sends area information to the second network node, and the area information is used by the second network node to determine the network node that collaborates to perform the AI task.
  • the method further includes: the first network node sends the first task or a portion of the first task to at least one terminal device.
  • network nodes can schedule terminal devices to collaboratively execute AI tasks. In this way, AI tasks can be executed using idle computing power, which not only improves resource utilization but also increases flexibility.
  • At least one terminal device is in a preset state.
  • the network node can send an AI task to a terminal device in a preset state, that is, a terminal device in the preset state can participate in executing the AI task.
  • before the first network node sends the first task or part of the first task to the at least one terminal device, the method further includes: the first network node sends notification information to the at least one terminal device, where the notification information notifies the at least one terminal device to adjust to a preset state.
  • a method for AI task indication is provided, which can be performed by a network node.
  • the network node can be a device, or a chip (system) or circuit for a device, which is not limited in this application.
  • the following description takes the second network node as an example.
  • the method may include: a second network node receives a processing result and target state information of a first task of an AI task from a first network node, where the target state information is used to indicate a target result of the AI task; and the second network node executes a second task of the AI task based on the processing result and target state information of the first task.
  • the method further includes: the second network node sending the AI capability of the second network node to the control node or the first network node.
  • before the second network node receives the processing result and target state information of the first task of the AI task from the first network node, the method further includes: the second network node receives second request information from the first network node, where the second request information requests the second network node to collaborate in performing the AI task.
  • the processing result of the first task represents the current state information of the AI task; the second network node executes the second task of the AI task based on the processing result of the first task and the target state information, including: the second network node executes the second task of the AI task based on the current state information and the target state information of the AI task.
  • the method also includes: the second network node receives area information from the first network node, and the area information is used by the second network node to determine the network node that collaborates to perform the AI task.
  • the method further includes: the second network node sends the second task or a portion of the second task to at least one terminal device.
  • At least one terminal device is in a preset state.
  • before the second network node sends the second task or part of the second task to the at least one terminal device, the method further includes: the second network node sends notification information to the at least one terminal device, where the notification information notifies the at least one terminal device to adjust to a preset state.
  • a method for AI task indication is provided, which can be executed by a network node.
  • the network node can be a device, or a chip (system) or circuit for a device, which is not limited in this application.
  • the method may include: a network node sending an AI task to at least one terminal device, wherein the at least one terminal device is in a preset state.
  • the network node can send the AI task to a terminal device in a preset state, that is, a terminal device in the preset state can execute the AI task.
  • before the network node sends the AI task to the at least one terminal device, the method further includes: the network node sends notification information to the at least one terminal device, where the notification information notifies the at least one terminal device to adjust to a preset state.
  • a method for AI task indication is provided, which can be executed by a terminal device.
  • the terminal device can be a device, or a chip (system) or circuit for a device, which is not limited in this application.
  • the method may include: a terminal device receiving an AI task from a network node, wherein the terminal device is in a preset state; and the terminal device executing the AI task.
  • before the terminal device receives the AI task from the network node, the method further includes: the terminal device receives notification information from the network node, where the notification information notifies the terminal device to adjust to a preset state.
  • a method for AI task indication is provided, which can be performed by a communication system, and the communication system includes, for example, a control node and a network node.
  • the control node and the network node can be a device, or a chip (system) or circuit for a device, which is not limited in this application.
  • the method may include: a control node determines first orchestration information for an AI task, the first orchestration information instructing a first network node to perform a first task of the AI task; the control node sends the first orchestration information to the first network node; and the first network node performs the first task according to the first orchestration information.
  • the control node may be, for example, the control node described in the first aspect
  • the first network node may be, for example, the first network node described in the second aspect.
  • a method for AI task indication is provided, which can be performed by a communication system, and the communication system includes, for example, a first network node and a second network node.
  • the first network node and the second network node can be devices, or chips (systems) or circuits for devices, which are not limited in this application.
  • the method may include: a first network node sends a processing result and target state information of a first task of an AI task to a second network node, where the target state information is used to indicate a target result of the AI task; and the second network node executes a second task of the AI task based on the processing result and target state information of the first task.
  • the first network node may be, for example, the first network node described in the third aspect
  • the second network node may be, for example, the second network node described in the fourth aspect.
  • a method for AI task indication is provided, which can be executed by a communication system, and the communication system includes, for example, a network node and a terminal device.
  • the network node and the terminal device can be a device, or a chip (system) or circuit for a device, which is not limited in this application.
  • the method may include: a network node sending an AI task to at least one terminal device, wherein the at least one terminal device is in a preset state; and the at least one terminal device executing the AI task.
  • the network node may be, for example, the network node described in the fifth aspect
  • the terminal device may be, for example, the terminal device described in the sixth aspect.
  • a communication device which is used to execute the method provided in any one of the first to ninth aspects.
  • the device may include a unit and/or module, such as a processing unit and/or a communication unit, for executing the method provided in any one of the above implementations of any one of the first to ninth aspects.
  • the apparatus is a communication device.
  • the communication unit may be a transceiver, or an input/output interface;
  • the processing unit may be at least one processor.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the device is a chip, a chip system or a circuit used in a communication device.
  • the communication unit may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin or a related circuit on the chip, the chip system or the circuit;
  • the processing unit may be at least one processor, a processing circuit or a logic circuit.
  • a communication device which includes: a memory for storing programs; and at least one processor for executing computer programs or instructions stored in the memory to execute the method provided in any one of the above-mentioned implementations of any one of the first to ninth aspects.
  • the apparatus is a communication device.
  • the apparatus is a chip, a chip system or a circuit used in a communication device.
  • the present application provides a processor for executing the methods provided in the above aspects.
  • a computer-readable storage medium is provided, which stores program code for execution by a device, and the program code includes instructions for executing the method provided in any one of the implementations of any one of the first to ninth aspects.
  • a computer program product comprising instructions is provided.
  • when the computer program product is run on a computer, the computer executes the method provided by any one of the implementations of any one of the first to ninth aspects.
  • a chip including a processor and a communication interface, the processor reads instructions stored in a memory through the communication interface, and executes a method provided by any one of the above-mentioned implementation methods of any one of the above-mentioned first to ninth aspects.
  • the chip also includes a memory, in which a computer program or instruction is stored, and the processor is used to execute the computer program or instruction stored in the memory.
  • the processor is used to execute the method provided in any one of the above-mentioned implementation methods of any one of the first to ninth aspects.
  • a communication system comprising the control node in the first aspect and the first network node in the second aspect.
  • the communication system further includes a second network node.
  • a communication system comprising the first network node in the third aspect and the second network node in the fourth aspect.
  • a communication system comprising the network node in the fifth aspect and the terminal device in the sixth aspect.
  • FIG. 1 is a schematic diagram of a wireless communication system 100 applicable to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a wireless communication system according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a method 300 for AI task indication provided in accordance with an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a method 400 for AI task indication provided by another embodiment of the present application.
  • FIG. 5 is a schematic diagram of a method 500 for AI task indication provided by another embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a method 600 for AI task indication provided according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram applicable to an embodiment of the present application.
  • FIG8 is a schematic flowchart of a method 800 for AI task indication provided according to another embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a method 900 for AI task indication provided according to another embodiment of the present application.
  • FIG. 10 is a schematic block diagram of a communication device 1000 provided in an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of a communication device 1100 provided in an embodiment of the present application.
  • FIG. 12 is a schematic block diagram of a chip system 1200 provided in an embodiment of the present application.
  • AI model: an algorithm or computer program that can realize AI functions.
  • the AI model represents a mapping relationship between the input and output of the model, or, the AI model is a function that maps an input of a certain dimension to an output of a certain dimension.
  • for example, for an AI model of the form y = ax + b, a and b are the parameters of the AI model, and a and b can be obtained through machine learning training.
  • the implementation of the AI model can be a hardware circuit, or software, or a combination of software and hardware, without limitation.
  • Non-limiting examples of software include: program code, program, subroutine, instruction, instruction set, code, code segment, software module, application, or software application, etc.
  • Dataset: data used for model training, model validation, or model testing in machine learning. The quantity and quality of the data affect the effect of machine learning.
  • Model training: by selecting a suitable loss function and using an optimization algorithm to train the model parameters, the value of the loss function is minimized. The loss function is used to measure the difference between the model's predicted value and the true value.
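  • As a concrete, non-limiting illustration of the above definitions, the sketch below trains the parameters a and b of the simple model y = ax + b by minimizing a squared-error loss with gradient descent (the dataset and hyperparameters are arbitrary choices for this example):

```python
# Toy illustration of "model training": choose a loss function (squared error)
# and an optimization algorithm (gradient descent) to learn the parameters a, b
# of the model y = a*x + b from a small dataset.
def train(dataset, lr=0.01, epochs=2000):
    a, b = 0.0, 0.0
    n = len(dataset)
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for x, y_true in dataset:
            y_pred = a * x + b                 # model prediction
            error = y_pred - y_true            # difference between prediction and true value
            grad_a += 2 * error * x / n        # d(loss)/da
            grad_b += 2 * error / n            # d(loss)/db
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b


if __name__ == "__main__":
    # Data generated from y = 3x + 1; training should recover a ≈ 3, b ≈ 1.
    data = [(x, 3 * x + 1) for x in range(-5, 6)]
    print(train(data))
```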
  • AI task: refers to a task related to AI.
  • AI tasks may include tasks related to models (such as AI models), tasks related to data sets, etc.
  • the technical solution provided in this application can be applied to various communication systems, such as: the fifth generation (5th generation, 5G) or new radio (new radio, NR) system, long term evolution (long term evolution, LTE) system, LTE frequency division duplex (frequency division duplex, FDD) system, LTE time division duplex (time division duplex, TDD) system, etc.
  • the technical solution provided in this application can also be applied to future communication systems, such as the sixth generation mobile communication system.
  • the technical solution provided in this application can also be applied to device to device (D2D) communication, vehicle-to-everything (V2X) communication, machine to machine (M2M) communication, machine type communication (machine type communication, MTC), and Internet of things (IoT) communication system or other communication systems.
  • the terminal devices in the embodiments of the present application include various devices with wireless communication functions, which can be used to connect people, objects, machines, etc.
  • the terminal devices can be widely used in various scenarios, such as: cellular communication, D2D, V2X, peer to peer (P2P), M2M, MTC, IoT, virtual reality (VR), augmented reality (AR), industrial control, automatic driving, telemedicine, smart grid, smart furniture, smart office, smart wear, smart transportation, smart city, drones, robots, remote sensing, passive sensing, positioning, navigation and tracking, autonomous delivery, etc.
  • the terminal device can be a terminal in any of the above scenarios, such as an MTC terminal, an IoT terminal, etc.
  • the terminal device can be a user equipment (UE), terminal, fixed device, mobile station device or mobile device of the third generation partnership project (3GPP) standard, a subscriber unit, a handheld device, a vehicle-mounted device, a wearable device, a cellular phone, a smart phone, a SIP phone, a wireless data card, a personal digital assistant (PDA), a computer, a tablet computer, a notebook computer, a wireless modem, a handheld device (handset), a laptop computer, a computer with wireless transceiver function, a smart book, a vehicle, a satellite, a global positioning system (GPS) device, a target tracking device, an aircraft (such as a drone, a helicopter, a multi-copter, a quadcopter, or an airplane), a ship, a remote control device, a smart home device, an industrial device, or a device built into the above device (for example, a communication module, a modem or a chip in the above device), or other processing devices connected to the wireless
  • the UE can also be used to act as a base station.
  • the UE can act as a scheduling entity that provides sidelink signals between UEs in scenarios such as V2X, D2D or P2P.
  • the device for realizing the function of the terminal device can be the terminal device, or it can be a device that can support the terminal device to realize the function, such as a chip system or a chip, which can be installed in the terminal device.
  • the chip system can be composed of a chip, or it can include a chip and other discrete devices.
  • the network device in the embodiment of the present application may be a device for communicating with a terminal device, and the network device may also be referred to as an access network device or a wireless access network device, such as a base station.
  • the network device in the embodiment of the present application may refer to a radio access network (RAN) node (or device) that connects a terminal device to a wireless network.
  • Base station can broadly cover various names as follows, or be replaced with the following names, such as: NodeB, evolved NodeB (eNB), next generation NodeB (gNB), relay station, access point, transmission reception point (TRP), transmission point (TP), master station, auxiliary station, multi-standard radio (MSR) node, home base station, network controller, access node, wireless node, access point (AP), transmission node, transceiver node, baseband unit (BBU), remote radio unit (RRU), active antenna unit (AAU), remote radio head (RRH), central unit (CU), distributed unit (DU), positioning node, etc.
  • the base station can be a macro base station, a micro base station, a relay node, a donor node or the like, or a combination thereof.
  • the base station may also refer to a communication module, modem or chip used to be set in the aforementioned equipment or device.
  • the base station may also be a mobile switching center and a device that performs the base station function in D2D, V2X, and M2M communications, a network-side device in a 6G network, and a device that performs the base station function in a future communication system.
  • the base station may support networks with the same or different access technologies. The embodiments of the present application do not limit the specific technology and specific device form used by the network equipment.
  • the base station can be fixed or mobile.
  • a helicopter or drone can be configured to act as a mobile base station, and at least one cell can move according to the location of the mobile base station.
  • a helicopter or drone can be configured to act as a device that communicates with another base station.
  • the network equipment and terminal equipment can be deployed on land, including indoors or outdoors, handheld or vehicle-mounted; they can also be deployed on the water surface; they can also be deployed on aircraft, balloons and satellites in the air.
  • the embodiments of the present application do not limit the scenarios in which the network equipment and terminal equipment are located.
  • FIG. 1 is a schematic diagram of a wireless communication system 100 applicable to an embodiment of the present application.
  • the wireless communication system 100 may include at least one network device, such as the network device 110 shown in FIG. 1
  • the wireless communication system 100 may also include at least one terminal device, such as the terminal device 120 and the terminal device 130 shown in FIG. 1 .
  • Both the network device and the terminal device may be configured with multiple antennas, and the network device and the terminal device may communicate using a multi-antenna technology. Terminal devices may communicate directly with each other.
  • the network device can manage at least one cell, and there can be at least one terminal device in one cell.
  • the network device 110 and the terminal device 120 form a single-cell communication system, and without loss of generality, the cell is referred to as cell #1.
  • the network device 110 can be a network device in cell #1, or the network device 110 can serve a terminal device (such as terminal device 120) in cell #1.
  • a cell can be understood as an area within the coverage of wireless signals of network equipment.
  • Fig. 1 is a simplified schematic diagram for ease of understanding, and the wireless communication system 100 may also include other network devices or other terminal devices, which are not shown in Fig. 1.
  • the embodiments of the present application may be applicable to any communication scenario in which a transmitting device and a receiving device communicate.
  • terminal types are diversified, and terminal connections are more flexible and intelligent.
  • for example, in super IoT scenarios (such as IoT, vehicle networking, industry, medical care, etc.) with massive connections, terminal types are diversified, terminal connections are more flexible, and the terminals themselves have certain AI capabilities.
  • the network may also provide computing and AI services to better support inclusive, real-time and highly secure AI services.
  • NWDAF: network data analytics function.
  • the main functions of NWDAF include: supporting data collection from other network functions (NF) and application functions (AF), supporting data collection from network operation and maintenance systems (such as operation administration and maintenance (OAM)), and providing metadata open services and data analysis services to NF or AF.
  • the main goals of the introduction of NWDAF include: automation and intelligence of network operation and maintenance, optimization of network performance and service experience, and end-to-end service level agreement (SLA) guarantee.
  • the AI model trained by NWDAF can be applied to the network's own fields such as mobility management, session management and network automation, using AI methods to replace the original methods based on numerical formulas in network functions.
  • NWDAF is deployed in the core network and is an external AI unit. It is not designed to be strongly coupled with the communication network, and its performance is limited.
  • the number and types of smart terminals in communication networks may also grow rapidly.
  • the large amount of data collected, processed, and generated by smart terminals can provide power for the application of AI technology.
  • a large number of AI nodes may be deployed in wireless networks. Accordingly, a large amount of AI-related traffic will be generated between AI nodes, such as data sets, AI models, intermediate parameters, etc. Therefore, a transmission mechanism for AI-related traffic can be designed to make the network and AI more closely integrated and provide better AI services.
  • each network node can be orchestrated based on the AI capabilities of each network node, that is, to determine how each network node collaborates to process AI tasks; or, the network node can obtain the AI capability information of other network nodes as needed, so that multiple network nodes can collaborate to process AI tasks.
  • FIG 2 is a schematic diagram of a wireless communication system according to an embodiment of the present application.
  • the wireless communication system may include a network node, such as a RAN.
  • the wireless communication system may also include an AI node, such as an AI management function (AI-MF) and an AI function (AI-F).
  • the network node and the AI node may communicate directly or indirectly (such as through forwarding by other nodes).
  • the AI node may store or maintain the AI capability of the network node.
  • the AI capability of the network node may also be referred to as the AI-related parameter of the network node, and will be described below as the AI capability of the network node.
  • the AI capability of the network node may include, for example, at least one of the following: the priority of the network node, the computing power supported by the network node (such as the maximum computing power supported by the network node), the hardware capability of the network node, the AI tasks supported by the network node, the performance of the local AI model of the network node, and the performance of the local data set of the network node.
  • the priority of the network node can be determined based on the historical responses of the network node. For example, the more times the network node has participated in the collaborative processing of AI tasks, the higher its priority; the fewer times, the lower its priority.
  • alternatively, the priority of the network node can be determined based on the capabilities of the network node (such as the supported computing power and the hardware capabilities of the network node itself). For example, the higher the capabilities of the network node, the higher its priority; the lower the capabilities, the lower its priority.
  • the AI capabilities of a network node may also include security requirements of the network node.
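  • One possible, purely illustrative way to represent the AI capability of a network node, covering the items listed above; none of the field names or units below are mandated by this application:

```python
# Hypothetical representation of the "AI capability" of a network node, covering
# the items listed above (priority, supported computing power, hardware capability,
# supported AI tasks, local model/dataset performance, security requirements).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AICapability:
    node_id: str
    priority: int                      # e.g. derived from historical participation or node capability
    max_compute_tops: float            # maximum computing power supported (illustrative unit: TOPS)
    hardware: str                      # hardware capability description
    supported_tasks: List[str] = field(default_factory=list)   # AI tasks the node supports
    local_model_accuracy: Optional[float] = None                # performance of the local AI model
    local_dataset_size: Optional[int] = None                    # performance/size of the local dataset
    security_requirements: Optional[str] = None


if __name__ == "__main__":
    cap = AICapability(node_id="ran-2", priority=1, max_compute_tops=16.0,
                       hardware="gpu", supported_tasks=["model training", "inference"])
    # An AI node (e.g. AI-MF) could store such records and use them for orchestration.
    print(cap)
```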
  • the AI node is deployed in the core network; or the AI node is deployed outside the core network, such as the AI node can be deployed in a network node; or the AI node is an operation and maintenance management system independently configured by the operator.
  • the AI node AI-MF can be deployed in the core network, and the RAN and AI-MF can communicate through the NG interface.
  • the AI node AI-F can be deployed in the RAN, and other modules in the RAN can communicate with AI-F through an internal interface.
  • an AI node can be an independent device or integrated into the same device to implement certain functions, or it can be a network element in a hardware device, or it can be a software function running on dedicated hardware, or it can be a virtualized function instantiated on a platform (for example, a cloud platform).
  • Figure 2 is an exemplary illustration and the present application is not limited thereto.
  • the communication system shown in Figure 2 may also include a greater number of devices, such as a greater number of terminals, a greater number of AI nodes, a greater number of network nodes, and the like.
  • indication may include direct indication, indirect indication, explicit indication, and implicit indication.
  • indication information may include direct indication, indirect indication, explicit indication, and implicit indication.
  • the information indicated by the indication information is referred to as the information to be indicated.
  • the information to be indicated can be directly indicated, such as the information to be indicated itself or the index of the information to be indicated.
  • the information to be indicated can also be indirectly indicated by indicating other information, wherein there is an association relationship between the other information and the information to be indicated. It is also possible to indicate only a part of the information to be indicated, while the other parts of the information to be indicated are known or agreed in advance.
  • the indication of specific information can also be achieved with the help of the arrangement order of each information agreed in advance (such as specified by the protocol), thereby reducing the indication overhead to a certain extent.
  • the information to be indicated can be sent as a whole, or divided into multiple sub-information and sent separately, and the sending period and/or sending time of these sub-information can be the same or different.
  • the specific sending method is not limited in this application.
  • the sending period and/or sending time of these sub-information can be pre-defined, for example, pre-defined according to the protocol, or configured by the transmitting device by sending configuration information to the receiving device.
  • the configuration information can include, for example, but not limited to, one or a combination of at least two of radio resource control signaling, media access control (media access control, MAC) layer signaling and physical layer signaling.
  • the radio resource control signaling includes, for example, radio resource control (RRC) signaling;
  • the MAC layer signaling includes, for example, a MAC control element (CE);
  • the physical layer signaling includes, for example, downlink control information (DCI).
  • Fig. 3 is a schematic diagram of a method 300 for AI task indication provided by an embodiment of the present application.
  • the method 300 may include the following steps.
  • the control node determines first orchestration information for the AI task, where the first orchestration information instructs the first network node to execute a first task of the AI task.
  • the AI task can be determined by the control node itself, or it can be requested by other nodes (such as network nodes, terminals, core network nodes, AI nodes, etc.), without restriction.
  • the control node may be, for example, an AI node, such as the AI-MF or AI-F shown in FIG2 .
  • the network node may be, for example, a network device, such as the RAN shown in FIG2 .
  • the first network node may be the first network node participating in executing the AI task, or the first network node may be any network node participating in executing the AI task.
  • for example, two network nodes participate in executing an AI task, and the two network nodes are a first network node and a second network node.
  • the first network node first executes the AI task, for example, the first network node executes part of the AI task (denoted as the first task);
  • the second network node then continues to execute the AI task, for example, the second network node executes the remaining part of the AI task (denoted as the second task);
  • in this case, the first network node can be considered as the first network node participating in the execution of the AI task, and the second network node can be considered as the next network node (or next-hop network node) of the first network node.
  • the first orchestration information indicates that the first network node performs the first task of the AI task; the first orchestration information may directly indicate the first network node to perform the first task of the AI task, for example, the first orchestration information includes the first task; or the first orchestration information may indirectly indicate the first network node to perform the first task of the AI task, for example, the first orchestration information includes other information that can indirectly indicate the first task.
  • the first orchestration information includes at least one of the following information: the first task, an identifier of the first network node, resources provided by the first network node for executing the first task, and an exit condition for the first network node to execute the first task.
  • the first task refers to the part of the AI task (or decomposed task) that the first network node is responsible for when the first network node participates in executing the AI task, or the operation provided by the first network node when the first network node participates in executing the AI task.
  • the first orchestration information can directly instruct the first network node to perform the first task of the AI task, that is, the first network node can directly obtain the operations that need to be provided when performing the AI task based on the first orchestration information, and then perform the first task based on the first orchestration information.
  • the identifier of the first network node is used to indicate that the network nodes participating in executing the AI task include the first network node.
  • the first orchestration information may indirectly instruct the first network node to perform the first task of the AI task.
  • the first network node may know that it is to participate in the execution of the AI task based on the identifier of the first network node in the first orchestration information.
  • the first network node may participate in the execution of the AI task according to its own AI capability, so that the first network node may determine the operation that needs to be provided when executing the AI task according to its own AI capability, that is, determine the first task.
  • the resources provided by the first network node to execute the first task represent the resources that the first network node needs to provide when participating in the execution of the AI task, such as the computing power that needs to be provided and the hardware capabilities that need to be provided.
  • the first orchestration information may indirectly instruct the first network node to perform the first task of the AI task.
  • the first network node may perform the AI task based on the resources provided by the first network node to perform the first task in the first orchestration information, so that the first network node may determine when to stop performing the AI task based on the resources, and further determine the operation that needs to be provided when performing the AI task, that is, determine the first task.
  • the exit condition for the first network node to execute the first task indicates the condition for the first network node to transfer the AI task to the next network node for further processing, or the condition for the first network node to stop executing the AI task, which can be used by the first network node to determine when to stop executing the AI task.
  • the first orchestration information may indirectly instruct the first network node to perform the first task of the AI task.
  • the first network node may perform the AI task based on the exit condition for the first network node to perform the first task, so that the first network node may determine when to stop performing the AI task according to the exit condition, and further determine the operation that needs to be provided when performing the AI task, that is, determine the first task.
  • for the last network node, its exit condition for executing the AI task is the condition for stopping execution of the AI task, and the last network node does not need to transfer the AI task to another network node (such as a next network node). For example, if the first network node is the last network node, then after the first network node executes the first task based on the exit condition of the first network node, what is obtained is the final result of the AI task.
  • the first network node can directly send the final result of the AI task to the initiating node of the AI task (such as a terminal device), or the first network node can send the final result of the AI task to another node, and the other node sends the final result of the AI task to the initiating node of the AI task.
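  • Putting the fields discussed above together, the sketch below shows one illustrative (non-normative) way to carry the orchestration information of a network node and to evaluate its exit condition in order to decide when to stop and hand the AI task over to the next network node:

```python
# Illustrative sketch only: field names and the callable exit condition are
# assumptions made for this example, not a normative message format.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class NodeOrchestration:
    task: Optional[str]                      # the (part of the) AI task this node is responsible for
    node_id: str                             # identifier of the network node
    resources: Optional[dict]                # resources the node provides for the task
    exit_condition: Callable[[dict], bool]   # when True, stop / hand over to the next node


def run_task(info: NodeOrchestration, state: dict, next_hop=None) -> dict:
    """Execute the assigned task until the exit condition is met, then either hand
    the processing result over to the next network node or return the final result."""
    while not info.exit_condition(state):
        # Toy "processing" step standing in for the real AI computation.
        state["iterations"] = state.get("iterations", 0) + 1
        state["loss"] = state.get("loss", 1.0) * 0.9
    if next_hop is not None:
        next_hop(state)      # transfer the AI task (processing result) to the next node
    return state             # for the last network node, this is the final result of the AI task


if __name__ == "__main__":
    info = NodeOrchestration(task="train model", node_id="ran-1", resources={"cpu": 8},
                             exit_condition=lambda s: s.get("loss", 1.0) < 0.5)
    print(run_task(info, {"loss": 1.0}))
```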
  • the control node sends first orchestration information to the first network node.
  • the method 300 further includes: the first network node executing a first task of the AI task based on the first orchestration information.
  • the control node can determine the orchestration information for the AI task and send the orchestration information to the network node, and then the network node can execute the AI task based on the orchestration information. In this way, the control node can determine appropriate orchestration information according to the AI task to improve overall efficiency.
  • the control node determines the first orchestration information for the AI task, including: the control node determines an orchestration table for the AI task, the orchestration table including the orchestration information of N network nodes, the N network nodes including the first network node, and N being an integer greater than or equal to 1.
  • the control node can perform unified orchestration to improve global efficiency.
  • the orchestration table includes the orchestration information of the N network nodes, that is, the orchestration information of the N network nodes can be considered as one orchestration table.
  • the N network nodes include a first network node and a second network node
  • the control node determines the first orchestration information and the second orchestration information for the AI task
  • the second orchestration information instructs the second network node to perform the second task of the AI task.
  • the orchestration information of each network node reference may be made to the description of the orchestration information of the first network node (i.e., the first orchestration information), which will not be repeated here.
  • the orchestration table may exist in the form of a table, a function, or a character string, for example for storage or transmission.
  • the following Table 1 is an example of presenting the orchestration table in table form.
  • Table 1:
    Network node          Orchestration information of the network node
    First network node    First orchestration information
    Second network node   Second orchestration information
  • the orchestration information of the first network node is the first orchestration information, that is, the first orchestration information indicates that the first network node performs the first task of the AI task;
  • the orchestration information of the second network node is the second orchestration information, that is, the second orchestration information indicates that the second network node performs the second task of the AI task.
  • Table 1 is only an exemplary description and is not limiting. Any variation of Table 1 is applicable to the present application. For example, Table 1 may also include a greater number of network nodes.
  • the orchestration information of each network node may be transmitted in any of the following ways.
  • the control node sends the orchestration table to each of the N network nodes.
  • each of the N network nodes can obtain the orchestration table from the control node, and then obtain its own orchestration information from the orchestration table.
  • the orchestration table includes orchestration information of the first network node and orchestration information of the second network node
  • the orchestration information of the first network node is the first orchestration information
  • the orchestration information of the second network node is the second orchestration information.
  • the control node sends the first orchestration information and the second orchestration information to the first network node
  • the control node sends the first orchestration information and the second orchestration information to the second network node.
  • the control node sends the orchestration table to one network node (such as the first network node) among the N network nodes.
  • the first network node may be the first network node participating in executing the AI task, or the first network node may be any network node participating in executing the AI task.
  • Example 1: the first network node sends the orchestration table to the other network nodes among the N network nodes. For example, after receiving the orchestration table, the first network node can directly send the orchestration table to the other network nodes among the N network nodes. For another example, after the first network node completes the task it is responsible for in the AI task based on the orchestration table, it sends the orchestration table to the other network nodes among the N network nodes.
  • the N network nodes include a first network node and a second network node
  • the orchestration table includes orchestration information of the first network node and orchestration information of the second network node
  • the orchestration information of the first network node is the first orchestration information
  • the orchestration information of the second network node is the second orchestration information.
  • the control node sends the first orchestration information and the second orchestration information to the first network node
  • the first network node sends the first orchestration information and the second orchestration information to the second network node.
  • Example 2: the first network node sends the orchestration information of the other network nodes to the corresponding other network nodes among the N network nodes. For example, after receiving the orchestration table, the first network node can directly send the orchestration information of the other network nodes in the orchestration table to those network nodes. For another example, after the first network node completes the task for which it is responsible in the AI task based on the orchestration table, it sends the orchestration information of the other network nodes in the orchestration table to those network nodes.
  • the orchestration table includes orchestration information of the first network node and orchestration information of the second network node, the orchestration information of the first network node is first orchestration information, and the orchestration information of the second network node is second orchestration information.
  • the control node sends the first orchestration information and the second orchestration information to the first network node, and the first network node sends the second orchestration information to the second network node.
  • Example 3: the first network node sends the orchestration table to the next network node of the first network node, the next network node sends the orchestration table to its own next network node, and so on.
  • the first network node can directly send the orchestration table to the next network node of the first network node.
  • the first network node completes the task for which it is responsible in the AI task based on the orchestration table, it sends the orchestration table to the next network node of the first network node.
  • the N network nodes include a first network node, a second network node, and a third network node
  • the orchestration table includes the orchestration information of the first network node, the orchestration information of the second network node, and the orchestration information of the third network node
  • the orchestration information of the first network node is the first orchestration information
  • the orchestration information of the second network node is the second orchestration information
  • the orchestration information of the third network node is the third orchestration information.
  • control node sends the first orchestration information, the second orchestration information, and the third orchestration information to the first network node
  • the first network node sends the first orchestration information, the second orchestration information, and the third orchestration information to the second network node
  • the second network node sends the first orchestration information, the second orchestration information, and the third orchestration information to the third network node.
  • Example 4 The first network node sends, to its next network node, the orchestration information in the orchestration table except the orchestration information of the first network node; that next network node in turn sends, to its own next network node, the received orchestration information except the orchestration information of the current network node, and so on.
  • the first network node can directly send the orchestration information in the orchestration table except the orchestration information of the first network node to the next network node of the first network node.
  • After the first network node executes the task for which it is responsible in the AI task based on the orchestration table, it sends the orchestration information in the orchestration table except the orchestration information of the first network node to the next network node of the first network node.
  • the N network nodes include a first network node, a second network node, and a third network node
  • the orchestration table includes orchestration information of the first network node, orchestration information of the second network node, and orchestration information of the third network node
  • the orchestration information of the first network node is the first orchestration information
  • the orchestration information of the second network node is the second orchestration information
  • the orchestration information of the third network node is the third orchestration information.
  • control node sends the first orchestration information, the second orchestration information, and the third orchestration information to the first network node, the first network node sends the second orchestration information and the third orchestration information to the second network node, and the second network node sends the third orchestration information to the third network node.
  • The control node sends, to each of the N network nodes, the orchestration information of that network node.
  • the orchestration table includes orchestration information of the first network node and orchestration information of the second network node, the orchestration information of the first network node is first orchestration information, and the orchestration information of the second network node is second orchestration information.
  • the control node sends the first orchestration information to the first network node, and the control node sends the second orchestration information to the second network node.
  • The control node may also send the orchestration table to some of the N network nodes, and those network nodes then send the orchestration table, or the orchestration information of each network node, to the other network nodes (these distribution options are sketched below).
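  • The following is a minimal, non-normative Python sketch of how such an orchestration table could be represented and relayed, assuming a simple in-memory structure; the names `OrchestrationEntry`, `forward_full_table` and `forward_stripped_table` are illustrative and not part of the embodiment.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class OrchestrationEntry:
    node_id: str        # network node this entry is addressed to
    task: str           # part of the AI task the node is responsible for
    resources: Dict     # resources the node is expected to provide

# Orchestration table for N = 3 network nodes (contents are hypothetical).
table: List[OrchestrationEntry] = [
    OrchestrationEntry("node1", "model_training", {"gpu_hours": 2}),
    OrchestrationEntry("node2", "model_fusion",   {"gpu_hours": 1}),
    OrchestrationEntry("node3", "model_test",     {"gpu_hours": 1}),
]

def forward_full_table(table: List[OrchestrationEntry]) -> List[OrchestrationEntry]:
    """Example 3: relay the whole orchestration table unchanged to the next node."""
    return list(table)

def forward_stripped_table(node_id: str, table: List[OrchestrationEntry]) -> List[OrchestrationEntry]:
    """Example 4: remove the current node's own entry before relaying."""
    return [e for e in table if e.node_id != node_id]

# node1 executes its own task, then relays; node2 does the same, and so on.
remaining = forward_stripped_table("node1", table)      # entries for node2 and node3
remaining = forward_stripped_table("node2", remaining)  # entry for node3 only
```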
  • When the first network node sends the second network node's orchestration information to the second network node, the first network node can also send the processing result of the first task to the second network node. In this way, the second network node can continue to execute the AI task based on the processing result of the first task.
  • control node determines a scheduling table for the AI task based on the AI capabilities of the N network nodes. For example, the control node determines first scheduling information for the AI task based on the AI capability of the first network node. In this way, the scheduling information determined by the control node can match the AI capability of each network node, reducing the probability that the network node cannot execute the AI task.
  • the AI capability of a network node may include, for example, at least one of the following: the priority of the network node, the computing power supported by the network node, the hardware capability of the network node, the AI tasks supported by the network node, the performance of the local AI model of the network node, and the performance of the local data set of the network node.
  • Several examples of the control node determining a scheduling table for an AI task based on the AI capabilities of N network nodes are listed below.
  • Example 1 The control node determines a scheduling table for the AI task according to the AI tasks supported by the network node, that is, the control node determines the scheduling information of the network node for the AI task according to the AI tasks supported by the network node.
  • the control node can determine which network nodes support the model training task based on the AI tasks supported by each network node, and the control node can determine the N network nodes that participate in the execution of the AI task from the network nodes that support the model training task.
  • the operations that each of the N network nodes is responsible for and the resources provided, etc. can be determined by the control node based on other AI capabilities of the network nodes, or can be determined by each network node according to its own AI capabilities during the execution of the AI task, without limitation.
  • Example 2 The control node determines the orchestration table for the AI task based on the computing power supported by the network node, that is, the control node determines the orchestration information of the network node for the AI task based on the computing power supported by the network node.
  • control node determines that N network nodes with higher computing power will perform AI tasks based on the computing power supported by each network node.
  • control node can also determine the operations that each network node is responsible for and/or the resources provided by each network node based on the computing power supported by the N network nodes. It can be understood that the operations that each of the N network nodes is responsible for and the resources provided, etc., can also be determined by the control node based on other AI capabilities of the network nodes, or can also be determined by each network node according to its own AI capabilities during the execution of the AI task, without limitation.
  • Example 3 The control node determines the scheduling table for the AI task according to the hardware capabilities of the network nodes, that is, the control node determines the scheduling information of the network nodes for the AI task according to the hardware capabilities of the network nodes.
  • control node determines that N network nodes with higher hardware capabilities will perform AI tasks based on the hardware capabilities of each network node.
  • control node can also determine the operations that each network node is responsible for and/or the resources provided by each network node based on the hardware capabilities of the N network nodes. It can be understood that the operations that each of the N network nodes is responsible for and the resources provided, etc., can also be determined by the control node based on other AI capabilities of the network nodes, or can also be determined by each network node according to its own AI capabilities during the execution of the AI task, without limitation.
  • Example 4 The control node determines a scheduling table for the AI task based on the performance of the local AI model of the network node, that is, the control node determines the scheduling information of the network node for the AI task based on the performance of the local AI model of the network node.
  • the performance of the local AI model of the network node may include, but is not limited to, accuracy and timeliness.
  • Accuracy may characterize the performance of the AI model when performing a number of tasks.
  • Timeliness may characterize the generation time of the AI model.
  • control node determines that N network nodes with higher performance perform the AI task based on the performance of the local AI model of the network node.
  • control node can also determine the operations that each network node is responsible for and/or the resources provided by each network node based on the performance of the local AI models of the N network nodes. It can be understood that the operations that each of the N network nodes is responsible for and the resources provided, etc., can also be determined by the control node based on other AI capabilities of the network node, or can also be determined by each network node according to its own AI capabilities during the execution of the AI task, without limitation.
  • Example 5 The control node determines a scheduling table for the AI task based on the performance of the local data set of the network node, that is, the control node determines the scheduling information of the network node for the AI task based on the performance of the local data set of the network node.
  • the performance of a local data set of a network node may include, but is not limited to, accuracy and timeliness.
  • Accuracy may characterize the performance of the data set under several test models.
  • Timeliness may characterize the generation time of the data set.
  • control node determines that N network nodes with higher performance perform the AI task based on the performance of the local data sets of the network nodes.
  • control node can also determine the operations that each network node is responsible for and/or the resources provided by each network node based on the performance of the local data sets of the N network nodes. It can be understood that the operations that each of the N network nodes is responsible for and the resources provided, etc., can also be determined by the control node based on other AI capabilities of the network nodes, or can also be determined by each network node according to its own AI capabilities during the execution of the AI task, without limitation.
  • Example 6 The control node determines the scheduling table according to the priority of the network node, that is, the control node determines the scheduling information of the network node for the AI task according to the priority of the network node.
  • control node determines that N network nodes with higher priorities perform the AI task based on the priorities of the network nodes.
  • the operations that the N network nodes are responsible for and the resources they provide may also be determined by the control node based on other AI capabilities of the network nodes, or by each network node according to its own AI capabilities during the execution of the AI task, without limitation.
  • Example 7 The control node determines a scheduling table for the AI task based on the AI tasks supported by the network node and the computing power supported by N network nodes. That is, the control node determines the scheduling information of the network node for the AI task based on the AI tasks supported by the network node and the computing power supported by N network nodes.
  • the control node can determine which network nodes support the model training task based on the AI tasks supported by each network node, and the control node can determine the N network nodes that participate in executing the AI task from the network nodes that support the model training task. Furthermore, the control node can determine the operations that each network node is responsible for and the resources it provides based on the computing power supported by the N network nodes.
  • control node may determine the scheduling table for the AI task based on at least one of the following: the priority of the network node, the computing power supported by the network node, the hardware capability of the network node, the AI task supported by the network node, the performance of the local AI model of the network node, and the performance of the local data set of the network node.
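  • As an illustration of the capability-based determination in the examples above (roughly Examples 1, 2 and 7), the following sketch selects N participating nodes from a list of capability records; the `AICapability` fields and the selection rule are simplifying assumptions rather than a defined procedure.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class AICapability:
    node_id: str
    computing_power: float = 0.0                 # e.g. compute the node can contribute
    supported_tasks: Set[str] = field(default_factory=set)

def select_nodes(caps: List[AICapability], ai_task: str, n: int) -> List[str]:
    """Keep the nodes that support the AI task, then pick the N nodes with the
    highest supported computing power to participate in executing it."""
    eligible = [c for c in caps if ai_task in c.supported_tasks]
    eligible.sort(key=lambda c: c.computing_power, reverse=True)
    return [c.node_id for c in eligible[:n]]

caps = [
    AICapability("node1", 8.0, {"model_training"}),
    AICapability("node2", 4.0, {"model_training", "model_test"}),
    AICapability("node3", 6.0, {"model_fusion"}),
]
print(select_nodes(caps, "model_training", n=2))  # ['node1', 'node2']
```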
  • control node may obtain the AI capabilities of the N network nodes in any of the following ways.
  • control node locally maintains the AI capability of at least one network node, and the control node can directly determine the scheduling table for the AI task based on the AI capability of at least one network node maintained locally.
  • the at least one network node includes N network nodes.
  • the AI capability of at least one network node may be presented in the form of a table, a function, or a string, such as for storage or transmission.
  • Table 2 below is an example of presenting the AI capability of at least one network node in table form.
  • Table 2:

  | Network node | AI capability |
  | --- | --- |
  | First network node | AI capability of the first network node |
  | Second network node | AI capability of the second network node |
  • Table 2 is only an exemplary description and is not limiting. Any variation of Table 2 is applicable to the present application. For example, Table 2 may also include a greater number of network nodes.
  • After confirming the AI task, the control node requests the AI capability of at least one network node from other nodes, and then determines a scheduling table for the AI task based on the AI capability of the at least one network node.
  • the at least one network node includes N network nodes.
  • After confirming the AI task, the control node requests, from at least one network node, their respective AI capabilities, and then determines a scheduling table for the AI task based on the AI capabilities of the at least one network node.
  • the at least one network node includes N network nodes.
  • the AI capability of the network node may be updated.
  • Two examples of how the control node maintains the AI capability of the network node are introduced below.
  • the control node periodically updates the AI capability of the network node.
  • the network node periodically reports its own AI capability to the control node, and then the control node can periodically update the AI capability of the network node.
  • the control node periodically sends information to the network node, and the information is used to trigger the network node to report its own AI capability to the control node, and then the control node can periodically update the AI capability of the network node.
  • The control node updates the AI capability of the network node when an update is triggered. For example, the network node reports its own AI capability to the control node; if the AI capability reported by the network node is inconsistent with the previously stored AI capability of the network node, the control node updates the AI capability of the network node. For another example, after the orchestration information for a certain AI task is determined, the control node updates the AI capability of the network node.
  • the method 300 further includes: the control node receiving response information from the first network node, where the response information indicates whether the first network node agrees with the first orchestration information.
  • the first network node agrees with the first orchestration information, so the response information sent by the first network node to the control node indicates that the first network node agrees with the first orchestration information.
  • the first network node disagrees with the first orchestration information, so the response information sent by the first network node to the control node indicates that the first network node disagrees with the first orchestration information.
  • Whether the first network node agrees with the first orchestration information can be understood as whether the first network node agrees to execute the first task.
  • the response information indicates whether the first network node agrees with the first orchestration information, including any one of the following implementations.
  • the response information directly indicates whether the first network node agrees with the first orchestration information.
  • the response information may be implemented by a positive acknowledgement and a negative acknowledgement.
  • If the first network node agrees with the first orchestration information, the first network node sends a positive acknowledgement to the control node; if the first network node disagrees with the first orchestration information, the first network node sends a negative acknowledgement to the control node.
  • the response information indirectly indicates whether the first network node agrees with the first orchestration information.
  • the first network node sends the first orchestration information adjusted by the first network node to the control node, and the adjusted first orchestration information can indirectly indicate that the first network node disagrees with the first orchestration information, that is, the control node learns that the first network node disagrees with the first orchestration information based on the adjusted first orchestration information.
  • the adjusted first orchestration information can include, but is not limited to: the adjusted first task and/or the resources that the first network node can provide for performing the first task.
  • If the control node does not receive a negative response from the first network node within a period of time (for distinction, recorded as time period #1), the control node assumes that the first network node agrees with the first orchestration information (equivalent to implicit response information indicating that the first network node agrees with the first orchestration information).
  • the starting time of time period #1 can be the time when the control node sends the first orchestration information, and the duration of time period #1 can be predefined, or it can be estimated based on historical conditions, without limitation.
  • time period #1 can be implemented by a timer.
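  • A minimal sketch of the time period #1 / timer behaviour described above, assuming a hypothetical non-blocking `receive_fn`; expiry without a negative response is treated as implicit agreement.

```python
import time

def wait_for_first_node_response(receive_fn, period_s: float) -> bool:
    """Implements the time period #1 behaviour: the timer starts when the first
    orchestration information is sent; if no negative response arrives before it
    expires, the first network node is assumed to agree (implicit acknowledgement).

    receive_fn is a hypothetical non-blocking receive returning 'ack', 'nack'
    or None when nothing has been received yet."""
    deadline = time.monotonic() + period_s
    while time.monotonic() < deadline:
        msg = receive_fn()
        if msg == "nack":
            return False          # the first network node disagrees
        if msg == "ack":
            return True           # explicit agreement
        time.sleep(0.01)
    return True                   # timer expired: implicit agreement
```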
  • If the first network node disagrees with the first orchestration information, the following implementations may be included.
  • control node adjusts the first scheduling information.
  • the control node may redetermine the first orchestration information.
  • the scheduling table includes scheduling information of N network nodes, and the N network nodes include a first network node. After the control node learns that the first network node disagrees with the first scheduling information, the scheduling table can be re-determined.
  • The first network node adjusts the first orchestration information, and sends the adjusted first orchestration information to the control node.
  • the scheduling table includes scheduling information of N network nodes, and the N network nodes include a first network node.
  • the control node may adjust the scheduling information of at least one network node among the N network nodes except the first network node based on the adjusted first scheduling information.
  • the first network node sends the first task or a portion of the first task to the second network node, and the second network node is at least one network node participating in executing the AI task.
  • the second network node may be determined by the first network node, such as an adjacent network node selected by the first network node; or the second network node may be determined by the control node, such as a next network node of the first network node selected by the control node.
  • the first network node directly transmits the first task to the second network node, and the second network node performs the first task.
  • the first network node performs part of the first task, and then sends the rest of the first task to the second network node, and the second network node performs the rest of the first task.
  • the first network node may first execute part of the first task, and then execute the rest of the first task later.
  • the network node schedules at least one terminal to participate in the operation of the network node, that is, at least one terminal and the network node jointly collaborate to perform the AI task.
  • In this way, the overhead caused by the network node performing the AI task can be reduced.
  • At least one terminal may be a terminal for which the network node provides communication services, or a terminal in a cell managed by the network node, or a terminal in the cell of the network node.
  • the network node may schedule the terminal in the cell to participate in the operation of the network node.
  • the terminals in the cell of the first network node include terminal #1 and terminal #2, and the first network node can schedule at least one terminal to participate in the operation of the first network node, that is, to perform the first task.
  • the first network node sends the first task or part of the first task to at least one terminal.
  • the first network node performs part of the first task, and terminal #1 and/or terminal #2 performs the rest of the first task.
  • the terminal participating in the execution of the first task may send the processing result of the execution of the first task to the first network node.
  • terminal #1 and/or terminal #2 executes the first task.
  • terminal #1 or terminal #2 executes the complete first task.
  • the terminal that executes the complete first task may send the processing result of the first task to the first network node.
  • terminal #1 and terminal #2 respectively perform the complete first task.
  • terminal #1 and terminal #2 can send the processing result of the first task to the first network node, and the first network node can merge or filter the processing results provided by terminal #1 and terminal #2.
  • terminal #1 performs part of the first task
  • terminal #2 performs the rest of the first task.
  • terminal #1 and terminal #2 can send the processing result of the first task to the first network node, and the first network node can merge or filter the processing results provided by terminal #1 and terminal #2.
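  • The following sketch illustrates, under simplifying assumptions, how the first network node might split the first task between terminal #1 and terminal #2 and then merge their reported processing results; the item-based split and the averaging merge are hypothetical choices, not the defined behaviour.

```python
def split_first_task(task_items, terminals):
    """Hypothetical split of the first task: each terminal gets a share of the
    task items (for instance, training samples) to process."""
    shares = {t: [] for t in terminals}
    for i, item in enumerate(task_items):
        shares[terminals[i % len(terminals)]].append(item)
    return shares

def merge_results(results):
    """Merge the processing results reported by the terminals; averaging is one
    simple choice, and the first network node could instead filter outliers."""
    return sum(results) / len(results)

shares = split_first_task(list(range(10)), ["terminal#1", "terminal#2"])
reported = [0.42, 0.38]            # e.g. a metric each terminal reports back
print(merge_results(reported))     # approximately 0.40
```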
  • the network node determines whether the terminal participates in executing the AI task according to the AI state of the terminal, which will be described in detail later in conjunction with method 500.
  • The above mainly takes the control node determining the orchestration information of the network node for the AI task as an example, and is not limited to this.
  • the control node can also determine the orchestration information of at least one core network node for the AI task, that is, the at least one core network node can collaborate to perform the AI task based on their respective orchestration information.
  • the control node can also determine the orchestration information of at least one terminal for the AI task, that is, the at least one terminal can collaborate to perform the AI task based on their respective orchestration information.
  • Fig. 4 is a schematic diagram of a method 400 for AI task indication provided by another embodiment of the present application.
  • the method 400 may include the following steps.
  • the first network node sends a processing result and target state information of a first task of the AI task to the second network node.
  • the target state information is used to indicate the target result of the AI task, or in other words, the target state information is used to indicate the final state of the AI task.
  • the target state information is used to indicate the final state of the model, or can be used to describe the state of the model when it stops flowing in the network.
  • the target state information includes at least one of the following information: accuracy, timeliness, and model structure. Among them, accuracy can characterize the performance of the model when performing several tasks. Timeliness can characterize the generation time of the model.
  • the target state information is used to indicate the final state of the data set, or can be used to describe the state of the data set when it stops flowing in the network.
  • the target state information includes at least one of the following information: accuracy, timeliness, composition, and attributes. Among them, accuracy can characterize the performance of the data set under several test models. Timeliness can characterize the generation time of the data set. Composition can characterize the composition of the data contained in the data set. Attributes can characterize the type, quantification, dimension, etc. of the data contained in the data set.
  • method 400 also includes step 420 .
  • the second network node executes a second task of the AI task based on the processing result of the first task and the target state information.
  • the processing result and target state information of the first task can implicitly indicate that the second network node participates in the execution of the AI task, such as the second network node executing the second task of the AI task. That is, after receiving the processing result and target state information of the first task, the second network node can know that it will participate in the execution of the AI task.
  • network nodes can collaborate to perform AI tasks and determine whether to participate in the execution of AI tasks based on current processing results and target status information.
  • the processing result of the first task represents the current state information of the AI task.
  • the current state information is used to indicate the current result of the AI task, or the current state information is used to indicate the current state of the AI task.
  • the second network node receives the current state information and the target state information of the AI task, and the second network node determines to participate in the execution of the AI task based on the inconsistency between the current state information and the target state information, and the second network node can execute the AI task with the target state information as the final result of the AI task.
  • the current state information is used to indicate the current state of the model, that is, the state of the model when it is generated at the first network node.
  • the current state information includes at least one of the following information: accuracy, timeliness, and model structure. For the description of each information, please refer to the relevant description in step 410, which will not be repeated here.
  • the current state information is used to indicate the current state of the data set, that is, the state of the data set when it is generated at the first network node.
  • the current state information includes at least one of the following information: accuracy, timeliness, composition, and attributes. For the description of each information, please refer to the relevant description in step 410, which will not be repeated here.
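  • A minimal sketch of the comparison between current state information and target state information described above; the `ModelState` fields follow the accuracy/timeliness/model-structure items listed earlier, and judging participation on accuracy alone is a simplifying assumption.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelState:
    accuracy: float                     # performance of the model on the task(s)
    timeliness: float                   # generation time of the model
    model_structure: Optional[str] = None

def should_participate(current: ModelState, target: ModelState) -> bool:
    """The second network node keeps executing the AI task while the current
    state has not reached the target state (judged on accuracy only here)."""
    return current.accuracy < target.accuracy

current = ModelState(accuracy=0.81, timeliness=1.0)
target = ModelState(accuracy=0.95, timeliness=2.0)
if should_participate(current, target):
    pass  # execute the second task of the AI task, e.g. further model training
```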
  • the first network node sends the processing result and target state information of the first task of the AI task to the second network node, including: based on the AI capability of the second network node, the first network node sends the processing result and target state information of the first task of the AI task to the second network node.
  • For example, if the first network node learns, based on the AI capability of the second network node, that the second network node supports the AI task, the first network node sends the processing result and target state information of the first task of the AI task to the second network node.
  • For another example, if the first network node learns, based on the AI capability of the second network node, that the computing power supported by the second network node meets the preset value, the first network node sends the processing result and target state information of the first task of the AI task to the second network node.
  • the preset value may be predefined, such as predefined by the protocol, or may be estimated based on historical circumstances, without limitation.
  • For another example, if the first network node learns, based on the AI capability of the second network node, that the performance of the local AI model of the second network node meets the preset condition, the first network node sends the processing result and target state information of the first task of the AI task to the second network node.
  • the preset condition may be predefined, such as predefined by the protocol, or may be estimated based on historical circumstances, without limitation.
  • Optionally, the second network node may satisfy any one of the following.
  • the first network node and the second network node are adjacent network nodes, wherein the adjacent network nodes may be, for example, network nodes adjacent in location, or network nodes having an adjacent relationship in a network topology structure.
  • the relative position between the first network node and the second network node satisfies a preset condition.
  • the relative position between the first network node and the second network node can be understood as the position of the second network node relative to the first network node with the first network node as a reference; or can also be described as the position of the first network node relative to the second network node with the second network node as a reference.
  • the relative position may include: distance and/or angle.
  • the second network node is a network node that can provide services to the terminal.
  • the first network node can also obtain the AI capability of the second network node that can provide services to the terminal after receiving the task request of the AI task of the terminal.
  • the second network node can be a network node that previously collaborated with the first network node to perform the AI task.
  • the second network node can be any network node.
  • the first network node may obtain the AI capability of the second network node in any of the following ways.
  • the first network node obtains the AI capability of the second network node from the control node.
  • a first network node queries its control node for the AI capability of a second network node. For example, the first network node sends a first request message to the control node, and the first request message is used to request the AI capability of the second network node; the control node sends a response message to the first request message to the first network node based on the request of the first network node, and the response message indicates the AI capability of the second network node.
  • the response message indicates the AI capability of the second network node, which may be a direct indication, such as the response message includes the AI capability of the second network node; or it may be an indirect indication, such as the response message includes other information, and the AI capability of the second network node can be indirectly known based on the other information.
  • For example, when the first network node cannot complete the AI task, it can query the control node for the AI capability of the second network node. In this way, whether to obtain the AI capability of the second network node from the control node can be determined according to the actual situation. For example, if the AI capability of the first network node is not sufficient to complete the AI task, the first network node can request other network nodes (such as the second network node) to collaborate to complete the AI task; therefore, the first network node can obtain the AI capability of the second network node from the control node, so that it can determine, according to the AI capability of the second network node, whether the second network node can collaborate to complete the AI task.
  • the first network node subscribes to the AI capability of the second network node from the control node.
  • After the control node learns the AI capability of the second network node, it sends the AI capability of the second network node to the first network node in response to the subscription of the first network node. That is, the first network node first obtains the AI capability of the second network node from the control node and saves the AI capability of the second network node. In this way, after determining the AI task, the AI capability of the second network node can be used directly, reducing the delay caused by executing the AI task.
  • the first network node obtains the AI capability of the second network node from the second network node.
  • a first network node queries a second network node for the AI capability of the second network node. For example, the first network node sends a first request message to the second network node, where the first request message is used to request the AI capability of the second network node; the second network node sends a response message to the first request message based on the request of the first network node, where the response message indicates the AI capability of the second network node.
  • the first network node subscribes to the second network node for the AI capability of the second network node.
  • control node may also actively send the AI capability of the second network node to the first network node.
  • first network node may also obtain the AI capability of the second network node from other network nodes or core network nodes.
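  • The query and subscribe ways of obtaining the AI capability described above can be sketched as follows, with a toy in-memory transport standing in for the real signalling; the message fields are illustrative assumptions.

```python
def query_capability(send, recv):
    """Query way: the first network node sends a first request message for the AI
    capability of the second network node and waits for the response message."""
    send({"type": "first_request", "requested": "ai_capability_of_second_node"})
    return recv().get("ai_capability")

def on_capability_update(cache, update):
    """Subscribe way: whenever the control node learns of a change, it pushes the
    AI capability of the second network node; the first network node caches it so
    it can be used directly once an AI task is determined."""
    cache[update["node_id"]] = update["ai_capability"]

# toy in-memory transport standing in for the real signalling
outbox = []
send = outbox.append
recv = lambda: {"ai_capability": {"supported_tasks": ["model_training"]}}
print(query_capability(send, recv))   # {'supported_tasks': ['model_training']}

cache = {}
on_capability_update(cache, {"node_id": "node2", "ai_capability": {"computing_power": 6.0}})
```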
  • method 400 further includes: the first network node sends a second request message to the second network node, the second request message requests the second network node to collaborate in executing the AI task. Based on this, when the first network node determines that the second network node agrees to collaborate in executing the AI task, the first network node sends the processing result and target state information of the first task of the AI task to the second network node.
  • the first network node sends a second request message to the second network node. If the first network node receives a response to the second request message from the second network node, where the response to the second request message is used to indicate that the second network node agrees to collaborate in executing the AI task, the first network node sends the processing result and target status information of the first task of the AI task to the second network node.
  • the first network node sends a second request message to the second network node. If the first network node does not receive a negative response from the second network node within a period of time (for distinction, recorded as time period #2), the first network node assumes that the second network node agrees to collaborate in the execution of the AI task (equivalent to an implicit response message indicating that the second network node agrees to collaborate in the execution of the AI task). Therefore, the first network node sends the processing result and target state information of the first task of the AI task to the second network node.
  • the starting time of time period #2 can be the time when the first network node sends the second request message, and the duration of time period #2 can be predefined, or it can be estimated based on historical conditions without restriction.
  • time period #2 can be implemented by a timer.
  • the method 400 further includes: the first network node sends area information to the second network node, and the area information is used by the second network node to determine a network node that collaborates to execute the AI task.
  • the region information is used to assist the current network node in determining other nodes that collaborate to perform the AI task.
  • the region information sent by the first network node to the second network node is used to assist the second network node in determining the network node that collaborates to perform the AI task.
  • the regional information may include geographic location information, or the regional information may also be embodied by some parameters (recorded as parameter #A for distinction), and the parameter #A may be, for example: service type, terminal type, node computing power type.
  • the service type may be a service type supported or operated in a certain area.
  • the terminal type may be a terminal type in a certain area.
  • the node type may be the computing power type of the node in a certain area.
  • the parameter #A of nodes in the same area is relatively close.
  • The appropriate next node can be selected according to actual needs. For example, when the second network node selects the next network node, it may select a network node in an area where the difference in parameter #A is relatively large, or it may select a network node in an area where the difference in parameter #A is relatively small.
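  • A small sketch of next-node selection using region information, reducing parameter #A to a single number per candidate RAN (a deliberate simplification); `prefer_similar` switches between choosing an area with close parameter #A and one with a large difference.

```python
def pick_next_node(current_param_a: float, candidates: dict, prefer_similar: bool = True) -> str:
    """Select the next collaborating node using region information, reduced here to
    one numeric parameter #A per candidate. prefer_similar=True picks the node whose
    parameter #A is closest to the current node's; False picks the most different."""
    distance = lambda item: abs(item[1] - current_param_a)
    chosen = min(candidates.items(), key=distance) if prefer_similar else max(candidates.items(), key=distance)
    return chosen[0]

candidates = {"RAN#2": 0.9, "RAN#3": 0.3}
print(pick_next_node(0.85, candidates))                        # RAN#2 (close parameter #A)
print(pick_next_node(0.85, candidates, prefer_similar=False))  # RAN#3 (large difference)
```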
  • the network node schedules at least one terminal to participate in the operation of the network node, that is, at least one terminal and the network node jointly collaborate to perform the AI task.
  • the network node determines whether the terminal participates in the execution of the AI task according to the AI state of the terminal, which will be described in detail later in conjunction with method 500.
  • the network node can also collaborate with the terminal to perform the AI task.
  • the first network node sends the processing result and target state information of the first task of the AI task to at least one terminal.
  • the network node can also collaborate with the core network node to perform the AI task.
  • the first network node sends the processing result and target state information of the first task of the AI task to at least one core network node.
  • Fig. 5 is a schematic diagram of a method 500 for AI task indication provided by another embodiment of the present application.
  • the method 500 may include the following steps.
  • the network node sends an AI task to a terminal, wherein the terminal is in a preset state.
  • method 500 also includes step 520 .
  • the terminal executes the AI task.
  • The network node can send an AI task to a terminal in a preset state, that is, a terminal in the preset state can participate in the execution of AI tasks.
  • the network node sends a notification message to the terminal, and the notification message notifies that the terminal is adjusted to a preset state, that is, the notification message notifies that the AI state of the terminal is adjusted to a preset state.
  • the notification message may be a combination of any one or more of the following: radio resource control signaling, MAC layer signaling, physical layer signaling, and AI paging.
  • the radio resource control signaling includes, for example, RRC signaling
  • the MAC layer signaling includes, for example, a MAC CE
  • the physical layer signaling includes, for example, DCI.
  • AI paging can be sent by a network node to trigger a specific terminal, or a terminal in a specific AI state, to perform an AI state transition.
  • the AI state of the terminal may be any of the following: AI-idle state, AI-activated state, AI-suspended state.
  • the preset state may be, for example, AI-activated state.
  • the naming of each AI state is only an example, and its naming does not limit the protection scope of the embodiments of the present application.
  • AI-idle state: The terminal has not established a connection with the AI node, and there is no AI model locally. If the terminal is in the AI-idle state, the terminal can perform operations such as AI paging monitoring, AI node selection, and AI connection establishment.
  • In this case, the network node may first trigger the terminal to switch to the AI-activated state, and then schedule the terminal to participate in the execution of the AI task.
  • the network node triggers the terminal to complete the AI state transition by means of signaling, such as sending a notification message to the terminal.
  • AI-activated state: The terminal has established an AI connection with the AI node. If the terminal is in the AI-activated state, the terminal can perform operations such as AI scheduling monitoring, executing AI tasks according to the schedule, and AI node selection.
  • the network node may schedule the terminal to participate in executing the AI task.
  • AI-suspended state: The terminal has not established an AI connection with the AI node, and an AI model is deployed locally. If the terminal is in the AI-suspended state, the terminal can perform operations such as AI paging monitoring, AI node selection, and AI connection establishment.
  • In this case, the network node may first trigger the terminal to switch to the AI-activated state, and then schedule the terminal to participate in the execution of the AI task.
  • the network node triggers the terminal to complete the AI state transition by means of signaling, such as sending a notification message to the terminal.
  • Alternatively, in step 510, the control node may send the AI task to the terminal.
  • the above description of the AI state of the terminal is only an exemplary description and is not limiting.
  • the AI state of the terminal may include other AI states in addition to the above AI-idle state, AI-activated state, and AI-suspended state.
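  • The AI states described above can be sketched as a simple state machine; the enum names mirror the AI-idle, AI-activated and AI-suspended states, and the notification handling is an illustrative assumption rather than a defined signalling procedure.

```python
from enum import Enum, auto

class AIState(Enum):
    AI_IDLE = auto()        # no AI connection established, no AI model locally
    AI_ACTIVATED = auto()   # AI connection established with the AI node
    AI_SUSPENDED = auto()   # no AI connection, but an AI model deployed locally

class Terminal:
    def __init__(self) -> None:
        self.state = AIState.AI_IDLE

    def on_notification(self, target_state: AIState) -> None:
        """A notification message (e.g. RRC signalling, MAC CE, DCI or AI paging)
        tells the terminal to switch to the indicated AI state."""
        self.state = target_state

    def can_be_scheduled(self) -> bool:
        # only a terminal in the preset (AI-activated) state executes the AI task
        return self.state == AIState.AI_ACTIVATED

ue = Terminal()
ue.on_notification(AIState.AI_ACTIVATED)   # the network node triggers the transition
assert ue.can_be_scheduled()
```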
  • method 500 can be used alone or in combination with the foregoing method 300 or method 400, without limitation thereto.
  • method 500 may be used in combination with method 300.
  • the first network node sends a first task or a part of the first task to at least one terminal, and the state of the at least one terminal is a preset state. Based on this, if the state of the terminal is the preset state, the first network node sends the first task or a part of the first task to the terminal, and the terminal cooperates with the first network node to perform the first task.
  • method 500 may be used in combination with method 400.
  • the first network node sends the first task or part of the first task to at least one terminal, and the state of the at least one terminal is a preset state. Based on this, if the state of the terminal is the preset state, the first network node sends the first task or part of the first task to the terminal, and the terminal cooperates with the first network node to perform the first task.
  • the following takes the network node as RAN and the control node as AI-MF as an example, and exemplarily describes the embodiment of the present application in conjunction with Figures 6 to 9.
  • For the steps and terms involved, reference may be made to the above description.
  • FIG. 6 is a schematic flow chart of a method 600 for AI task indication provided according to an embodiment of the present application.
  • the method 600 may be used to implement a solution such as method 400.
  • the method 600 may be applicable to a scenario in which a terminal requests a model-related AI task from a RAN.
  • the method 600 may include the following steps.
  • the AI-MF maintains the AI capability of at least one RAN.
  • the AI capability of the RAN may include at least one of the following: the priority of the RAN, the computing power supported by the RAN, the hardware capability of the RAN, the AI tasks supported by the RAN (or the types of operations that the RAN can perform), the performance of the RAN local AI model, and the performance of the RAN local data set. Further optionally, if the AI capability of the RAN includes the AI tasks supported by the RAN, the AI capability of the RAN also includes parameters associated with the AI tasks supported by the RAN.
  • If the AI capability of the RAN includes the AI tasks supported by the RAN, and the AI tasks supported by the RAN include model training, then further optionally, the AI capability of the RAN includes parameters associated with the model training.
  • the parameters associated with the model training include at least one of the following: model structure, training set, and available computing power.
  • If the AI capability of the RAN includes the AI tasks supported by the RAN, and the AI tasks supported by the RAN include model fusion, then further optionally, the AI capability of the RAN includes parameters associated with the model fusion.
  • the parameters associated with the model fusion include at least one of the following: model fusion strategy, model structure supporting fusion, and local knowledge base information.
  • If the AI capability of the RAN includes an AI task supported by the RAN, and the AI task supported by the RAN includes a model test, then further optionally, the AI capability of the RAN includes parameters associated with the model test.
  • the parameters associated with the model test include at least one of the following: model test capability, test set.
  • the AI capability of the RAN may exist in the form of a table, a function, or a string, such as for storage or transmission.
  • the following Table 3 is an example of presenting the AI capability of the RAN in table form.
  • the AI tasks supported by RAN#1 include Task A, and when the local model of RAN#1 executes Task A, the accuracy is value 1 and the timeliness is value 2.
  • Accuracy can represent the performance of the model when executing several tasks.
  • Timeliness can represent the generation time of the model.
  • the AI tasks supported by the RAN can be represented by at least one bit.
  • the AI tasks related to the model include: model training tasks, model testing tasks, and model fusion tasks, and 2 bits are used to indicate the AI tasks supported by the RAN. If the bit is set to "00", it means that the AI tasks supported by the RAN are model training tasks; if the bit is set to "01”, it means that the AI tasks supported by the RAN are model testing tasks; if the bit is set to "10", it means that the AI tasks supported by the RAN are model fusion tasks. It should be understood that the above is only an exemplary description and is not limiting.
  • the AI tasks supported by the RAN can be represented by a bitmap.
  • the AI tasks related to the model include: model training tasks, model testing tasks, and model fusion tasks, and the bit value is "1" to indicate support, and the bit value is "0" to indicate non-support.
  • the AI tasks supported by the RAN are represented as "110”
  • the three bits in "110” correspond to the model training tasks, model testing tasks, and model fusion tasks, respectively, so "110" indicates that the RAN supports model training tasks and model testing tasks, and does not support model fusion tasks.
  • the AI tasks supported by the RAN are represented as "101”
  • the three bits in "101” correspond to model training tasks, model testing tasks, and model fusion tasks, respectively, so "101" indicates that the RAN supports model training tasks and model fusion tasks, and does not support model testing tasks.
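  • A minimal sketch of the bitmap representation of the AI tasks supported by the RAN, using the bit order model training / model test / model fusion assumed in the examples above.

```python
TASKS = ["model_training", "model_test", "model_fusion"]  # bit order used above

def encode_supported_tasks(supported) -> str:
    """Bitmap form: '1' in a position means the RAN supports the corresponding task."""
    return "".join("1" if t in supported else "0" for t in TASKS)

def decode_supported_tasks(bits: str) -> set:
    return {t for t, b in zip(TASKS, bits) if b == "1"}

print(encode_supported_tasks({"model_training", "model_test"}))  # '110'
print(decode_supported_tasks("101"))  # {'model_training', 'model_fusion'} (set order may vary)
```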
  • Table 3 is only an exemplary description and is not limited thereto. Any variation of Table 3 is applicable to the present application.
  • Table 3 may also include a greater number of RANs.
  • For another example, RAN#1 and RAN#2 in Table 3 may support a greater number of AI tasks.
  • Table 3 may also include a greater number of parameters characterizing the performance of the local AI model.
  • model-related AI tasks are mainly used as examples for illustrative explanation. Therefore, the above-mentioned AI capabilities of RAN mainly introduce model-related capabilities and are not limited to this.
  • the terminal publishes an AI task to the first RAN
  • the AI task published by the terminal to the first RAN is a model training task.
  • the terminal sends relevant information of the initial model to the first RAN.
  • the terminal performs an encapsulation operation on the initial model, and carries relevant information of the initial model in the encapsulation.
  • the terminal may also perform a segmentation operation on the initial model, so as to facilitate the first RAN to correctly restore the initial model.
  • the initial model is the model to be trained.
  • the relevant information of the initial model may include at least one of the following: parameter set of the initial model, current state information, target state information, region information, and version of the initial model.
  • Current state information: can be used to describe the state of the model when it is generated at the current node.
  • the current state information can include at least one of the following information: accuracy and timeliness.
  • For operations involving model structure changes, such as model compression or model distillation, a description of the model structure can also be added to the state information.
  • the initial model may not carry the current state information.
  • Target state information: the target state information of the initial model can be used to describe the final state of the model, or the state of the model when it stops flowing in the network.
  • the target state information includes at least one of the following information: accuracy and timeliness.
  • For operations involving model structure changes, such as model compression or model distillation, a description of the model structure can also be added to the state information.
  • Regional information: used to assist the current node in deciding which other nodes to collaborate with to perform the model training task.
  • the regional information in the relevant information of the initial model can be used to assist the first RAN node in deciding the RAN to collaborate with to perform the model training task.
  • the parameter set of the initial model may include the training weights of the neural network corresponding to the initial model.
  • the version of the initial model indicates that the initial model provided by the terminal has performed t1 model trainings, and t1 is an integer greater than or equal to 0. For example, if the version of the initial model is 0, it indicates that the initial model provided by the terminal has not yet been trained. For another example, if the version of the initial model is 1, it indicates that the initial model provided by the terminal has performed 1 model training (such as the terminal has performed model training once).
  • the relevant information of the initial model may also include: the structure of the neural network corresponding to the initial model, the calculation rules of the initial model parameters, the number of the initial model, etc.
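  • The relevant information of the initial model listed above can be sketched as a simple record plus a hypothetical segmentation step; the field names and the fixed-size chunking are illustrative assumptions, not a defined message format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class InitialModelInfo:
    parameter_set: List[float]            # e.g. training weights of the neural network
    target_state: dict                    # e.g. {"accuracy": ..., "timeliness": ...}
    current_state: Optional[dict] = None  # may be absent for an untrained initial model
    region_info: Optional[dict] = None    # assists in choosing collaborating nodes
    version: int = 0                      # number of model trainings performed so far

def segment(weights: List[float], chunk: int) -> List[List[float]]:
    """Hypothetical segmentation so that the receiving RAN can restore the model."""
    return [weights[i:i + chunk] for i in range(0, len(weights), chunk)]

info = InitialModelInfo(parameter_set=[0.1] * 8, target_state={"accuracy": 0.95}, version=0)
segments = segment(info.parameter_set, chunk=4)   # two segments of four weights each
```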
  • In step 602, the following implementations may be included.
  • the terminal sends the relevant information of the initial model to the first RAN, and the relevant information of the initial model may implicitly indicate that the first RAN needs to perform model training on the initial model.
  • the relevant information of the initial model includes current state information and target state information, and the first RAN determines to perform model training on the initial model based on the inconsistency between the current state information and the target state information.
  • Alternatively, in step 602, the terminal sends indication information and relevant information of the initial model to the first RAN, where the indication information indicates to perform model training on the initial model.
  • the first RAN performs model training on the initial model to obtain a first model.
  • the first RAN may perform a model training task based on the initial model provided by the terminal in step 602.
  • a model obtained by the first RAN performing model training on the initial model is recorded as a first model.
  • Assume that the first RAN cannot complete the model training task alone, that is, the state of the first model obtained by the first RAN through model training of the initial model does not meet the target state required by the terminal, so the first RAN can complete the model training task with the cooperation of other RANs. It is assumed that the RAN determined by the first RAN to cooperate in executing the model training task is the second RAN.
  • the first RAN may also schedule terminals in the cell to participate in operations, such as participating in model training of the initial model.
  • the first RAN obtains the AI capability of the second RAN from the AI-MF.
  • the first RAN may acquire the AI capability of the second RAN from the AI-MF.
  • For the first RAN and the second RAN, reference may be made to the description regarding the first network node and the second network node in the foregoing method 400, which will not be repeated herein.
  • For the manner in which the first RAN acquires the AI capability of the second RAN from the AI-MF, reference may likewise be made to the foregoing description.
  • the embodiment of the present application is mainly described by taking one second RAN as an example, and the number of the second RANs is not limited.
  • the first RAN can obtain the AI capability of at least one second RAN from the AI-MF.
  • step 604 is an exemplary description, and the embodiments of the present application are not limited thereto.
  • the first RAN may also obtain the AI capability of the second RAN from the second RAN.
  • For this, reference may be made to the description of the first network node obtaining the AI capability of the second network node from the second network node in the previous method 400, which will not be repeated here.
  • the first RAN sends relevant information of the first model to the second RAN.
  • the first RAN may encapsulate the first model, and carry relevant information of the first model in the encapsulation.
  • the first RAN may also segment the first model, so as to facilitate the second RAN to correctly restore the first model.
  • the first model is a model obtained after the first RAN performs model training.
  • the relevant information of the first model may include at least one of the following: a parameter set of the first model, current state information, target state information, region information, and a version of the first model.
  • the current state information, region information, and the version of the first model are briefly introduced below. For other information not introduced in detail, please refer to the relevant description in step 602.
  • The current state information is used to describe the state of the model when it is generated at the current node. Therefore, the current state information provided by the first RAN to the second RAN here represents the current state information of the first model, which is used to describe the state of the first model when it is generated in the first RAN.
  • Regional information: as mentioned above, regional information is used to assist the current node in deciding which other nodes to collaborate with to perform the model training task. Therefore, the regional information in the relevant information of the first model here can be used to assist the second RAN in deciding which RAN to collaborate with to perform the model training task.
  • the region information in the relevant information of the first model and the region information in the relevant information of the initial model in step 602 may be the same as or different from each other.
  • For example, the region information in the relevant information of the first model may be the same as the region information in the relevant information of the initial model, for example, both being the area in which the terminal can receive signals.
  • Alternatively, the region information in the relevant information of the first model may differ from the region information in the relevant information of the initial model; for example, the former is information about the coverage area of the low-frequency base station, while the latter is information about the coverage area of the high-frequency base station.
  • The version of the first model indicates that the first model provided by the first RAN has been trained t2 times, where t2 is an integer greater than or equal to 1.
  • the version of the first model is 1, indicating that the first model provided by the first RAN has been trained once.
  • the first model is a model that has been trained once on the initial model, or the first RAN is the RAN that performs model training on the initial model for the first time.
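  • Purely as an illustrative sketch of how the relevant information of the first model described above could be organized, the following example groups the parameter set, current state, target state, region information, and version into one record; the class and field names (ModelInfo, StateInfo, etc.) are hypothetical and not defined by this application.
```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class StateInfo:
    """Hypothetical state description of a model, e.g. accuracy and loss."""
    accuracy: float
    loss: float

@dataclass
class ModelInfo:
    """Illustrative layout of the relevant information of the first model."""
    parameters: Dict[str, List[float]]  # parameter set of the first model
    current_state: Optional[StateInfo]  # state of the first model when generated at the first RAN
    target_state: StateInfo             # state at which the model stops flowing in the network
    region_info: str                    # assists the next node in choosing cooperating RANs
    version: int                        # number of training rounds already performed (t2 >= 1)

# Example: the first RAN has trained the initial model once, so the version is 1.
first_model_info = ModelInfo(
    parameters={"layer0.weight": [0.1, -0.2, 0.3]},
    current_state=StateInfo(accuracy=0.72, loss=0.90),
    target_state=StateInfo(accuracy=0.95, loss=0.10),
    region_info="coverage area of the low-frequency base station",
    version=1,
)
```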
  • the first RAN may send the relevant information of the first model to the second RAN in the following two ways:
  • In the first way, when the first RAN determines that the second RAN can perform the model training task, the first RAN sends the relevant information of the first model to the second RAN. For example, the first RAN determines that the second RAN can perform the model training task based on the AI capability of the second RAN obtained in step 604, for example, the AI capability of the second RAN includes the AI tasks supported by the second RAN, and the AI tasks supported by the second RAN include the model training task, so the first RAN sends the relevant information of the first model to the second RAN.
  • In the second way, after the first RAN performs model training on the initial model, the first RAN directly sends the relevant information of the first model to the second RAN.
  • the first RAN may default or assume that the second RAN can perform the model training task, and therefore directly sends the relevant information of the first model to the second RAN after the initial model is trained.
  • the first RAN sends relevant information of the first model to the second RAN, including: when the first RAN determines that the second RAN agrees to collaborate in performing the model training task, the first RAN sends relevant information of the first model to the second RAN.
  • method 600 further includes: the first RAN requests the second RAN to perform the model training task. After obtaining confirmation from the second RAN, that is, agreeing to cooperate with the first RAN to perform the model training task, the first RAN sends relevant information of the first model to the second RAN.
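  • The two ways above, optionally preceded by the request/confirmation exchange of this step, can be summarized with a small decision sketch; the function and parameter names are ours, and the capability format is an assumption rather than something specified by this application.
```python
def should_send_model_info(second_ran_capability=None, assume_capable=False,
                           cooperation_confirmed=True):
    """Decide whether the first RAN sends the relevant information of the first model."""
    if not cooperation_confirmed:
        # the optional request for cooperation was not confirmed by the second RAN
        return False
    if assume_capable:
        # way 2: send directly after training, assuming by default that the second RAN can help
        return True
    # way 1: send only if the AI capability obtained in step 604 includes the model training task
    return (second_ran_capability is not None
            and "model training" in second_ran_capability.get("supported_tasks", []))

print(should_send_model_info({"supported_tasks": ["model training"]}))  # True (way 1)
print(should_send_model_info(assume_capable=True))                      # True (way 2)
```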
  • the second RAN performs model training on the first model to obtain a second model.
  • the second RAN may perform a model training task based on the first model generated by the first RAN.
  • the model obtained by the second RAN performing model training on the first model is recorded as the second model.
  • If the model reaches the target state, the method 600 further includes step 607.
  • If the model does not yet reach the target state, the second RAN can obtain the AI capability of a third RAN and send the relevant information of the second model to the third RAN, and so on, until the model reaches the target state, and the finally generated model is sent to the terminal.
  • the second RAN may also schedule terminals in the cell to participate in operations, such as participating in model training of the first model.
  • the second RAN sends the second model to the terminal.
  • If the model reaches the target state, that is, the state of the second model meets the target state, then in one possible implementation manner the second RAN sends the second model to the terminal; or, in another possible implementation manner, the second RAN sends the second model to the first RAN, and the first RAN forwards the second model to the terminal, without limitation.
  • the terminal sends the relevant information of the initial model to the first RAN (such as the RAN numbered 1), the first RAN performs model training on the initial model to obtain the first model, and sends the relevant information of the first model to the second RAN (such as the RAN numbered 2), such as including: regional information, current model state (that is, state description information of the first model), target model state (that is, target state information), and model version 1; the second RAN performs model training on the first model to obtain the second model, and sends the relevant information of the second model to the third RAN (such as the RAN numbered 3), such as including: regional information, current model state (that is, state description information of the second model), target model state (that is, target state information), model version 2; and so on, until the current model state reaches the target model state.
  • model version 1 means that the model provided by the first RAN is the model obtained by performing the first model training on the initial model, or the first RAN is the RAN that performs the model training on the initial model for the first time.
  • Model version 2 means that the model provided by the second RAN is the model obtained by performing the second model training on the initial model, or the second RAN is the RAN that performs the model training on the initial model for the second time.
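  • The relay illustrated above, in which each RAN trains the model received from the previous RAN, increments the version, and forwards the relevant information until the current model state reaches the target model state, might be sketched as follows; SimRAN and its fixed accuracy gain are toy stand-ins for RAN-local training, which this application does not specify.
```python
class SimRAN:
    """Toy stand-in for a RAN that improves the model's accuracy a little per training round."""
    def __init__(self, name, gain):
        self.name, self.gain = name, gain

    def train_locally(self, model, state):
        return model, {"accuracy": min(1.0, state["accuracy"] + self.gain)}

def relay_training(model, state, target, rans):
    """Each RAN trains the model from the previous node and forwards it (cf. FIG. 7)."""
    version = 0
    for ran in rans:
        model, state = ran.train_locally(model, state)
        version += 1  # model version after this round: 1 for the first RAN, 2 for the second, ...
        if state["accuracy"] >= target["accuracy"]:
            break     # target model state reached: the final model is sent to the terminal
        # otherwise forward {current model state, target model state, region info, version}
        # to the next cooperating RAN
    return model, state, version

model, state, version = relay_training(
    model={"weights": [0.0]},
    state={"accuracy": 0.6},
    target={"accuracy": 0.9},
    rans=[SimRAN("RAN#1", 0.2), SimRAN("RAN#2", 0.2), SimRAN("RAN#3", 0.2)],
)
print(version, state)  # 2 {'accuracy': 1.0}: the third RAN is not needed in this toy run
```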
  • method 600 is mainly illustrated by taking the model training task as an example. It can be understood that the above-mentioned model training task can be replaced by any other model-related tasks.
  • For the execution order of step 602 and step 604: step 604 can be executed first and then step 602, or step 602 can be executed first and then step 604, or the two steps can be performed simultaneously, without limitation.
  • The above is mainly described by taking the case in which one RAN determines the next cooperating RAN as an example, and is not limited thereto.
  • one RAN can determine multiple cooperative RANs, and the multiple cooperative RANs collaborate to perform AI tasks.
  • The above describes, by way of example, the scenario in which a terminal requests a model-related AI task from a RAN.
  • multiple RANs can collaborate to complete the AI task requested by the terminal.
  • each RAN can collaborate to perform the AI task based on the relevant information of the model received from the previous RAN.
  • FIG. 8 is a schematic flow chart of a method 800 for AI task indication provided according to another embodiment of the present application.
  • the method 800 may be used to implement a solution such as method 400.
  • the method 800 may be applicable to a scenario in which a terminal requests an AI task related to a data set from a RAN.
  • the method 800 may include the following steps.
  • the AI-MF maintains the AI capability of at least one RAN.
  • the AI capability of the RAN may include at least one of the following: the priority of the RAN, the computing power supported by the RAN, the hardware capability of the RAN, the AI tasks supported by the RAN (or the types of operations that the RAN can perform), the performance of the RAN local AI model, and the performance of the RAN local data set. Further optionally, if the AI capability of the RAN includes the AI tasks supported by the RAN, the AI capability of the RAN also includes parameters associated with the AI tasks supported by the RAN.
  • For example, if the AI capability of the RAN includes the AI tasks supported by the RAN, and the AI tasks supported by the RAN include a data cleaning operation, then, further optionally, the AI capability of the RAN includes parameters associated with the data cleaning operation.
  • the parameters associated with the data cleaning operation include at least one of the following: data supplementation of a specific attribute, redundancy identification, authenticity verification, etc.
  • Similarly, if the AI tasks supported by the RAN include a data augmentation operation, then, further optionally, the AI capability of the RAN includes parameters associated with the data augmentation operation.
  • the parameters associated with the data augmentation operation include supported augmentation strategies, such as data augmentation for a single data source (single sample augmentation, multi-sample augmentation, generative adversarial networks (GAN) generation, automatic augmentation, etc.), data integration for multiple data sources, etc.
  • the AI capability of the RAN includes an AI task supported by the RAN, and the AI task supported by the RAN includes a data reduction operation, then further optionally, the AI capability of the RAN includes parameters of the data reduction operation.
  • the parameters associated with the data reduction operation include the reduction strategy adopted, such as dimension reduction and dimension transformation for a specific task.
  • the AI capability of the RAN may exist in the form of a table, a function, or a string, such as for storage or transmission.
  • the following Table 4 is an example of presenting the AI capability of the RAN in table form.
  • The difference is that the tables described earlier mainly use model-related AI tasks as examples, while Table 4 mainly uses dataset-related AI tasks as examples.
  • the AI tasks supported by RAN#1 include Task A, and the accuracy of Task A is value 1, and the timeliness is value 2.
  • accuracy can represent the performance of the data set under the test model, and timeliness can represent the generation time of the data set.
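  • Since the body of Table 4 is not reproduced here, the following is only an illustrative guess at how such a capability record could look; the RAN identifiers, task names, and the accuracy/timeliness values are invented for the example and carry no meaning beyond it.
```python
# Hypothetical capability records in the spirit of Table 4 (values are invented).
ran_ai_capability = {
    "RAN#1": {
        "supported_tasks": {
            "Task A (data augmentation)": {
                "accuracy": 0.91,         # value 1: performance of the local data set under a test model
                "timeliness": "2022-10",  # value 2: generation time of the local data set
            },
        },
    },
    "RAN#2": {
        "supported_tasks": {
            "data cleaning":  {"data_supplement": True, "redundancy_identification": True},
            "data reduction": {"strategy": "dimension reduction for a specific task"},
        },
    },
}
print(list(ran_ai_capability["RAN#1"]["supported_tasks"]))
```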
  • the AI tasks supported by the RAN can be represented by at least one bit.
  • For example, the AI tasks related to the data set include: data cleaning, data augmentation, data reduction, and data conversion, and 2 bits are used to indicate the AI task supported by the RAN. If the bits are set to "00", the AI task supported by the RAN is data cleaning; if set to "01", it is data augmentation; if set to "10", it is data reduction; if set to "11", it is data conversion. It should be understood that the above is only an exemplary description and is not limiting.
  • the AI tasks supported by the RAN can be represented by a bitmap.
  • the AI tasks related to the data set include: data cleaning, data augmentation, data reduction, and data conversion, and the bit value "1" indicates support, and the bit value "0" indicates non-support.
  • the AI tasks supported by the RAN are represented as "0110”
  • the 4 bits in "0110” correspond to data cleaning, data augmentation, data reduction, and data conversion, respectively, so "0110" indicates that the RAN supports data augmentation and data reduction, and does not support data cleaning and data conversion.
  • the 4 bits in “1011” correspond to data cleaning, data augmentation, data reduction, and data conversion, respectively, so “1011” indicates that the RAN supports data cleaning, data reduction, and data conversion, and does not support data augmentation. It can be understood that the above examples are exemplary descriptions, and the embodiments of the present application are not limited to this.
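  • The bit and bitmap representations above can be sketched directly; the helper names are ours, and the task order (data cleaning, data augmentation, data reduction, data conversion) follows the example in the text.
```python
TASKS = ["data cleaning", "data augmentation", "data reduction", "data conversion"]

def encode_supported_tasks(supported):
    """Bitmap: one bit per task, '1' for supported and '0' for not supported."""
    return "".join("1" if task in supported else "0" for task in TASKS)

def decode_supported_tasks(bitmap):
    return [task for task, bit in zip(TASKS, bitmap) if bit == "1"]

print(encode_supported_tasks({"data augmentation", "data reduction"}))   # '0110'
print(decode_supported_tasks("1011"))  # ['data cleaning', 'data reduction', 'data conversion']
```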
  • Table 4 is only an exemplary description and is not limited thereto. Any variation of Table 4 is applicable to the present application.
  • Table 4 may also include a greater number of RANs.
  • RAN#1 and RAN#2 support different AI tasks, or RAN#1 and RAN#2 support a greater number of AI tasks.
  • Table 4 may also include a greater number of parameters characterizing the performance of a local data set.
  • AI tasks related to data sets are mainly used as examples for illustrative description. Therefore, the above-mentioned AI capabilities of RAN mainly introduce capabilities related to data sets, and are not limited to this.
  • The terminal issues an AI task to the first RAN; in this example, the AI task issued by the terminal to the first RAN is data augmentation.
  • the terminal sends relevant information of the initial data set to the first RAN.
  • the initial data set is the data set for the data augmentation task to be performed.
  • the relevant information of the initial data set may include at least one of the following: current state information, target state information, region information, and version of the initial data set.
  • Current state information can be used to describe the state of the data set when it is generated at the current node.
  • the current state information can include at least one of the following information: accuracy, timeliness, composition, and attributes.
  • Accuracy can characterize the performance of the data set under several test models.
  • Timeliness can characterize the generation time of the data set.
  • Composition can characterize the composition of the data contained in the data set.
  • Attributes can characterize the type, quantification, dimension, etc. of the data contained in the data set.
  • the initial data set may not carry the current state information.
  • Target state information, or the target state information of the initial data set, can be used to describe the final state of the data set, or the state of the data set when it stops flowing in the network.
  • the target state information includes at least one of the following information: accuracy, timeliness, composition, and attributes. For each information, please refer to the previous description and will not be repeated here.
  • Region information is used to assist the current node in deciding which other nodes to collaborate with to perform the data augmentation task. For example, the region information in the relevant information of the initial data set can be used to assist the first RAN in deciding which RAN to collaborate with to perform the data augmentation task.
  • the version of the initial data set indicates that the initial data set provided by the terminal has been augmented t1 times, and t1 is an integer greater than or equal to 0.
  • Step 802 may be implemented in the following ways.
  • In one way, the terminal sends the relevant information of the initial data set to the first RAN, and the relevant information of the initial data set may implicitly indicate that the first RAN needs to perform a data augmentation operation on the initial data set.
  • For example, the relevant information of the initial data set includes current state information and target state information, and the first RAN determines to perform a data augmentation operation on the initial data set based on the inconsistency between the current state information and the target state information.
  • In another way, in step 802 the terminal sends indication information and the relevant information of the initial data set to the first RAN, where the indication information indicates that data augmentation is to be performed on the initial data set. The two ways are contrasted in the sketch below.
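  • A minimal sketch of the two ways of step 802, assuming hypothetical field names for the relevant information of the initial data set: in the implicit way the first RAN infers the augmentation task from the gap between the current state and the target state, and in the explicit way it follows the indication information.
```python
def needs_augmentation(dataset_info, indication=None):
    """Decide whether the first RAN should perform data augmentation on the initial data set."""
    if indication is not None:
        return indication == "data augmentation"      # explicit indication information
    current = dataset_info.get("current_state")
    target = dataset_info.get("target_state")
    return current is not None and current != target  # implicit: states are inconsistent

initial_info = {
    "current_state": {"composition": 1_000, "accuracy": 0.70},
    "target_state": {"composition": 10_000, "accuracy": 0.85},
    "region_info": "area in which the terminal can receive signals",
    "version": 0,  # t1 = 0: the initial data set has not been augmented yet
}
print(needs_augmentation(initial_info))                                  # True (implicit)
print(needs_augmentation(initial_info, indication="data augmentation"))  # True (explicit)
```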
  • the first RAN performs data amplification on the initial data set to obtain a first data set.
  • The first RAN may perform a data augmentation task based on the initial data set provided by the terminal in step 802. For distinction, the data set obtained by the first RAN performing data augmentation on the initial data set is recorded as the first data set.
  • the first RAN cannot complete the data augmentation task alone, that is, the state of the first data set obtained by the first RAN through data augmentation of the initial data set does not meet the target state required by the terminal, so the first RAN can complete the data augmentation task with the cooperation of other RANs. It is assumed that the RAN determined by the first RAN to collaborate in performing the data augmentation task is the second RAN.
  • the first RAN may also schedule terminals in the cell to participate in the operation, such as participating in data augmentation of the initial data set.
  • For details, reference may be made to the relevant description in method 500, which will not be repeated here.
  • the first RAN obtains the AI capability of the second RAN from the AI-MF.
  • Regarding the first RAN and the second RAN, and the manner in which the first RAN obtains the AI capability of the second RAN from the AI-MF, reference may be made to the relevant description in step 601, which will not be repeated here.
  • the first RAN sends relevant information of the first data set to the second RAN.
  • the first data set is a data set obtained after the first RAN performs data set amplification.
  • the relevant information of the first data set may include at least one of the following: current state information, target state information, region information, and version of the first data set.
  • The current state information, region information, and version of the first data set are briefly introduced below. For other information not introduced in detail, please refer to the relevant description in step 802.
  • the current state information is used to describe the state of the data set when it is generated at the current node. Therefore, the current state information provided by the first RAN to the second RAN here represents the current state information of the first data set, which is used to describe the state of the first data set when it is generated by the first RAN.
  • Region information: as mentioned above, the region information is used to assist the current node in deciding which other nodes to collaborate with to perform the data augmentation task. Therefore, the region information in the relevant information of the first data set here can be used to assist the second RAN in deciding which RAN to collaborate with to perform the data augmentation task.
  • the regional information can refer to the regional information in step 605, which will not be repeated here.
  • The version of the first data set indicates that the first data set provided by the first RAN has been augmented t2 times, where t2 is an integer greater than or equal to 1.
  • the version of the first data set is 1, indicating that the first data set provided by the first RAN has been augmented once.
  • the first data set is a data set that has been augmented once on the initial data set, or the first RAN is the RAN that performs data augmentation on the initial data set for the first time.
  • For the manner in which the first RAN sends the relevant information of the first data set to the second RAN, reference may be made to the description of the first RAN sending the relevant information of the first model in step 605, which will not be described again herein.
  • the second RAN performs data amplification on the first data set to obtain a second data set.
  • the second RAN may perform a data augmentation task based on the first data set generated by the first RAN.
  • the data set obtained by the second RAN performing data augmentation on the first data set is recorded as the second data set.
  • If the data set reaches the target state, that is, the state of the second data set meets the target state, the method 800 further includes step 807.
  • If the data set does not yet reach the target state, the second RAN can obtain the AI capability of a third RAN and send the relevant information of the second data set to the third RAN, and so on, until the data set reaches the target state, and the finally generated data set is sent to the terminal.
  • the second RAN may also schedule terminals in the cell to participate in the operation, such as participating in data augmentation of the first data set.
  • For details, reference may be made to the relevant description in method 500, which will not be repeated here.
  • the second RAN sends a second data set to the terminal.
  • In one possible implementation manner, the second RAN sends the second data set to the terminal; or, in another possible implementation manner, the second RAN sends the second data set to the first RAN, and the first RAN forwards the second data set to the terminal, and there is no limitation on this.
  • method 800 is mainly illustrated by taking a dataset-related task as an example. It can be understood that the above-mentioned data augmentation task can be replaced by any other dataset-related task.
  • For the execution order of step 802 and step 804: step 804 can be executed first and then step 802, or step 802 can be executed first and then step 804, or the two steps can be performed simultaneously, without limitation.
  • one RAN can determine multiple cooperative RANs, and the multiple cooperative RANs cooperate to perform AI tasks.
  • the above text introduces the scenario of a terminal requesting an AI task related to a data set from a RAN by way of example in conjunction with FIG8.
  • multiple RANs may collaborate to complete the AI task requested by the terminal.
  • each RAN may collaborate to perform the AI task based on the relevant information of the data set received from the previous RAN.
  • FIG. 9 is a schematic flow chart of a method 900 for AI task indication provided according to another embodiment of the present application.
  • the method 900 may be used to implement a solution such as method 300.
  • the method 900 may be applicable to a scenario in which a terminal initiates an AI task request.
  • the method 900 may include the following steps.
  • the AI-MF maintains the AI capability of at least one RAN.
  • Step 901 may refer to the description in step 601 or step 801 and will not be repeated here.
  • the terminal sends task request information to the AI-MF.
  • the task request information is used to request the execution of the AI task, in other words, to request the AI-MF to determine the scheduling information for executing the AI task.
  • the AI task requested to be executed by the terminal is recorded as AI task #1.
  • AI task #1 may include: AI tasks related to the model, AI tasks related to the data set, etc.
  • the terminal before the terminal sends the task request information to the AI-MF, the terminal establishes a connection with the AI-MF, and the terminal sends the task request information to the AI-MF based on the connection established with the AI-MF. In another possible implementation, the terminal sends the task request information to the AI-MF through other devices (such as RAN).
  • the AI-MF determines an orchestration table for the AI task #1 based on the AI capability of at least one RAN.
  • The orchestration table includes the orchestration information of N RANs, where N is an integer greater than or equal to 1. That is, in step 903, the AI-MF determines the orchestration information of the N RANs for AI task #1 based on the AI capability of at least one RAN.
  • For the orchestration information, and for the scheme in which the AI-MF determines the orchestration table for AI task #1 based on the AI capability of at least one RAN, reference may be made to the relevant description in method 300, which will not be repeated here.
  • the AI-MF sends an orchestration table or orchestration information to at least one RAN among the N RANs.
  • the AI-MF sends the scheduling table or scheduling information to at least one RAN among the N RANs, which may include the following implementation methods.
  • the AI-MF sends an orchestration table to each RAN in the N RANs.
  • the AI-MF sends an orchestration table to one RAN among the N RANs.
  • the AI-MF sends the orchestration information of each RAN to each RAN in the N RANs.
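  • As an illustrative sketch only, the orchestration table of step 903 could hold one entry per RAN (task, identifier, resources, exit condition, in line with the orchestration information fields mentioned elsewhere in this application), and step 904 then picks one of the three delivery options above; all structures, mode names, and values below are assumptions made for the example.
```python
orchestration_table = [
    {"ran_id": "RAN#1", "task": "data cleaning",     "resources": "2 CPU-hours", "exit_condition": "redundancy removed"},
    {"ran_id": "RAN#2", "task": "data augmentation", "resources": "4 GPU-hours", "exit_condition": "10k samples reached"},
]

def deliver(send, table, mode):
    """Three ways the AI-MF might deliver the orchestration result in step 904."""
    if mode == "table_to_all":        # send the whole orchestration table to each of the N RANs
        for entry in table:
            send(entry["ran_id"], table)
    elif mode == "table_to_one":      # send the table to one RAN, which forwards it to the others
        send(table[0]["ran_id"], table)
    elif mode == "entry_to_each":     # send each RAN only its own orchestration information
        for entry in table:
            send(entry["ran_id"], entry)

deliver(lambda ran, info: print(ran, "<-", info), orchestration_table, "entry_to_each")
```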
  • At least one RAN among the N RANs sends response information to the AI-MF.
  • the response information may be used to notify the AI-MF of successful reception of the scheduling information or scheduling table, or may be used to notify the AI-MF of whether the scheduling information or scheduling table is approved.
  • If, in step 904, the AI-MF sends the orchestration table to each of the N RANs, or the AI-MF sends the orchestration information of each RAN to each of the N RANs, then in step 905 the N RANs respectively send response information to the AI-MF.
  • If, in step 904, the AI-MF sends the orchestration table to one RAN (such as the first RAN) among the N RANs, then in step 905 that first RAN sends the response information to the AI-MF.
  • Method 900 is mainly described by taking the example in which each RAN agrees with its own orchestration information.
  • For the case in which a RAN disagrees with its orchestration information, reference may be made to the relevant description in method 300.
  • the AI-MF sends a response message of the task request message to the terminal.
  • the response information of the task request information can be used to notify the terminal that the scheduling table has been determined for the AI task #1 requested by the terminal, so that the terminal can provide an initial model or an initial data set to the RAN participating in the execution of AI task #1.
  • If the AI-MF fails to determine the orchestration table, for example, if none of the RANs whose AI capability is maintained by the AI-MF in step 901 supports AI task #1, the AI-MF can also send response information of the task request information to the terminal, where this response information is used to notify the terminal that an orchestration table cannot be provided for the AI task #1 requested by the terminal.
  • the AI-MF after receiving the response to the scheduling information, the AI-MF sends a response to the task request information to the terminal. In another possible implementation, after the AI-MF determines the scheduling table for the AI task #1, it sends a response to the task request information to the terminal.
  • the terminal sends AI task #1 to N RANs.
  • the terminal sends AI task #1 to the first RAN among N RANs.
  • The first RAN here refers to the first RAN among the N RANs to execute AI task #1.
  • For example, if AI task #1 is a model training task, the terminal sends the initial model to the first RAN among the N RANs.
  • For another example, if AI task #1 is a data set collection task, the terminal sends the attributes of the data set to be collected to the first RAN among the N RANs.
  • N RANs collaborate to perform AI task #1.
  • the collaborative execution of AI tasks includes: continuing based on the results of the AI task executed by the previous RAN, or each RAN simultaneously executing the tasks for which it is responsible.
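  • The two collaboration modes mentioned above can be contrasted with a short sketch; ran_execute is a placeholder for whatever processing a RAN actually performs on its part of AI task #1.
```python
from concurrent.futures import ThreadPoolExecutor

def ran_execute(ran_id, task, previous_result=None):
    """Placeholder for the part of AI task #1 that a single RAN is responsible for."""
    base = previous_result or 0
    return base + len(task)  # dummy processing result

def run_sequential(assignments):
    """Each RAN continues based on the result of the AI task executed by the previous RAN."""
    result = None
    for ran_id, task in assignments:
        result = ran_execute(ran_id, task, result)
    return result

def run_parallel(assignments):
    """Each RAN simultaneously executes the task for which it is responsible."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(ran_execute, ran_id, task) for ran_id, task in assignments]
        return [f.result() for f in futures]

assignments = [("RAN#1", "clean"), ("RAN#2", "augment"), ("RAN#3", "train")]
print(run_sequential(assignments))  # 17
print(run_parallel(assignments))    # [5, 7, 5]
```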
  • During the execution of AI task #1, a RAN may also schedule terminals in its cell to participate in the operation.
  • For details, reference may be made to the relevant description in method 500, which will not be repeated here.
  • RAN sends the processing result of AI task #1 to the terminal.
  • the RAN in step 909 may be any RAN among the N RANs.
  • the RAN in step 909 may be the last RAN participating in executing AI task #1, or may be the first RAN participating in executing AI task #1, and there is no limitation on this.
  • The above describes, by way of example in conjunction with Figure 9, the scenario in which the control node AI-MF determines the orchestration table for the AI task.
  • In this solution, the control node determines the operations performed by each network node (RAN) for the AI task, which can improve global efficiency.
  • each RAN may execute part of the AI task, thereby completing the AI task together.
  • The above is mainly described by taking the case in which multiple RANs sequentially execute the AI task requested by the terminal as an example, and is not limited thereto.
  • For example, the AI-MF determines the tasks that each RAN is responsible for, and each RAN can simultaneously or synchronously execute the tasks that it is responsible for.
  • In the embodiments of the present application, sending a message is mentioned multiple times.
  • A sending a message to B may include A sending the message directly to B, or A sending the message to B through other devices, which is not limited.
  • The methods described above may be performed by a device (such as a terminal, a control node, or a network node), or by a component of the device (such as a chip or a circuit).
  • Correspondingly, the embodiments of the present application also provide corresponding devices, which include modules for executing the corresponding methods in the above-mentioned method embodiments.
  • the module can be software, hardware, or a combination of software and hardware. It can be understood that the technical features described in the above-mentioned method embodiments are also applicable to the following device embodiments.
  • FIG. 10 is a schematic block diagram of a communication device 1000 provided in an embodiment of the present application.
  • the device 1000 includes a transceiver unit 1010 and a processing unit 1020.
  • the transceiver unit 1010 can be used to implement corresponding communication functions.
  • the transceiver unit 1010 can also be referred to as a communication interface or a communication unit.
  • the processing unit 1020 can be used to implement corresponding processing functions, such as determining scheduling information, and executing AI tasks.
  • the device 1000 is used to execute the steps or processes executed by the control node in the embodiment shown in FIG. 3 , and the steps or processes executed by the AI-MF in the embodiment shown in FIG. 9 .
  • the processing unit 1020 is used to determine first orchestration information for the AI task, where the first orchestration information indicates that the first network node performs the first task of the AI task; and the transceiver unit 1010 is used to send the first orchestration information to the first network node.
  • the processing unit 1020 is further used to determine second orchestration information for the AI task, the second orchestration information instructing the second network node to perform a second task of the AI task; the transceiver unit 1010 is further used to send the second orchestration information to the first network node, or to send the second orchestration information to the second network node.
  • the processing unit 1020 is further used to determine second orchestration information for the AI task, the second orchestration information indicating that the second network node performs a second task of the AI task; the transceiver unit 1010 is further used to send the first orchestration information and the second orchestration information to the second network node; the transceiver unit 1010 is used to send the first orchestration information to the first network node, including: the transceiver unit 1010 is used to send the first orchestration information and the second orchestration information to the first network node.
  • the first network node is the first network node that participates in executing the AI task.
  • the first orchestration information includes at least one of the following information: the first task, an identifier of the first network node, resources provided by the first network node for executing the first task, and an exit condition for the first network node to execute the first task.
  • the processing unit 1020 is used to determine the first orchestration information for the AI task, including: the processing unit 1020 is used to determine the first orchestration information for the AI task according to the AI capability of the first network node.
  • the transceiver unit 1010 is further configured to receive response information from the first network node, where the response information indicates whether the first network node agrees with the first orchestration information.
  • the device 1000 is used to execute the steps or processes executed by the network node in the embodiment shown in FIG. 3 , and the steps or processes executed by the RAN in the embodiment shown in FIG. 9 .
  • the transceiver unit 1010 is used to receive first orchestration information from a control node, where the first orchestration information indicates that a first network node performs a first task of an AI task; and the processing unit 1020 is used to execute the first task according to the first orchestration information.
  • the transceiver unit 1010 is used to receive first orchestration information from a control node, including: the transceiver unit 1010 is used to receive first orchestration information and second orchestration information from the control node, the second orchestration information indicating that the second network node performs a second task of the AI task; the transceiver unit 1010 is also used to send the second orchestration information to the second network node.
  • the transceiver unit 1010 is configured to send the second orchestration information to the second network node, including: the transceiver unit 1010 is configured to send the processing result of the first task and the second orchestration information to the second network node.
  • the first network node is the first network node that participates in executing the AI task.
  • the first orchestration information includes at least one of the following information: the first task, an identifier of the first network node, resources provided by the first network node for executing the first task, and an exit condition for the first network node to execute the first task.
  • the transceiver unit 1010 is further configured to send the AI capability of the first network node to the control node.
  • the transceiver unit 1010 is further configured to send response information to the control node, where the response information indicates whether the first network node agrees with the first orchestration information.
  • the transceiver unit 1010 is also used to send the first task or part of the first task to at least one terminal device; or, to send the first task or part of the first task to a second network node, where the second network node is at least one network node participating in executing the AI task.
  • At least one terminal device is in a preset state.
  • the transceiver unit 1010 is further configured to send notification information to at least one terminal device, where the notification information notifies that the at least one terminal device is adjusted to a preset state.
  • the apparatus 1000 is used to execute the steps or processes executed by the first network node in the embodiment shown in FIG. 4 , or the steps or processes executed by the first RAN in the embodiment shown in FIG. 6 or 8 .
  • the transceiver unit 1010 is used to send the processing result and target state information of the first task of the AI task to the second network node, where the target state information is used to indicate the target result of the AI task.
  • a transceiver unit 1010 is used to send the processing result and target state information of the first task of the AI task to the second network node, including: the transceiver unit 1010 is used to send the processing result and target state information of the first task of the AI task to the second network node based on the AI capability of the second network node.
  • the transceiver unit 1010 is further used to send a first request message to the control node or the second network node, wherein the first request message requests the AI capability of the second network node; and receive a response message to the first request message, wherein the response message to the first request message indicates the AI capability of the second network node.
  • the transceiver unit 1010 is further used to send a second request message to the second network node, where the second request message requests the second network node to collaborate in performing the AI task.
  • the processing result of the first task represents the current state information of the AI task.
  • the transceiver unit 1010 is further used to send area information to the second network node, and the area information is used by the second network node to determine the network node that collaborates to perform the AI task.
  • the transceiver unit 1010 is further configured to send the first task or a portion of the first task to at least one terminal device.
  • At least one terminal device is in a preset state.
  • the transceiver unit 1010 is further configured to send notification information to at least one terminal device, where the notification information notifies that the at least one terminal device is adjusted to a preset state.
  • the apparatus 1000 is used to execute the steps or processes executed by the second network node in the embodiment shown in FIG. 4 , or the steps or processes executed by the second RAN in the embodiment shown in FIG. 6 or 8 .
  • the transceiver unit 1010 is used to receive the processing result and target state information of the first task of the AI task from the first network node, where the target state information is used to indicate the target result of the AI task; the processing unit 1020 is used to execute the second task of the AI task based on the processing result and target state information of the first task.
  • the transceiver unit 1010 is further configured to send the AI capability of the second network node to the control node or the first network node.
  • the transceiver unit 1010 is further used to receive a second request message from the first network node, where the second request message requests the second network node to collaborate in performing the AI task.
  • the processing result of the first task represents the current state information of the AI task; the processing unit 1020 is used to execute the second task of the AI task based on the processing result of the first task and the target state information, including: the processing unit 1020 is used to execute the second task of the AI task based on the current state information and the target state information of the AI task.
  • the transceiver unit 1010 is further used to receive area information from the first network node, and the area information is used by the second network node to determine the network node that collaborates to perform the AI task.
  • the transceiver unit 1010 is further configured to send the second task or a portion of the second task to at least one terminal device.
  • At least one terminal device is in a preset state.
  • the transceiver unit 1010 is further configured to send notification information to at least one terminal device, where the notification information notifies that the at least one terminal device is adjusted to a preset state.
  • the device 1000 is used to execute the steps or processes executed by the network node in the embodiment shown in FIG. 5 .
  • the transceiver unit 1010 is configured to send an AI task to at least one terminal device, wherein the at least one terminal device is in a preset state.
  • the transceiver unit 1010 is further configured to send notification information to at least one terminal device, where the notification information notifies that the at least one terminal device is adjusted to a preset state.
  • the device 1000 is used to execute the steps or processes executed by the terminal in the embodiment shown in FIG. 5 .
  • the transceiver unit 1010 is used to receive an AI task from a network node, wherein the terminal device is in a preset state; and the processing unit 1020 is used to execute the AI task.
  • the transceiver unit 1010 is further configured to receive notification information from a network node, where the notification information notifies that the terminal device is adjusted to a preset state.
  • the apparatus 1000 herein is embodied in the form of a functional unit.
  • the term "unit” herein may refer to an application specific integrated circuit (ASIC), an electronic circuit, a processor (e.g., a shared processor, a dedicated processor, or a group processor, etc.) and a memory for executing one or more software or firmware programs, a combined logic circuit, and/or other suitable components that support the described functionality.
  • the product implementation form of the device 1000 provided in the embodiment of the present application is a program code that can be executed on a computer.
  • the apparatus 1000 provided in the embodiment of the present application may be a communication device, or a chip, a chip system (e.g., a system on chip (SoC)) or a circuit applied to a communication device.
  • the transceiver unit 1010 may be a transceiver, or an input/output interface;
  • the processing unit 1020 may be a processor.
  • the transceiver unit 1010 may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin or a related circuit on the chip, the chip system or the circuit;
  • the processing unit 1020 may be a processor, a processing circuit or a logic circuit, etc.
  • transceiver unit 1010 can also be a transceiver circuit (for example, can include a receiving circuit and a sending circuit), and the processing unit can be a processing circuit.
  • FIG. 11 is a schematic block diagram of a communication device 1100 provided in an embodiment of the present application.
  • the device 1100 includes a processor 1110, and the processor 1110 is coupled to a memory 1120.
  • a memory 1120 is further included, which is used to store computer programs or instructions and/or data, and the processor 1110 is used to execute the computer programs or instructions stored in the memory 1120, or read the data stored in the memory 1120, so as to execute the methods in the above method embodiments.
  • There are one or more processors 1110.
  • Optionally, there are one or more memories 1120.
  • the memory 1120 is integrated with the processor 1110 or provided separately.
  • the device 1100 further includes a transceiver 1130, and the transceiver 1130 is used for receiving and/or sending signals.
  • the processor 1110 is used for controlling the transceiver 1130 to receive and/or send signals.
  • the device 1100 is used to implement the operations performed by the control node in the above various method embodiments.
  • the processor 1110 is used to execute the computer program or instructions stored in the memory 1120 to implement the relevant operations of the control node in each method embodiment above.
  • the device 1100 is used to implement the operations performed by the network node in the above method embodiments.
  • the processor 1110 is used to execute the computer program or instructions stored in the memory 1120 to implement the relevant operations of the network node in each method embodiment above.
  • the device 1100 is used to implement the operations performed by the terminal in the above method embodiments.
  • the processor 1110 is used to execute the computer program or instructions stored in the memory 1120 to implement the relevant operations of the terminal in the above various method embodiments, for example, the method executed by the terminal in the embodiment shown in FIG. 5.
  • each step of the above method can be completed by the hardware integrated logic circuit in the processor 1110 or the instruction in the form of software.
  • the method disclosed in conjunction with the embodiment of the present application can be directly embodied as a hardware processor for execution, or a combination of hardware and software modules in the processor for execution.
  • the software module can be located in a storage medium mature in the art such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, a register, etc.
  • the storage medium is located in the memory 1120, and the processor 1110 reads the information in the memory 1120 and completes the steps of the above method in conjunction with its hardware. To avoid repetition, it is not described in detail here.
  • the processor may be one or more integrated circuits for executing related programs to execute the embodiments of the methods of the present application.
  • a processor may include one or more processors and be implemented as a combination of computing devices.
  • the processor may include one or more of the following: a microprocessor, a microcontroller, a digital signal processor (DSP), a digital signal processing device (DSPD), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic device (PLD), a gating logic, a transistor logic, a discrete hardware circuit, a processing circuit or other suitable hardware, firmware and/or a combination of hardware and software for performing the various functions described in the present disclosure.
  • the processor may be a general-purpose processor or a special-purpose processor.
  • processor 1110 may be a baseband processor or a central processing unit.
  • the baseband processor may be used to process communication protocols and communication data.
  • the central processing unit may be used to enable the device to execute a software program and process data in the software program.
  • a portion of the processor may also include a non-volatile random access memory.
  • the processor may also store information about the type of device.
  • Program in this application is used to refer to software in a broad sense.
  • Non-limiting examples of software include: program code, program, subroutine, instruction, instruction set, code, code segment, software module, application, or software application, etc.
  • The program can be run in a processor and/or a computer, so that the device performs various functions and/or processes described in this application.
  • the memory can store data required by the processor (e.g., processor 1110) when executing software.
  • the memory can be implemented using any suitable storage technology.
  • the memory can be any available storage medium that can be accessed by the processor and/or computer.
  • Non-limiting examples of storage media include random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous link dynamic random access memory (SLDRAM), and direct rambus RAM (DR RAM), removable media, optical disk storage, magnetic disk storage media, magnetic storage devices, flash memory, registers, state memory, remote mounted storage, local or remote storage components, or any other medium capable of carrying or storing software, data or information and accessible by a processor/computer. It should be noted that the memory described herein is intended to include, but is not limited to, these and any other suitable types of memory.
  • the memory e.g., memory 1120
  • the processor e.g., processor 1110
  • the memory may be used to connect to the processor so that the processor can read information from the memory and store and/or write information in the memory.
  • the memory may be integrated in the processor.
  • the memory and the processor may be provided in an integrated circuit (e.g., the integrated circuit may be provided in a UE or a BS or other network node).
  • FIG. 12 is a schematic block diagram of a chip system 1200 provided in an embodiment of the present application.
  • the chip system 1200 (or also referred to as a processing system) includes a logic circuit 1210 and an input/output interface 1220.
  • the logic circuit 1210 can be a processing circuit in the chip system 1200.
  • the logic circuit 1210 can be coupled to the storage unit and call the instructions in the storage unit so that the chip system 1200 can implement the methods and functions of each embodiment of the present application.
  • the input/output interface 1220 can be an input/output circuit in the chip system 1200, outputting information processed by the chip system 1200, or inputting data or signaling information to be processed into the chip system 1200 for processing.
  • the chip system 1200 is used to implement the operations performed by the control node in the above method embodiments.
  • the logic circuit 1210 is used to implement the processing-related operations performed by the control node in the above method embodiments, such as the processing-related operations performed by the control node in the embodiment shown in FIG. 3 , or the processing-related operations performed by the AI-MF in the embodiment shown in FIG. 9 ;
  • the input/output interface 1220 is used to implement the sending and/or receiving-related operations performed by the control node in the above method embodiments, such as the sending and/or receiving-related operations performed by the control node in the embodiment shown in FIG. 3 , or the sending and/or receiving-related operations performed by the AI-MF in the embodiment shown in FIG. 9 .
  • the chip system 1200 is used to implement the operations performed by the network node in the above method embodiments.
  • the logic circuit 1210 is used to implement the processing-related operations performed by the network node in the above method embodiments, such as the processing-related operations performed by the network node in the embodiment shown in Figure 3, or the processing-related operations performed by the RAN in the embodiment shown in Figure 9;
  • the input/output interface 1220 is used to implement the sending and/or receiving-related operations performed by the network node in the above method embodiments, such as the sending and/or receiving-related operations performed by the network node in the embodiment shown in Figure 3, or the sending and/or receiving-related operations performed by the RAN in the embodiment shown in Figure 9.
  • the logic circuit 1210 is used to implement the processing-related operations performed by the network node in the above method embodiments, such as the processing-related operations performed by the first network node and the second network node in the embodiment shown in FIG. 4 , or the processing-related operations performed by the first RAN and the second RAN in the embodiments shown in FIGS. 6 and 8 ;
  • the input/output interface 1220 is used to implement the sending and/or receiving-related operations performed by the network node in the above method embodiments, such as the sending and/or receiving-related operations performed by the first network node and the second network node in the embodiment shown in FIG. 4 , or the sending and/or receiving-related operations performed by the first RAN and the second RAN in the embodiments shown in FIGS. 6 and 8 .
  • the chip system 1200 is used to implement the operations performed by the terminal in the above method embodiments.
  • the logic circuit 1210 is used to implement the processing-related operations performed by the terminal in the above method embodiments, such as the processing-related operations performed by the terminal in the embodiment shown in Figure 5;
  • the input/output interface 1220 is used to implement the sending and/or receiving-related operations performed by the terminal in the above method embodiments, such as the sending and/or receiving-related operations performed by the terminal in the embodiment shown in Figure 5.
  • An embodiment of the present application also provides a computer-readable storage medium on which computer instructions for implementing the methods executed by a control node, a network node, or a terminal in the above-mentioned method embodiments are stored.
  • An embodiment of the present application also provides a computer program product, comprising instructions, which, when executed by a computer, implement the methods performed by a control node, a network node, or a terminal in the above-mentioned method embodiments.
  • An embodiment of the present application also provides a communication system, which includes at least one of the control node, network node, and terminal in the above embodiments.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the above units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.
  • the units described above as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to implement the solution provided by the present application.
  • each functional unit in each embodiment of the present application may be integrated into one unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network or other programmable devices.
  • the computer can be a personal computer, a server, or a network device, etc.
  • the computer instruction can be stored in a computer-readable storage medium, or transmitted from a computer-readable storage medium to another computer-readable storage medium, for example, the computer instruction can be transmitted from a website site, a computer, a server or a data center by wired (such as coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (such as infrared, wireless, microwave, etc.) mode to another website site, computer, server or data center.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application provides an AI task indication method, communication apparatus, and system, which are applicable to scenarios in which AI is combined with a wireless network. The method may include: a control node learns of an AI task and determines orchestration information of at least one network node for the AI task, where the orchestration information of each network node may indicate the operations provided by that network node when participating in executing the AI task; and the control node sends the orchestration information to some or all of the at least one network node. In this way, network nodes in the wireless network can execute the AI task, realizing the integration of AI and the wireless network, and unified orchestration by the control node can improve overall efficiency.

该方法可以包括:终端装置接收来自网络节点的AI任务,其中,终端装置处于预设状态;终端装置执行AI任务。
结合第六方面,在第六方面的某些实现方式中,在终端装置接收来自网络节点的AI任务之前,方法还包括:终端装置接收来自网络节点的通知信息,通知信息通知将终端装置调整为预设状态。
第六方面及各个可能的设计的有益效果可以参考第五方面相关的描述,在此不予赘述。
第七方面,提供了一种AI任务指示的方法,该方法可以由通信系统执行,该通信系统例如包括控制节点和网络节点。该控制节点和网络节点可以是设备,也可以是用于设备的芯片(系统)或电路,本申请对此不作限定。
该方法可以包括:控制节点为AI任务确定第一编排信息,第一编排信息指示第一网络节点执行AI任务的第一任务;控制节点向第一网络节点发送第一编排信息;第一网络节点根据第一编排信息,执行第一任务。
其中,控制节点例如可以为第一方面中所述的控制节点,第一网络节点例如可以为第二方面中所述的第一网络节点。
第七方面的有益效果可以参考第一方面相关的描述,在此不予赘述。
第八方面,提供了一种AI任务指示的方法,该方法可以由通信系统执行,该通信系统例如包括第一网络节点和第二网络节点。该第一网络节点和第二网络节点可以是设备,也可以是用于设备的芯片(系统)或电路,本申请对此不作限定。
该方法可以包括:第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息,目标状态信息用于指示AI任务的目标结果;第二网络节点基于第一任务的处理结果和目标状态信息,执行AI任务的第二任务。
其中,第一网络节点例如可以为第三方面中所述的第一网络节点,第二网络节点例如可以为第四方面中所述的第二网络节点。
第八方面的有益效果可以参考第三方面相关的描述,在此不予赘述。
第九方面,提供了一种AI任务指示的方法,该方法可以由通信系统执行,该通信系统例如包括网络节点和终端装置。该网络节点和终端装置可以是设备,也可以是用于设备的芯片(系统)或电路,本申请对此不作限定。
该方法可以包括:网络节点向至少一个终端装置发送AI任务,其中,至少一个终端装置处于预设状态;至少一个终端装置执行AI任务。
其中,网络节点例如可以为第五方面中所述的网络节点,终端装置例如可以为第六方面中所述的终端装置。
第九方面的有益效果可以参考第五方面相关的描述,在此不予赘述。
第十方面,提供一种通信装置,该装置用于执行上述第一方面至第九方面中任一方面提供的方法。具体地,该装置可以包括用于执行第一方面至第九方面中任一方面的上述任一种实现方式提供的方法的单元和/或模块,如处理单元和/或通信单元。
在一种实现方式中,该装置为通信设备。当该装置为通信设备时,通信单元可以是收发器,或,输入/输出接口;处理单元可以是至少一个处理器。可选地,收发器可以为收发电路。可选地,输入/输出接口可以为输入/输出电路。
在另一种实现方式中,该装置为用于通信设备中的芯片、芯片系统或电路。当该装置为用于终端设备中的芯片、芯片系统或电路时,通信单元可以是该芯片、芯片系统或电路上的输入/输出接口、接口电路、输出电路、输入电路、管脚或相关电路等;处理单元可以是至少一个处理器、处理电路或逻辑电路等。
第十一方面,提供一种通信装置,该装置包括:存储器,用于存储程序;至少一个处理器,用于执行存储器存储的计算机程序或指令,以执行上述第一方面至第九方面中任一方面的上述任一种实现方式提供的方法。
在一种实现方式中,该装置为通信设备。
在另一种实现方式中,该装置为用于通信设备中的芯片、芯片系统或电路。
第十二方面,本申请提供一种处理器,用于执行上述各方面提供的方法。
对于处理器所涉及的发送和获取/接收等操作,如果没有特殊说明,或者,如果未与其在相关描述中的实际作用或者内在逻辑相抵触,则可以理解为处理器输出和输入等操作,也可以理解为由射频电路和天线所进行的发送和接收操作,本申请对此不做限定。
第十三方面,提供一种计算机可读存储介质,该计算机可读介质存储用于设备执行的程序代码,该程序代码包括用于执行上述第一方面至第九方面中任一方面的上述任一种实现方式提供的方法。
第十四方面,提供一种包含指令的计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述第一方面至第九方面中任一方面的上述任一种实现方式提供的方法。
第十五方面,提供一种芯片,芯片包括处理器与通信接口,处理器通过通信接口读取存储器上存储的指令,执行上述第一方面至第九方面中任一方面的上述任一种实现方式提供的方法。
可选地,作为一种实现方式,芯片还包括存储器,存储器中存储有计算机程序或指令,处理器用于执行存储器上存储的计算机程序或指令,当计算机程序或指令被执行时,处理 器用于执行上述第一方面至第九方面中任一方面的上述任意一种实现方式提供的方法。
第十六方面,提供一种通信系统,包括上文第一方面中的控制节点和第二方面中的第一网络节点。
可选地,该通信系统还包括第二网络节点。
第十七方面,提供一种通信系统,包括上文第三方面中的第一网络节点和第四方面中的第二网络节点。
第十八方面,提供一种通信系统,包括上文第五方面中的网络节点和第六方面中的终端装置。
附图说明
图1是适用于本申请实施例的无线通信系统100的示意图。
图2是根据本申请实施例的无线通信系统的示意图。
图3是本申请一实施例提供的AI任务指示的方法300的示意图。
图4是本申请另一实施例提供的AI任务指示的方法400的示意图。
图5是本申请另一实施例提供的AI任务指示的方法500的示意图。
图6是根据本申请一实施例提供的AI任务指示的方法600的示意性流程图。
图7是适用于本申请实施例的示意图。
图8是根据本申请另一实施例提供的AI任务指示的方法800的示意性流程图。
图9是根据本申请另一实施例提供的AI任务指示的方法900的示意性流程图。
图10是本申请实施例提供的一种通信装置1000的示意性框图。
图11是本申请实施例提供的一种通信装置1100的示意性框图。
图12是本申请实施例提供的一种芯片系统1200的示意性框图。
具体实施方式
下面将结合附图,对本申请的技术方案进行描述。
首先对本申请涉及到的相关概念和技术作简单介绍。
1、人工智能(artificial intelligence,AI)模型:是能实现AI功能的算法或者计算机程序,AI模型表征了模型的输入和输出之间的映射关系,或者说AI模型是将一定维度的输入映射到一定维度的输出的函数模型,函数模型的参数可通过机器学习训练得到。例如,f(x)=ax²+b是一个二次函数模型,它可以看做一个AI模型,a和b为该AI模型的参数,a和b可以通过机器学习训练得到。
可以理解,AI模型的实现可以是硬件电路,也可以是软件,或者也可以是软件和硬件结合的方式,不予限制。软件的非限制性示例包括:程序代码、程序、子程序、指令、指令集、代码、代码段、软件模块、应用程序、或软件应用程序等。
2、数据集:机器学习中用于模型训练、模型验证、或模型测试的数据,数据的数量和质量将影响到机器学习的效果。
3、模型训练:通过选择合适的损失函数,利用优化算法对模型参数进行训练,使得损失函数值最小化。其中,损失函数用于衡量模型的预测值和真实值之间的差别。
4、AI任务:表示与AI相关的任务。作为示例,AI任务例如可以包括:与模型(如 AI模型)相关的任务、与数据集相关的任务等。
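结合上文对AI模型、数据集和模型训练的介绍,下面给出一个示意性的Python代码片段,以二次函数模型f(x)=ax²+b为例演示"通过最小化损失函数训练模型参数"这一过程。该示例仅为便于理解的草图,其中的函数名、学习率、训练轮数和示例数据均为本示例引入的假设,并非本申请方案的限定性实现。

```python
# 示意性示例:用梯度下降训练二次函数模型 f(x) = a*x^2 + b
# 损失函数取均方误差(MSE),用于衡量模型的预测值和真实值之间的差别
def train_quadratic_model(samples, lr=0.01, epochs=1000):
    a, b = 0.0, 0.0                      # 待训练的模型参数
    n = len(samples)
    for _ in range(epochs):
        grad_a, grad_b = 0.0, 0.0
        for x, y in samples:
            pred = a * x * x + b         # 模型的预测值
            err = pred - y               # 预测值与真实值的差别
            grad_a += 2 * err * x * x / n
            grad_b += 2 * err / n
        a -= lr * grad_a                 # 沿负梯度方向更新参数,使损失函数值最小化
        b -= lr * grad_b
    return a, b

# 数据集:机器学习中用于模型训练的数据(此处为人工构造的示例数据)
dataset = [(x, 3.0 * x * x + 1.0) for x in [-2, -1, 0, 1, 2]]
a, b = train_quadratic_model(dataset)
print(a, b)   # 训练后参数接近 a=3.0, b=1.0
```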
下面简单介绍适用于本申请的通信系统。
本申请提供的技术方案可以应用于各种通信系统,例如:第五代(5th generation,5G)或新无线(new radio,NR)系统、长期演进(long term evolution,LTE)系统、LTE频分双工(frequency division duplex,FDD)系统、LTE时分双工(time division duplex,TDD)系统等。本申请提供的技术方案还可以应用于未来的通信系统,如第六代移动通信系统。本申请提供的技术方案还可以应用于设备到设备(device to device,D2D)通信,车到万物(vehicle-to-everything,V2X)通信,机器到机器(machine to machine,M2M)通信,机器类型通信(machine type communication,MTC),以及物联网(internet of things,IoT)通信系统或者其它通信系统。
本申请实施例中的终端设备包括各种具有无线通信功能的设备,其可用于连接人、物、机器等。终端设备可以广泛应用于各种场景,例如:蜂窝通信,D2D,V2X,端到端(peer to peer,P2P),M2M,MTC,IoT,虚拟现实(virtual reality,VR),增强现实(augmented reality,AR),工业控制,自动驾驶,远程医疗,智能电网,智能家具,智能办公,智能穿戴,智能交通,智慧城市无人机,机器人,遥感,被动传感,定位,导航与跟踪,自主交付等场景。终端设备可以是上述任一场景下的终端,如MTC终端、IoT终端等。终端设备可以是第三代合作伙伴项目(3rd generation partnership project,3GPP)标准的用户设备(user equipment,UE)、终端(terminal)、固定设备、移动台(mobile station)设备或者说移动设备、用户单元(subscriber unit)、手持设备、车载设备、可穿戴设备、蜂窝电话(cellular phone)、智能电话(smart phone)、SIP电话、无线数据卡、个人数字助理(personal digital assistant,PDA)、电脑、平板电脑、笔记本电脑、无线调制解调器、手持设备(handset)、膝上型电脑(laptop computer)、具有无线收发功能的计算机、智能书、车辆、卫星、全球定位系统(global positioning system,GPS)设备、目标跟踪设备、飞行器(例如无人机、直升机、多直升机、四直升机、或飞机等)、船只、遥控设备智能家居设备、工业设备,或者内置于上述设备中的装置(例如,上述设备中的通信模块、调制解调器或芯片等),或者连接到无线调制解调器的其它处理设备。为了描述方便,下文将终端设备以终端或UE为例来描述。
应理解,在某些场景下,UE还可以用于充当基站。例如,UE可以充当调度实体,其在V2X、D2D或P2P等场景中的UE之间提供侧行链路信号。
本申请实施例中,用于实现终端设备的功能的装置可以是终端设备,也可以是能够支持终端设备实现该功能的装置,例如芯片系统或芯片,该装置可以被安装在终端设备中。本申请实施例中,芯片系统可以由芯片构成,也可以包括芯片和其它分立器件。
本申请实施例中的网络设备可以是用于与终端设备通信的设备,该网络设备也可以称为接入网设备或无线接入网设备,如网络设备可以是基站。本申请实施例中的网络设备可以是指将终端设备接入到无线网络的无线接入网(radio access network,RAN)节点(或设备)。基站可以广义的覆盖如下中的各种名称,或与如下名称进行替换,比如:节点B(NodeB)、演进型基站(evolved NodeB,eNB)、下一代基站(next generation NodeB,gNB)、中继站、接入点、传输点(transmitting and receiving point,TRP)、发射点(transmitting point,TP)、主站、辅站、多制式无线(multi-standard radio,MSR)节点、家庭基站、网络控制器、接入节点、无线节点、接入点(AP)、传输节点、收发节点、基带单元(BBU)、射频拉远单元(remote radio unit,RRU)、有源天线单元(active antenna unit,AAU)、射频头(remote radio head,RRH)、中心单元(central unit,CU)、分布式单元(distributed unit,DU)、定位节点等。基站可以是宏基站、微基站、中继节点、施主节点或类似物,或其组合。基站还可以指用于设置于前述设备或装置内的通信模块、调制解调器或芯片。基站还可以是移动交换中心以及D2D、V2X、M2M通信中承担基站功能的设备、6G网络中的网络侧设备、未来的通信系统中承担基站功能的设备等。基站可以支持相同或不同接入技术的网络。本申请的实施例对网络设备所采用的具体技术和具体设备形态不做限定。
基站可以是固定的,也可以是移动的。例如,直升机或无人机可以被配置成充当移动基站,至少一个小区可以根据该移动基站的位置移动。在其它示例中,直升机或无人机可以被配置成用作与另一基站通信的设备。
网络设备和终端设备可以部署在陆地上,包括室内或室外、手持或车载;也可以部署在水面上;还可以部署在空中的飞机、气球和卫星上。本申请实施例中对网络设备和终端设备所处的场景不做限定。
参见图1,作为示例,图1是适用于本申请实施例的无线通信系统100的示意图。如图1所示,该无线通信系统100可以包括至少一个网络设备,例如图1所示的网络设备110,该无线通信系统100还可以包括至少一个终端设备,例如图1所示的终端设备120和终端设备130。网络设备和终端设备均可配置多个天线,网络设备与终端设备可使用多天线技术通信。终端设备与终端设备之间可以直接进行通信。
其中,网络设备和终端设备通信时,网络设备可以管理至少一个小区,一个小区中可以有至少一个终端设备。可选地,网络设备110和终端设备120组成一个单小区通信系统,不失一般性,将小区称为小区#1。网络设备110可以是小区#1中的网络设备,或者,网络设备110可以为小区#1中的终端设备(例如终端设备120)服务。
需要说明的是,小区可以理解为网络设备的无线信号覆盖范围内的区域。
可以理解,图1为便于理解而示例的简化示意图,该无线通信系统100中还可以包括其它网络设备或者还可以包括其它终端设备,图1中未予以画出。本申请实施例可以适用于发送端设备和接收端设备通信的任何通信场景。
为了应对未来智能普惠的愿景,智能化将在无线网络架构层面进一步演进,AI将与无线网络进一步深度的融合,实现网络内生的智能和终端的智能化,从而可以应对一些可能的新需求和新场景。例如,一可能的场景,终端类型多样化,终端连接更加灵活和智能。终端类型多样化,超级物联网(supper IoT)(如物联,车联,工业,医疗等等),海量连接,终端连接更加灵活,终端本身具备一定的AI能力。又如,一可能的需求,网络内生智能。网络除了提供传统的通信连接服务,还可能会提供计算和AI服务,来更好的支持普惠性、实时性和高安全的AI服务。这些新需求和新场景,可能会带来无线网络架构和通信模式的变化。
目前,3GPP在5G网络中通过新增网络数据分析功能(network data analytics function,NWDAF),引入了AI能力。NWDAF的主要功能包括:支持从其它网络功能(network function,NF)和应用功能(application function,AF)收集数据,支持从网络运维系统(如操作维护管理(operation administration and maintenance,OAM))中收集数据,并可以向 NF或向AF提供元数据开放服务、数据分析服务等。NWDAF的引入,主要目标包括:网络运维的自动化和智能化、网络性能和业务体验优化、端到端服务级别协议(service level agreement,SLA)保障等。NWDAF训练的AI模型可应用于移动性管理、会话管理和网络自动化等网络自身领域,使用AI的方法来替换原有网络功能中基于数值公式的方法。但是NWDAF部署于核心网,属于外挂式AI单元,并未与通信网络作强耦合的设计,性能有局限。
基于未来无线网络面对的可能的场景和需求,通信网络中智能终端的数量和种类可能也会急速增长,智能终端采集、处理、产生的大量数据,可以为AI技术的应用提供动力。在这种背景下,无线网络中可能会部署大量的AI节点,相应地,AI节点之间会产生大量的AI相关流量,如包括数据集、AI模型、中间参数等等。因此,可以设计一种AI相关流量的传输机制,使得网络与AI结合更加紧密,提供更好的AI服务。
基于此,本申请提出,维护无线网络中各网络节点的AI能力,这样,可以基于各网络节点的AI能力,对各网络节点进行编排,也即确定各网络节点如何协作处理AI任务;或者,网络节点可以根据需求获取其它网络节点的AI能力信息,以便多个网络节点之间可以协作处理AI任务。
参见图2,作为示例,图2是根据本申请实施例的无线通信系统的示意图。如图2所示,该无线通信系统可以包括网络节点,如RAN。该无线通信系统还可包括AI节点,如AI管理功能(AI management function,AI-MF)和AI功能(AI-F)。网络节点与AI节点之间可以直接进行通信,或者也可以间接通信(如通过其它节点的转发进行通信)。
可选地,AI节点可存储或者说维护网络节点的AI能力。网络节点的AI能力,也可以称为网络节点的AI相关参数,下文统一用网络节点的AI能力描述。
其中,网络节点的AI能力例如可以包括以下至少一项:网络节点的优先级、网络节点支持的算力(如网络节点支持的最大算力)、网络节点的硬件能力、网络节点支持的AI任务、网络节点本地AI模型的性能、网络节点本地数据集的性能。作为一示例,网络节点的优先级,可以根据网络节点的历史响应情况确定。举例来说,若网络节点参与协作处理AI任务的次数较多,则网络节点的优先级较高;若网络节点参与协作处理AI任务的次数较少,则网络节点的优先级较低。作为另一示例,网络节点的优先级,可以根据网络节点的能力(如支持的算力,又如网络节点本身的硬件能力等)确定。举例来说,若网络节点的能力较高,则网络节点的优先级较高;若网络节点的能力较低,则网络节点的优先级较低。
可以理解,上述为示例性说明,对此不予限制,例如网络节点的AI能力还可包括网络节点的安全要求等。
可选地,AI节点部署于核心网中;或者AI节点部署于核心网外,如AI节点可部署于网络节点中;又如AI节点为运营商独立配置的运维管理系统。作为一示例,如图2所示,AI节点AI-MF可部署于核心网中,RAN与AI-MF之间可通过NG接口通信。作为另一示例,如图2所示,AI节点AI-F可部署于RAN,RAN内的其它模块与AI-F之间可通过内部(internal)接口通信。
可以理解,AI节点,可以是独立的设备,也可以集成于同一设备中实现某些功能,或者可以是硬件设备中的网络元件,也可以是在专用硬件上运行的软件功能,或者是平台 (例如,云平台)上实例化的虚拟化功能,本申请对于上述AI节点的具体形态不作限定。
还可以理解,图2为示例性说明,本申请不限于此。例如,图2所示的通信系统中还可以包括更多数量的设备,如更多数量的终端,又如更多数量的AI节点,又如更多数量的网络节点等等。
上文结合图2简单介绍了根据本申请实施例提供的通信系统。下面介绍本申请实施例提供的方法。下文所述的方法可用于图2所示的系统。
需要说明的是,在本申请中,“指示”可以包括直接指示、间接指示、显示指示、隐式指示。当描述某一指示信息用于指示A时,可以理解为该指示信息携带A、直接指示A,或间接指示A。
本申请中,指示信息所指示的信息,称为待指示信息。在具体实现过程中,对待指示信息进行指示的方式有很多种,例如但不限于,可以直接指示待指示信息,如待指示信息本身或者该待指示信息的索引等。也可以通过指示其它信息来间接指示待指示信息,其中该其它信息与待指示信息之间存在关联关系。还可以仅仅指示待指示信息的一部分,而待指示信息的其它部分则是已知的或者提前约定的。例如,还可以借助预先约定(例如协议规定)的各个信息的排列顺序来实现对特定信息的指示,从而在一定程度上降低指示开销。
待指示信息可以作为一个整体一起发送,也可以分成多个子信息分开发送,而且这些子信息的发送周期和/或发送时机可以相同,也可以不同。具体发送方法本申请不进行限定。其中,这些子信息的发送周期和/或发送时机可以是预先定义的,例如根据协议预先定义的,也可以是发射端设备通过向接收端设备发送配置信息来配置的。其中,该配置信息可以例如但不限于包括无线资源控制信令、媒体接入控制(media access control,MAC)层信令和物理层信令中的一种或者至少两种的组合。其中,无线资源控制信令例如包括无线资源控制(radio resource control,RRC)信令;MAC层信令例如包括MAC控制元素(control element,CE);物理层信令例如包括下行控制信息(downlink control information,DCI)。
参见图3,作为示例,图3是本申请一实施例提供的AI任务指示的方法300的示意图。方法300可以包括如下步骤。
310,控制节点为AI任务确定第一编排信息,第一编排信息指示第一网络节点执行AI任务的第一任务。
其中,AI任务可以是控制节点自身确定的,或者也可以是其它节点(如网络节点,又如终端,又如核心网节点,又如AI节点等)请求的,不予限制。
其中,控制节点例如可以为AI节点,如图2中所示的AI-MF或AI-F。网络节点例如可以为网络设备,如图2中所示的RAN。
其中,第一网络节点可以是参与执行AI任务的第一个网络节点,或者该第一网络节点也可以是参与执行AI任务的任意一个网络节点。
举例来说,假设两个网络节点参与执行AI任务,该两个网络节点为第一网络节点和第二网络节点。若第一网络节点先执行AI任务,如第一网络节点执行AI任务的部分任务(如记为第一任务),第一网络节点执行完AI任务的第一任务后,第二网络节点继续执行AI任务,如第二网络节点执行AI任务的其余部分任务(如记为第二任务),则该第一网络节点可认为是参与执行AI任务的第一个网络节点,第二网络节点可认为是第一网络节点的下一个网络节点(或者称下一跳网络节点)。
其中,第一编排信息指示第一网络节点执行AI任务的第一任务,可以是第一编排信息直接指示第一网络节点执行AI任务的第一任务,如第一编排信息包括该第一任务;或者也可以是第一编排信息间接指示第一网络节点执行AI任务的第一任务,如第一编排信息包括该其它信息,该其它信息可间接指示第一任务。
可选地,第一编排信息包括以下至少一项信息:第一任务、第一网络节点的标识、第一网络节点执行第一任务提供的资源、第一网络节点执行第一任务的退出条件。
1)第一任务,表示第一网络节点参与执行AI任务时,第一网络节点负责的该AI任务的部分任务(或者称分解任务),或者说第一网络节点参与执行AI任务时提供的操作。
若第一编排信息中包括第一任务,则第一编排信息可直接指示第一网络节点执行AI任务的第一任务,也即第一网络节点可基于第一编排信息直接获知在执行AI任务时需要提供的操作,进而基于该第一编排信息执行第一任务。
2)第一网络节点的标识,用于识别参与执行AI任务的网络节点包括该第一网络节点。
若第一编排信息中包括第一网络节点的标识,则第一编排信息可间接指示第一网络节点执行AI任务的第一任务。具体来说,第一网络节点可基于第一编排信息中的第一网络节点的标识获知自己要参与执行AI任务,此时,第一网络节点可根据自己的AI能力,参与执行AI任务,这样第一网络节点可根据自己的AI能力确定在执行AI任务时需要提供的操作,也即确定第一任务。
3)第一网络节点执行第一任务提供的资源,表示第一网络节点参与执行AI任务时需要提供的资源,如需要提供的算力,又如需要提供的硬件能力。
若第一编排信息中包括第一网络节点执行第一任务提供的资源,则第一编排信息可间接指示第一网络节点执行AI任务的第一任务。具体来说,第一网络节点可基于第一编排信息中的第一网络节点执行第一任务提供的资源执行AI任务,这样第一网络节点可根据该资源确定何时停止执行AI任务,进而确定在执行AI任务时需要提供的操作,也即确定第一任务。
4)第一网络节点执行第一任务的退出条件,表示该第一网络节点将AI任务转交至下一个网络节点继续处理的条件,或者说该第一网络节点停止执行AI任务的条件,可用于第一网络节点确定何时停止执行AI任务。
若第一编排信息中包括第一网络节点执行第一任务的退出条件,则第一编排信息可间接指示第一网络节点执行AI任务的第一任务。具体来说,第一网络节点可基于第一网络节点执行第一任务的退出条件执行AI任务,这样第一网络节点可根据该退出条件确定何时停止执行AI任务,进而确定在执行AI任务时需要提供的操作,也即确定第一任务。
可以理解,对于最后一个网络节点来说,其执行AI任务的退出条件,也就是其停止执行该AI任务的条件,该最后一个网络节点不用向其他网络节点(如下一个网络节点)转交AI任务。举例来说,若第一网络节点为最后一个网络节点,也即第一网络节点基于第一网络节点执行第一任务的退出条件,执行该第一任务后,得到的是AI任务的最终结果。在该情况下,作为示例,第一网络节点可以直接将该AI任务的最终结果发送给AI任务的发起节点(如终端设备),或者,第一网络节点可以将该AI任务的最终结果发送给其他节点,由其他节点将该AI任务的最终结果发送给AI任务的发起节点。
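为便于理解,下面用一段示意性的Python代码概括第一编排信息可能包含的字段。其中的类名、字段名与取值均为本示例引入的假设命名,仅用于体现"任务、节点标识、资源、退出条件"这几类信息,并非对编排信息格式或取值的限定。

```python
from dataclasses import dataclass
from typing import Optional

# 示意性的数据结构:一个网络节点的编排信息
@dataclass
class OrchestrationInfo:
    node_id: str                          # 第一网络节点的标识
    sub_task: Optional[str] = None        # 该节点负责的分解任务(如"模型训练")
    resources: Optional[dict] = None      # 执行该任务需提供的资源(如算力、硬件能力)
    exit_condition: Optional[str] = None  # 退出条件,如"训练轮数达到100"或"准确性达到0.9"

# 例如,控制节点可以为第一网络节点生成如下第一编排信息(取值仅为示例)
first_info = OrchestrationInfo(
    node_id="RAN#1",
    sub_task="model_training",
    resources={"compute_tflops": 10},
    exit_condition="accuracy >= 0.9",
)
```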
320,控制节点向第一网络节点发送第一编排信息。
可选地,方法300还包括:第一网络节点基于第一编排信息执行AI任务的第一任务。
基于本申请实施例,可以由控制节点为AI任务确定编排信息,并将编排信息发送给网络节点,进而网络节点可基于编排信息执行AI任务。通过该方式,可以由控制节点根据AI任务确定合适的编排信息,提高全局效率。
可选地,控制节点为AI任务确定第一编排信息,包括:控制节点为AI任务确定AI任务的编排表,该编排表包括N个网络节点的编排信息,N个网络节点包括第一网络节点,N为大于1或等于1的整数。这样,可以由控制节点统一编排,提高全局效率。其中,该编排表包括N个网络节点的编排信息,也就是说,N个网络节点的编排信息可认为是一个编排表。
举例来说,假设N个网络节点包括第一网络节点和第二网络节点,控制节点为AI任务确定第一编排信息和第二编排信息,第二编排信息指示第二网络节点执行AI任务的第二任务。关于各个网络节点的编排信息,可以参考前面第一网络节点的编排信息(即第一编排信息)的描述,此处不再赘述。
作为示例,编排表可以以表格,函数,或,字符串的形式存在,如存储或传输,如下表1为以表格形式呈现编排表的示例。
表1
| 网络节点 | 网络节点的编排信息 |
| --- | --- |
| 第一网络节点 | 第一编排信息 |
| 第二网络节点 | 第二编排信息 |
以表1为例,第一网络节点的编排信息为第一编排信息,也即第一编排信息指示第一网络节点执行AI任务的第一任务;第二网络节点的编排信息为第二编排信息,也即第二编排信息指示第二网络节点执行AI任务的第二任务。
可以理解,表1仅是示例性说明,对此不予限制,任何属于表1的变形,都适用于本申请。例如,表1中的还可以包括更多数量的网络节点。
作为示例,各个网络节点的编排信息可通过以下任一方式传输。
第一种可能的实现方式,控制节点向N个网络节点中的各个网络节点发送编排表。
基于该实现方式,N个网络节点中的各网络节点可从控制节点处获知编排表,进而可以根据编排表,获知各自的编排信息。
举例来说,假设N个网络节点包括第一网络节点和第二网络节点,编排表包括第一网络节点的编排信息和第二网络节点的编排信息,第一网络节点的编排信息为第一编排信息,第二网络节点的编排信息为第二编排信息。基于该实现方式,控制节点向第一网络节点发送第一编排信息和第二编排信息,控制节点向第二网络节点发送第一编排信息和第二编排信息。
第二种可能的实现方式,控制节点向N个网络节点中的一个网络节点(如第一网络节点)发送编排表。
其中,该第一网络节点可以是参与执行AI任务的第一个网络节点,或者该第一网络节点也可以是参与执行AI任务的任意一个网络节点。
示例1,第一网络节点向N个网络节点中的其它网络节点发送编排表。例如,第一网络节点可以在收到编排表后,直接向N个网络节点中的其它网络节点发送编排表。再例如,第一网络节点基于编排表执行完AI任务中自己负责的任务后,向N个网络节点中的其它网络节点发送编排表。
举例来说,假设N个网络节点包括第一网络节点和第二网络节点,编排表包括第一网络节点的编排信息和第二网络节点的编排信息,第一网络节点的编排信息为第一编排信息,第二网络节点的编排信息为第二编排信息。基于该示例1,控制节点向第一网络节点发送第一编排信息和第二编排信息,第一网络节点向第二网络节点发送第一编排信息和第二编排信息。
示例2,第一网络节点向N个网络节点中的其它网络节点发送其它网络节点的编排信息。例如,第一网络节点可以在收到编排表后,直接向N个网络节点中的其它网络节点发送编排表中其它网络节点的编排信息。再例如,第一网络节点基于编排表执行完AI任务中自己负责的任务后,向N个网络节点中的其它网络节点发送编排表中其它网络节点的编排信息。
举例来说,假设N个网络节点包括第一网络节点和第二网络节点,编排表包括第一网络节点的编排信息和第二网络节点的编排信息,第一网络节点的编排信息为第一编排信息,第二网络节点的编排信息为第二编排信息。基于该示例2,控制节点向第一网络节点发送第一编排信息和第二编排信息,第一网络节点向第二网络节点发送第二编排信息。
示例3,第一网络节点向该第一网络节点的下一个网络节点发送编排表,下一个网络节点向该下一个网络节点的下一个网络节点发送编排表,依次类推。例如,第一网络节点可以在收到编排表后,直接向该第一网络节点的下一个网络节点发送编排表。再例如,第一网络节点基于编排表执行完AI任务中自己负责的任务后,向该第一网络节点的下一个网络节点发送编排表。
举例来说,假设N个网络节点包括第一网络节点、第二网络节点、第三网络节点,编排表包括第一网络节点的编排信息、第二网络节点的编排信息、以及第三网络节点的编排信息,第一网络节点的编排信息为第一编排信息,第二网络节点的编排信息为第二编排信息,第三网络节点的编排信息为第三编排信息。基于该示例3,控制节点向第一网络节点发送第一编排信息、第二编排信息、以及第三编排信息,第一网络节点向第二网络节点发送第一编排信息、第二编排信息、以及第三编排信息,第二网络节点向第三网络节点发送第一编排信息、第二编排信息、以及第三编排信息。
示例4,第一网络节点向该第一网络节点的下一个网络节点发送编排表中除第一网络节点的编排信息之外的编排信息,下一个网络节点向该下一个网络节点的下一个网络节点发送从第一网络节点收到的编排表中除本网络节点的编排信息之外的编排信息,依次类推。例如,第一网络节点可以在收到编排表后,直接向该第一网络节点的下一个网络节点发送编排表中除第一网络节点的编排信息之外的编排信息。再例如,第一网络节点基于编排表执行完AI任务中自己负责的任务后,向该第一网络节点的下一个网络节点发送编排表中除第一网络节点的编排信息之外的编排信息。
举例来说,假设N个网络节点包括第一网络节点、第二网络节点、第三网络节点,编排表包括第一网络节点的编排信息、第二网络节点的编排信息、以及第三网络节点的编排 信息,第一网络节点的编排信息为第一编排信息,第二网络节点的编排信息为第二编排信息,第三网络节点的编排信息为第三编排信息。基于该示例4,控制节点向第一网络节点发送第一编排信息、第二编排信息、以及第三编排信息,第一网络节点向第二网络节点发送第二编排信息和第三编排信息,第二网络节点向第三网络节点发送第三编排信息。
第三种可能的实现方式,控制节点向N个网络节点中的各个网络节点发送各个网络节点的编排信息。
举例来说,假设N个网络节点包括第一网络节点和第二网络节点,编排表包括第一网络节点的编排信息和第二网络节点的编排信息,第一网络节点的编排信息为第一编排信息,第二网络节点的编排信息为第二编排信息。基于该实现方式,控制节点向第一网络节点发送第一编排信息,控制节点向第二网络节点发送第二编排信息。
可以理解,上述几种可能的实现方式为示例性说明,对此不予限制。例如,控制节点也可向N个网络节点中的部分网络节点发送编排表,再由该部分网络节点向其它网络节点发送编排表或者各网络节点的编排信息。
还可以理解,在上述任一可能的实现方式中,当第一网络节点向第二网络节点发送第二网络节点的编排信息时,第一网络节点还可向第二网络节点发送第一任务的处理结果。这样,第二网络节点可以基于该第一任务的处理结果继续执行AI任务。
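上述几种编排信息的传输方式可以用如下示意性的Python代码概括。其中send为本示例假设的发送接口,table表示以网络节点为键、编排信息为值的编排表,仅用于说明控制节点在不同方式下发送的内容,不代表实际的信令格式或接口定义。

```python
# 示意性示例:控制节点下发编排表(或编排信息)的三种方式
def distribute_orchestration(control_node, nodes, table, mode):
    # table 为 {网络节点: 该节点的编排信息} 的编排表
    if mode == "table_to_all":
        # 方式一:向N个网络节点中的各个网络节点发送完整编排表
        for node in nodes:
            control_node.send(node, table)
    elif mode == "table_to_first":
        # 方式二:仅向一个网络节点(如第一网络节点)发送编排表,
        # 由其(可连同第一任务的处理结果)向其它网络节点转发
        control_node.send(nodes[0], table)
    elif mode == "per_node_info":
        # 方式三:向各个网络节点仅发送其各自的编排信息
        for node in nodes:
            control_node.send(node, table[node])
```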
可选地,控制节点根据N个网络节点的AI能力,为AI任务确定编排表。例如,控制节点根据第一网络节点的AI能力为AI任务确定第一编排信息。这样,控制节点确定的编排信息可以与各网络节点的AI能力相匹配,降低网络节点无法执行AI任务的概率。
如前所述,网络节点的AI能力例如可以包括以下至少一项:网络节点的优先级、网络节点支持的算力、网络节点的硬件能力、网络节点支持的AI任务、网络节点本地AI模型的性能、网络节点本地数据集的性能。下面结合网络节点的AI能力,列举控制节点根据N个网络节点的AI能力为AI任务确定编排表的几个示例。
示例1,控制节点根据网络节点支持的AI任务为AI任务确定编排表,也即控制节点根据网络节点支持的AI任务为AI任务确定网络节点的编排信息。
举例来说,若AI任务为模型训练任务,则控制节点可以基于各个网络节点支持的AI任务确定哪些网络节点支持模型训练任务,并且控制节点可以从该支持模型训练任务的网络节点中确定参与执行AI任务的N个网络节点。此外,关于N个网络节点各自负责的操作以及提供的资源等,可以是控制节点基于网络节点的其它AI能力确定的,或者也可以是各网络节点在执行该AI任务过程中根据各自的AI能力自行确定的,不予限制。
示例2,控制节点根据网络节点支持的算力为AI任务确定编排表,也即控制节点根据网络节点支持的算力为AI任务确定网络节点的编排信息。
举例来说,控制节点基于各个网络节点支持的算力确定由算力较高的N个网络节点来执行AI任务。此外,控制节点还可以根据N个网络节点支持的算力确定各个网络节点负责的操作和/或各个网络节点提供的资源。可以理解,关于N个网络节点各自负责的操作以及提供的资源等,也可以是控制节点基于网络节点的其它AI能力确定的,或者也可以是各网络节点在执行该AI任务过程中根据各自的AI能力自行确定的,不予限制。
示例3,控制节点根据网络节点的硬件能力为AI任务确定编排表,也即控制节点根据网络节点的硬件能力为AI任务确定网络节点的编排信息。
举例来说,控制节点基于各个网络节点的硬件能力确定由硬件能力较高的N个网络节点来执行AI任务。此外,控制节点还可以根据N个网络节点的硬件能力确定各个网络节点负责的操作和/或各个网络节点提供的资源。可以理解,关于N个网络节点各自负责的操作以及提供的资源等,也可以是控制节点基于网络节点的其它AI能力确定的,或者也可以是各网络节点在执行该AI任务过程中根据各自的AI能力自行确定的,不予限制。
示例4,控制节点根据网络节点本地AI模型的性能为AI任务确定编排表,也即控制节点根据网络节点本地AI模型的性能为AI任务确定网络节点的编排信息。
作为示例,网络节点本地AI模型的性能,可以包括但不限于:准确性和时效性。其中,准确性可以表征AI模型在执行若干任务时的性能。时效性可以表征AI模型的生成时间。
举例来说,控制节点基于网络节点本地AI模型的性能,确定由性能较高的N个网络节点执行AI任务。此外,控制节点还可以根据N个网络节点本地AI模型的性能确定各个网络节点负责的操作和/或各个网络节点提供的资源。可以理解,关于N个网络节点各自负责的操作以及提供的资源等,也可以是控制节点基于网络节点的其它AI能力确定的,或者也可以是各网络节点在执行该AI任务过程中根据各自的AI能力自行确定的,不予限制。
示例5,控制节点根据网络节点本地数据集的性能为AI任务确定编排表,也即控制节点根据网络节点本地数据集的性能为AI任务确定网络节点的编排信息。
作为示例,网络节点本地数据集的性能,可以包括但不限于:准确性和时效性。其中,准确性可以表征该数据集在若干测试模型下的性能。时效性可以表征该数据集的生成时间。
举例来说,控制节点基于网络节点本地数据集的性能,确定由性能较高的N个网络节点执行AI任务。此外,控制节点还可以根据N个网络节点本地数据集的性能确定各个网络节点负责的操作和/或各个网络节点提供的资源。可以理解,关于N个网络节点各自负责的操作以及提供的资源等,也可以是控制节点基于网络节点的其它AI能力确定的,或者也可以是各网络节点在执行该AI任务过程中根据各自的AI能力自行确定的,不予限制。
示例6,控制节点根据网络节点的优先级确定编排表,也即控制节点根据网络节点的优先级为AI任务确定网络节点的编排信息。
举例来说,控制节点基于网络节点的优先级,确定由优先级较高的N个网络节点执行AI任务。此外,关于N个网络节点各自负责的操作以及提供的资源等,也可以是控制节点基于网络节点的其它AI能力确定的,或者也可以是各网络节点在执行该AI任务过程中根据各自的AI能力自行确定的,不予限制。
示例7,控制节点根据网络节点支持的AI任务以及N个网络节点支持的算力为AI任务确定编排表,也即控制节点根据网络节点支持的AI任务以及N个网络节点支持的算力为AI任务确定网络节点的编排信息。
举例来说,若通信节点请求的AI任务为模型训练任务,则控制节点可以基于各个网络节点支持的AI任务确定哪些网络节点支持模型训练任务,并且控制节点可以从该支持模型训练任务的网络节点中确定参与执行AI任务的N个网络节点。进一步地,控制节点可以基于N个网络节点支持的算力,确定各个网络节点各自负责的操作和提供的资源。
可以理解,上述几个示例为示例性说明,对此不予限制。例如,控制节点可以基于以下至少一项为AI任务确定编排表:网络节点的优先级、网络节点支持的算力、网络节点的硬件能力、网络节点支持的AI任务、网络节点本地AI模型的性能、网络节点本地数据集的性能。
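作为一个便于理解的草图,下面的Python示例演示控制节点如何基于网络节点支持的AI任务和算力,筛选参与执行AI任务的N个网络节点。其中的能力字段名和数值均为本示例的假设,实际还可结合优先级、硬件能力、本地模型或数据集性能等因素综合确定。

```python
# 示意性示例:控制节点根据网络节点的AI能力为AI任务选择参与节点
# capabilities: {节点标识: {"supported_tasks": [...], "compute": ...}}
def select_nodes(capabilities, task_type, num_nodes):
    # 先筛选出支持该AI任务的节点,降低节点无法执行AI任务的概率
    candidates = [
        (node_id, cap) for node_id, cap in capabilities.items()
        if task_type in cap.get("supported_tasks", [])
    ]
    # 再按支持的算力从高到低排序,选出N个节点
    candidates.sort(key=lambda item: item[1].get("compute", 0), reverse=True)
    return [node_id for node_id, _ in candidates[:num_nodes]]

caps = {
    "RAN#1": {"supported_tasks": ["model_training"], "compute": 20},
    "RAN#2": {"supported_tasks": ["model_training", "model_testing"], "compute": 10},
    "RAN#3": {"supported_tasks": ["data_augmentation"], "compute": 30},
}
print(select_nodes(caps, "model_training", 2))   # ['RAN#1', 'RAN#2']
```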
可选地,控制节点可通过以下任一方式获知N个网络节点的AI能力。
第一种可能的实现方式,控制节点本地维护至少一个网络节点的AI能力,控制节点可直接基于本地维护的至少一个网络节点的AI能力,为AI任务确定编排表。其中,至少一个网络节点包括N个网络节点。
作为示例,至少一个网络节点的AI能力可以以表格,函数,或,字符串的形式存在,如存储或传输,如下表2为以表格形式呈现至少一个网络节点的AI能力的示例。
表2
| 网络节点 | 网络节点的AI能力 |
| --- | --- |
| 第一网络节点 | 第一网络节点的AI能力 |
| 第二网络节点 | 第二网络节点的AI能力 |
| 第三网络节点 | 第三网络节点的AI能力 |
可以理解,表2仅是示例性说明,对此不予限制,任何属于表2的变形,都适用于本申请。例如,表2中的还可以包括更多数量的网络节点。
第二种可能的实现方式,控制节点在确认AI任务后,向其它节点请求至少一个网络节点的AI能力,进而可以基于该至少一个网络节点的AI能力为该AI任务确定编排表。其中,至少一个网络节点包括N个网络节点。
第三种可能的实现方式,控制节点在确认AI任务后,向至少一个网络节点请求各自的AI能力,进而可以基于该至少一个网络节点的AI能力为该AI任务确定编排表。其中,至少一个网络节点包括N个网络节点。
可选地,网络节点的AI能力可进行更新。以控制节点维护网络节点的AI能力为例,下面介绍两个示例。
一示例,控制节点周期性地更新网络节点的AI能力。例如,网络节点周期性地向控制节点上报自身的AI能力,进而控制节点可周期性地更新该网络节点的AI能力。再例如,控制节点周期性地向该网络节点发送信息,该信息用于触发该网络节点向控制节点上报自身的AI能力,进而控制节点可周期性地更新该网络节点的AI能力。
另一示例,在事件触发后,控制节点更新网络节点的AI能力。例如,网络节点向控制节点上报自身的AI能力,若网络节点上报的AI能力与之前存储的该网络节点的AI能力不一致,则控制节点更新该网络节点的AI能力。再例如,网络节点在为某个AI任务确定编排信息后,更新网络节点的AI能力。
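下面给出一个维护并更新网络节点AI能力的示意性Python草图,其中的类名与接口均为本示例的假设,仅用于体现"事件触发更新"与"周期性更新"的思路,并非对控制节点实现方式的限定。

```python
# 示意性示例:控制节点本地维护至少一个网络节点的AI能力
class CapabilityRegistry:
    def __init__(self):
        self._caps = {}    # {节点标识: AI能力}

    def report(self, node_id, capability):
        """网络节点上报AI能力;与已存储内容不一致时更新(事件触发方式)。"""
        if self._caps.get(node_id) != capability:
            self._caps[node_id] = capability

    def get(self, node_id):
        return self._caps.get(node_id)

registry = CapabilityRegistry()
registry.report("RAN#1", {"supported_tasks": ["model_training"], "compute": 20})
# 周期性更新可由网络节点定期调用 report(),
# 或由控制节点周期性地触发网络节点上报自身的AI能力来实现
```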
可选地,方法300还包括:控制节点接收来自第一网络节点的响应信息,响应信息指示第一网络节点是否同意第一编排信息。
一种可能的情形,第一网络节点同意第一编排信息,因此第一网络节点向控制节点发送的响应信息指示第一网络节点同意第一编排信息。
另一种可能的情形,第一网络节点不同意第一编排信息,因此第一网络节点向控制节 点发送的响应信息指示第一网络节点不同意第一编排信息。
其中,第一网络节点是否同意(或者称为是否接受)第一编排信息,可以理解为第一网络节点是否同意执行第一任务。
其中,响应信息指示第一网络节点是否同意第一编排信息,包括以下任一实现方式。
第一种可能的实现方式,响应信息直接指示第一网络节点是否同意第一编排信息。
例如,响应信息可通过肯定应答和否定应答实现。例如,若第一网络节点同意第一编排信息,则第一网络节点向控制节点发送肯定应答;第一网络节点不同意第一编排信息,则第一网络节点向控制节点发送否定应答。
再例如,响应信息可通过至少一个比特来实现。例如,假设通过1比特来指示第一网络节点是否同意第一编排信息。若该比特设置为“1”,则表示第一网络节点同意第一编排信息;若该比特设置为“0”,则表示第一网络节点不同意第一编排信息。应理解,上述仅是一种示例性说明,不予限制。
第二种可能的实现方式,响应信息间接指示第一网络节点是否同意第一编排信息。
例如,第一网络节点向控制节点发送第一网络节点调整后的第一编排信息,该调整后的第一编排信息可间接指示第一网络节点不同意第一编排信息,也即控制节点基于该调整后的第一编排信息,获知第一网络节点不同意第一编排信息。其中,调整后的第一编排信息例如可以包括但不限于:调整后的第一任务和/或第一网络节点执行第一任务能够提供的资源。
可以理解,上述两种实现方式为示例性说明,对此不予限制。例如,控制节点在一段时间(为区分,记为时间段#1)内没有收到来自第一网络节点的否定应答,则控制节点默认第一网络节点同意第一编排信息(相当于一种隐性形式的响应信息指示第一网络节点同意第一编排信息)。作为示例,时间段#1的起始时刻可以是控制节点发送第一编排信息的时刻,时间段#1的时长可以是预定义的,或者也可以是根据历史情况估计的,不予限制。作为示例,时间段#1可通过定时器实现。
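结合上述响应信息的几种实现方式,下面的Python示例示意控制节点如何处理1比特响应以及基于定时器(时间段#1)的隐式同意。其中的接收接口、返回值约定与超时取值均为本示例的假设,仅为说明逻辑的草图。

```python
import time

# 示意性示例:控制节点等待第一网络节点对第一编排信息的响应
# 1比特响应:1表示同意,0表示不同意;时间段#1内未收到否定应答则默认同意
def wait_for_response(recv_response, timeout_s=5.0):
    deadline = time.time() + timeout_s      # 时间段#1,可通过定时器实现
    while time.time() < deadline:
        bit = recv_response()               # 假设的接收接口:暂无响应时返回 None
        if bit == 1:
            return True                     # 显式同意第一编排信息
        if bit == 0:
            return False                    # 显式不同意第一编排信息
        time.sleep(0.1)
    return True                             # 超时未收到否定应答,默认同意(隐式响应)
```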
作为示例,在第一网络节点不同意第一编排信息的情况下,可包括以下几种实现方式。
第一种可能的实现方式,控制节点调整第一编排信息。
基于该实现方式,控制节点获知第一网络节点不同意第一编排信息后,可以重新确定第一编排信息。
举例来说,假设控制节点为AI任务确定AI任务的编排表,该编排表包括N个网络节点的编排信息,N个网络节点包括第一网络节点,控制节点获知第一网络节点不同意第一编排信息后,可以重新确定编排表。
第二种可能的实现方式,第一网络节点调整第一编排信息,并向控制节点发送调整后的第一编排信息。
基于该实现方式,第一网络节点不同意第一编排信息后,可以调整第一编排信息,并且向控制节点发送调整后的第一编排信息。
举例来说,假设控制节点为AI任务确定AI任务的编排表,该编排表包括N个网络节点的编排信息,N个网络节点包括第一网络节点,作为示例,控制节点可基于调整后的第一编排信息,调整N个网络节点中除第一网络节点之外的至少一个网络节点的编排信息。
第三种可能的实现方式,第一网络节点向第二网络节点发送第一任务或第一任务的部 分任务,第二网络节点为参与执行AI任务的至少一个网络节点。
其中,第二网络节点可以是第一网络节点确定的,如第一网络节点选择的相邻的网络节点;或者,第二网络节点也可以是控制节点确定的,如控制节点选择的第一网络节点的下一个网络节点。
举例来说,第一网络节点直接将第一任务透传给第二网络节点,由第二网络节点执行该第一任务。再举例来说,第一网络节点执行第一任务的部分任务,然后将该第一任务的其余部分任务发送给第二网络节点,由第二网络节点执行该第一任务的其余部分任务。
可以理解,上述几种可能的实现方式为示例性说明,对此不予限制。例如,第一网络节点可先执行第一任务的部分任务,待后续再执行第一任务的其余部分任务。
可选地,网络节点调度至少一个终端来参与网络节点的操作,也即至少一个终端与网络节点共同协作执行AI任务。这样,通过利用至少一个终端来协作执行AI任务,可以降低网络节点执行AI任务带来的开销。
作为示例,至少一个终端,可以是网络节点提供通信服务的终端,或者说可以是网络节点管理的小区内的终端,或者说网络节点本小区内的终端。举例来说,网络节点可调度本小区的终端参与网络节点的操作。
举例来说,以第一网络节点为例,第一网络节点本小区内的终端包括终端#1和终端#2,且第一网络节点可调度至少一个终端来参与第一网络节点的操作,即执行第一任务。例如,第一网络节点向至少一个终端发送第一任务或第一任务的部分任务。
一种可能的实现方式,第一网络节点执行第一任务的部分任务,终端#1和/或终端#2执行第一任务的其余部分任务。基于该实现方式,作为示例,参与执行第一任务的终端可将执行第一任务的处理结果,发送给第一网络节点。
另一种可能的实现方式,终端#1和/或终端#2执行第一任务。
例如,终端#1或终端#2执行完整的第一任务。此情况下,作为示例,执行完整的第一任务的终端可将第一任务的处理结果,发送给第一网络节点。
再例如,终端#1和终端#2分别执行完整的第一任务。此情况下,作为示例,终端#1和终端#2可将第一任务的处理结果,发送给第一网络节点,第一网络节点可以对终端#1和终端#2提供的处理结果进行合并或者筛选等操作。
再例如,终端#1执行第一任务的部分任务,终端#2执行第一任务的其余部分任务。此情况下,作为示例,终端#1和终端#2可将第一任务的处理结果,发送给第一网络节点,第一网络节点可以对终端#1和终端#2提供的处理结果进行合并或者筛选等操作。
可选地,网络节点根据终端的AI状态,确定终端是否参与执行AI任务,具体的后面结合方法500详细说明。
可以理解,上述方法300主要以控制节点为AI任务确定网络节点的编排信息进行了说明,对此不予限制。一示例,控制节点也可以为AI任务确定至少一个核心网节点的编排信息,也即该至少一个核心网节点可基于各自的编排信息协作执行AI任务。又一示例,控制节点也可以为AI任务确定至少一个终端的编排信息,也即该至少一个终端可基于各自的编排信息协作执行AI任务。
参见图4,作为示例,图4是本申请另一实施例提供的AI任务指示的方法400的示意图。方法400可以包括如下步骤。
410,第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息。
其中,目标状态信息用于指示AI任务的目标结果,或者说目标状态信息用于指示AI任务的最终状态。
举例来说,以AI任务为模型相关的任务为例,目标状态信息用于指示模型的最终状态,或者说可用于描述模型在网络中停止流动时的状态。作为示例,目标状态信息,包括以下至少一项信息:准确性、时效性、模型结构。其中,准确性可以表征模型在执行若干任务时的性能。时效性可以表征模型的生成时间。
再举例来说,以AI任务为数据集相关的任务为例,目标状态信息用于指示数据集的最终状态,或者说可用于描述数据集在网络中停止流动时的状态。作为示例,目标状态信息,包括以下至少一项信息:准确性、时效性、成分、属性。其中,准确性可以表征数据集在若干测试模型下的性能。时效性可以表征数据集的生成时间。成分可以表征数据集包含数据的成分。属性可以表征数据集包含数据的类型、量化、维度等。
可选地,方法400还包括步骤420。
420,第二网络节点基于第一任务的处理结果和目标状态信息,执行AI任务的第二任务。
其中,第一任务的处理结果和目标状态信息,可隐式指示第二网络节点参与执行AI任务,如第二网络节点执行AI任务的第二任务。也就是说,第二网络节点收到第一任务的处理结果和目标状态信息后,可获知自己要参与执行AI任务。
基于本申请实施例,网络节点之间可协作执行AI任务,并且通过当前的处理结果和目标状态信息确定是否参与执行AI任务。
作为示例,第一任务的处理结果表示AI任务的当前状态信息。当前状态信息用于指示AI任务的当前结果,或者说当前状态信息用于指示AI任务的当前状态。一种可能的情形,第二网络节点收到AI任务的当前状态信息和目标状态信息,第二网络节点根据当前状态信息和目标状态信息不一致,确定要参与执行AI任务,并且第二网络节点可以以目标状态信息为AI任务的最终结果,执行AI任务。
举例来说,以AI任务为模型相关的任务为例,当前状态信息用于指示模型的当前状态,也即模型在第一网络节点生成时的状态。作为示例,当前状态信息,包括以下至少一项信息:准确性、时效性、模型结构。关于各个信息的描述,可参考步骤410中的相关描述,此处不再赘述。
再举例来说,以AI任务为数据集相关的任务为例,当前状态信息用于指示数据集的当前状态,也即数据集第一网络节点生成时的状态。作为示例,当前状态信息,包括以下至少一项信息:准确性、时效性、成分、属性。关于各个信息的描述,可参考步骤410中的相关描述,此处不再赘述。
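下面用一段示意性的Python代码说明第二网络节点如何比较当前状态信息与目标状态信息,来决定是否继续执行AI任务的第二任务。此处仅以准确性字段为例,字段名与取值均为本示例的假设,实际还可结合时效性、成分、模型结构等字段综合判断。

```python
# 示意性示例:基于当前状态信息与目标状态信息判断AI任务是否达到目标结果
def reached_target(current_state, target_state):
    # 仅以准确性作比较;实际可结合时效性、成分、属性、模型结构等字段
    return current_state.get("accuracy", 0.0) >= target_state.get("accuracy", 0.0)

current = {"accuracy": 0.82}   # 第一任务的处理结果所体现的当前状态信息
target = {"accuracy": 0.90}    # 目标状态信息所指示的目标结果
need_second_task = not reached_target(current, target)
print(need_second_task)        # True:第二网络节点需继续执行AI任务的第二任务
```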
可选地,步骤410中第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息,包括:基于第二网络节点的AI能力,第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息。
例如,若第一网络节点基于第二网络节点的AI能力,获知第二网络节点支持该AI任务,则第一网络节点向第二网络节点发送该AI任务的第一任务的处理结果和目标状态 信息。
再例如,若第一网络节点基于第二网络节点的AI能力,获知第二网络节点支持的算力满足预设值,则第一网络节点向第二网络节点发送该AI任务的第一任务的处理结果和目标状态信息。其中,预设值可以是预先定义的,如协议预定义的,或者也可以是根据历史情况估计的,不予限制。
再例如,若第一网络节点基于第二网络节点的AI能力,获知第二网络节点本地AI模型的性能满足预设条件,则第一网络节点向第二网络节点发送该AI任务的第一任务的处理结果和目标状态信息。其中,预设条件可以是预先定义的,如协议预定义的,或者也可以是根据历史情况估计的,不予限制。
关于第一网络节点和第二网络节点,可以满足以下任一项。
一示例,第一网络节点和第二网络节点是相邻的网络节点。其中,相邻的网络节点例如可以是位置相邻的网络节点,或者在网络拓扑结构中的具有相邻关系的网络节点。
另一示例,第一网络节点和第二网络节点之间的相对位置满足预设条件。第一网络节点和第二网络节点之间的相对位置,可以理解为,以第一网络节点为基准,第二网络节点相对于该第一网络节点的位置;或者也可以描述为,以第二网络节点为基准,第一网络节点相对于该第二网络节点的位置。相对位置,可以包括:距离和/或角度。
可以理解,上述关于第一网络节点和第二网络节点的描述为示例性说明,本申请实施例不限于此。例如,第二网络节点是能够为终端提供服务的网络节点,具体来说,第一网络节点也可以在收到终端的AI任务的任务请求后,获取可以为终端提供服务的第二网络节点的AI能力。再例如,第二网络节点可以是之前协作第一网络节点执行AI任务的网络节点。再例如,第二网络节点可以是任意的网络节点。
第一网络节点可通过以下任一方式获知第二网络节点的AI能力。
第一种可能的实现方式,第一网络节点从控制节点获取第二网络节点的AI能力。
一示例,第一网络节点向其控制节点查询第二网络节点的AI能力。例如,第一网络节点向控制节点发送第一请求信息,该第一请求信息用于请求第二网络节点的AI能力;控制节点基于第一网络节点的请求,向第一网络节点发送第一请求信息的响应信息,该响应信息指示第二网络节点的AI能力。其中,响应信息指示第二网络节点的AI能力,可以是直接指示,如,响应信息中包括第二网络节点的AI能力;或者也可以是间接指示,如响应信息中包括其他信息,根据其他信息可间接获知第二网络节点的AI能力。
举例来说,第一网络节点可以在无法完成AI任务时,向控制节点查询第二网络节点的AI能力。这样可以根据实际情况确定是否从控制节点获取第二网络节点的AI能力。例如,若第一网络节点的AI能力无法完成AI任务,则第一网络节点可以请求其它网络节点(如第二网络节点)协作完成AI任务,因此第一网络节点可以从控制节点获取第二网络节点的AI能力,以便可以根据第二网络节点的AI能力判断是否该第二网络节点是否可以协作完成该AI任务。
另一示例,第一网络节点向控制节点订阅第二网络节点的AI能力。例如,第一网络节点向控制节点订阅第二网络节点的AI能力,控制节点获知第二网络节点的AI能力后,响应于第一网络节点的订阅,向第一网络节点发送第二网络节点的AI能力。也即第一网络节点先从控制节点获取第二网络节点的AI能力,并保存该第二网络节点的AI能力,这 样在确定AI任务后,可以直接使用该第二网络节点的AI能力,降低了为执行AI任务带来的时延。
第二种可能的实现方式,第一网络节点从第二网络节点获取第二网络节点的AI能力。
一示例,第一网络节点向第二网络节点查询第二网络节点的AI能力。例如,第一网络节点向第二网络节点发送第一请求信息,该第一请求信息用于请求第二网络节点的AI能力;第二网络节点基于第一网络节点的请求,向第一网络节点发送第一请求信息的响应信息,该响应信息指示第二网络节点的AI能力。
另一示例,第一网络节点向第二网络节点订阅第二网络节点的AI能力。
关于上述两个示例,可以参考前面第一种可能的实现方式中的描述,此处不再赘述。
可以理解,上述实现方式为示例性说明,本申请实施例不限于此。例如,控制节点也可以主动向第一网络节点发送第二网络节点的AI能力。再例如,第一网络节点也可以从其他网络节点或核心网节点获取第二网络节点的AI能力。
可选地,在步骤410之前,方法400还包括:第一网络节点向第二网络节点发送第二请求信息,第二请求信息请求第二网络节点协作执行AI任务。基于此,第一网络节点确定第二网络节点同意协作执行AI任务的情况下,第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息。
举例来说,第一网络节点向第二网络节点发送第二请求信息,若第一网络节点收到来自第二网络节点的第二请求信息的响应,该第二请求信息的响应用于指示第二网络节点同意协作执行AI任务,则第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息。
再举例来说,第一网络节点向第二网络节点发送第二请求信息,若第一网络节点在一段时间(为区分,记为时间段#2)内没有收到来自第二网络节点的否定应答,则第一网络节点默认第二网络节点同意协作执行AI任务(相当于一种隐性形式的响应信息指示第二网络节点同意协作执行AI任务),因此第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息。作为示例,时间段#2的起始时刻可以是第一网络节点发送第二请求信息的时刻,时间段#2的时长可以是预定义的,或者也可以是根据历史情况估计的,不予限制。作为示例,时间段#2可通过定时器实现。
可选地,方法400还包括:第一网络节点向第二网络节点发送区域信息,区域信息用于第二网络节点确定协作执行AI任务的网络节点。
其中,区域信息,用于辅助当前网络节点确定协作执行AI任务的其它节点。例如,第一网络节点向第二网络节点发送的区域信息用于辅助第二网络节点确定协作执行AI任务的网络节点。
作为示例,区域信息可以包括地理位置信息,或者区域信息也可用一些参数(为区分,记为参数#A)体现,该参数#A例如可以是:业务类型、终端类型、节点算力类型。其中,业务类型可以是某一区域内支持的或者运行的业务类型。终端类型可以是某一区域内的终端类型。节点类型可以是某一区域内的节点的算力类型。一般来说,同一区域内的节点的参数#A比较接近,在节点选择时通过参考各个区域的参数#A,这样可以根据实际的需要选择合适的下一个节点。例如,第二网络节点选择下一个网络节点时,可以选择参数#A差异比较大的区域中的网络节点,或者可以选择参数#A差异比较接近的区域中的网络节 点。
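下面的Python草图示意如何利用区域信息(参数#A)选择协作执行AI任务的下一个网络节点。差异度的计算方式、字段名与各取值均为本示例的假设,仅用于体现"按参数#A相近或差异较大选择节点"的思路。

```python
# 示意性示例:基于区域信息(参数#A:业务类型、节点算力类型等)选择下一个网络节点
def pick_next_node(region_info, candidates, prefer_similar=True):
    def diff(a, b):
        # 简单地统计参数#A中取值不同的项数,作为区域差异度
        keys = set(a) | set(b)
        return sum(1 for k in keys if a.get(k) != b.get(k))

    scored = [(diff(region_info, c["region"]), c["node_id"]) for c in candidates]
    scored.sort()                              # 差异度从小到大排序
    return scored[0][1] if prefer_similar else scored[-1][1]

candidates = [
    {"node_id": "RAN#2", "region": {"service": "V2X", "compute": "GPU"}},
    {"node_id": "RAN#3", "region": {"service": "IoT", "compute": "CPU"}},
]
# 选择参数#A比较接近的区域中的网络节点
print(pick_next_node({"service": "V2X", "compute": "GPU"}, candidates))   # RAN#2
```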
可选地,网络节点调度至少一个终端来参与网络节点的操作,也即至少一个终端与网络节点共同协作执行AI任务。这样,通过利用至少一个终端来协作执行AI任务,可以降低网络节点执行AI任务带来的开销。具体可以参考方法300中的相关描述,此处不再赘述。进一步可选地,网络节点根据终端的AI状态,确定终端是否参与执行AI任务,具体的后面结合方法500详细说明。
可以理解,上述方法400主要以多个网络节点之间协作执行AI任务进行了说明,对此不予限制。一示例,网络节点也可以与终端协作执行AI任务。举例来说,第一网络节点向至少一个终端发送AI任务的第一任务的处理结果和目标状态信息。又一示例,网络节点也可以与核心网节点协作执行AI任务。举例来说,第一网络节点向至少一个核心网节点发送AI任务的第一任务的处理结果和目标状态信息。
参见图5,作为示例,图5是本申请另一实施例提供的AI任务指示的方法500的示意图。方法500可以包括如下步骤。
510,网络节点向终端发送AI任务,其中,该终端处于预设状态。
可选地,方法500还包括步骤520。
520,终端执行该AI任务。
基于此,通过对终端定义不同的状态(为区分,将该状态记为AI状态),可以便于网络节点基于终端的状态获知终端是否能够参与执行AI任务。例如,网络节点可向处于预设状态的终端发送AI任务,也就是说,处于预设状态的AI可参与执行AI任务。
进一步可选地,若终端的状态不是预设状态,则网络节点向终端发送通知信息,通知信息通知将该终端调整为预设状态,也即通知信息通知将终端的AI状态调整为预设状态。作为示例,通知信息可以为以下任一项或多项的组合:无线资源控制信令、MAC层信令、物理层信令、AI寻呼。其中,无线资源控制信令例如包括RRC信令,MAC层信令例如包括MAC CE,物理层信令例如包括DCI。其中,AI寻呼可以由网络节点发送,用于触发特定终端、或特定AI状态的终端,进行AI状态的转换。
作为示例,终端的AI状态可以为以下任一种:AI-空闲状态、AI-激活状态、AI-暂留状态。预设状态例如可以为AI-激活状态。其中,各个AI状态的命名仅是一种示例,其命名不对本申请实施例的保护范围造成限定。
1)AI-空闲状态:终端与AI节点未建立连接,且本地无AI模型。若终端处于AI-空闲状态,则终端可执行AI寻呼监听、AI节点选择、AI连接建立等操作。
举例来说,若终端处于AI-空闲状态,则网络节点可先触发该终端转换为AI-激活状态,然后再调度该终端参与执行AI任务。作为示例,网络节点通过信令的方式,如向终端发送通知信息,触发该终端完成AI状态转换。
2)AI-激活状态:终端与AI节点建立了AI连接。若终端处于AI-激活状态,则终端可执行AI调度监听、按照调度执行AI任务、AI节点选择等操作。
举例来说,若终端处于AI-激活状态,则网络节点可调度该终端参与执行AI任务。
3)AI-暂留状态:终端与AI节点未建立AI连接,且本地部署AI模型。若终端处于AI-暂留状态,则终端可执行AI寻呼监听、AI节点选择、AI连接建立等操作。
举例来说,若终端处于AI-暂留状态,则网络节点可先触发该终端转换为AI-激活状态,然后再调度该终端参与执行AI任务。作为示例,网络节点通过信令的方式,如向终端发送通知信息,触发该终端完成AI状态转换。
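上述终端的AI状态以及调度前的状态转换,可以用如下示意性的Python代码概括。其中的状态枚举命名沿用本文示例,send_notice、send_task为本示例假设的发送接口,仅用于说明"先调整为预设状态、再下发AI任务"的流程。

```python
from enum import Enum, auto

# 示意性示例:终端的AI状态及状态转换
class AIState(Enum):
    AI_IDLE = auto()       # AI-空闲状态:未与AI节点建立连接,且本地无AI模型
    AI_ACTIVE = auto()     # AI-激活状态:已与AI节点建立AI连接,可被调度执行AI任务
    AI_SUSPENDED = auto()  # AI-暂留状态:未建立AI连接,但本地部署有AI模型

def schedule_ai_task(terminal_state, send_notice, send_task):
    # 预设状态为AI-激活状态;终端处于其它状态时先通过通知信息触发状态转换
    if terminal_state != AIState.AI_ACTIVE:
        send_notice("switch_to_AI_ACTIVE")   # 通知信息:将终端调整为预设状态
        terminal_state = AIState.AI_ACTIVE
    send_task("AI task")                      # 向处于预设状态的终端发送AI任务
    return terminal_state
```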
可以理解,上述为示例性说明,对此不予限制。例如,步骤510中由控制节点向终端发送AI任务。
还可以理解,上述关于终端的AI状态的描述,仅是示例性说明,对此不予限制。例如终端的AI状态除了上述AI-空闲状态、AI-激活状态、AI-暂留状态之外,还可以包括其它的AI状态。
还可以理解,方法500可以单独使用,也可以与前面的方法300或方法400结合使用,对此不予限制。
例如,方法500可以与方法300结合使用。举例来说,以第一网络节点为例,第一网络节点向至少一个终端发送第一任务或第一任务的部分任务,该至少一个终端的状态为预设状态。基于此,若终端的状态为预设状态,则第一网络节点向该终端发送第一任务或第一任务的部分任务,由该终端协作第一网络节点执行第一任务。
再例如,方法500可以与方法400结合使用。举例来说,以第一网络节点为例,第一网络节点向至少一个终端发送第一任务或第一任务的部分任务,该至少一个终端的状态为预设状态。基于此,若终端的状态为预设状态,则第一网络节点向该终端发送第一任务或第一任务的部分任务,由该终端协作第一网络节点执行第一任务。
为了便于理解,下面以网络节点为RAN,控制节点为AI-MF为例,结合图6至图9对本申请实施例进行示例性说明。其中涉及到的步骤以及术语具体可以参考上文描述。
参见图6,作为示例,图6是根据本申请一实施例提供的AI任务指示的方法600的示意性流程图。该方法600可以用于实现如方法400的方案。方法600可以适用于终端向RAN请求模型相关的AI任务的场景。作为示例,方法600可以包括如下步骤。
601,AI-MF维护至少一个RAN的AI能力。
其中,RAN的AI能力,可以包括以下至少一项:RAN的优先级、RAN支持的算力、RAN的硬件能力、RAN支持的AI任务(或者说RAN能执行的操作类型)、RAN本地AI模型的性能、RAN本地数据集的性能。进一步可选地,若RAN的AI能力包括RAN支持的AI任务,则RAN的AI能力还包括RAN支持的AI任务关联的参数。
例如,若RAN的AI能力包括RAN支持的AI任务,且RAN支持的AI任务包括模型训练,则进一步可选地,RAN的AI能力包括模型训练关联的参数。作为示例,模型训练关联的参数,包括以下至少一项:模型结构、训练集、可用算力。
再例如,若RAN的AI能力包括RAN支持的AI任务,且RAN支持的AI任务包括模型融合,则进一步可选地,RAN的AI能力包括模型融合关联的参数。作为示例,模型融合关联的参数,包括以下至少一项:模型融合策略、支持融合的模型结构、本地知识库信息。
再例如,若RAN的AI能力包括RAN支持的AI任务,且RAN支持的AI任务包括模型测试,则进一步可选地,RAN的AI能力包括模型测试关联的参数。作为示例,模型测试关联的参数,包括以下至少一项:模型测试能力、测试集。
作为示例,RAN的AI能力可以以表格,函数,或,字符串的形式存在,如存储或传输,如下表3为以表格形式呈现RAN的AI能力的示例。
表3
[表3原文以图片形式给出,示例性列出各RAN(如RAN#1、RAN#2)支持的AI任务及其本地AI模型性能(如准确性、时效性)]
以表3为例,对于RAN#1来说,RAN#1支持的AI任务包括任务A,且RAN#1的本地模型执行任务A时,准确性为值1(value 1),时效性为value2。其中,准确性可以表征模型在执行若干任务时的性能。时效性可以表征模型的生成时间。
一示例,RAN支持的AI任务可以通过至少一个比特表示。以与模型相关的AI任务为例,例如,假设与模型相关的AI任务包括:模型训练任务、模型测试任务、模型融合任务,且通过2比特来指示RAN支持的AI任务。若该比特设置为“00”,则表示RAN支持的AI任务为模型训练任务;若该比特设置为“01”,则表示RAN支持的AI任务为模型测试任务;若该比特设置为“10”,则表示RAN支持的AI任务为模型融合任务。应理解,上述仅是一种示例性说明,不予限制。
另一示例,RAN支持的AI任务可以通过比特位图(bitmap)表示。以与模型相关的AI任务为例,例如,假设与模型相关的AI任务包括:模型训练任务、模型测试任务、模型融合任务,且比特取值为“1”表示支持,比特取值为“0”表示不支持。举例来说,若RAN支持的AI任务表示为“110”,“110”中的3个比特分别对应模型训练任务、模型测试任务、模型融合任务,因此“110”表示该RAN支持模型训练任务和模型测试任务,且不支持模型融合任务。再举例来说,若RAN支持的AI任务表示为“101”,“101”中的3个比特分别对应模型训练任务、模型测试任务、模型融合任务,因此“101”则表示该RAN支持模型训练任务和模型融合任务,且不支持模型测试任务。可以理解,上述例子为示例性说明,本申请实施例不限于此。
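下面给出一个与上述bitmap表示方式对应的示意性Python示例。任务集合、比特顺序与函数名均为本示例的假设,仅用于说明"每个比特位对应一种AI任务,取值1表示支持、0表示不支持"的编码思路。

```python
# 示意性示例:用比特位图(bitmap)表示RAN支持的与模型相关的AI任务
TASKS = ["model_training", "model_testing", "model_fusion"]   # 与比特位一一对应

def encode_bitmap(supported):
    # 支持记为"1",不支持记为"0",例如支持训练和测试 -> "110"
    return "".join("1" if t in supported else "0" for t in TASKS)

def decode_bitmap(bitmap):
    return [t for t, bit in zip(TASKS, bitmap) if bit == "1"]

print(encode_bitmap({"model_training", "model_testing"}))   # 110
print(decode_bitmap("101"))   # ['model_training', 'model_fusion']
```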
可以理解,表3仅是示例性说明,对此不予限制,任何属于表3的变形,都适用于本申请。例如,表3中的还可以包括更多数量的RAN。再例如,表3中RAN#1和RAN#2支持更多数量的AI任务。再例如,表3中还可以包括更多数量的表征本地AI模型的性能的参数。
在本申请实施例中主要以与模型相关的AI任务为例进行示例性说明,因此,上述关于RAN的AI能力主要介绍了模型相关的能力,对此不予限制。
在本申请实施例中,假设终端向第一RAN发布AI任务,且终端向第一RAN发布的AI任务为模型训练任务。
602,终端向第一RAN发送初始模型的相关信息。
例如,终端对初始模型进行封装操作,在封装中携带初始模型的相关信息。可选地,终端还可对初始模型进行分段操作,从而便于第一RAN正确恢复初始模型。
其中,初始模型为待执行模型训练的模型。
其中,初始模型的相关信息,可以包括以下至少一项:初始模型的参数集、当前状态 信息、目标状态信息、区域信息、初始模型的版本。下面简单介绍一下上述各项信息,未详细描述的可参考上文方法400中的相关说明。
1)当前状态信息,可用于描述模型在当前节点生成时的状态。作为示例,当前状态信息,可以包括以下至少一项信息:准确性、时效性。对于模型压缩/模型蒸馏等涉及模型结构变更的操作,还可以在状态信息中增加其模型结构的描述。
可以理解,对于初始模型来说,也可以不携带当前状态信息。
2)目标状态信息,或者说初始模型的目标状态信息,可用于描述模型的最终状态,或者说可用于描述模型在网络中停止流动时的状态。作为示例,目标状态信息,包括以下至少一项信息:准确性、时效性。对于模型压缩/模型蒸馏等涉及模型结构变更的操作,还可以在状态信息中增加其模型结构的描述。
3)区域信息,用于辅助当前节点决策协作执行模型训练任务的其它节点。例如,初始模型的相关信息中的区域信息,可用于辅助第一RAN节点决策协作执行模型训练任务的RAN。
4)初始模型的参数集中可包括该初始模型所对应的神经网络的训练权重。
5)初始模型的版本,如记为t1,表示终端提供的初始模型执行过t1次模型训练,t1为大于0或等于0的整数。例如,初始模型的版本为0,表示终端提供的初始模型还未进行模型训练。再例如,初始模型的版本为1,表示终端提供的初始模型已执行过1次模型训练(如终端已执行过一次模型训练)。
可以理解,上述信息为示例性说明,对此不予限制。例如,初始模型的相关信息还可以包括:初始模型所对应的神经网络的结构、初始模型参数的运算规则、初始模型的编号等等。
关于步骤602,可以包括如下实现方式。
一种可能的实现方式,在步骤602中,终端向第一RAN发送初始模型的相关信息,该初始模型的相关信息可隐式指示第一RAN需要对该初始模型进行模型训练。举例来说,初始模型的相关信息包括当前状态信息和目标状态信息,第一RAN根据当前状态信息和目标状态信息不一致,确定对该初始模型进行模型训练。
另一种可能的实现方式,在步骤602中,终端向第一RAN发送指示信息和初始模型的相关信息,该指示信息指示对初始模型进行模型训练。
603,第一RAN对初始模型进行模型训练,得到第一模型。
第一RAN可以基于终端在步骤602中提供的初始模型执行模型训练任务。为区分,将第一RAN对初始模型进行模型训练得到的模型记为第一模型。
在本申请实施例中,假设第一RAN无法独自完成模型训练任务,也即第一RAN对初始模型进行模型训练得到的第一模型的状态不满足终端所需的目标状态,因此第一RAN可借助其它RAN的协作完成模型训练任务。假设第一RAN确定的协作执行模型训练任务的RAN为第二RAN。
可选地,第一RAN还可调度本小区内的终端参与操作,如参与对初始模型进行模型训练。具体的实现,可以参考方法500中的相关描述,此处不再赘述。
604,第一RAN从AI-MF获取第二RAN的AI能力。
假设步骤601中的至少一个RAN包括第二RAN,也即AI-MF维护第二RAN的AI 能力,那么第一RAN可从AI-MF获取第二RAN的AI能力。
关于第一RAN和第二RAN,可以参考前面方法400中关于第一网络节点和第二网络节点的描述,此处不再赘述。
关于第一RAN从AI-MF获取第二RAN的AI能力,可以参考前面方法400中关于第一网络节点从控制节点获取第二网络节点的AI能力的描述,此处不再赘述。
可以理解,本申请实施例主要以一个第二RAN为例进行说明,关于第二RAN的数量不予限制。例如,第一RAN可以从AI-MF获取至少一个第二RAN的AI能力。
还可以理解,步骤604为示例性说明,本申请实施例不限于此。例如,第一RAN也可以从第二RAN获取第二RAN的AI能力,具体可以参考前面方法400中关于第一网络节点从第二网络节点获取第二网络节点的AI能力的描述,此处不再赘述。
605,第一RAN向第二RAN发送第一模型的相关信息。
例如,第一RAN可对第一模型进行封装操作,在封装中携带第一模型的相关信息。可选地,第一RAN还可对第一模型进行分段操作,从而便于第二RAN正确恢复第一模型。
其中,第一模型为第一RAN执行模型训练后得到的模型。
其中,第一模型的相关信息,可以包括以下至少一项:第一模型的参数集、当前状态信息、目标状态信息、区域信息、第一模型的版本。下面简单介绍一下当前状态信息、区域信息、以及第一模型的版本,其它未详细介绍的可参考步骤602中的相关描述。
1)当前状态信息:如前所述,当前状态信息用于描述模型在当前节点生成时的状态,因此此处第一RAN向第二RAN提供的当前状态信息,表示第一模型的当前状态信息,用于描述该第一模型在第一RAN生成时的状态。
2)区域信息:如前所述,区域信息用于辅助当前节点决策协作执行模型训练任务的其它节点,因此此处第一模型的相关信息中的区域信息可用于辅助第二RAN决策协作执行模型训练任务的RAN。
第一模型的相关信息中的区域信息与步骤602中初始模型的相关信息中的区域信息,可以相同,也可以不同。
例如,第一模型的相关信息中的区域信息与初始模型的相关信息中的区域信息相同,如都为终端能够接收到信号的区域信息。
再例如,第一模型的相关信息中的区域信息与初始模型的相关信息中的区域信息不同,第一模型的相关信息中的区域信息为低频基站覆盖区域的信息,初始模型的相关信息中的区域信息为高频基站覆盖区域的信息。
3)第一模型的版本,如记为t2,表示第一RAN提供的第一模型执行过t2次模型训练,t2为大于1或等于1的整数。例如,第一模型的版本为1,表示第一RAN提供的第一模型已执行过1次模型训练,换句话说第一模型为对初始模型执行过1次模型训练的模型,或者说,第一RAN为第一次对初始模型进行模型训练的RAN。
第一RAN向第二RAN发送第一模型的相关信息,可以包括以下两种方式:
一种可能的实现方式,第一RAN确定第二RAN可以执行模型训练任务的情况下,第一RAN向第二RAN发送第一模型的相关信息。举例来说,第一RAN基于在步骤604中获得的第二RAN的AI能力确定第二RAN可以执行模型训练任务,如第二RAN的AI能力包括第二RAN支持的AI任务,且第二RAN支持的AI任务包括模型训练任务,因此 第一RAN向第二RAN发送第一模型的相关信息。
另一种可能的实现方式,第一RAN对初始模型进行模型训练后,直接向第二RAN发送第一模型的相关信息。举例来说,第一RAN可默认或者可假设第二RAN可以执行模型训练任务,因此在对初始模型进行模型训练后,直接向第二RAN发送第一模型的相关信息。
可选地,第一RAN向第二RAN发送第一模型的相关信息,包括:第一RAN确定第二RAN同意协作执行模型训练任务的情况下,第一RAN向第二RAN发送第一模型的相关信息。
举例来说,在步骤605之前,方法600还包括:第一RAN向第二RAN请求执行模型训练任务。在得到第二RAN的确认后,即同意协作第一RAN执行模型训练任务后,第一RAN向第二RAN发送第一模型的相关信息。
606,第二RAN对第一模型进行模型训练,得到第二模型。
第二RAN可以基于第一RAN生成的第一模型执行模型训练任务。为区分,将第二RAN对第一模型进行模型训练得到的模型记为第二模型。
第一种可能的情况,第二RAN执行模型训练任务后,模型达到目标状态,也即第二模型的状态符合目标状态,则方法600还包括步骤607。
第二种可能的情况,第二RAN执行模型训练任务后,模型未达到目标状态,也即第二模型的状态不符合目标状态,则第二RAN可以获取第三RAN的AI能力,并且向第三RAN发送第二模型的相关信息,具体可参考步骤604和步骤605,此处不再赘述。依次类推,直到模型达到目标状态,并且向终端发送最终生成的模型。
可选地,第二RAN还可调度本小区内的终端参与操作,如参与对第一模型进行模型训练。具体的实现,可以参考方法500中的相关描述,此处不再赘述。
607,第二RAN向终端发送第二模型。
假设第二RAN执行模型训练任务后,模型达到目标状态,也即第二模型的状态符合目标状态,则一种可能的实现方式,第二RAN向终端发送第二模型;或者另一种可能的实现方式,第二RAN向第一RAN发送第二模型,由第一RAN向终端转发该第二模型,对此不予限制。
参见图7,作为示例,图7是适用于本申请实施例的示意图。如图7所示,编号i(i=1,2,3,4,5,……,N)代表不同的RAN,N为大于1的整数。
举例来说,终端向第一RAN(如编号为1的RAN)发送初始模型的相关信息,第一RAN对初始模型进行模型训练,得到第一模型,并向第二RAN(如编号为2的RAN)发送第一模型的相关信息,如包括:区域信息、当前模型状态(也即第一模型的状态描述信息)、目标模型状态(也即目标状态信息)、模型版本1;第二RAN对第一模型进行模型训练,得到第二模型,并向第三RAN(如编号为3的RAN)发送第二模型的相关信息,如包括:区域信息、当前模型状态(也即第二模型的状态描述信息)、目标模型状态(也即目标状态信息)、模型版本2;依次类推,直至当前模型状态达到目标模型状态。其中,模型版本1表示第一RAN提供的模型是对初始模型进行第一次模型训练得到的模型,或者说,第一RAN为第一次对初始模型进行模型训练的RAN。模型版本2表示第二RAN提供的模型是对初始模型进行第二次模型训练得到的模型,或者说,第二RAN为第二次 对初始模型进行模型训练的RAN。
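图7所示的模型逐跳训练流程可以用如下示意性的Python代码概括。其中train_once为本示例假设的单节点训练接口,状态字段仅以准确性为例,代码只用于体现"当前模型状态未达到目标模型状态则继续流动、模型版本逐跳加1"的逻辑,并非实际实现。

```python
# 示意性示例:模型在多个RAN之间逐跳训练,直至当前模型状态达到目标模型状态
def relay_training(model, current_state, target_state, ran_list, train_once):
    version = 0
    for ran in ran_list:                        # 对应图7中编号1,2,3,...的RAN
        if current_state.get("accuracy", 0.0) >= target_state["accuracy"]:
            break                               # 当前模型状态已达到目标模型状态,停止流动
        # train_once 为假设接口:在该RAN上执行一次模型训练,返回更新后的模型及当前状态信息
        model, current_state = train_once(ran, model, current_state)
        version += 1                            # 模型版本加1,随模型的相关信息发给下一个RAN
    return model, current_state, version
```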
可以理解,方法600主要以模型训练任务为例进行了示例性说明,可以理解,上述模型训练任务可以替换为其它任何与模型相关的任务。
还可以理解,上述各个步骤仅是示例性说明,对此不作严格限定。此外,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。例如,上述步骤604和步骤602之间并没有严格的先后顺序,如可以先执行步骤604,再执行步骤602;或者也可以先执行步骤602,再执行步骤604;或者也可以同步进行,对此不作限定。
还可以理解,上述主要以一个RAN确定下一个协作RAN为例进行示例性说明,对此不予限制。例如,一个RAN可以确定多个协作RAN,并由多个协作RAN协作执行AI任务。
上文结合图6示例地介绍了终端向RAN请求模型相关的AI任务的场景。基于上述实施例,多个RAN可协作完成终端请求的AI任务。此外,各个RAN可基于从上一RAN收到的模型的相关信息协作执行AI任务。
参见图8,作为示例,图8是根据本申请另一实施例提供的AI任务指示的方法800的示意性流程图。该方法800可以用于实现如方法400的方案。方法800可以适用于终端向RAN请求数据集相关的AI任务的场景。作为示例,方法800可以包括如下步骤。
801,AI-MF维护至少一个RAN的AI能力。
其中,RAN的AI能力,可以包括以下至少一项:RAN的优先级、RAN支持的算力、RAN的硬件能力、RAN支持的AI任务(或者说RAN能执行的操作类型)、RAN本地AI模型的性能、RAN本地数据集的性能。进一步可选地,若RAN的AI能力包括RAN支持的AI任务,则RAN的AI能力还包括RAN支持的AI任务关联的参数。
例如,若RAN的AI能力包括RAN支持的AI任务,且RAN支持的AI任务包括数据清理操作,则进一步可选地,RAN的AI能力包括数据清理操作关联的参数。作为示例,数据清理操作关联的参数,包括以下至少一项:对特定属性的数据增补、冗余识别、真伪验证等。
再例如,若RAN的AI能力包括RAN支持的AI任务,且RAN支持的AI任务包括数据扩增操作,则进一步可选地,RAN的AI能力包括数据扩增操作关联的参数。作为示例,数据扩增操作关联的参数,包括支持的扩增策略,如对单一数据源的数据增强(单样本增强、多样本增强、生成对抗网络(generative adversarial networks,GAN)生成、自动增强等),对多数据源的数据集成等。
再例如,若RAN的AI能力包括RAN支持的AI任务,且RAN支持的AI任务包括数据归约操作,则进一步可选地,RAN的AI能力包括数据归约操作的参数。作为示例,数据归约操作关联的参数,包括采用的归约策略,如包括针对特定任务的维度归约、维度变换等。
作为示例,RAN的AI能力可以以表格,函数,或,字符串的形式存在,如存储或传输,如下表4为以表格形式呈现RAN的AI能力的示例。
表4
[表4原文以图片形式给出,示例性列出各RAN支持的与数据集相关的AI任务及其本地数据集性能(如准确性、时效性)]
可以理解,表4和表3的区别在于,表3中主要以与模型相关的AI任务为例进行说明,表4中主要以与数据集相关的AI任务为例进行说明。
以表4为例,对于RAN#1来说,RAN#1支持的AI任务包括任务A,且任务A的准确性为value 1,时效性为value2。其中,准确性可以表征数据集在测试模型下的性能,时效性可以表征数据集的生成时间。
一示例,RAN支持的AI任务可以通过至少一个比特表示。以与数据集相关的AI任务为例,例如,假设与数据集相关的AI任务包括:数据清理、数据扩增、数据归约、数据转换,且通过2比特来指示RAN支持的AI任务。若该比特设置为“00”,则表示RAN支持的AI任务为数据清理;若该比特设置为“01”,则表示RAN支持的AI任务为数据扩增;若该比特设置为“10”,则表示RAN支持的AI任务为数据归约;若该比特设置为“11”,则表示RAN支持的AI任务为数据转换。应理解,上述仅是一种示例性说明,不予限制。
另一示例,RAN支持的AI任务可以通过bitmap表示。以与数据集相关的AI任务为例,例如,假设与数据集相关的AI任务包括:数据清理、数据扩增、数据归约、数据转换,且比特取值为“1”表示支持,比特取值为“0”表示不支持。举例来说,若RAN支持的AI任务表示为“0110”,“0110”中的4个比特分别对应数据清理、数据扩增、数据归约、数据转换,因此“0110”表示该RAN支持数据扩增和数据归约,且不支持数据清理和数据转换。再举例来说,若RAN支持的AI任务表示为“1011”,“1011”中的4个比特分别对应数据清理、数据扩增、数据归约、数据转换,因此“1011”表示该RAN支持数据清理、数据归约、以及数据转换,且不支持数据扩增。可以理解,上述例子为示例性说明,本申请实施例不限于此。
可以理解,表4仅是示例性说明,对此不予限制,任何属于表4的变形,都适用于本申请。例如,表4中的还可以包括更多数量的RAN。再例如,表4中RAN#1和RAN#2支持的AI任务不同,或者RAN#1和RAN#2支持更多数量的AI任务。再例如,表4中还可以包括更多数量的表征本地数据集的性能的参数。
在本申请实施例中主要以与数据集相关的AI任务为例进行示例性说明,因此,上述关于RAN的AI能力主要介绍了与数据集相关的能力,对此不予限制。
在本申请实施例中,假设终端向第一RAN发布AI任务,且终端向第一RAN发布的AI任务为数据扩增。
802,终端向第一RAN发送初始数据集的相关信息。
其中,初始数据集为待执行数据扩增任务的数据集。
其中,初始数据集的相关信息,可以包括以下至少一项:当前状态信息、目标状态信息、区域信息、初始数据集的版本。下面简单介绍一下上述各项信息。
1)当前状态信息,可用于描述数据集在当前节点生成时的状态。作为示例,当前状 态信息,可以包括以下至少一项信息:准确性、时效性、成分、属性。其中,准确性可以表征数据集在若干测试模型下的性能。时效性可以表征数据集的生成时间。成分可以表征数据集包含数据的成分。属性可以表征数据集包含数据的类型、量化、维度等。
可以理解,对于初始数据集来说,也可以不携带当前状态信息。
2)目标状态信息,或者说初始数据集的目标状态信息,可用于描述数据集的最终状态,或者说可用于描述数据集在网络中停止流动时的状态。作为示例,目标状态信息,包括以下至少一项信息:准确性、时效性、成分、属性。关于各项信息可参考前面的描述,此处不再赘述。
3)区域信息,用于辅助当前节点决策协作执行数据集扩增任务的其它节点。例如,初始数据集的相关信息中的区域信息,可用于辅助第一RAN节点决策协作执行数据扩增任务的RAN。
4)初始数据集的版本,如记为t1,表示终端提供的初始数据集执行过t1次数据集扩增,t1为大于0或等于0的整数。具体可参考方法600中模型的版本的相关描述,此处不再赘述。
可以理解,上述信息为示例性说明,对此不予限制。
关于步骤802,可以包括如下实现方式。
一种可能的实现方式,在步骤802中,终端向第一RAN发送初始数据集的相关信息,该初始数据集的相关信息可隐式指示第一RAN需要对该初始数据集进行数据集扩增操作。举例来说,初始数据集的相关信息包括当前状态信息和目标状态信息,第一RAN根据当前状态信息和目标状态信息不一致,确定对该初始数据集进行数据集扩增操作。
另一种可能的实现方式,在步骤802中,终端向第一RAN发送指示信息和初始数据集的相关信息,该指示信息指示对初始数据集进行数据集扩增。
803,第一RAN对初始数据集进行数据扩增,得到第一数据集。
第一RAN可以基于终端在步骤802中提供的初始数据集执行数据扩增任务。为区分,将第一RAN对初始数据集进行数据扩增得到的数据集记为第一数据集。
在本申请实施例中,假设第一RAN无法独自完成数据扩增任务,也即第一RAN对初始数据集进行数据扩增得到的第一数据集的状态不满足终端所需的目标状态,因此第一RAN可借助其它RAN的协作完成数据扩增任务。假设第一RAN确定的协作执行数据扩增任务的RAN为第二RAN。
可选地,第一RAN还可调度本小区内的终端参与操作,如参与对初始数据集进行数据扩增。具体的实现,可以参考方法500中的相关描述,此处不再赘述。
804,第一RAN从AI-MF获取第二RAN的AI能力。
假设步骤801中的至少一个RAN包括第二RAN,也即AI-MF维护第二RAN的AI能力,那么第一RAN可从AI-MF获取第二RAN的AI能力。
关于第一RAN和第二RAN,以及第一RAN从AI-MF获取第二RAN的AI能力的方式,可以参考步骤604中的相关描述,此处不再赘述。
805,第一RAN向第二RAN发送第一数据集的相关信息。
其中,第一数据集为第一RAN执行数据集扩增后得到的数据集。
其中,第一数据集的相关信息,可以包括以下至少一项:当前状态信息、目标状态信息、区域信息、第一数据集的版本。下面简单介绍一下当前状态信息、区域信息、以及第一数据集的版本,其它未详细介绍的可参考步骤802中的相关描述。
1)当前状态信息:如前所述,当前状态信息用于描述数据集在当前节点生成时的状态,因此此处第一RAN向第二RAN提供的当前状态信息,表示第一数据集的当前状态信息,用于描述该第一数据集在第一RAN生成时的状态。
2)区域信息:如前所述,区域信息用于辅助当前节点决策协作执行数据扩增任务的其它节点,因此此处第一数据集的相关信息中的区域信息可用于辅助第二RAN决策协作执行数据扩增任务的RAN。该区域信息可参考步骤605中的区域信息,此处不再赘述。
3)第一数据集的版本,如记为t2,表示第一RAN提供的第一数据集执行过t2次数据扩增,t2为大于1或等于1的整数。例如,第一数据集的版本为1,表示第一RAN提供的第一数据集已执行过1次数据扩增,换句话说第一数据集为对初始数据集执行过1次数据扩增的数据集,或者说,第一RAN为第一次对初始数据集进行数据扩增的RAN。
第一RAN向第二RAN发送第一模型的相关信息的相关方案,可以参考步骤605中的描述,此处不再赘述。
806,第二RAN对第一数据集进行数据扩增,得到第二数据集。
第二RAN可以基于第一RAN生成的第一数据集执行数据扩增任务。为区分,将第二RAN对第一数据集进行数据扩增得到的数据集记为第二数据集。
第一种可能的情况,第二RAN执行数据扩增任务后,数据集达到目标状态,也即第二数据集的状态符合目标状态,则方法800还包括步骤807。
第二种可能的情况,第二RAN执行数据扩增任务后,数据集未达到目标状态,也即第二数据集的状态不符合目标状态,则第二RAN可以获取第三RAN的AI能力,并且向第三RAN发送第二数据集的相关信息,具体可参考步骤804和步骤805,此处不再赘述。依次类推,直到数据集达到目标状态,并且向终端发送最终生成的数据集。
可选地,第二RAN还可调度本小区内的终端参与操作,如参与对第一数据集进行数据扩增。具体的实现,可以参考方法500中的相关描述,此处不再赘述。
807,第二RAN向终端发送第二数据集。
假设第二RAN执行数据扩增任务后,数据集达到目标状态,也即第二数据集的状态符合目标状态,则一种可能的实现方式,第二RAN向终端发送第二数据集;或者另一种可能的实现方式,第二RAN向第一RAN发送第二数据集,由第一RAN向终端转发该第二数据集,对此不予限制。
可以理解,方法800主要以数据集相关任务为例进行了示例性说明,可以理解,上述数据扩增任务可以替换为其它任何与数据集相关的任务。
还可以理解,上述各个步骤仅是示例性说明,对此不作严格限定。此外,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。例如,上述步骤804和步骤802之间并没有严格的先后顺序,如可以先执行步骤804,再执行步骤802;或者也可以先执行步骤802,再执行步骤804;或者也可以同步进行,对此不作限定。
还可以理解,上述主要以一个RAN确定下一个协作RAN为例进行示例性说明,对此不予限制。例如,一个RAN可以确定多个协作RAN,并由多个协作RAN协作执行AI 任务。
上文结合图8示例地介绍了终端向RAN请求数据集相关的AI任务的场景。基于上述实施例,多个RAN可协作完成终端请求的AI任务。此外,各个RAN可基于从上一RAN收到的数据集的相关信息协作执行AI任务。
参见图9,作为示例,图9是根据本申请另一实施例提供的AI任务指示的方法900的示意性流程图。该方法900可以用于实现如方法300的方案。方法900可以适用于终端发起AI任务请求的场景。作为示例,方法900可以包括如下步骤。
901,AI-MF维护至少一个RAN的AI能力。
步骤901可参考步骤601或步骤801中的描述,此处不再赘述。
902,终端向AI-MF发送任务请求信息。
其中,任务请求信息用于请求执行AI任务,换句话说,用于请求AI-MF确定执行AI任务的编排信息。为区分,将终端请求执行的AI任务记为AI任务#1。
作为示例,AI任务#1例如可以包括:与模型相关的AI任务、与数据集相关的AI任务等。
一种可能的实现方式,在终端向AI-MF发送任务请求信息之前,终端与AI-MF建立连接,终端基于与AI-MF建立的连接,向AI-MF发送任务请求信息。另一种可能的实现方式,终端通过其它设备(如RAN)向AI-MF发送任务请求信息。
903,AI-MF基于至少一个RAN的AI能力,为AI任务#1确定编排表。
AI-MF收到来自终端的任务请求信息后,可以基于至少一个RAN的AI能力,为AI任务#1确定编排表。
其中,编排表包括N个RAN的编排信息,N为大于1或等于1的整数。也就是说,在步骤903中,AI-MF基于至少一个RAN的AI能力,为AI任务#1确定N个RAN的编排信息。
关于编排表、编排信息、以及AI-MF基于至少一个RAN的AI能力为AI任务#1确定编排表的方案,可以参考方法300中的相关描述,此处不再赘述。
904,AI-MF向N个RAN中的至少一个RAN发送编排表或编排信息。
AI-MF向N个RAN中的至少一个RAN发送编排表或编排信息,可以包括如下实现方式。
第一种可能的实现方式,AI-MF向N个RAN中的各个RAN发送编排表。
第二种可能的实现方式,AI-MF向N个RAN中的一个RAN发送编排表。
第三种可能的实现方式,AI-MF向N个RAN中的各个RAN发送各个RAN的编排信息。
关于上述三种实现方式,可以参考方法300中关于各个网络节点的编排信息的传输方式,此处不再赘述。
905,N个RAN中的至少一个RAN向AI-MF发送响应信息。
其中,响应信息可用于向AI-MF通知成功接收编排信息或者编排表,或者可用于向AI-MF通知是否同意编排信息或者编排表。
一种可能的实现方式,若在步骤904中,AI-MF向N个RAN中的各个RAN发送编排表,或者,AI-MF向N个RAN中的各个RAN发送各个RAN的编排信息,则在步骤 905中,该N个RAN分别向AI-MF发送响应信息。
另一种可能的实现方式,若在步骤904中,AI-MF向N个RAN中的一个RAN(如记为第一RAN)发送编排表,则在步骤905中,该第一RAN向AI-MF发送响应信息。
关于响应信息的具体实现,可参考方法300中的相关描述,此处不再赘述。
方法900主要以各RAN同意各自的编排信息为例进行说明,关于RAN不同意编排信息的方案,可以参考方法300中的相关描述。
906,AI-MF向终端发送任务请求信息的响应信息。
其中,任务请求信息的响应信息可用于向终端通知已为终端请求的AI任务#1确定编排表,这样终端可以向参与执行AI任务#1的RAN提供初始模型或初始数据集。可以理解,若AI-MF确定编排表失败,如AI-MF在步骤901中维护的至少一个RAN的AI能力中,各个RAN均不支持AI任务#1,则AI-MF也可以向终端发送任务请求信息的响应信息,该任务请求信息的响应信息用于向终端通知无法为终端请求的AI任务#1提供编排表。
一种可能的实现方式,AI-MF在收到编排信息的响应后,向终端发送任务请求信息的响应。另一种可能的实现方式,AI-MF为AI任务#1确定编排表后,向终端发送任务请求信息的响应。
907,终端向N个RAN发送AI任务#1。
一种可能的实现方式,终端向N个RAN中的第一个RAN发送AI任务#1。
其中,第一个RAN表示N个RAN第一个执行该AI任务#1的RAN。
例如,若AI任务#1为模型训练任务,则终端向N个RAN中的第一个RAN发送初始模型。
再例如,若AI任务#1为数据集采集任务,则终端向N个RAN中的第一个RAN发送需要采集的数据集的属性。
908,N个RAN协作执行AI任务#1。
协作执行AI任务的方式包括:基于上一个RAN执行的AI任务的结果继续进行,或者各个RAN同时执行各个RAN负责的任务。
可选地,RAN还可调度本小区内的终端参与操作。具体的实现,可以参考方法500中的相关描述,此处不再赘述。
909,RAN向终端发送AI任务#1的处理结果。
步骤909中的RAN,可以是N个RAN中的任一个RAN。例如,步骤909中的RAN可以是参与执行AI任务#1的最后一个RAN,或者也可以是参与执行AI任务#1的第一个RAN,对此不予限制。
可以理解,上述各个步骤仅是示例性说明,对此不作严格限定。此外,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
上文结合图9示例地介绍了控制节点AI-MF为AI任务确认编排表的场景。基于上述实施例,由控制节点确定各网络节点RAN执行AI任务的操作,可以提高全局效率。
可以理解,在上述实施例中,多个RAN执行某一AI任务时,各个RAN可以执行该AI任务的部分任务,进而共同完成该AI任务。
还可以理解,在上述实施例中,主要以多个RAN依次执行终端请求的AI任务为例进行示例性说明,对此不予限制。例如,AI-MF确定各个RAN负责的任务,各个RAN可以同时或者说同步执行各自负责的任务。
还可以理解,在本申请的各实施例中涉及到一些消息或信息名称,其命名不对本申请实施例的保护范围造成限定。以A向B发送消息为例,只要可以用于A和B之间的消息都适用于本申请实施例。
还可以理解,在上述一些实施例中,多次提及发送消息。以A向B发送消息为例,A向B发送消息,可以包括A直接向B发送消息,也可以包括A通过其它装置向B发送消息,对此不予限制。
还可以理解,本申请的各实施例中的一些可选地特征,在某些场景下,可以不依赖于其它特征,也可以在某些场景下,与其它特征进行结合,不作限定。
还可以理解,本申请的各实施例中的方案可以进行合理的组合使用,并且实施例中出现的各个术语的解释或说明可以在各个实施例中互相参考或解释,对此不作限定。
还可以理解,上述各个方法实施例中,由设备(如终端,又如控制节点,又如网络节点)实现的方法和操作,也可以由设备的组成部件(例如芯片或者电路)来实现。
相应于上述各方法实施例给出的方法,本申请实施例还提供了相应的装置,所述装置包括用于执行上述各个方法实施例相应的模块。该模块可以是软件,也可以是硬件,或者是软件和硬件结合。可以理解的是,上述各方法实施例所描述的技术特征同样适用于以下装置实施例。
参见图10,作为示例,图10是本申请实施例提供的一种通信装置1000的示意性框图。该装置1000包括收发单元1010和处理单元1020。收发单元1010可以用于实现相应的通信功能。收发单元1010还可以称为通信接口或通信单元。处理单元1020可以用于实现相应的处理功能,如确定编排信息,又如执行AI任务等。
作为一种设计,该装置1000用于执行图3所示实施例中控制节点执行的步骤或者流程,图9所示实施例中AI-MF执行的步骤或者流程。
一种可能的实现方式,处理单元1020,用于为AI任务确定第一编排信息,第一编排信息指示第一网络节点执行AI任务的第一任务;收发单元1010,用于向第一网络节点发送第一编排信息。
一示例,处理单元1020,还用于为AI任务确定第二编排信息,第二编排信息指示第二网络节点执行AI任务的第二任务;收发单元1010,还用于向第一网络节点发送第二编排信息,或者,向第二网络节点发送第二编排信息。
又一示例,处理单元1020,还用于为AI任务确定第二编排信息,第二编排信息指示第二网络节点执行AI任务的第二任务;收发单元1010,还用于向第二网络节点发送第一编排信息和第二编排信息;收发单元1010,用于向第一网络节点发送第一编排信息包括:收发单元1010,用于向第一网络节点发送第一编排信息和第二编排信息。
又一示例,第一网络节点为参与执行AI任务的第一个网络节点。
又一示例,第一编排信息包括以下至少一项信息:第一任务、第一网络节点的标识、第一网络节点执行第一任务提供的资源、第一网络节点执行第一任务的退出条件。
又一示例,处理单元1020,用于为AI任务确定第一编排信息,包括:处理单元1020,用于根据第一网络节点的AI能力,为AI任务确定第一编排信息。
又一示例,收发单元1010,还用于接收来自第一网络节点的响应信息,响应信息指示第一网络节点是否同意第一编排信息。
作为另一种设计,该装置1000用于执行图3所示实施例中网络节点执行的步骤或者流程,图9所示实施例中RAN执行的步骤或者流程。
一种可能的实现方式,收发单元1010,用于接收来自控制节点的第一编排信息,第一编排信息指示第一网络节点执行AI任务的第一任务;处理单元1020,用于根据第一编排信息,执行第一任务。
一示例,收发单元1010,用于接收来自控制节点的第一编排信息,包括:收发单元1010,用于接收来自控制节点的第一编排信息和第二编排信息,第二编排信息指示第二网络节点执行AI任务的第二任务;收发单元1010,还用于向第二网络节点发送第二编排信息。
又一示例,收发单元1010,用于向第二网络节点发送第二编排信息,包括:收发单元1010,用于向第二网络节点发送第一任务的处理结果和第二编排信息。
又一示例,第一网络节点为参与执行AI任务的第一个网络节点。
又一示例,第一编排信息包括以下至少一项信息:第一任务、第一网络节点的标识、第一网络节点执行第一任务提供的资源、第一网络节点执行第一任务的退出条件。
又一示例,收发单元1010,还用于向控制节点发送第一网络节点的AI能力。
又一示例,收发单元1010,还用于向控制节点发送响应信息,响应信息指示第一网络节点是否同意第一编排信息。
又一示例,收发单元1010,还用于向至少一个终端装置发送第一任务或第一任务的部分任务;或者,向第二网络节点发送第一任务或第一任务的部分任务,第二网络节点为参与执行AI任务的至少一个网络节点。
又一示例,至少一个终端装置处于预设状态。
又一示例,收发单元1010,还用于向至少一个终端装置发送通知信息,通知信息通知将至少一个终端装置调整为预设状态。
作为另一种设计,该装置1000用于执行图4所示实施例中第一网络节点执行的步骤或者流程,图6或图8所示实施例中第一RAN执行的步骤或者流程。
一种可能的实现方式,收发单元1010,用于向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息,目标状态信息用于指示AI任务的目标结果。
一示例,收发单元1010,用于向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息,包括:收发单元1010,用于基于第二网络节点的AI能力,向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息。
又一示例,收发单元1010,还用于向控制节点或第二网络节点发送第一请求信息,第一请求信息请求第二网络节点的AI能力;接收第一请求信息的响应信息,第一请求信息的响应信息指示第二网络节点的AI能力。
又一示例,收发单元1010,还用于向第二网络节点发送第二请求信息,第二请求信息请求第二网络节点协作执行AI任务。
又一示例,第一任务的处理结果表示AI任务的当前状态信息。
又一示例,收发单元1010,还用于还向第二网络节点发送区域信息,区域信息用于 第二网络节点确定协作执行AI任务的网络节点。
又一示例,收发单元1010,还用于向至少一个终端装置发送第一任务或第一任务的部分任务。
又一示例,至少一个终端装置处于预设状态。
又一示例,收发单元1010,还用于向至少一个终端装置发送通知信息,通知信息通知将至少一个终端装置调整为预设状态。
作为另一种设计,该装置1000用于执行图4所示实施例中第二网络节点执行的步骤或者流程,图6或图8所示实施例中第二RAN执行的步骤或者流程。
一种可能的实现方式,收发单元1010,用于接收来自第一网络节点的AI任务的第一任务的处理结果和目标状态信息,目标状态信息用于指示AI任务的目标结果;处理单元1020,用于基于第一任务的处理结果和目标状态信息,执行AI任务的第二任务。
一示例,收发单元1010,还用于向控制节点或第一网络节点发送第二网络节点的AI能力。
又一示例,收发单元1010,还用于接收来自第一网络节点的第二请求信息,第二请求信息请求第二网络节点协作执行AI任务。
又一示例,第一任务的处理结果表示AI任务的当前状态信息;处理单元1020,用于基于第一任务的处理结果和目标状态信息,执行AI任务的第二任务,包括:处理单元1020,用于基于AI任务的当前状态信息和目标状态信息,执行AI任务的第二任务。
又一示例,收发单元1010,还用于接收来自第一网络节点的区域信息,区域信息用于第二网络节点确定协作执行AI任务的网络节点。
又一示例,收发单元1010,还用于向至少一个终端装置发送第二任务或第二任务的部分任务。
又一示例,至少一个终端装置处于预设状态。
又一示例,收发单元1010,还用于向至少一个终端装置发送通知信息,通知信息通知将至少一个终端装置调整为预设状态。
作为另一种设计,该装置1000用于执行图5所示实施例中网络节点执行的步骤或者流程。
一种可能的实现方式,收发单元1010,用于向至少一个终端装置发送AI任务,其中,至少一个终端装置处于预设状态。
又一示例,收发单元1010,还用于向至少一个终端装置发送通知信息,通知信息通知将至少一个终端装置调整为预设状态。
作为另一种设计,该装置1000用于执行图5所示实施例中终端执行的步骤或者流程。
一种可能的实现方式,收发单元1010,用于接收来自网络节点的AI任务,其中,终端装置处于预设状态;处理单元1020,用于执行AI任务。
又一示例,收发单元1010,还用于接收来自网络节点的通知信息,通知信息通知将终端装置调整为预设状态。
应理解,各单元执行上述相应步骤的具体过程在上述各方法实施例中已经详细说明,为了简洁,在此不再赘述。
还应理解,这里的装置1000以功能单元的形式体现。这里的术语“单元”可以指应 用特有集成电路(application specific integrated circuit,ASIC)、电子电路、用于执行一个或多个软件或固件程序的处理器(例如共享处理器、专有处理器或组处理器等)和存储器、合并逻辑电路和/或其它支持所描述的功能的合适组件。
示例地,本申请实施例提供的装置1000的产品实现形态是可以在计算机上运行的程序代码。
示例地,本申请实施例提供的装置1000可以是通信设备,也可以是应用于通信设备上的芯片、芯片系统(例如:片上系统(system on chip,SoC))或电路。当该装置1000为通信设备时,收发单元1010可以是收发器,或,输入/输出接口;处理单元1020可以是处理器。当该装置1000为用于通信设备中的芯片、芯片系统或电路时,收发单元1010可以是该芯片、芯片系统或电路上的输入/输出接口、接口电路、输出电路、输入电路、管脚或相关电路等;处理单元1020可以是处理器、处理电路或逻辑电路等。
此外,上述收发单元1010还可以是收发电路(例如可以包括接收电路和发送电路),处理单元可以是处理电路。
参见图11,作为示例,图11是本申请实施例提供的一种通信装置1100的示意性框图。该装置1100包括处理器1110,处理器1110与存储器1120耦合。可选地,还包括存储器1120,用于存储计算机程序或指令和/或数据,处理器1110用于执行存储器1120存储的计算机程序或指令,或读取存储器1120存储的数据,以执行上文各方法实施例中的方法。
可选地,处理器1110为一个或多个。
可选地,存储器1120为一个或多个。
可选地,该存储器1120与该处理器1110集成在一起,或者分离设置。
可选地,如图11所示,该装置1100还包括收发器1130,收发器1130用于信号的接收和/或发送。例如,处理器1110用于控制收发器1130进行信号的接收和/或发送。
作为一种方案,该装置1100用于实现上文各个方法实施例中由控制节点执行的操作。
例如,处理器1110用于执行存储器1120存储的计算机程序或指令,以实现上文各个方法实施例中控制节点的相关操作。例如,图3所示实施例中控制节点执行的方法,或图9所示实施例中AI-MF执行的方法。
作为另一种方案,该装置1100用于实现上文各个方法实施例中由网络节点执行的操作。
例如,处理器1110用于执行存储器1120存储的计算机程序或指令,以实现上文各个方法实施例中网络节点的相关操作。例如,图3所示实施例中网络节点执行的方法,以及图9所示实施例中RAN执行的方法;或者,图4所示实施例中第一网络节点执行的方法以及图6和图8所示实施例中第一RAN执行的方法;或者,图4所示实施例中第二网络节点执行的方法以及图6和图8所示实施例中第二RAN执行的方法;或者,图5所示实施例中网络节点执行的方法。
作为另一种方案,该装置1100用于实现上文各个方法实施例中由终端执行的操作。
例如,处理器1110用于执行存储器1120存储的计算机程序或指令,以实现上文各个方法实施例中网络节点的相关操作。例如,图5所示实施例中终端执行的方法。
在实现过程中,上述方法的各步骤可以通过处理器1110中的硬件的集成逻辑电路或 者软件形式的指令完成。结合本申请实施例所公开的方法可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1120,处理器1110读取存储器1120中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应理解,本申请实施例中,处理器可以为一个或多个集成电路,用于执行相关程序,以执行本申请方法实施例。
处理器(例如,处理器1110)可包括一个或多个处理器并实现为计算设备的组合。处理器可分别包括以下一种或多种:微处理器、微控制器、数字信号处理器(digital signal processor,DSP)、数字信号处理设备(digital signal processing device,DSPD)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)、可编程逻辑器件(programmable logic device,PLD)、选通逻辑、晶体管逻辑、分立硬件电路、处理电路或其它合适的硬件、固件和/或硬件和软件的组合,用于执行本公开中所描述的各种功能。处理器可以是通用处理器或专用处理器。例如,处理器1110可以是基带处理器或中央处理器。基带处理器可用于处理通信协议和通信数据。中央处理器可用于使装置执行软件程序,并处理软件程序中的数据。此外,处理器的一部分还可以包括非易失性随机存取存储器。例如,处理器还可以存储设备类型的信息。
本申请中的程序在广义上用于表示软件。软件的非限制性示例包括:程序代码、程序、子程序、指令、指令集、代码、代码段、软件模块、应用程序、或软件应用程序等。程序可以在处理器和/或计算机中运行。以使得装置执行本申请中描述的各种功能和/或过程。
存储器(例如,存储器1120)可存储供处理器(例如,处理器1110)在执行软件时所需的数据。存储器可以使用任何合适的存储技术实现。例如,存储器可以是处理器和/或计算机能够访问的任何可用存储介质。存储介质的非限制性示例包括:随机存取存储器(random access memory,RAM)、只读存储器(read-only memory,ROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)、光盘只读存储器(Compact Disc-ROM,CD-ROM)、静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)、可移动介质、光盘存储器、磁盘存储介质、磁存储设备、闪存、寄存器、状态存储器、远程挂载存储器、本地或远程存储器组件,或能够携带或存储软件、数据或信息并可由处理器/计算机访问的任何其它介质。需要说明的是,本文描述的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
存储器(例如,存储器1120)和处理器(例如,处理器1110)可以分开设置或集成在一起。存储器可以用于与处理器连接,使得处理器能够从存储器中读取信息,在存储器中存储和/或写入信息。存储器可以集成在处理器中。存储器和处理器可以设置在集成电路中(例如,该集成电路可以设置在UE或BS或其他网络节点中)。
参见图12,作为示例,图12是本申请实施例提供的一种芯片系统1200的示意性框 图。该芯片系统1200(或者也可以称为处理系统)包括逻辑电路1210以及输入/输出接口(input/output interface)1220。
其中,逻辑电路1210可以为芯片系统1200中的处理电路。逻辑电路1210可以耦合连接存储单元,调用存储单元中的指令,使得芯片系统1200可以实现本申请各实施例的方法和功能。输入/输出接口1220,可以为芯片系统1200中的输入输出电路,将芯片系统1200处理好的信息输出,或将待处理的数据或信令信息输入芯片系统1200进行处理。
作为一种方案,该芯片系统1200用于实现上文各个方法实施例中由控制节点执行的操作。
例如,逻辑电路1210用于实现上文方法实施例中由控制节点执行的处理相关的操作,如,图3所示实施例中控制节点执行的处理相关的操作,或图9所示实施例中AI-MF执行的处理相关的操作;输入/输出接口1220用于实现上文方法实施例中由控制节点执行的发送和/或接收相关的操作,如,图3所示实施例中的控制节点执行的发送和/或接收相关的操作,或图9所示实施例中AI-MF执行的发送和/或接收相关的操作。
作为另一种方案,该芯片系统1200用于实现上文各个方法实施例中由网络节点执行的操作。
例如,逻辑电路1210用于实现上文方法实施例中由网络节点执行的处理相关的操作,如,图3所示实施例中网络节点执行的处理相关的操作,或图9所示实施例中RAN执行的处理相关的操作;输入/输出接口1220用于实现上文方法实施例中由网络节点执行的发送和/或接收相关的操作,如,图3所示实施例中的网络节点执行的发送和/或接收相关的操作,或图9所示实施例中RAN执行的发送和/或接收相关的操作。
再例如,逻辑电路1210用于实现上文方法实施例中由网络节点执行的处理相关的操作,如,图4所示实施例中第一网络节点和第二网络节点执行的处理相关的操作,或图6和图8所示实施例中第一RAN和第二RAN执行的处理相关的操作;输入/输出接口1220用于实现上文方法实施例中由网络节点执行的发送和/或接收相关的操作,如,图4所示实施例中的第一网络节点和第二网络节点执行的发送和/或接收相关的操作,或图6和图8所示实施例中第一RAN和第二RAN执行的发送和/或接收相关的操作。
作为另一种方案,该芯片系统1200用于实现上文各个方法实施例中由终端执行的操作。
例如,逻辑电路1210用于实现上文方法实施例中由终端执行的处理相关的操作,如,图5所示实施例中终端执行的处理相关的操作;输入/输出接口1220用于实现上文方法实施例中由终端执行的发送和/或接收相关的操作,如,图5所示实施例中的终端执行的发送和/或接收相关的操作。
本申请实施例还提供一种计算机可读存储介质,其上存储有用于实现上述各方法实施例中由控制节点或网络节点或终端执行的方法的计算机指令。
本申请实施例还提供一种计算机程序产品,包含指令,该指令被计算机执行时以实现上述各方法实施例中由控制节点或网络节点或终端执行的方法。
本申请实施例还提供一种通信系统,该通信系统包括上文各实施例中的控制节点、网络节点、终端中的至少一项。
上述提供的任一种装置中相关内容的解释及有益效果均可参考上文提供的对应的方 法实施例,此处不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅是示意性的,例如,上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。此外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元实现本申请提供的方案。
另外,在本申请各个实施例中的各功能单元可以集成在一个单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。例如,计算机可以是个人计算机,服务器,或者网络设备等。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。关于计算机可读存储介质,可以参考上文描述。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (46)

  1. 一种人工智能AI任务指示的方法,其特征在于,包括:
    控制节点为AI任务确定第一编排信息,所述第一编排信息指示第一网络节点执行所述AI任务的第一任务;
    所述控制节点向所述第一网络节点发送所述第一编排信息。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述控制节点为所述AI任务确定第二编排信息,所述第二编排信息指示第二网络节点执行所述AI任务的第二任务;
    所述控制节点向所述第一网络节点发送所述第二编排信息,或者,所述控制节点向所述第二网络节点发送所述第二编排信息。
  3. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述控制节点为所述AI任务确定第二编排信息,所述第二编排信息指示第二网络节点执行所述AI任务的第二任务;
    所述控制节点向所述第二网络节点发送所述第一编排信息和所述第二编排信息;
    所述控制节点向所述第一网络节点发送所述第一编排信息,包括:
    所述控制节点向所述第一网络节点发送所述第一编排信息和所述第二编排信息。
  4. 根据权利要求1至3中任一项所述的方法,其特征在于,所述第一网络节点为参与执行所述AI任务的第一个网络节点。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,
    所述第一编排信息包括以下至少一项信息:所述第一任务、所述第一网络节点的标识、所述第一网络节点执行所述第一任务提供的资源、所述第一网络节点执行所述第一任务的退出条件。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述控制节点为AI任务确定第一编排信息,包括:
    所述控制节点根据所述第一网络节点的AI能力,为所述AI任务确定所述第一编排信息。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,所述方法还包括:
    所述控制节点接收来自所述第一网络节点的响应信息,所述响应信息指示所述第一网络节点是否同意所述第一编排信息。
  8. 一种人工智能AI任务指示的方法,其特征在于,包括:
    第一网络节点接收来自控制节点的第一编排信息,所述第一编排信息指示所述第一网络节点执行AI任务的第一任务;
    所述第一网络节点根据所述第一编排信息,执行所述第一任务。
  9. 根据权利要求8所述的方法,其特征在于,第一网络节点接收来自控制节点的第一编排信息,包括:
    所述第一网络节点接收来自所述控制节点的所述第一编排信息和第二编排信息,所述第二编排信息指示第二网络节点执行所述AI任务的第二任务;
    所述方法还包括:
    所述第一网络节点向所述第二网络节点发送所述第二编排信息。
  10. 根据权利要求9所述的方法,其特征在于,所述第一网络节点向所述第二网络节点发送所述第二编排信息,包括:
    所述第一网络节点向所述第二网络节点发送所述第一任务的处理结果和所述第二编排信息。
  11. 根据权利要求8至10中任一项所述的方法,其特征在于,所述第一网络节点为参与执行所述AI任务的第一个网络节点。
  12. 根据权利要求8至11中任一项所述的方法,其特征在于,
    所述第一编排信息包括以下至少一项信息:所述第一任务、所述第一网络节点的标识、所述第一网络节点执行所述第一任务提供的资源、所述第一网络节点执行所述第一任务的退出条件。
  13. 根据权利要求8至12中任一项所述的方法,其特征在于,所述方法还包括:
    所述第一网络节点向所述控制节点发送所述第一网络节点的AI能力。
  14. 根据权利要求8至13中任一项所述的方法,其特征在于,所述方法还包括:
    所述第一网络节点向所述控制节点发送响应信息,所述响应信息指示所述第一网络节点是否同意所述第一编排信息。
  15. 根据权利要求8至14中任一项所述的方法,其特征在于,所述方法还包括:
    所述第一网络节点向至少一个终端装置发送所述第一任务或所述第一任务的部分任务;或者,
    所述第一网络节点向第二网络节点发送所述第一任务或所述第一任务的部分任务,所述第二网络节点为参与执行所述AI任务的至少一个网络节点。
  16. 根据权利要求15所述的方法,其特征在于,所述至少一个终端装置处于预设状态。
  17. 根据权利要求16所述的方法,其特征在于,在所述第一网络节点向至少一个终端装置发送所述第一任务或所述第一任务的部分任务之前,所述方法还包括:
    所述第一网络节点向所述至少一个终端装置发送通知信息,所述通知信息通知将所述至少一个终端装置调整为所述预设状态。
  18. 一种人工智能AI任务指示的方法,其特征在于,包括:
    第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息,所述目标状态信息用于指示所述AI任务的目标结果。
  19. 根据权利要求18所述的方法,其特征在于,所述第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息,包括:
    基于所述第二网络节点的AI能力,所述第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息。
  20. 根据权利要求19所述的方法,其特征在于,所述方法还包括:
    所述第一网络节点向控制节点或所述第二网络节点发送第一请求信息,所述第一请求信息请求所述第二网络节点的AI能力;
    所述第一网络节点接收所述第一请求信息的响应信息,所述第一请求信息的响应信息 指示第二网络节点的AI能力。
  21. 根据权利要求18至20中任一项所述的方法,其特征在于,在所述第一网络节点向第二网络节点发送AI任务的第一任务的处理结果和目标状态信息之前,所述方法还包括:
    所述第一网络节点向所述第二网络节点发送第二请求信息,所述第二请求信息请求所述第二网络节点协作执行所述AI任务。
  22. 根据权利要求18至21中任一项所述的方法,其特征在于,所述第一任务的处理结果表示所述AI任务的当前状态信息。
  23. 根据权利要求18至22中任一项所述的方法,其特征在于,所述方法还包括:
    所述第一网络节点还向所述第二网络节点发送区域信息,所述区域信息用于所述第二网络节点确定协作执行所述AI任务的网络节点。
  24. 根据权利要求18至23中任一项所述的方法,其特征在于,所述方法还包括:
    所述第一网络节点向至少一个终端装置发送所述第一任务或所述第一任务的部分任务。
  25. 根据权利要求24所述的方法,其特征在于,所述至少一个终端装置处于预设状态。
  26. 根据权利要求25所述的方法,其特征在于,在所述第一网络节点向至少一个终端装置发送所述第一任务或所述第一任务的部分任务之前,所述方法还包括:
    所述第一网络节点向所述至少一个终端装置发送通知信息,所述通知信息通知将所述至少一个终端装置调整为所述预设状态。
  27. 一种人工智能AI任务指示的方法,其特征在于,包括:
    第二网络节点接收来自第一网络节点的AI任务的第一任务的处理结果和目标状态信息,所述目标状态信息用于指示所述AI任务的目标结果;
    所述第二网络节点基于所述第一任务的处理结果和所述目标状态信息,执行所述AI任务的第二任务。
  28. 根据权利要求27所述的方法,其特征在于,所述方法还包括:
    所述第二网络节点向控制节点或所述第一网络节点发送所述第二网络节点的AI能力。
  29. 根据权利要求27或28所述的方法,其特征在于,在所述第二网络节点接收来自第一网络节点的AI任务的第一任务的处理结果和目标状态信息之前,所述方法还包括:
    所述第二网络节点接收来自所述第一网络节点的第二请求信息,所述第二请求信息请求所述第二网络节点协作执行所述AI任务。
  30. 根据权利要求27至29中任一项所述的方法,其特征在于,所述第一任务的处理结果表示所述AI任务的当前状态信息;
    所述第二网络节点基于所述第一任务的处理结果和所述目标状态信息,执行所述AI任务的第二任务,包括:
    所述第二网络节点基于所述AI任务的当前状态信息和所述目标状态信息,执行所述AI任务的第二任务。
  31. 根据权利要求27至30中任一项所述的方法,其特征在于,所述方法还包括:
    所述第二网络节点接收来自所述第一网络节点的区域信息,所述区域信息用于所述第二网络节点确定协作执行所述AI任务的网络节点。
  32. 根据权利要求27至31中任一项所述的方法,其特征在于,所述方法还包括:
    所述第二网络节点向至少一个终端装置发送所述第二任务或所述第二任务的部分任务。
  33. 根据权利要求32所述的方法,其特征在于,所述至少一个终端装置处于预设状态。
  34. 根据权利要求33所述的方法,其特征在于,在所述第二网络节点向至少一个终端装置发送所述第二任务或所述第二任务的部分任务之前,所述方法还包括:
    所述第二网络节点向所述至少一个终端装置发送通知信息,所述通知信息通知将所述至少一个终端装置调整为所述预设状态。
  35. 一种人工智能AI任务指示的方法,其特征在于,包括:
    网络节点向至少一个终端装置发送AI任务,其中,所述至少一个终端装置处于预设状态。
  36. 根据权利要求35所述的方法,其特征在于,在所述网络节点向至少一个终端装置发送AI任务之前,所述方法还包括:
    所述网络节点向所述至少一个终端装置发送通知信息,所述通知信息通知将所述至少一个终端装置调整为所述预设状态。
  37. 一种人工智能AI任务指示的方法,其特征在于,包括:
    终端装置接收来自网络节点的AI任务,其中,所述终端装置处于预设状态;
    所述终端装置执行所述AI任务。
  38. 根据权利要求37所述的方法,其特征在于,在所述终端装置接收来自网络节点的AI任务之前,所述方法还包括:
    所述终端装置接收来自所述网络节点的通知信息,所述通知信息通知将所述终端装置调整为所述预设状态。
  39. 一种通信装置,其特征在于,包括:用于执行如权利要求1至38中任一项所述的方法的单元。
  40. 一种通信装置,其特征在于,包括:
    通信接口,用于输入和/或输出信息;
    处理器,用于执行计算机程序,以使得所述装置实现如权利要求1至38中任一项所述的方法。
  41. 一种通信装置,其特征在于,包括:
    存储器,用于存储可执行指令;
    处理器,用于调用并运行所述存储器中的所述可执行指令,以执行权利要求1至38中任一项所述的方法。
  42. 根据权利要求39至41中任一项所述的装置,其特征在于,所述装置为以下任一项:通信设备、芯片或芯片系统。
  43. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有程序指令,当所述程序指令由处理器运行时,实现权利要求1至38中任一项所述的方法。
  44. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机程序代码, 当所述计算机程序代码在计算机上运行时,实现权利要求1至38中任一项所述的方法。
  45. 一种通信系统,其特征在于,所述系统包括控制节点和第一网络节点,
    其中,所述控制节点用于执行如权利要求1至7中任一项所述的方法,所述第一网络节点用于执行如权利要求8至17中任一项所述的方法。
  46. 一种通信系统,其特征在于,所述系统包括第一网络节点和第二网络节点,
    其中,所述第一网络节点用于执行如权利要求18至26中任一项所述的方法,所述第二网络节点用于执行如权利要求27至34中任一项所述的方法。
PCT/CN2022/126752 2022-10-21 2022-10-21 Ai任务指示的方法、通信装置和系统 WO2024082274A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/126752 WO2024082274A1 (zh) 2022-10-21 2022-10-21 Ai任务指示的方法、通信装置和系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/126752 WO2024082274A1 (zh) 2022-10-21 2022-10-21 Ai任务指示的方法、通信装置和系统

Publications (1)

Publication Number Publication Date
WO2024082274A1 true WO2024082274A1 (zh) 2024-04-25

Family

ID=90736652

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/126752 WO2024082274A1 (zh) 2022-10-21 2022-10-21 Ai任务指示的方法、通信装置和系统

Country Status (1)

Country Link
WO (1) WO2024082274A1 (zh)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020057062A (ja) * 2018-09-28 2020-04-09 シャープ株式会社 ネットワークシステム、サーバおよび情報処理方法
CN110717727A (zh) * 2019-09-02 2020-01-21 深圳壹账通智能科技有限公司 基于多平台的信息处理方法、装置、计算机设备和存储介质
CN113377503A (zh) * 2020-03-09 2021-09-10 阿尔法云计算(深圳)有限公司 一种协作式ai的任务调度方法、装置与系统
CN113873546A (zh) * 2020-06-30 2021-12-31 华为技术有限公司 一种计算服务的实现方法及装置
CN114095969A (zh) * 2020-08-24 2022-02-25 华为技术有限公司 一种智能的无线接入网络
CN112181612A (zh) * 2020-08-31 2021-01-05 深圳市优必选科技股份有限公司 任务处理方法、装置、电子设备及计算机可读存储介质
CN113516250A (zh) * 2021-07-13 2021-10-19 北京百度网讯科技有限公司 一种联邦学习方法、装置、设备以及存储介质

Similar Documents

Publication Publication Date Title
US20230209390A1 (en) Intelligent Radio Access Network
EP4099635A1 (en) Method and device for selecting service in wireless communication system
US11792729B2 (en) Method and apparatus for mutually exclusive access to network slices in wireless communication system
US20230300210A1 (en) Computing aware-session management method and communication apparatus
WO2021081959A1 (zh) 通信方法、设备及系统
US20230189057A1 (en) Service traffic steering method and apparatus
WO2021023139A1 (zh) 一种切换的方法及装置
US20240015534A1 (en) Model processing method, communication apparatus, and system
JP2019522388A (ja) 通信方法と通信装置
WO2024082274A1 (zh) Ai任务指示的方法、通信装置和系统
US20240163741A1 (en) Ran node, ue, and method
US20230319597A1 (en) Network node and a method performed in a wireless communication network for handling configuration of radio network nodes using reinforcement learning
WO2023246267A1 (zh) 通信方法、通信装置和系统
WO2024036453A1 (zh) 一种联邦学习方法及相关装置
WO2024067245A1 (zh) 模型匹配的方法和通信装置
WO2024067248A1 (zh) 一种获取训练数据集的方法和装置
WO2024007156A1 (zh) 一种通信方法和装置
US20240179603A1 (en) Communication method and apparatus
WO2024036454A1 (zh) 一种数据特征测量方法及相关装置
WO2023138514A1 (zh) 信息处理方法及通信装置
WO2024011581A1 (zh) 一种通信方法及装置
WO2024093503A1 (zh) 一种处理模型的方法和装置
WO2023051259A1 (zh) 切换方法、通信装置、以及计算机存储介质
WO2024087573A1 (zh) 一种联邦学习方法及装置
WO2023213134A1 (zh) 一种数据报告的方法、装置及系统