WO2024088119A1 - Data processing method and apparatus, terminal and network-side device - Google Patents



Publication number
WO2024088119A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
model
information
side device
target
Prior art date
Application number
PCT/CN2023/125102
Other languages
English (en)
Chinese (zh)
Inventor
孙晓文 (SUN, Xiaowen)
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2024088119A1


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/0813Configuration setting characterised by the conditions triggering a change of settings
    • H04L41/082Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/145Network analysis or design involving simulating, designing, planning or modelling of a network

Definitions

  • the present application belongs to the field of communication technology, and specifically relates to a data processing method, device, terminal and network side equipment.
  • terminals can perform image processing, channel prediction, etc. through AI models.
  • network-side devices often update AI models for terminals based only on changes in terminal-side processing tasks, which easily makes it difficult for terminals to effectively process data related to the processing tasks (i.e., AI tasks) based on the updated AI models. It can be seen that in the prior art, there is a problem of poor accuracy when the network side updates AI models on the terminal side.
  • the embodiments of the present application provide a data processing method, apparatus, terminal and network-side equipment, which can solve the problem of poor accuracy of updating the AI model on the terminal side by the network side in the prior art.
  • a data processing method comprising:
  • the terminal sends first information to the network side device, wherein the first information includes a model update request and capability information of the terminal, the model update request is used to request an update of the artificial intelligence AI model corresponding to the terminal, and the capability information of the terminal is used by the network side device to update the AI model corresponding to the terminal;
  • the terminal receives a first AI model from the network-side device, wherein the first AI model is used for a target AI task;
  • the terminal processes the first data based on the first AI model to obtain a first processing result, wherein the first data is data corresponding to the target AI task.
  • a data processing device which is applied to a terminal and includes:
  • a first sending module configured to send first information to a network side device, wherein the first information includes a model update request and capability information of the terminal, the model update request is used to request an update of an artificial intelligence AI model corresponding to the terminal, and the capability information of the terminal is used by the network side device to update the AI model corresponding to the terminal;
  • a first receiving module configured to receive a first AI model from the network-side device, wherein the first AI model is used for a target AI task;
  • a processing module configured to process the first data based on the first AI model to obtain a first processing result, wherein the first data is the data corresponding to the target AI task.
  • a third aspect provides a data processing method, the method comprising:
  • the network side device receives first information from the terminal, wherein the first information includes capability information of the terminal;
  • the network-side device determines a first artificial intelligence (AI) model according to the capability information of the terminal, wherein the first AI model is used for a target AI task;
  • the network side device sends the first AI model to the terminal.
  • a data processing device which is applied to a network side device, and includes:
  • a first receiving module configured to receive first information from a terminal, wherein the first information includes capability information of the terminal;
  • a determination module configured to determine a first artificial intelligence (AI) model according to the capability information of the terminal, wherein the first AI model is used for a target AI task;
  • a first sending module configured to send the first AI model to the terminal.
  • a terminal comprising a processor and a memory, wherein the memory stores a program or instruction that can be run on the processor, and when the program or instruction is executed by the processor, the steps of the method described in the first aspect are implemented.
  • a terminal comprising a processor and a communication interface, wherein the communication interface is used to send first information to a network side device, wherein the first information includes a model update request and capability information of the terminal, the model update request is used to request an update of an artificial intelligence (AI) model corresponding to the terminal, and the capability information of the terminal is used by the network side device to update the AI model corresponding to the terminal; a first AI model is received from the network side device, wherein the first AI model is used for a target AI task; the processor is used to process first data based on the first AI model to obtain a first processing result, wherein the first data is data corresponding to the target AI task.
  • a network side device which includes a processor and a memory, wherein the memory stores programs or instructions that can be run on the processor, and when the program or instructions are executed by the processor, the steps of the method described in the third aspect are implemented.
  • a network side device comprising a processor and a communication interface, wherein the communication interface is used to receive first information from a terminal, wherein the first information includes capability information of the terminal; the processor is used to determine a first artificial intelligence (AI) model based on the capability information of the terminal, wherein the first AI model is used for a target AI task; and the communication interface is also used to send the first AI model to the terminal.
  • a data processing system comprising: a terminal and a network side device, wherein the terminal can be used to execute the steps of the data processing method as described in the first aspect, and the network side device can be used to execute the steps of the data processing method as described in the third aspect.
  • a readable storage medium on which a program or instruction is stored.
  • the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented, or the steps of the method described in the third aspect are implemented.
  • a chip comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run a program or instructions to implement the steps of the method described in the first aspect, or to implement the steps of the method described in the third aspect.
  • a computer program/program product is provided, wherein the computer program/program product is stored in a storage medium, and the computer program/program product is executed by at least one processor to implement the steps of the method described in the first aspect, or to implement the steps of the method described in the third aspect.
  • a terminal sends first information to a network-side device, wherein the first information includes a model update request and the capability information of the terminal, wherein the model update request is used to request an update of the artificial intelligence (AI) model corresponding to the terminal, and the capability information of the terminal is used by the network-side device to update the AI model corresponding to the terminal;
  • the terminal receives a first AI model from the network-side device, wherein the first AI model is used for a target AI task;
  • the terminal processes first data based on the first AI model to obtain a first processing result, wherein the first data is data corresponding to the target AI task.
  • the capability information of the terminal is sent to the network-side device by the terminal, so that the network-side device can update the AI model for the terminal based on the capability information of the terminal. This ensures that the updated AI model more accurately matches the capability of the terminal, improves the accuracy of the terminal-side AI model update, and enables the terminal side to process data related to the processing task more effectively based on the updated AI model.
  • FIG1 is a block diagram of a wireless communication system applicable to an embodiment of the present application.
  • FIG2 is a flow chart of a data processing method provided in an embodiment of the present application.
  • FIG3 is a flow chart of another data processing method provided in an embodiment of the present application.
  • FIG4 is a second block diagram of a wireless communication system applicable to the embodiment of the present application.
  • FIG5 is a third block diagram of a wireless communication system applicable to an embodiment of the present application.
  • FIG6 is a fourth block diagram of a wireless communication system applicable to an embodiment of the present application.
  • FIG7 is a structural diagram of a data processing device provided in an embodiment of the present application.
  • FIG8 is a structural diagram of another data processing device provided in an embodiment of the present application.
  • FIG9 is a structural diagram of a communication device provided in an embodiment of the present application.
  • FIG10 is a structural diagram of a terminal provided in an embodiment of the present application.
  • FIG. 11 is a structural diagram of a network-side device provided in an embodiment of the present application.
  • first, second, etc. in the specification and claims of this application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It should be understood that the terms used in this way can be interchangeable under appropriate circumstances.
  • the objects distinguished by “first” and “second” are generally of the same type, and the number of objects is not limited.
  • the first object can be one or more.
  • “and/or” in the specification and claims means at least one of the connected objects, and the character “/” generally means that the objects connected before and after are in an “or” relationship.
  • LTE Long Term Evolution
  • LTE-A Long Term Evolution-Advanced
  • CDMA Code Division Multiple Access
  • TDMA Time Division Multiple Access
  • FDMA Frequency Division Multiple Access
  • OFDMA Orthogonal Frequency Division Multiple Access
  • SC-FDMA Single-carrier Frequency Division Multiple Access
  • NR New Radio
  • FIG1 shows a block diagram of a wireless communication system applicable to an embodiment of the present application.
  • the wireless communication system includes a terminal 11 and a network side device 12 .
  • the terminal 11 may be a mobile phone, a tablet computer, a laptop computer or a notebook computer, a personal digital assistant (PDA), a handheld computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, a vehicle user equipment (VUE), a pedestrian user equipment (PUE), a smart home device (a home appliance with a wireless communication function, such as a refrigerator, a television, a washing machine or furniture, etc.), a game console, a personal computer (PC), a teller machine, a self-service machine or another terminal-side device, and the wearable device includes: a smart watch, a smart bracelet, a smart headset, smart glasses, smart jewelry, etc.
  • the network side device 12 may include an access network device or a core network device, wherein the access network device may also be referred to as a radio access network device, a radio access network (RAN), a radio access network function or a radio access network unit.
  • the access network device may include a base station, a wireless local area network (WLAN) access point or a WiFi node, etc.
  • the base station may be referred to as a node B, an evolved node B (eNB), an access point, a base transceiver station (BTS), a radio base station, a radio transceiver, a basic service set (BSS), an extended service set (ESS), a home node B, a home evolved node B, a transmitting and receiving point (TRP) or some other suitable term in the field.
  • the term "base station" is not limited to a specific technical vocabulary. It should be noted that in the embodiments of the present application, only the base station in the NR system is used as an example for introduction, and the specific type of the base station is not limited.
  • the core network device may include but is not limited to at least one of the following: a core network node, a core network function, a Mobility Management Entity (MME), an Access and Mobility Management Function (AMF), a Session Management Function (SMF), a User Plane Function (UPF), a Policy Control Function (PCF), a Policy and Charging Rules Function (PCRF), an Edge Application Server Discovery Function (EASDF), Unified Data Management (UDM), a Unified Data Repository (UDR), a Home Subscriber Server (HSS), Centralized Network Configuration (CNC), a Network Repository Function (NRF), a Network Exposure Function (NEF), a Local NEF (L-NEF), a Binding Support Function (BSF), an Application Function (AF), etc.
  • AI model segmentation: when a terminal performs an AI processing task, such as image processing (e.g., image recognition), it may be necessary to segment the AI model used for the AI processing task if the terminal has limited capabilities, and hand over part of the AI processing task to an edge computing server or a centralized server on the network side for processing.
  • the following takes the above AI processing task of object recognition as an example for explanation.
  • the split model of the trained AI model (also referred to as a machine learning (ML) model) for object recognition can be distributed among multiple endpoints, typically between the network and the UE.
  • the split point of the above AI model may depend on at least one factor, such as at least one of the user equipment (UE) capability, network conditions, model characteristics, user/task specific requirements, etc., where:
  • the above-mentioned UE capabilities may include the device/UE capabilities to run the entire AI model or part of the AI model, such as the required memory, processing power, energy consumption, and inference latency.
  • the above-mentioned network conditions may include the network conditions for transmitting media and/or intermediate data, for example, the amount of data for transmitting an image in one shot or for transmitting a video at a specific frame rate, and the bandwidth and delay required in the uplink (UL) and/or downlink (DL), which have different effects on the network load and the related UL and DL networks; the network inference delay also needs to be considered.
  • the above model features can include split inference of task-specific models running on the UE for object recognition. For example, in one UE, the task is to recognize pedestrians, while in another UE, the task is to recognize traffic signs.
  • the core network model and the input image/video are the same, but the tasks in the UE (and their required task-specific models) are different.
  • the above user or task specific requirements may require some processing tasks to be performed on the terminal to protect privacy or because they are latency sensitive operations. They can mainly include the following two cases, both involving image or video processing:
  • the UE captures an image or video and first feeds the input data to the UE inference model (e.g., to protect privacy). The UE then uploads the intermediate output data from the UE inference model to the network inference, which in turn executes the rest of the model (e.g., processes intensive calculations) and finally returns the result or processed image/video to the UE.
  • the UE uploads the captured image or video to the network, which processes the input video/image by inference, and then sends the intermediate data back to the UE, which executes the remaining layers of the model (e.g., task-specific operations) and returns the final result.
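The split-point factors above (UE compute budget, model layer costs) can be illustrated with a minimal sketch. The greedy layer-budget rule and the FLOPS numbers below are assumptions for illustration, not part of the application:

```python
# Hypothetical sketch of choosing an AI-model split point from UE capability.
# Layer costs and the capability budget are illustrative values.

def choose_split_point(layer_flops, ue_flops_budget):
    """Return the index of the first layer to run on the network side.

    Layers 0..split-1 run on the UE; layers split..N-1 run on the network.
    The UE keeps taking layers until its compute budget is exhausted.
    """
    used = 0
    for i, cost in enumerate(layer_flops):
        if used + cost > ue_flops_budget:
            return i          # this layer and the rest go to the network
        used += cost
    return len(layer_flops)   # UE can run the whole model

# Example: a 5-layer model; a weak UE only fits the first two layers.
layers = [10, 20, 40, 40, 5]
split = choose_split_point(layers, ue_flops_budget=35)
print(split)  # 2 -> UE runs layers 0-1, network runs layers 2-4
```

In practice the split point would also weigh the other factors named above (intermediate-data size on the UL/DL, network load, privacy of early layers), not compute alone.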
  • FIG. 2 is a flow chart of a data processing method provided in an embodiment of the present application.
  • the method can be executed by a terminal, as shown in FIG. 2, and includes the following steps:
  • Step 201 The terminal sends first information to a network side device, wherein the first information includes a model update request and capability information of the terminal, the model update request is used to request an update of the AI model corresponding to the terminal, and the capability information of the terminal is used by the network side device to update the AI model corresponding to the terminal.
  • the AI model corresponding to the above-mentioned terminal can be understood as an AI model used to perform AI tasks on the terminal side, wherein the AI model corresponding to the above-mentioned terminal may include an AI model for performing a complete AI task, or may include an AI model for performing part of the AI task, for example, a sub-model obtained by segmenting the AI model for performing a complete AI task.
  • the above-mentioned model update request is used to request the network side device to update the AI model corresponding to the terminal. For example, when the memory occupied by the AI model of the terminal currently executing the target AI task exceeds the first threshold or the performance of the terminal is degraded (for example, for AI tasks related to multimedia services, the quality of multimedia currently output by the terminal is degraded), the terminal sends a model update request to the network side device to actively trigger the update of the AI model, and the network side device can respond to the model update request to update the AI model for the terminal.
  • the capability information of the terminal may include but is not limited to at least one of the following: memory information of the terminal, central processing unit (CPU) information of the terminal, hard disk information of the terminal, computing capability information of the terminal, current load information of the terminal, location information of the terminal, and power information of the terminal.
  • for example, the CPU information may include the CPU occupancy, the hard disk information may include the hard disk occupancy, and the computing capability information may include floating-point operations per second (FLOPS).
  • the network side device when receiving the above-mentioned first information, can respond to the above-mentioned model update request and update the AI model corresponding to the terminal based on the capability information of the above-mentioned terminal. For example, the network side device can determine the AI model used for the terminal side based on the capability information of the above-mentioned terminal and send it to the terminal to update the AI model corresponding to the terminal.
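The first information of Step 201 — a model update request carried together with the terminal's capability information — might be represented as follows. All field names and the dataclass encoding are hypothetical; the application does not fix a concrete format:

```python
# Illustrative encoding of the "first information": a model update request
# plus terminal capability fields (memory, CPU, disk, FLOPS, load, power).
from dataclasses import dataclass, asdict

@dataclass
class TerminalCapability:
    memory_mb: int         # memory information
    cpu_occupancy: float   # CPU information (occupancy ratio)
    disk_occupancy: float  # hard disk information (occupancy ratio)
    flops: float           # computing capability information (FLOPS)
    load: float            # current load information
    battery_pct: int       # power information

@dataclass
class FirstInformation:
    model_update_request: bool      # requests an update of the terminal's AI model
    capability: TerminalCapability  # used by the network side to pick the new model

msg = FirstInformation(
    model_update_request=True,
    capability=TerminalCapability(4096, 0.7, 0.5, 2.0e9, 0.6, 80),
)
print(asdict(msg)["model_update_request"])  # True
```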
  • Step 202 The terminal receives a first AI model from the network side device, wherein the first AI model is used for a target AI task.
  • the target AI task may be any AI task that the terminal needs to perform, such as image rendering or image recognition.
  • the first AI model may be an AI model for the target AI task determined by the network side device based on the capability information of the terminal. That is, the network side device determines the first AI model according to the capability information of the terminal (and the target AI task) and sends it to the terminal, and then the terminal can perform the target AI task based on the first AI model.
  • the above-mentioned first AI model can be an AI model used to fully execute the above-mentioned target AI task, or it can be an AI model used to execute part of the above-mentioned target AI task.
  • the above-mentioned first AI model can be a sub-AI model obtained by splitting the AI model used to fully execute the above-mentioned target AI task.
  • Step 203 The terminal processes the first data based on the first AI model to obtain a first processing result, wherein the first data is data corresponding to the target AI task.
  • the above-mentioned first data may be the source data corresponding to the target AI task (i.e., data provided by the data source (Data Source)).
  • the above-mentioned first data may be image data acquired by an image acquisition device; or the above-mentioned first data may be the intermediate data corresponding to the target AI task.
  • for example, if the target AI task is to identify the actions of people in an image, the above-mentioned first data may be the data after person recognition, and the terminal may further identify the actions of each person in the image based on the above-mentioned first data.
  • the above-mentioned first processing result may be the final processing result for the target AI task.
  • for example, the above-mentioned first processing result is the recognition result of the actions of the people in the image.
  • the above-mentioned first processing result may also be the intermediate processing result for the target AI task.
  • for example, the above-mentioned first processing result is only the recognition result of the people in the image.
  • the data processing method provided in the embodiment of the present application is to send first information to the network side device through the terminal, wherein the first information includes a model update request and the capability information of the terminal, the model update request is used to request to update the artificial intelligence AI model corresponding to the terminal, and the capability information of the terminal is used by the network side device to update the AI model corresponding to the terminal; the terminal receives the first AI model from the network side device, wherein the first AI model is used for the target AI task; the terminal processes the first data based on the first AI model to obtain a first processing result, wherein the first data is the data corresponding to the target AI task.
  • the capability information of the terminal is sent to the network side device, so that the network side device can update the AI model for the terminal based on the capability information of the terminal. This ensures that the updated AI model more accurately matches the capability of the terminal, improves the accuracy of the terminal-side AI model update, and enables the terminal side to process the data related to the processing task more effectively based on the updated AI model.
  • the first AI model is a sub-AI model obtained by segmenting a target AI model, and the target AI model is used for the target AI task.
  • the target AI model may be an AI model for completely executing the target AI task.
  • the network side device may segment the target AI model based on the capability information of the terminal and the capability information of the network side device to obtain a first AI model for the terminal side and a second AI model for the network side.
  • the target AI model may be an AI model selected according to the target AI task.
  • an AI model that can be used to perform the target AI task among multiple AI models stored on the network side may be determined as the target AI model.
  • the target AI model may be an AI model selected according to the target AI task and the capability information of the terminal.
  • an AI model that can be used to perform the target AI task may be selected by the network side device from multiple AI models based on the capability information of the terminal and determined as the target AI model.
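Selecting the target AI model from multiple stored AI models based on the terminal's capability information could look like the following sketch. The repository contents, the memory-fit rule, and the accuracy metric are illustrative assumptions:

```python
# Hypothetical model selection: among the models stored on the network side
# for a given AI task, pick the most accurate one that still fits the
# terminal's reported memory. Repository contents are made up.

MODEL_REPO = {
    "image_recognition": [
        {"name": "large",  "memory_mb": 8000, "accuracy": 0.95},
        {"name": "medium", "memory_mb": 2000, "accuracy": 0.90},
        {"name": "small",  "memory_mb":  500, "accuracy": 0.82},
    ],
}

def select_target_model(task, terminal_memory_mb):
    candidates = [m for m in MODEL_REPO.get(task, [])
                  if m["memory_mb"] <= terminal_memory_mb]
    if not candidates:
        return None  # no model fits; the model may need to be split instead
    return max(candidates, key=lambda m: m["accuracy"])

print(select_target_model("image_recognition", 3000)["name"])  # medium
```

When no whole model fits the terminal, the segmentation path described above applies: the target model is split into a terminal-side sub-model and a network-side sub-model.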
  • the method further includes:
  • the terminal sends the first processing result to the network side device, wherein the first processing result is used for processing by a second AI model corresponding to the network side device;
  • the terminal receives a second processing result from the network side device, wherein the second processing result is a processing result corresponding to the target AI task.
  • the first processing result is an intermediate processing result for the target AI task.
  • the second AI model may be an AI model obtained by segmenting the target AI model, and the second AI model is different from the first AI model.
  • the terminal when the data source corresponding to the above-mentioned target AI task is on the terminal side, the terminal can infer the source data corresponding to the above-mentioned target AI task based on the first AI model to obtain a first processing result, and send the first processing result to the network side device. Then, the network side device can infer the first processing result based on the second AI model to obtain a second processing result, and return the second processing result to the terminal. This can not only reduce the resources required for data transmission between multiple terminals, but also improve the efficiency of data processing.
  • the above-mentioned second processing result may be an intermediate processing result for the target AI task, or may be a final processing result for the target AI task.
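The exchange described above — terminal-side inference with the first AI model, upload of the intermediate first processing result, then network-side inference with the second AI model — can be sketched as a single round trip. The stand-in model functions below are placeholders, not real inference:

```python
# Sketch of the split-inference round trip: terminal runs the first AI model
# on source data, sends the intermediate (first) result to the network side,
# which runs the second AI model and returns the (second) result.

def first_ai_model(source_data):          # runs on the terminal
    return [x * 2 for x in source_data]   # intermediate first processing result

def second_ai_model(intermediate):        # runs on the network-side device
    return sum(intermediate)              # second processing result

def split_inference(source_data):
    first_result = first_ai_model(source_data)   # terminal-side inference
    # terminal -> network: upload first_result; network -> terminal: result
    return second_ai_model(first_result)

print(split_inference([1, 2, 3]))  # 12
```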
  • the method further includes:
  • the terminal sends second information for the target AI task to the network side device, wherein the second information includes at least one of quality of service (QoS) information and monitoring information.
  • the QoS information may include but is not limited to processing time, bandwidth, etc.
  • the monitoring information may include but is not limited to monitoring time, required monitoring content, etc.
  • the terminal can send second information for the above-mentioned target AI task to the network side device, so that the network side device can execute the above-mentioned target AI task based on the above-mentioned second information.
  • the above-mentioned target AI task can be executed based on the above-mentioned QoS information, so that the processing of the target AI task can meet the corresponding QoS requirements.
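The second information for the target AI task might be structured as below. The concrete field names and thresholds are assumptions, since the application only names the categories (processing time, bandwidth, monitoring time and content):

```python
# Illustrative "second information" for the target AI task: QoS and
# monitoring fields, plus a check that a measured processing time meets
# the QoS requirement. All values are made up for the example.

second_information = {
    "qos": {
        "max_processing_time_ms": 50,  # processing-time requirement
        "min_bandwidth_mbps": 100,     # bandwidth requirement
    },
    "monitoring": {
        "monitoring_time_s": 60,             # how long to monitor
        "content": ["latency", "accuracy"],  # required monitoring content
    },
}

def meets_qos(measured_time_ms, qos):
    # the network side would use such a check to verify that processing of
    # the target AI task satisfies the corresponding QoS requirement
    return measured_time_ms <= qos["max_processing_time_ms"]

print(meets_qos(40, second_information["qos"]))  # True
```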
  • the method further includes:
  • the terminal receives a second processing result from the network side device; wherein the first data is the second processing result;
  • the terminal processes the first data based on the first AI model to obtain a first processing result, including:
  • the terminal processes the second processing result based on the first AI model to obtain the first processing result.
  • the network side device can infer the source data corresponding to the target AI task based on the second AI model to obtain a second processing result and send it to the terminal.
  • the terminal can then infer the second processing result based on the first AI model to obtain the first processing result, which not only reduces the resources required for data transmission between multiple terminals, but also improves the efficiency of data processing.
  • when the first processing result is the final processing result corresponding to the target AI task, the first processing result may be sent to a data destination (Data Destination) corresponding to the target AI task.
  • the data destination may be a multimedia playback device or a display device, etc.
  • the method may further include:
  • the terminal receives third information for the target AI task from the network side device, wherein the third information includes at least one of QoS information and monitoring information.
  • the QoS information may include but is not limited to processing time, bandwidth, etc.
  • the monitoring information may include but is not limited to monitoring time, required monitoring content, etc.
  • the network side device sends the above-mentioned third information to the terminal, so that the terminal can execute the above-mentioned target AI task based on the above-mentioned third information, which can make the execution of the above-mentioned target AI task on the terminal side more in line with the requirements.
  • FIG. 3 is a flow chart of a data processing method provided in an embodiment of the present application.
  • the method can be executed by a network side device, as shown in FIG. 3, and includes the following steps:
  • Step 301: A network-side device receives first information from a terminal, wherein the first information includes capability information of the terminal.
  • the terminal may periodically report the capability information of the terminal.
  • the terminal may report the capability information of the terminal to the network device at preset intervals; or, upon receiving a capability reporting request sent by the network device, the terminal may report the capability information of the terminal to the network device; or, upon determining that the AI model needs to be updated, the terminal may report the capability information of the terminal to the network device.
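The three reporting triggers above (a preset interval elapsing, a network capability-reporting request, or the terminal deciding its AI model needs updating) can be condensed into one predicate; this is only a sketch of the decision logic, with assumed parameter names.

```python
def should_report_capability(seconds_since_last: float,
                             report_period: float,
                             request_received: bool,
                             update_needed: bool) -> bool:
    """Terminal reports its capability information when the preset interval has
    elapsed, when the network side requests a report, or when the terminal
    determines that the AI model needs to be updated."""
    return (seconds_since_last >= report_period
            or request_received
            or update_needed)
```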
  • the capability information of the terminal may refer to the relevant description of the aforementioned embodiment, which will not be described in detail here.
  • Step 302: The network-side device determines a first AI model according to the capability information of the terminal, wherein the first AI model is used for a target AI task.
  • the network-side device may directly determine the first AI model based on the capability information of the terminal.
  • the first AI model may be an AI model used to completely execute the target AI task.
  • the network-side device may determine the first AI model based on the capability information of the terminal and the capability information of the network-side device.
  • the first AI model may be a sub-AI model obtained by segmenting the AI model used to completely execute the target AI task.
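The two alternatives above (deliver the complete model, or deliver a terminal-side sub-model produced by segmentation) can be sketched as a single selection step; the memory-only criterion and the return labels are assumptions for illustration.

```python
def determine_first_ai_model(terminal_memory_mb: int,
                             full_model_memory_mb: int) -> str:
    """If the terminal can host the whole model, the first AI model is the
    complete model for the target AI task; otherwise it is the terminal-side
    sub-model obtained by segmenting that model."""
    if terminal_memory_mb >= full_model_memory_mb:
        return "complete_model"
    return "terminal_sub_model"
```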
  • Step 303: The network side device sends the first AI model to the terminal.
  • the terminal may actively trigger the update of the AI model on the terminal side.
  • the network side device may, upon receiving a model update request sent by the terminal, determine the first AI model according to the capability information of the terminal and send the first AI model to the terminal in response to the model update request; or, the network side device may actively trigger the update of the AI model on the terminal side.
  • the network side device may, upon determining that the AI model corresponding to the terminal needs to be updated according to the capability information of the terminal, determine the first AI model according to the capability information of the terminal and send the first AI model to the terminal.
  • the data processing method provided in the embodiment of the present application receives first information from a terminal through a network-side device, wherein the first information includes capability information of the terminal; the network-side device determines a first AI model based on the capability information of the terminal, wherein the first AI model is used for a target AI task; and the network-side device sends the first AI model to the terminal. That is, the network-side device updates the AI model for the terminal based on the capability information of the terminal, so as to ensure that the updated AI model can more accurately match the capabilities of the terminal, improve the accuracy of the terminal-side AI model update, and enable the terminal side to more effectively process data related to the processing task based on the updated AI model.
  • the first information also includes a model update request, and the model update request is used to request an update of the AI model corresponding to the terminal.
  • the terminal actively triggers the update of the AI model on the terminal side. For example, when the memory occupied by the AI model of the target AI task currently executed by the UE exceeds a first threshold or the performance of the UE is degraded (for example, for AI tasks related to multimedia services, the quality of multimedia currently output by the terminal is degraded), the terminal sends a model update request to the network side device, and the network side device can respond to the model update request, determine the first AI model based on the terminal's capability information, and send it to the terminal.
  • the network-side device determines a first artificial intelligence AI model according to the capability information of the terminal, including:
  • when the network side device determines, according to the capability information of the terminal, that the AI model corresponding to the terminal needs to be updated, the network side device determines the first AI model according to the capability information of the terminal.
  • the network side device actively triggers the update of the AI model on the terminal side. For example, the network side device can evaluate whether the AI model on the terminal side needs to be updated based on the received terminal capability information and the AI task currently executed by the terminal; or, the network side device can evaluate whether the AI model on the terminal side needs to be updated based on the terminal capability information, the capability information of the network side device and the AI task currently executed.
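A minimal sketch of the network-side evaluation: an update is triggered when the terminal can no longer meet the resource requirements of the model it is currently running. The field names and the two-resource criterion are assumptions; a real evaluation could also weigh the network-side capability information and the AI task itself, as the text notes.

```python
def model_update_needed(terminal_caps: dict, running_model_req: dict) -> bool:
    """Network-side evaluation sketch: trigger an update when the terminal's
    reported capability falls below the running model's requirements."""
    return (terminal_caps["memory_mb"] < running_model_req["memory_mb"]
            or terminal_caps["flops"] < running_model_req["flops"])
```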
  • the capability information of the terminal includes at least one of the following: memory information of the terminal, CPU information of the terminal, hard disk information of the terminal, computing power information of the terminal, current load information of the terminal, location information of the terminal, and power information of the terminal.
  • the network-side device determines the first AI model according to the capability information of the terminal, including:
  • the network side device determines, according to the capability information of the terminal and the capability information of the network side device, a segmentation point of a target AI model, wherein the target AI model is used for the target AI task;
  • the network-side device segments the target AI model according to the segmentation point of the target AI model to obtain a first AI model and a second AI model.
  • the above-mentioned target AI model can be understood as an AI model that can be used to fully execute the above-mentioned target AI task.
  • when the network-side device stores only one AI model for each AI task, the network-side device can obtain the target AI model (i.e., the AI model corresponding to the target AI task) from the AI models stored in the network-side device according to the target AI task; when the network-side device stores multiple AI models for each AI task, the network-side device can obtain the target AI model from the multiple AI models stored in the network-side device according to the target AI task and the capability information of the terminal.
  • when the network side device obtains the target AI model, it can determine the split point of the target AI model according to the capability information of the terminal and the capability information of the network side device, and can split the target AI model based on the split point to obtain a first AI model and a second AI model, wherein the first AI model and the second AI model are both sub-AI models of the target AI model, the first AI model can be used to perform the above-mentioned target AI task on the terminal side, and the above-mentioned second AI model can be used to perform the above-mentioned target AI task on the network side.
  • the network side device may consider at least one item of information such as network conditions, model characteristics, user/task specific requirements, etc. in addition to the capability information of the above-mentioned terminal and the capability information of the network side device.
  • This embodiment comprehensively determines the segmentation point of the target AI model by combining the capability information of the terminal and the capability information of the network side device, and segments the target AI model based on the segmentation point to obtain a first AI model and a second AI model, which can make the segmentation of the target AI model more reasonable and accurate.
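One way to picture the split-point decision is a greedy heuristic over per-layer resource costs: consecutive layers are assigned to the terminal while their cumulative cost fits its budget, and the rest run on the network. This is only an illustrative sketch (which half runs where depends on where the data source sits, as in Examples 2 and 3 below); a real decision would also weigh network conditions, model characteristics, and QoS requirements, as the text notes.

```python
def choose_split_point(layer_costs, terminal_budget):
    """Greedy sketch: count how many leading layers fit the terminal's budget."""
    used = 0
    split = 0
    for cost in layer_costs:
        if used + cost > terminal_budget:
            break
        used += cost
        split += 1
    return split

def segment_model(layers, split):
    """Split one model into terminal-side and network-side sub-models."""
    return layers[:split], layers[split:]

first_model, second_model = segment_model(
    ["L0", "L1", "L2", "L3"],
    choose_split_point([2, 3, 4, 5], terminal_budget=5))
```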
  • the method further includes:
  • the network-side device receives a first processing result from the terminal, wherein the first processing result is a processing result corresponding to the target AI task;
  • the network-side device infers the first processing result based on the second AI model to obtain a second processing result;
  • the network side device sends the second processing result to the terminal.
  • the method further includes:
  • the network side device receives second information for the target AI task from the terminal, wherein the second information includes at least one of quality of service QoS information and monitoring information.
  • the method further includes:
  • the network-side device processes the second data based on the second AI model to obtain a second processing result, wherein the second data is data corresponding to the target AI task;
  • the network side device sends the second processing result to the terminal, wherein the second processing result is used by the terminal to perform processing based on the first AI model.
  • the capability information of the network side device includes at least one of the following: memory information of the processing server, CPU information of the processing server, hard disk information of the processing server, computing power information of the processing server, and current load information of the processing server; wherein the processing server is a processing server in the network side device for processing the target AI task.
  • the processing server is a server on the network side for executing the target AI task.
  • the CPU information may include CPU occupancy, the hard disk information may include hard disk occupancy, and the computing power information may include FLOPS.
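The server-side capability fields listed above can be grouped into one record; the concrete field names, types, and units here are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class ProcessingServerCapability:
    """Capability information of the network-side processing server
    (fields and units are illustrative assumptions)."""
    memory_mb: int
    cpu_occupancy_pct: float    # CPU information, e.g. CPU occupancy
    disk_occupancy_pct: float   # hard disk information, e.g. disk occupancy
    flops: float                # computing power information, e.g. FLOPS
    current_load_pct: float

cap = ProcessingServerCapability(
    memory_mb=65536, cpu_occupancy_pct=35.0,
    disk_occupancy_pct=60.0, flops=1.2e14, current_load_pct=40.0)
```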
  • the method further includes:
  • the network side device sends second information for the target AI task to the terminal, wherein the second information includes at least one of quality of service QoS information and monitoring information.
  • Example 1 The terminal completely executes the target AI task.
  • the terminal may include a UE application (UE Application), a UE capability delivery function (UE Capability Delivery Function), an AI model inference engine (AI Model Inference Engine), an AI model access function (AI Model Access Function), a data source (Data Source) and a data destination (Data Destination).
  • the above-mentioned UE application can be understood as an application used on the UE side, which is used to provide AI media services through an AI model inference engine and an AI model access function, that is, the target AI task is an AI task related to AI media services.
  • the above-mentioned UE capability delivery function is used to periodically collect UE capability information, such as the terminal's memory, CPU, hard disk data, computing power, current load conditions, terminal location information, terminal power information, etc., and transmit it to the network through 5GS for the network side to select AI models based on the UE capability information.
  • the above-mentioned AI model access function can be used to receive an updated AI model (i.e., the above-mentioned first AI model) through 5GS and send it to the AI model inference engine.
  • the above-mentioned AI model access function can also be used for receiving-end optimization or decompression of the AI model.
  • the above-mentioned AI model access function can receive the first AI model from the network side through 5GS and can optimize the first AI model.
  • the above-mentioned AI model inference engine can be used to perform inference using data from a data source (Data Source, for example, a camera or other media source) as input data of an AI model (i.e., the first AI model) received from an AI model access function, obtain inference output data, and send the inference output data to a data destination (for example, a multimedia player).
  • the network side may include a network application (Network Application), an AI model capability collection function (AI Model Capability Collection Function), an AI model selection function (AI Model Selection Function), an AI model repository (AI Model Repository), and an AI model delivery function (AI Model Delivery Function).
  • the AI model capability collection function can be used to collect capability information of terminals that perform AI tasks, such as the terminal's memory, central processing unit (CPU) information, hard disk data, computing power (FLOPS), current load conditions, terminal location information, terminal power information, etc.
  • the above-mentioned AI model selection function can be used to select a suitable new AI model, based on information about the AI service (AI Service) to be processed (i.e., the above-mentioned target AI task, for example image rendering or image recognition) and the terminal capabilities collected by the AI model capability collection function, so as to realize the AI model update.
  • the above-mentioned network application can be understood as an application for the network side, which can be used to select an AI model in the AI model warehouse for the AI media service through the above-mentioned AI model selection function, and send it to the AI model delivery function to deliver the selected AI model to the UE.
  • the AI model delivery function can be used to send the AI model to the UE through the 5GS.
  • the AI model delivery function may also include functions related to quality of service (QoS) request and monitoring, and functions related to optimization or compression of the AI model.
  • the UE can actively trigger the update of the AI model. For example, when the memory occupied by the AI model of the target AI task currently executed by the UE exceeds the first threshold or the performance of the UE degrades (for example, for AI tasks related to multimedia services, the quality of multimedia currently output by the terminal degrades), the UE actively triggers the update of the model. For example, when the UE capability delivery function transmits the UE's capability information, the UE carries a model update request to actively trigger the AI model selection function on the network side to perform a model update operation.
  • the network side (e.g., the AI model selection function on the network side) can also actively trigger the update of the AI model.
  • the AI model selection function can actively update the AI model by collecting the capability information of the terminal and evaluating the current AI task.
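The UE-side trigger conditions described above (model memory footprint exceeding the first threshold, or degraded multimedia output quality) can be sketched as one predicate; the quality metric and parameter names are assumed placeholders, not part of the embodiment.

```python
def ue_triggers_model_update(model_memory_mb: float,
                             memory_threshold_mb: float,
                             output_quality: float,
                             minimum_quality: float) -> bool:
    """UE-side trigger sketch: send a model update request when the running
    model's memory use exceeds the first threshold, or when the quality of
    the multimedia currently output degrades below an acceptable level."""
    return (model_memory_mb > memory_threshold_mb
            or output_quality < minimum_quality)
```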
  • each function on the terminal side shown in FIG4 can be set separately, or can be set together according to actual needs.
  • each function on the network side shown in FIG4 can be set separately, or can be set together according to actual needs, for example, the network application and the AI model warehouse can be set together, that is, the network application and the AI model warehouse can be used as a module.
  • Example 2 The target AI task is performed by the terminal and the network in collaboration, and the data source is located on the network side
  • the terminal may include a UE application (UE Application), a UE capability delivery function (UE Capability Delivery Function), an AI model inference engine (AI Model Inference Engine), an AI model access function (AI Model Access Function), an intermediate data access function (Intermediate Data Access Function) and a data destination (Data Destination).
  • the above-mentioned intermediate data access function can be used to receive intermediate data from the network through 5GS, and send it to the UE's AI model inference engine for UE reasoning.
  • the output result of the UE's AI model reasoning engine (i.e., the final reasoning output) is sent to the data destination (for example, a media player).
  • the UE application, UE capability delivery function, AI model reasoning engine, AI model access function and data destination of this example can be found in the relevant description of Example 1, which will not be repeated here.
  • the UE capability information collected by the above-mentioned UE capability delivery function is also used for AI model segmentation on the network side.
  • the network side may include a network application (Network Application), an AI model capability collection function (AI Model Capability Collection Function), an AI model selection function (AI Model Selection Function), an AI model repository (AI Model Repository), an AI model delivery function (AI Model Delivery Function), an AI model inference engine (AI Model Inference Engine), an intermediate data delivery function (Intermediate Data Delivery Function), and a data source (Data Source).
  • the above-mentioned AI model capability collection function can be used to collect UE capability information, and also used to obtain the processing capabilities of the network side through network applications or other means, such as the memory, CPU, hard disk data, computing power, and current load conditions of the processing server (for example, edge computing server or central cloud server, etc.).
  • the above-mentioned AI model selection function can be used to decide the latest AI model segmentation scheme based on the collected terminal capability information and the capability information of the processing server on the network side, so as to determine the AI models (i.e., the first AI model and the second AI model) with which the terminal and the network process the AI task, that is, to decide the split points (Split Points).
  • the above-mentioned AI model selection function can have the ability to actively trigger the update of the AI model; it can perform an active model update, update the model split point, etc., based on the currently collected terminal and network capability information and an evaluation of the current AI task.
  • the above-mentioned AI model inference engine can be used to infer data from a data source (Data Source, for example, a camera or other media source) as input data of an AI model (i.e., the above-mentioned second AI model) received from an AI model selection function, obtain intermediate data (i.e., the above-mentioned second processing result), and send it to the intermediate data access function.
  • the intermediate data delivery function can be used to send the received intermediate data to the UE through 5GS.
  • the intermediate data delivery function can also include functions related to QoS request and monitoring.
  • the UE can actively trigger the update of the AI model. For example, when the memory occupied by the AI model of the target AI task currently executed by the UE exceeds the first threshold or the performance of the UE degrades (for example, for AI tasks related to multimedia services, the quality of multimedia currently output by the terminal degrades), the UE actively triggers the update of the model. For example, when the UE capability delivery function transmits the UE's capability information, the UE carries a model update request to actively trigger the AI model selection function on the network side to perform a model update operation.
  • the network side (for example, the AI model selection function on the network side) can also actively trigger the update of the AI model.
  • the AI model capability collection function can actively trigger the update of the AI model, that is, send an AI model update request to the AI model selection function on the network side to actively trigger the AI model selection function to perform the AI model update operation.
  • each function on the terminal side shown in FIG5 can be set separately, or can be set together according to actual needs.
  • each function on the network side shown in FIG5 can be set separately, or can be set together according to actual needs, for example, the network application and the AI model warehouse can be set together, that is, the network application and the AI model warehouse can be used as a module.
  • Example 3 The target AI task is performed by the terminal and the network in collaboration, and the data source is located on the terminal side
  • the terminal may include a UE application, a UE capability delivery function, an AI model inference engine, an AI model access function, an intermediate data delivery function, a data source, an inference output access function, and a data destination.
  • the above-mentioned AI model inference engine can be used to perform inference using data from a data source (for example, a camera or other media source) as input data of an AI model (i.e., a first AI model) received from an AI model access function to obtain intermediate data, i.e., partial inference output.
  • the above-mentioned intermediate data delivery function is used to receive intermediate data from the AI model inference engine of the UE and send it to the network side through 5GS.
  • the above-mentioned intermediate data delivery function may also include functions related to QoS requests and monitoring.
  • the above-mentioned inference output access function is used to receive inference output from the network side through 5GS and send it to the data destination, for example, a multimedia player.
  • the UE application and UE capability delivery function of this example can refer to the relevant description of Example 2, which will not be repeated here.
  • the network side may include a network application (Network Application), an AI model capability collection function (AI Model Capability Collection Function), an AI model selection function (AI Model Selection Function), an AI model repository (AI Model Repository), an AI model delivery function (AI Model Delivery Function), an AI model inference engine (AI Model Inference Engine), an intermediate data access function (Intermediate Data Access Function) and an inference output delivery function (Inference Output Delivery Function).
  • the above-mentioned intermediate data access function is used to receive intermediate data from the UE through 5GS, and send it to the AI model inference engine on the network side for reasoning.
  • the above-mentioned AI model inference engine is used to perform reasoning using the received intermediate data as the input data of the AI model (i.e., the above-mentioned second AI model) received from the AI model selection function, obtain the output result (i.e., the final reasoning output), and send it to the reasoning output delivery function.
  • the above-mentioned reasoning output delivery function is used to send the final reasoning output to the UE through 5GS.
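The Example 3 round trip just described (data source on the terminal: terminal-side partial inference, network-side inference on the intermediate data, final output returned to the UE) can be sketched with toy stand-ins for the two sub-models:

```python
# Toy stand-ins for the split model in the Example 3 direction.
def terminal_head(samples):
    """First AI model on the UE: partial inference producing intermediate data."""
    return [v * 2 for v in samples]

def network_tail(intermediate):
    """Second AI model on the network: final reasoning on the intermediate data."""
    return sum(intermediate)

def example3_round_trip(source_samples):
    intermediate = terminal_head(source_samples)   # sent to the network via 5GS
    final_output = network_tail(intermediate)      # returned to the UE via 5GS
    return final_output
```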
  • the AI model capability collection function and the AI model selection function in this example can be found in the relevant description of Example 2, which will not be repeated here.
  • the UE can actively trigger the update of the AI model. For example, when the memory occupied by the AI model of the target AI task currently executed by the UE exceeds the first threshold or the performance of the UE degrades (for example, for AI tasks related to multimedia services, the quality of multimedia currently output by the terminal degrades), the UE actively triggers the update of the model. For example, when the UE capability delivery function transmits the UE's capability information, the UE carries a model update request to actively trigger the AI model selection function on the network side to perform a model update operation.
  • the network side (for example, the AI model selection function on the network side) can also actively trigger the update of the AI model.
  • the AI model capability collection function can actively trigger the update of the AI model, that is, send an AI model update request to the AI model selection function on the network side to actively trigger the AI model selection function to perform the AI model update operation.
  • each function on the terminal side shown in FIG6 can be set separately, or can be set together according to actual needs.
  • each function on the network side shown in FIG6 can be set separately, or can be set together according to actual needs, for example, the network application and the AI model warehouse can be set together, that is, the network application and the AI model warehouse can be set as one module.
  • the data processing method provided in the embodiment of the present application can be executed by a data processing device, or a control module in the data processing device for executing the data processing method.
  • the data processing device provided in the embodiment of the present application is described by taking the execution of the data processing method by the data processing device as an example.
  • FIG. 7 is a structural diagram of a data processing device provided in an embodiment of the present application.
  • the data processing device 700 includes:
  • a first sending module 701 is used to send first information to a network side device, wherein the first information includes a model update request and capability information of the terminal, the model update request is used to request an update of an artificial intelligence AI model corresponding to the terminal, and the capability information of the terminal is used by the network side device to update the AI model corresponding to the terminal;
  • a first receiving module 702 is configured to receive a first AI model from the network-side device, wherein the first AI model is used for a target AI task;
  • the processing module 703 is used to process the first data based on the first AI model to obtain a first processing result, wherein the first data is data corresponding to the target AI task.
  • the capability information of the terminal includes at least one of the following: memory information of the terminal, central processing unit CPU information of the terminal, hard disk information of the terminal, computing power information of the terminal, current load information of the terminal, location information of the terminal, and power information of the terminal.
  • the first AI model is a sub-AI model obtained by segmenting a target AI model, and the target AI model is used for the target AI task.
  • the device further comprises:
  • a second sending module configured to process the first data based on the first AI model, and after obtaining a first processing result, send the first processing result to the network side device, wherein the first processing result is used for processing by a second AI model corresponding to the network side device;
  • the second receiving module is used to receive a second processing result from the network side device, wherein the second processing result is a processing result corresponding to the target AI task.
  • the device further comprises:
  • the third sending module is used to send second information for the target AI task to the network side device, wherein the second information includes at least one of quality of service QoS information and monitoring information.
  • the device further comprises:
  • a third receiving module is configured to receive a second processing result from the network side device before processing the first data based on the first AI model to obtain the first processing result; wherein the first data is the second processing result;
  • the processing module is specifically used for:
  • the second processing result is processed based on the first AI model to obtain the first processing result.
  • the device further comprises:
  • the fourth receiving module is used to receive third information for the target AI task from the network side device, wherein the third information includes at least one of quality of service QoS information and monitoring information.
  • the data processing device in the embodiment of the present application may be an electronic device, such as an electronic device with an operating system, or a component in an electronic device, such as an integrated circuit or a chip.
  • the electronic device may be a terminal, or may be other devices other than a terminal.
  • the terminal may include but is not limited to the types of terminal 11 listed above.
  • the other device may be a server, a network attached storage (NAS), etc., which is not specifically limited in the embodiments of the present application.
  • the data processing device provided in the embodiment of the present application can implement each process implemented by the method embodiment of Figure 2 and achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • FIG. 8 is a structural diagram of a data processing device provided in an embodiment of the present application.
  • the data processing device 800 includes:
  • a first receiving module 801 is configured to receive first information from a terminal, wherein the first information includes capability information of the terminal;
  • a determination module 802 is used to determine a first artificial intelligence AI model according to the capability information of the terminal, wherein the first AI model is used for a target AI task;
  • the first sending module 803 is used to send the first AI model to the terminal.
  • the first information also includes a model update request, and the model update request is used to request an update of the AI model corresponding to the terminal.
  • the determining module is specifically used for:
  • a first AI model is determined according to the capability information of the terminal.
  • the capability information of the terminal includes at least one of the following: memory information of the terminal, central processing unit CPU information of the terminal, hard disk information of the terminal, computing power information of the terminal, current load information of the terminal, location information of the terminal, and power information of the terminal.
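As an illustration only, the capability fields listed above could be carried in a structured report that the network side compares against the minimum requirements of candidate models. All field names, model names, and thresholds below are assumptions for the sketch, not part of the application:

```python
from dataclasses import dataclass

@dataclass
class TerminalCapability:
    # Fields mirror the capability information listed above (illustrative units).
    memory_mb: int          # memory information
    cpu_cores: int          # CPU information
    disk_gb: int            # hard disk information
    compute_gflops: float   # computing power information
    load_percent: float     # current load information
    battery_percent: float  # power information

@dataclass
class CandidateModel:
    name: str
    min_memory_mb: int
    min_compute_gflops: float

def select_first_ai_model(cap, candidates):
    """Pick the most capable model the terminal can run, given its report."""
    feasible = [m for m in candidates
                if cap.memory_mb >= m.min_memory_mb
                and cap.compute_gflops >= m.min_compute_gflops]
    if not feasible:
        return None  # e.g. fall back to full network-side inference
    return max(feasible, key=lambda m: m.min_compute_gflops)

cap = TerminalCapability(memory_mb=4096, cpu_cores=8, disk_gb=64,
                         compute_gflops=20.0, load_percent=35.0,
                         battery_percent=80.0)
candidates = [CandidateModel("tiny", 512, 2.0),
              CandidateModel("small", 2048, 10.0),
              CandidateModel("large", 8192, 50.0)]
chosen = select_first_ai_model(cap, candidates)
```

Here the 4096 MB / 20 GFLOPS report rules out the "large" model, so the "small" model would be selected as the first AI model.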
  • the determining module is specifically used for:
  • the target AI model is segmented according to the segmentation points of the target AI model to obtain a first AI model and a second AI model.
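A minimal sketch of such segmentation, assuming the target AI model can be represented as an ordered list of layers (the layer functions are illustrative stand-ins, not from the application):

```python
def segment_model(layers, split_point):
    """Split a layered model at the segmentation point: the first sub-model
    is sent to the terminal, the second stays on the network side device."""
    first_ai_model = layers[:split_point]
    second_ai_model = layers[split_point:]
    return first_ai_model, second_ai_model

def run(sub_model, data):
    """Run data through a sub-model's layers in order."""
    for layer in sub_model:
        data = layer(data)
    return data

# Target AI model as a list of simple layer functions (illustrative).
layers = [lambda x: x * 2, lambda x: x + 1, lambda x: x * 3, lambda x: x - 4]
first, second = segment_model(layers, split_point=2)

# Running both sub-models in sequence matches the unsplit target model.
intermediate = run(first, 5)        # terminal side
final = run(second, intermediate)   # network side
```

The key invariant is that chaining the two sub-models reproduces the output of the original target AI model.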
  • the device further comprises:
  • a second receiving module configured to receive a first processing result from the terminal after sending the first AI model to the terminal, wherein the first processing result is a processing result corresponding to the target AI task;
  • a first processing module configured to perform inference on the first processing result based on the second AI model to obtain a second processing result;
  • the second sending module is used to send the second processing result to the terminal.
  • the device further comprises:
  • the third receiving module is used to receive second information for the target AI task from the terminal, wherein the second information includes at least one of quality of service QoS information and monitoring information.
  • the device further comprises:
  • a second processing module configured to, after the first AI model is sent to the terminal, process second data based on the second AI model to obtain a second processing result, wherein the second data is data corresponding to the target AI task;
  • the third sending module is used to send the second processing result to the terminal, wherein the second processing result is used by the terminal to process based on the first AI model.
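The round trip described above (the terminal runs the first AI model, the network side device runs the second AI model and returns the result) can be sketched with in-memory queues standing in for the radio link; all names are illustrative, not from the application:

```python
import queue

# Simulated uplink / downlink channels between terminal and network side device.
uplink, downlink = queue.Queue(), queue.Queue()

def first_ai_model(x):
    """Sub-AI model deployed on the terminal (illustrative)."""
    return x + 1

def second_ai_model(x):
    """Sub-AI model kept on the network side device (illustrative)."""
    return x * 10

# Terminal: process first data with the first AI model, then send the
# first processing result to the network side device.
first_data = 3
uplink.put(first_ai_model(first_data))

# Network side device: receive the first processing result, infer with the
# second AI model, and send back the second processing result.
downlink.put(second_ai_model(uplink.get()))

# Terminal: receive the second processing result for the target AI task.
second_result = downlink.get()
```

Only the intermediate result crosses the link, which is the point of splitting the target AI model between the two sides.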
  • the capability information of the network side device includes at least one of the following: memory information of the processing server, CPU information of the processing server, hard disk information of the processing server, computing power information of the processing server, and current load information of the processing server; wherein, the processing server is a server in the network side device used to process the target AI task.
  • the device further comprises:
  • the fourth sending module is used to send the second information for the target AI task to the terminal, wherein the second information includes at least one of quality of service QoS information and monitoring information.
  • the data processing device in the embodiment of the present application may be an electronic device, such as an electronic device with an operating system, or a component in an electronic device, such as an integrated circuit or a chip.
  • the electronic device may be a network-side device, or may be a device other than a network-side device.
  • the network-side device may include but is not limited to the types of network-side devices 12 listed above, and other devices may be servers, network attached storage (NAS), etc., which are not specifically limited in the embodiment of the present application.
  • the data processing device provided in the embodiment of the present application can implement each process implemented by the method embodiment of Figure 3 and achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • the embodiment of the present application further provides a communication device 900, including a processor 901 and a memory 902, wherein the memory 902 stores a program or instruction that can be run on the processor 901.
  • when the communication device 900 is a terminal, the program or instruction is executed by the processor 901 to implement the various steps of the above terminal-side data processing method embodiment, and the same technical effect can be achieved.
  • when the communication device 900 is a network-side device, the program or instruction is executed by the processor 901 to implement the various steps of the above network-side data processing method embodiment, and the same technical effect can be achieved. To avoid repetition, it will not be repeated here.
  • the embodiment of the present application also provides a terminal, including a processor and a communication interface, wherein the communication interface is used to send first information to a network side device, the first information including a model update request and capability information of the terminal, the model update request being used to request an update of the artificial intelligence AI model corresponding to the terminal, and the capability information of the terminal being used by the network side device to update the AI model corresponding to the terminal; the communication interface is further used to receive a first AI model from the network side device, the first AI model being used for a target AI task; and the processor is used to process first data based on the first AI model to obtain a first processing result, the first data being data corresponding to the target AI task.
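The three-step exchange above (send first information, receive the first AI model, process first data) can be sketched end to end. The message layout, the memory threshold, and the toy models are assumptions for illustration only, not part of the application:

```python
class NetworkSideDevice:
    """Returns a first AI model sized to the reported capability (illustrative)."""
    def handle_first_information(self, first_information):
        if not first_information.get("model_update_request"):
            return None
        memory_mb = first_information["capability"]["memory_mb"]
        # Assumed threshold, chosen only for this sketch.
        if memory_mb >= 8192:
            return lambda x: x * 2      # stand-in "large" first AI model
        return lambda x: x + 2          # stand-in "small" first AI model

class Terminal:
    def __init__(self, memory_mb):
        self.capability = {"memory_mb": memory_mb}
        self.first_ai_model = None

    def request_update(self, network):
        # First information: model update request plus capability information.
        first_information = {"model_update_request": True,
                             "capability": self.capability}
        self.first_ai_model = network.handle_first_information(first_information)

    def process(self, first_data):
        # Process first data based on the received first AI model.
        return self.first_ai_model(first_data)

terminal = Terminal(memory_mb=4096)
terminal.request_update(NetworkSideDevice())
first_processing_result = terminal.process(10)
```

With a 4096 MB report the terminal receives the smaller stand-in model, so processing the value 10 yields 12 in this sketch.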
  • FIG. 10 is a schematic diagram of the hardware structure of a terminal that implements an embodiment of the present application.
  • the terminal 1000 includes but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010, among other components.
  • the terminal 1000 can also include a power supply (such as a battery) for supplying power to each component, and the power supply can be logically connected to the processor 1010 through a power management system, so as to implement functions such as charging, discharging, and power consumption management through the power management system.
  • the terminal structure shown in FIG10 does not constitute a limitation on the terminal, and the terminal can include more or fewer components than shown in the figure, or combine certain components, or arrange components differently, which will not be described in detail here.
  • the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processing unit 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, etc.
  • the user input unit 1007 includes a touch panel 10071 and at least one of other input devices 10072.
  • the touch panel 10071 is also called a touch screen.
  • the touch panel 10071 may include two parts: a touch detection device and a touch controller.
  • Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key, a switch key, etc.), a trackball, a mouse, and a joystick, which will not be repeated here.
  • after receiving downlink data from the network side device, the radio frequency unit 1001 can transmit the data to the processor 1010 for processing; in addition, the radio frequency unit 1001 can send uplink data to the network side device.
  • the RF unit 1001 includes but is not limited to an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, etc.
  • the memory 1009 can be used to store software programs or instructions and various data.
  • the memory 1009 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instruction required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the memory 1009 may include a volatile memory or a non-volatile memory, or the memory 1009 may include both volatile and non-volatile memories.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (RAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchlink dynamic random access memory (SLDRAM), or a direct rambus random access memory (DRRAM).
  • the memory 1009 in the embodiment of the present application includes but is not limited to these and any other suitable types of memory.
  • the processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor and a modem processor, wherein the application processor mainly processes operations related to an operating system, a user interface, and application programs, and the modem processor mainly processes wireless communication signals, such as a baseband processor. It is understandable that the modem processor may not be integrated into the processor 1010.
  • the radio frequency unit 1001 is used to send first information to the network side device, wherein the first information includes a model update request and the capability information of the terminal, the model update request is used to request an update of the artificial intelligence AI model corresponding to the terminal, and the capability information of the terminal is used by the network side device to update the AI model corresponding to the terminal; receive a first AI model from the network side device, wherein the first AI model is used for a target AI task;
  • Processor 1010 is used to process first data based on the first AI model to obtain a first processing result, wherein the first data is data corresponding to the target AI task.
  • in this way, the terminal sends its capability information to the network side device, so that the network side device can update the AI model for the terminal based on the capability information of the terminal.
  • the capability information of the terminal includes at least one of the following: memory information of the terminal, central processing unit CPU information of the terminal, hard disk information of the terminal, computing power information of the terminal, current load information of the terminal, location information of the terminal, and power information of the terminal.
  • the first AI model is a sub-AI model obtained by segmenting a target AI model, and the target AI model is used for the target AI task.
  • the radio frequency unit 1001 is further used for:
  • the first processing result is sent to the network side device, wherein the first processing result is used for processing by a second AI model corresponding to the network side device;
  • a second processing result is received from the network side device, wherein the second processing result is a processing result corresponding to the target AI task.
  • the radio frequency unit 1001 is further used for:
  • second information for the target AI task is sent to the network side device, wherein the second information includes at least one of quality of service QoS information and monitoring information.
  • the radio frequency unit 1001 is further used for:
  • a second processing result is received from the network side device before the first data is processed based on the first AI model to obtain the first processing result; wherein the first data is the second processing result;
  • the processor 1010 is specifically configured to:
  • the second processing result is processed based on the first AI model to obtain the first processing result.
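In this reverse split flow, the network side device runs its sub-model first and the terminal finishes the task by treating the received second processing result as its first data. A minimal sketch, with illustrative stand-in models:

```python
def second_ai_model(x):
    """Sub-AI model on the network side device (illustrative stand-in)."""
    return x * 10

def first_ai_model(x):
    """Sub-AI model on the terminal (illustrative stand-in)."""
    return x + 1

second_data = 7                                          # data for the target AI task
second_processing_result = second_ai_model(second_data)  # network-side processing

# The terminal receives the second processing result and uses it as first data.
first_data = second_processing_result
first_processing_result = first_ai_model(first_data)     # terminal-side processing
```

The direction of the split is the only difference from the terminal-first flow: here the network side produces the intermediate result and the terminal produces the final one.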
  • the radio frequency unit 1001 is further used for:
  • third information for the target AI task is received from the network side device, wherein the third information includes at least one of quality of service QoS information and monitoring information.
  • the embodiment of the present application also provides a network side device, including a processor and a communication interface, wherein the communication interface is used to receive first information from a terminal, the first information including capability information of the terminal; the processor is used to determine a first artificial intelligence AI model according to the capability information of the terminal, the first AI model being used for a target AI task; and the communication interface is also used to send the first AI model to the terminal.
  • This network side device embodiment corresponds to the above network side device method embodiment, and each implementation process and implementation method of the above method embodiment can be applied to this network side device embodiment and can achieve the same technical effect.
  • the embodiment of the present application also provides a network side device.
  • the network side device 1100 includes: an antenna 1101, a radio frequency device 1102, a baseband device 1103, a processor 1104 and a memory 1105.
  • the antenna 1101 is connected to the radio frequency device 1102.
  • the radio frequency device 1102 receives information through the antenna 1101 and sends the received information to the baseband device 1103 for processing.
  • the baseband device 1103 processes the information to be sent and sends it to the radio frequency device 1102.
  • the radio frequency device 1102 processes the received information and sends it out through the antenna 1101.
  • the method executed by the network-side device in the above embodiment may be implemented in the baseband device 1103, which includes a baseband processor.
  • the baseband device 1103 may include, for example, at least one baseband board on which multiple chips are arranged; as shown in Figure 11, one of the chips is, for example, a baseband processor, which is connected to the memory 1105 through a bus interface to call the program in the memory 1105 and execute the network-side device operations shown in the above method embodiment.
  • the network side device may also include a network interface 1106, which is, for example, a common public radio interface (CPRI).
  • the network side device 1100 of the embodiment of the present application also includes: instructions or programs stored in the memory 1105 and executable on the processor 1104.
  • the processor 1104 calls the instructions or programs in the memory 1105 to execute the method executed by each module shown in Figure 8 and achieves the same technical effect. To avoid repetition, it will not be repeated here.
  • An embodiment of the present application also provides a readable storage medium on which a program or instruction is stored. When the program or instruction is executed by a processor, the various processes of the above data processing method embodiments are implemented, and the same technical effect can be achieved. To avoid repetition, it will not be repeated here.
  • the processor is the processor in the terminal described in the above embodiment.
  • the readable storage medium includes a computer readable storage medium, such as a computer read-only memory ROM, a random access memory RAM, a magnetic disk or an optical disk.
  • An embodiment of the present application further provides a chip, which includes a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the various processes of the above data processing method embodiments, and the same technical effect can be achieved. To avoid repetition, it will not be repeated here.
  • the chip mentioned in the embodiments of the present application can also be called a system-level chip, a system chip, a chip system or a system-on-chip chip, etc.
  • the embodiments of the present application further provide a computer program/program product, which is stored in a storage medium and executed by at least one processor to implement the various processes of the above data processing method embodiments, and the same technical effect can be achieved. To avoid repetition, it will not be repeated here.
  • An embodiment of the present application also provides a data processing system, including: a terminal and a network side device, wherein the terminal is used to execute the various processes as shown in Figure 2 and the various method embodiments described above, and the network side device is used to execute the various processes as shown in Figure 3 and the various method embodiments described above, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • the technical solution of the present application can be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), and includes a number of instructions for enabling a terminal (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in each embodiment of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Information Transfer Between Computers (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application belongs to the technical field of communications. Disclosed are a data processing method and apparatus, a terminal, and a network side device. The data processing method in the embodiments of the present application comprises the following steps: a terminal sends first information to a network side device, the first information comprising a model update request and capability information of the terminal, the model update request being used to request an update of an artificial intelligence (AI) model corresponding to the terminal, and the capability information of the terminal being used by the network side device to update the AI model corresponding to the terminal; the terminal receives a first AI model from the network side device, the first AI model being used for a target AI task; and the terminal processes first data on the basis of the first AI model to obtain a first processing result, the first data being data corresponding to the target AI task.
PCT/CN2023/125102 2022-10-25 2023-10-18 Data processing method and apparatus, terminal and network side device WO2024088119A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211313122.7A CN117978650A (zh) 2022-10-25 2022-10-25 Data processing method and apparatus, terminal and network side device
CN202211313122.7 2022-10-25

Publications (1)

Publication Number Publication Date
WO2024088119A1 true WO2024088119A1 (fr) 2024-05-02

Family

ID=90830003

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/125102 WO2024088119A1 (fr) 2022-10-25 2023-10-18 Procédé et appareil de traitement de données, terminal et dispositif côté réseau

Country Status (2)

Country Link
CN (1) CN117978650A (fr)
WO (1) WO2024088119A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021142609A1 (fr) * 2020-01-14 2021-07-22 Oppo广东移动通信有限公司 Information reporting method, apparatus and device, and storage medium
WO2022041947A1 (fr) * 2020-08-24 2022-03-03 华为技术有限公司 Machine learning model update method and communication apparatus
CN114915983A (zh) * 2021-02-07 2022-08-16 展讯通信(上海)有限公司 Data acquisition method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
INTEL CORPORATION: "High level principle and Functional Framework of AI/ML enabled NG-RAN Network", 3GPP TSG-RAN WG3 MEETING #113-E, R3-213468, 6 August 2021 (2021-08-06), XP052035296 *

Also Published As

Publication number Publication date
CN117978650A (zh) 2024-05-03

Similar Documents

Publication Publication Date Title
CN107135128B (zh) Call chain data collection method, mobile terminal and computer-readable storage medium
US20170109756A1 (en) User Unsubscription Prediction Method and Apparatus
CN108011937A (zh) Message pushing method, server, intelligent terminal and computer-readable storage medium
WO2023093894A1 (fr) Sensing service implementation method and apparatus, and network side device and terminal
CN118175052A (zh) Model training method, terminal and network side device
WO2024088119A1 (fr) Data processing method and apparatus, terminal and network side device
WO2024078615A1 (fr) Model selection method, terminal and network side device
WO2024153013A1 (fr) Information transmission method and apparatus, and communication device
WO2024149288A1 (fr) AI model distribution method, AI model receiving method, terminal and network side device
US20240188046A1 (en) Computing session update method and apparatus, terminal, and network side device
WO2024140725A1 (fr) Transmission method and apparatus
WO2024078589A1 (fr) Information reporting method and apparatus, communication device and storage medium
WO2024022398A1 (fr) Method for acquiring network selection information of a hosted network, terminal and network side device
WO2024067437A1 (fr) Model deactivation method and apparatus, information sending method and apparatus, and device
WO2024078603A1 (fr) Data collection method and apparatus, communication device and readable storage medium
WO2023213270A1 (fr) Model training processing method and apparatus, terminal and network side device
WO2024140570A1 (fr) Policy configuration method and apparatus, terminal, network side device, and readable storage medium
WO2023125932A1 (fr) AI network information transmission method and apparatus, and communication device
WO2023143416A1 (fr) Information processing method, terminal, and network function
CN117320081A (zh) Computing power processing method and apparatus, and communication device
CN117119429A (zh) Terminal route selection policy information management method and related device
CN117528712A (zh) Network selection method and terminal
CN117910589A (zh) Model request method and apparatus, communication device and readable storage medium
CN116567781A (zh) Transmission method and apparatus, terminal, network side device and readable storage medium
CN116567588A (zh) Notification method, second terminal and first network function

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23881698

Country of ref document: EP

Kind code of ref document: A1