WO2023124947A1 - Task processing method and apparatus, and related device - Google Patents

Task processing method and apparatus, and related device

Info

Publication number
WO2023124947A1
Authority
WO
WIPO (PCT)
Prior art keywords
task
processed
information
model
service
Application number
PCT/CN2022/138453
Other languages
English (en)
French (fr)
Inventor
张亚强
李茹杨
赵雅倩
李仁刚
Original Assignee
苏州浪潮智能科技有限公司
Application filed by 苏州浪潮智能科技有限公司
Publication of WO2023124947A1

Classifications

    • G06F 9/5038: Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 9/505: Allocation of resources to service a request, the resource being a machine, considering the load
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the technical field of the Internet of Things, and in particular to a task processing method; it also relates to a task processing apparatus, a task processing device, and a computer non-volatile readable storage medium.
  • DNN: Deep Neural Network.
  • the purpose of this application is to provide a task processing method, which can improve task processing efficiency while solving the problem of limited network resources; another purpose of this application is to provide a task processing apparatus, a task processing device, and a computer non-volatile readable storage medium, all of which have the above beneficial effects.
  • the present application provides a task processing method, including:
  • each service model is deployed on the edge server;
  • prior to prioritizing the tasks to be processed according to the task information, the method also includes:
  • the task information includes the inherent constraint time of the task to be processed, the estimated waiting time, and the estimated execution time, and then the tasks to be processed are prioritized according to the task information, including:
  • the level evaluation value of the corresponding task to be processed is obtained
  • the level evaluation value corresponding to the task to be processed is calculated and obtained according to the inherent constraint time, the estimated waiting time, and the estimated execution time, including:
  • the level evaluation formula is used to calculate the inherent constraint time, the estimated waiting time, and the estimated execution time to obtain the level evaluation value; the formula itself appears as an image in the original filing and is not reproduced here. In the formula:
  • TC_i represents the inherent constraint time of task i to be processed;
  • WTP_i represents the estimated waiting time of task i to be processed;
  • EP_i represents the estimated execution time of task i to be processed;
  • Por_i represents the level evaluation value of task i to be processed.
  • the generation process of the preset decision network model includes:
  • sample data of a preset number of sample tasks includes the task information corresponding to the sample task, the service information of each service model when the sample task is called, the model information of the optimal service model for executing the sample task, and the revenue value of using the optimal service model to execute the sample task;
  • the task processing method also includes:
  • the generation process of each service model includes:
  • the service model corresponding to each output layer is obtained.
  • the method further includes:
  • the present application also discloses a task processing device, including:
  • a task sorting module configured to obtain task information of each pending task, and perform priority sorting on each pending task according to the task information;
  • the service information acquisition module is used to obtain the target pending tasks in order of priority from high to low, and obtain the service information of each service model; wherein, each service model is deployed on the edge server;
  • the model income calculation module is used to process the task information and each service information of the target pending task by using the preset decision-making network model, and obtain the effective service model and the income value of each effective service model;
  • the task processing module is configured to use the effective service model corresponding to the maximum revenue value to process the target pending task.
  • the task processing device also includes:
  • the local processing module is used to calculate the resource consumption rate of the corresponding task to be processed according to the task information before the tasks to be processed are prioritized according to the task information, and to use the local service model to process the pending tasks whose resource consumption rate is lower than the preset threshold.
  • the task sorting module includes:
  • a grade evaluation value calculation unit configured to calculate and obtain the grade evaluation value corresponding to the task to be processed according to the inherent constraint time, the estimated waiting time and the estimated execution time;
  • the priority sorting unit is configured to sort the priority of each task to be processed according to the value of the grade evaluation value.
  • the grade evaluation value calculation unit is specifically configured to use a grade evaluation formula to calculate the inherent constraint time, estimated waiting time, and estimated execution time to obtain a grade evaluation value; the formula itself appears as an image in the original filing and is not reproduced here. In the formula:
  • TC_i represents the inherent constraint time of task i to be processed;
  • WTP_i represents the estimated waiting time of task i to be processed;
  • EP_i represents the estimated execution time of task i to be processed;
  • Por_i represents the level evaluation value of task i to be processed.
  • the task processing device also includes:
  • the preset decision-making network model building module is used to obtain sample data of a preset number of sample tasks, wherein the sample data includes the task information corresponding to the sample task, the service information of each service model when the sample task is called, the model information of the optimal service model for executing the sample task, and the revenue value of using the optimal service model to execute the sample task; and to construct an initial decision network model and train it on each sample of data to obtain the preset decision network model.
  • the task processing device also includes:
  • the preset decision-making network model deployment module is used to deploy the preset decision-making network model on the edge server.
  • the task processing device also includes:
  • the service model building module is used to obtain the service model corresponding to each output layer by setting the exit position at the specified output layer of the overall task processing model.
  • the task processing device also includes:
  • the result feedback module is configured to receive the task processing result fed back by the effective service model after processing the target pending task by using the effective service model corresponding to the maximum revenue value; and output the task processing result.
  • the present application also discloses a task processing device, including:
  • the processor is used to implement the steps of any one of the above task processing methods when executing the computer program.
  • the present application also discloses a computer non-volatile readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of any one of the above task processing methods are implemented.
  • a task processing method provided by the present application includes: obtaining task information of each task to be processed, and prioritizing each task to be processed according to the task information; obtaining target pending tasks in order of priority from high to low, and obtaining the service information of each service model, wherein each service model is deployed on the edge server; using the preset decision-making network model to process the task information of the target pending task and each piece of service information to obtain the effective service models and the revenue value of each effective service model; and using the effective service model corresponding to the maximum revenue value to process the target pending task.
  • the tasks to be processed are thus handled in order of priority from high to low, and during task processing the preset decision-making network model dynamically allocates the optimal service model to each task to be processed; since each service model is deployed on the edge server, edge-end collaborative task processing is realized, achieving a dynamic balance between delay and inference accuracy during network operation and maximizing edge network utility. That is, task processing efficiency can be improved while the problem of limited network resources is solved.
  • a task processing device, a task processing device, and a computer non-volatile readable storage medium provided by the present application all have the above beneficial effects, and will not be repeated here.
  • FIG. 1 is a schematic flow chart of a task processing method provided by the present application.
  • FIG. 2 is a schematic diagram of the principle of a preset decision-making network model provided by the present application
  • FIG. 3 is a schematic structural diagram of a multi-service model provided by the present application.
  • FIG. 4 is a schematic diagram of deployment of a multi-service model provided by the present application.
  • FIG. 5 is a schematic structural diagram of a task processing device provided by the present application.
  • FIG. 6 is a schematic structural diagram of a task processing device provided in the present application.
  • the computer non-volatile readable storage medium also has the above beneficial effects.
  • the embodiment of the present application provides a task processing method.
  • FIG. 1 is a schematic flowchart of a task processing method provided by the present application.
  • the task processing method may include:
  • S101 Acquiring task information of each pending task, and prioritizing each pending task according to the task information;
  • This step aims to implement priority ranking of pending tasks based on task information, so as to facilitate task processing according to priority.
  • the task information of each task to be processed can be collected first, and then each task to be processed can be prioritized according to the task information, so that the tasks to be processed are arranged in order of priority from high to low.
  • in order to facilitate task retrieval, all pending tasks can be stored in a task queue.
  • in the task queue, the pending tasks are likewise arranged in order of priority from high to low.
  • the specific content of the task information does not affect the implementation of this technical solution, and can be set by the technician according to the actual situation.
  • it can be, for example, the task execution time corresponding to the task to be processed, or the resource utilization rate during task execution, which is not limited in this application.
  • the priority sorting operation of the tasks to be processed may be performed in real time or at regular intervals according to a preset time interval, which is not limited in this application.
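  • The queue maintenance described in S101 (compute an evaluation value per pending task, keep the queue ordered from highest to lowest priority) can be sketched as follows. This is a minimal illustration, not the filing's implementation; the task names are invented, and the only property carried over is the one stated later in the description, namely that a smaller evaluation value means a more urgent task.

```python
import heapq

class TaskQueue:
    """Min-heap keyed on the level evaluation value: the smaller the
    value, the more urgent the task, so it is popped first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so task payloads never compare directly

    def push(self, level_eval, task):
        heapq.heappush(self._heap, (level_eval, self._counter, task))
        self._counter += 1

    def pop_highest_priority(self):
        return heapq.heappop(self._heap)[2]

queue = TaskQueue()
queue.push(5.0, "sensor-fusion")
queue.push(1.2, "collision-alert")   # most urgent: smallest evaluation value
queue.push(3.4, "map-update")

assert queue.pop_highest_priority() == "collision-alert"
```

A heap keeps both insertion and retrieval logarithmic, which suits the description's option of re-sorting either in real time or at a preset interval.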
  • S102 Obtain target pending tasks in order of priority from high to low, and obtain service information of each service model; wherein, each service model is deployed on an edge server;
  • This step aims to obtain the target pending tasks in order of priority from high to low, and at the same time obtain the service information of each service model, so as to combine the service information with the task information of the target pending task and realize the processing of the target pending task.
  • the service model is a network model used to implement task processing, and there are multiple network models. Different service models have different precisions, and task processing with corresponding accuracy can be realized according to actual task requirements. Moreover, all service models are deployed on the edge server, thereby realizing edge-end collaborative task processing. Compared with cloud-based collaborative task processing, the information transmission path in this implementation method is greatly shortened, which greatly improves the task processing efficiency.
  • the service information refers to relevant information corresponding to the service model, including but not limited to information such as the number of network layers and model accuracy.
  • S103 Use the preset decision-making network model to process the task information and each service information of the target pending task, and obtain an effective service model and the revenue value of each effective service model;
  • This step aims to filter out the valid service models suitable for processing the target pending task from all service models. It is understandable that, since tasks to be processed differ, not every service model deployed on the edge server is suitable for processing the current task to be processed. Based on this, the effective service models can be selected from all service models first, and the optimal service model can then be selected from the effective service models.
  • the acquisition of the effective service models is realized based on the preset decision network model. Specifically, after the target pending task is obtained from the task queue and the service information of each service model on the edge server is collected, they can be input together into the preset decision-making network model, which processes the task information of the target pending task and the service information of each service model and outputs the effective service models.
  • an effective service model refers to a service model that can realize the target task to be processed; there are generally several of them.
  • another branch can be set in the preset decision-making network model to calculate the revenue value of each effective service model, so that the optimal service model can be obtained from the multiple effective service models according to the revenue values.
  • the revenue value refers to the immediate reward when using the current effective service model to process the target pending task.
  • the effective service model corresponding to the maximum revenue value is the optimal service model.
  • S104 Process the target pending task by using the effective service model corresponding to the maximum revenue value.
  • This step aims to realize task processing.
  • the effective service model corresponding to the maximum revenue value is the optimal service model; therefore, the target pending task can be processed directly using the effective service model corresponding to the maximum revenue value.
  • the tasks to be processed are processed in order of priority from high to low through priority sorting, and during task processing the preset decision-making network model dynamically allocates the optimal service model to each task to be processed; since each service model is deployed on the edge server, edge-end collaborative task processing is realized, achieving a dynamic balance between delay and inference accuracy during network operation and maximizing edge network utility, which can improve task processing efficiency while solving the problem of limited network resources.
  • prior to prioritizing the tasks to be processed according to the task information, the method may further include: calculating the resource consumption rate of the corresponding task to be processed according to the task information; and using the local service model to process the pending tasks whose resource consumption rate is lower than the preset threshold.
  • the pending tasks with a low resource consumption rate are directly processed locally, while the pending tasks with a high resource consumption rate are sent to the service models on the edge server for processing, so as to plan network resources more reasonably and further improve task processing efficiency.
  • the corresponding resource consumption rate can be calculated according to the task information of each task to be processed; pending tasks whose resource consumption rate is lower than the preset threshold are then sent to the local service model for processing, while pending tasks whose resource consumption rate is not lower than the preset threshold are prioritized and stored in the task queue to wait for processing.
  • the specific value of the preset threshold is not unique, and can be set by a technician according to the actual situation, which is not limited in the present application.
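  • The threshold dispatch just described can be sketched as follows. The rate estimator and the threshold value are invented for illustration; the filing explicitly leaves the threshold to the technician.

```python
def default_rate(task_info):
    # toy estimator: normalized task load in [0, 1]; the real rate
    # would come from the task information described above
    return min(task_info["load"] / task_info["capacity"], 1.0)

def dispatch(task_info, preset_threshold=0.3, estimate_rate=None):
    """Route a pending task: tasks whose estimated resource consumption
    rate is below the threshold run on the local service model; the
    rest are queued for an edge-server service model."""
    rate = (estimate_rate or default_rate)(task_info)
    return "local" if rate < preset_threshold else "edge_queue"

assert dispatch({"load": 10, "capacity": 100}) == "local"       # rate 0.1
assert dispatch({"load": 80, "capacity": 100}) == "edge_queue"  # rate 0.8
```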
  • the above task information may include the inherent constraint time, the estimated waiting time, and the estimated execution time of the task to be processed, and prioritizing the tasks to be processed according to the task information may then include: calculating the level evaluation value of the corresponding task to be processed according to the inherent constraint time, the estimated waiting time, and the estimated execution time; and sorting the priority of each pending task according to the level evaluation values.
  • the embodiment of the present application provides specific types of task information, so as to implement prioritization of tasks to be processed based on the task information.
  • the task information may specifically include the inherent constraint time of the task to be processed (a preset constraint duration), the estimated waiting time (the time the task waits before execution), and the estimated execution time (the time required to execute the task); thus, the level evaluation value of the corresponding task to be processed can be calculated from the task information, and each task to be processed can then be prioritized according to its level evaluation value.
  • the above calculation of the level evaluation value of the corresponding task to be processed based on the inherent constraint time, estimated waiting time, and estimated execution time may include: using the level evaluation formula to calculate the inherent constraint time, estimated waiting time, and estimated execution time to obtain the level evaluation value; the formula itself appears as an image in the original filing and is not reproduced here. In the formula:
  • TC_i represents the inherent constraint time of task i to be processed;
  • WTP_i represents the estimated waiting time of task i to be processed;
  • EP_i represents the estimated execution time of task i to be processed;
  • Por_i represents the level evaluation value of task i to be processed.
  • the embodiment of the present application provides a level evaluation formula for calculating the level evaluation value, wherein the smaller the level evaluation value, the stronger the urgency of the current task to be processed, that is, the higher its priority, and the further forward its storage position in the task queue should be.
  • the above-mentioned generation process of the preset decision-making network model may include: obtaining sample data of a preset number of sample tasks, wherein the sample data includes the task information corresponding to the sample task, the service information of each service model when the sample task is called, the model information of the optimal service model for executing the sample task, and the revenue value of using the optimal service model to execute the sample task.
  • This application provides a method for constructing the preset decision-making network model, that is, training an initial decision-making network model on the sample data to obtain a preset decision-making network model that meets the requirements.
  • the sample data refers to all the relevant data information corresponding to a sample task, including but not limited to the task information of the sample task, the service information of each service model when the sample task is called, the model information of the optimal service model suitable for executing the sample task, and the revenue value of using the optimal service model to process the sample task.
  • the specific value of the above-mentioned preset number does not affect the implementation of the technical solution and can be set by the technician according to the actual situation; this application does not limit it, but the larger the value, the higher the model accuracy.
  • the construction process of the preset decision-making network model is as follows:
  • the scheduling process of tasks to be processed can be regarded as a decision problem within an infinite length range, and a problem description model based on Markov decision process can be constructed.
  • the system environment state S_t includes the running status of each service model at the current stage and the task information of the pending tasks; the decision-making action A_t means scheduling a pending task to a certain service model for processing; and R_t means the system benefit (that is, the revenue value) brought about by executing the current decision.
  • the decision objective function can be constructed accordingly; the formula appears as an image in the original filing and is not reproduced here. In it, two weight coefficients represent the respective weights of the two parts, t_i and p_i represent the execution time and result precision of task i under the current decision, and T represents the total number of pending tasks in the system.
  • an action value function representation based on a deep neural network (the Q network, QN) can be constructed, and the optimal decision network parameters can be obtained by further updating the value network. The system environment parameters at time t, {TC_i, Pr_i, Ld_i}, together with the current load status of the k-th service model (k ranging over the K service models of the edge server), serve as the input of the QN, where TC_i, Pr_i, and Ld_i respectively indicate the time constraint, precision constraint, and task load of the current task i to be processed. The output of the QN is the action value corresponding to each decision action, that is, the Q value.
  • after executing a decision action, the edge server shifts to the next state S_{t+1} and calculates the immediate reward; the reward formula appears as an image in the original filing and is not reproduced here.
  • the transition information (including environment state information, action information, and reward information) is stored in the experience replay buffer D.
  • M samples are randomly drawn from D, with j denoting the j-th sample among the M samples; the target Q value is then calculated for each sample (the sample tuple notation and the target Q formula appear as images in the original filing and are not reproduced here).
  • the parameter θ of the QN network can then be updated according to gradient backpropagation.
  • FIG. 2 is a schematic diagram of a preset decision-making network model provided in the present application, and the trained decision-making network model is deployed on an edge server. During the running process, at each change moment, the current system environment state information is collected, all effective decision-making actions are output, and finally the decision-making action with the optimal value function is selected and executed based on the greedy strategy.
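  • The training loop described above (store transitions in the replay buffer D, randomly sample M of them, form a bootstrapped target, and update the value estimate) can be sketched with a tabular stand-in for the QN. The filing's QN is a deep network over the continuous state {TC_i, Pr_i, Ld_i}; here the tabular Q, the synthetic environment, the standard one-step target y = r + γ·max_a Q(s', a) (the filing's exact target formula is an image and is not reproduced), and all hyper-parameter values are illustrative assumptions.

```python
import random

random.seed(0)

N_STATES, N_ACTIONS = 5, 3  # toy discretized system states / service models
GAMMA, LR = 0.9, 0.1        # illustrative discount factor and learning rate

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
D = []  # experience replay buffer of (s, a, r, s_next) transitions

def train_step(M=8):
    """Sample M transitions from D and apply the bootstrapped update
    y = r + gamma * max_a Q[s_next][a], nudging Q[s][a] toward y."""
    for s, a, r, s_next in random.sample(D, min(M, len(D))):
        target = r + GAMMA * max(Q[s_next])
        Q[s][a] += LR * (target - Q[s][a])

# synthetic environment: action 0 always yields reward 1, others 0
for _ in range(500):
    s, s_next = random.randrange(N_STATES), random.randrange(N_STATES)
    a = random.randrange(N_ACTIONS)
    D.append((s, a, 1.0 if a == 0 else 0.0, s_next))

for _ in range(2000):
    train_step()

# the greedy policy (argmax over Q values) now prefers the rewarded action,
# mirroring the greedy action selection described for deployment
assert all(max(range(N_ACTIONS), key=lambda a: Q[s][a]) == 0
           for s in range(N_STATES))
```

At deployment, as the description says, the trained network only needs the argmax step: observe the state, score all effective decision actions, and execute the one with the highest value.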
  • the task processing method may further include: deploying a preset decision-making network model on the edge server.
  • the preset decision-making network model can also be deployed on the edge server to use the network resources of the edge server to achieve faster task allocation, further improve task processing efficiency, and effectively reduce the occupation of local resources.
  • the generating process of each service model may include: obtaining the service model corresponding to each output layer by setting an exit position at a specified output layer of the overall task processing model.
  • the embodiment of the present application provides a method for constructing a service model.
  • multi-precision deep learning model training can be carried out first to obtain a complete deep learning model. Further, in order to serve pending tasks with different precision requirements, multiple exit positions can be set at different levels of the deep learning model; it is conceivable that the inference accuracy at these exit positions will be lower than that of running the entire model, but they occupy fewer computing resources during actual operation, which is very important for the long-term operation of resource-constrained edge servers. Finally, the deep learning inference models contained up to the different exit positions are deployed on the edge server in the form of services, generating service models with different accuracies.
  • FIG. 3 is a schematic structural diagram of a multi-service model provided by the present application, where Ser_k represents the service model corresponding to the k-th exit position.
  • the average inference accuracy of different exit positions can be obtained as the service information of the position.
  • a service model loadable by the terminal device can also be deployed on the device side to realize local task processing.
  • the above-mentioned complete deep learning model can be trained on the cloud, and after the training is completed on the cloud, it can be deployed to edge servers and terminal devices.
  • FIG. 4 is a schematic diagram of deployment of a multi-service model provided by this application.
  • the layers at which the different exit positions are located, together with all layers before them, can be used as sub-network models (that is, service models), packaged in the form of edge services, and deployed to edge servers to wait for user requests to invoke them.
  • some service models that consume less computing resources are deployed to terminal devices to achieve local task processing.
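  • Packaging each exit position, together with all layers before it, as an independently callable service (the Ser_k of FIG. 3) can be sketched with plain callables standing in for network layers. The toy layers, exit positions, and naming below are assumptions for illustration only.

```python
def make_service_models(layers, exit_positions):
    """Build one service model per exit position: the sub-network made
    of the layer at the exit position and every layer before it."""
    services = {}
    for k, pos in enumerate(exit_positions, start=1):
        prefix = layers[: pos + 1]
        def service(x, prefix=prefix):  # bind this exit's layer prefix
            for layer in prefix:
                x = layer(x)
            return x
        services[f"Ser_{k}"] = service
    return services

# toy "layers": simple arithmetic stand-ins for network layers
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * 10]
services = make_service_models(layers, exit_positions=[1, 3])

assert services["Ser_1"](5) == 12   # (5 + 1) * 2, early exit: cheaper
assert services["Ser_2"](5) == 90   # ((5 + 1) * 2 - 3) * 10, full model
```

The earlier exits share their prefix computation with the full model; in a real deployment the shallow, cheap services (or a distilled local model) would go to the terminal device and the deeper ones to the edge server, as described above.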
  • after the target pending task is processed by using the effective service model corresponding to the maximum revenue value, the method may further include: receiving the task processing result fed back by the effective service model; and outputting the task processing result.
  • the task processing method provided in the embodiment of the present application can realize the feedback of task processing results. Since each service model is deployed on the edge server, each task to be processed is in effect processed by the edge server. Based on this, after completing the task processing and obtaining the task processing result, the edge server can feed back the task processing result of the optimal service model to the terminal device, and the terminal device performs local output and storage, which makes it convenient for technicians to learn the task processing results in a timely and effective manner.
  • the present application also provides a task processing device, please refer to FIG. 5, which is a schematic structural diagram of a task processing device provided in the present application.
  • the task processing device may include:
  • the task sorting module 1 is configured to obtain task information of each pending task, and perform priority sorting on each pending task according to the task information;
  • the service information acquisition module 2 is used to obtain the target pending tasks in order of priority from high to low, and obtain the service information of each service model; wherein, each service model is deployed on the edge server;
  • the model income calculation module 3 is used to process the task information and each service information of the target pending task by using the preset decision-making network model, and obtain the effective service model and the income value of each effective service model;
  • the task processing module 4 is configured to use the effective service model corresponding to the maximum revenue value to process the target pending task.
  • the task processing device first processes the tasks to be processed in order of priority from high to low through priority sorting, and then uses the preset decision-making network model to dynamically allocate the optimal service model to each task to be processed; since each service model is deployed on the edge server, edge-end collaborative task processing is realized, achieving a dynamic balance between delay and inference accuracy during network operation and maximizing edge network utility, that is, improving task processing efficiency while solving the problem of limited network resources.
  • the task processing apparatus may further include a local processing module configured to calculate, before the prioritization of the pending tasks according to the task information, the resource consumption rate of the corresponding pending task according to the task information, and to process, with a local service model, the pending tasks whose resource consumption rate is lower than a preset threshold.
  • the above task sorting module 1 may include:
  • a grade evaluation value calculation unit configured to calculate the grade evaluation value of the corresponding pending task according to the inherent constraint time, the estimated waiting time, and the estimated execution time;
  • a priority sorting unit configured to prioritize the pending tasks according to the magnitude of the grade evaluation values.
  • the above grade evaluation value calculation unit may be specifically configured to calculate the inherent constraint time, the estimated waiting time, and the estimated execution time with a grade evaluation formula to obtain the grade evaluation value; wherein, in the grade evaluation formula, TC i denotes the inherent constraint time of pending task i, WTP i denotes the estimated waiting time of pending task i, EP i denotes the estimated execution time of pending task i, and Por i denotes the grade evaluation value of pending task i.
  • the task processing apparatus may further include a preset decision network model construction module configured to acquire sample data of a preset number of sample tasks, wherein the sample data includes task information of the corresponding sample task, service information of each service model at the moment the sample task is invoked, model information of the optimal service model that executes the sample task, and the revenue value of executing the sample task with the optimal service model; and to construct an initial decision network model and train it on the sample data to obtain the preset decision network model.
  • the task processing apparatus may further include a preset decision network model deployment module configured to deploy the preset decision network model on the edge server.
  • the task processing apparatus may further include a service model construction module configured to obtain the service model corresponding to each specified output layer by setting exit positions at the specified output layers of an overall task processing model.
  • the task processing apparatus may further include a result feedback module configured to receive, after the target pending task has been processed by the valid service model corresponding to the maximum revenue value, the task processing result fed back by that valid service model, and to output the task processing result.
  • FIG. 6 is a schematic structural diagram of a task processing device provided in the present application.
  • the task processing device may include:
  • a memory configured to store a computer program; and a processor configured to implement the steps of any one of the above task processing methods when executing the computer program.
  • the task processing device may include: a processor 10 , a memory 11 , a communication interface 12 and a communication bus 13 .
  • the processor 10 , the memory 11 , and the communication interface 12 all communicate with each other through the communication bus 13 .
  • the processor 10 may be a central processing unit (Central Processing Unit, CPU), an application-specific integrated circuit, a digital signal processor, a field programmable gate array, or other programmable logic devices.
  • the processor 10 can call the program stored in the memory 11, specifically, the processor 10 can execute the operations in the embodiment of the task processing method.
  • the memory 11 is used to store one or more programs.
  • the programs may include program codes, and the program codes include computer operation instructions.
  • the memory 11 stores at least a program for realizing the following functions: acquiring task information of each pending task and prioritizing the pending tasks according to the task information; obtaining target pending tasks in descending order of priority and acquiring the service information of each service model, wherein each service model is deployed on the edge server; processing the task information of the target pending task and each piece of service information with the preset decision network model to obtain valid service models and the revenue value of each valid service model; and processing the target pending task with the valid service model corresponding to the maximum revenue value.
  • the memory 11 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function, and the data storage area may store data created during use.
  • the memory 11 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device or another non-volatile solid-state storage device.
  • the communication interface 12 may be an interface of a communication module, and is used for connecting with other devices or systems.
  • the structure shown in FIG. 6 does not constitute a limitation on the task processing device in the embodiment of the present application; in practical applications, the task processing device may include more or fewer components than those shown in FIG. 6, or combine certain components.
  • the present application also provides a computer non-volatile readable storage medium storing a computer program which, when executed by a processor, implements the steps of any one of the above task processing methods.
  • the computer non-volatile readable storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
  • each embodiment in the specification is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to one another.
  • since the apparatus disclosed in the embodiments corresponds to the methods disclosed therein, its description is relatively brief; for relevant details, please refer to the description of the method part.
  • the steps of the methods or algorithms described in connection with the embodiments disclosed herein may be directly implemented by hardware, software modules executed by a processor, or a combination of both.
  • the software module can reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Stored Programmes (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present application discloses a task processing method, including: acquiring task information of each pending task, and prioritizing the pending tasks according to the task information; obtaining target pending tasks in descending order of priority, and acquiring service information of each service model, wherein each service model is deployed on an edge server; processing the task information of the target pending task and each piece of service information with a preset decision network model to obtain valid service models and the revenue value of each valid service model; and processing the target pending task with the valid service model corresponding to the maximum revenue value. Applying the technical solution provided by the present application can improve task processing efficiency while solving the problem of limited network resources. The present application further discloses a task processing apparatus, a task processing device, and a computer non-volatile readable storage medium, all of which have the above beneficial effects.

Description

Task processing method and apparatus, and related device
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202111626248.5, entitled "Task processing method and apparatus, and related device", filed with the China Patent Office on December 29, 2021, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the technical field of the Internet of Things, and in particular to a task processing method, a task processing apparatus, a task processing device, and a computer non-volatile readable storage medium.
Background
With the wide application of Internet of Things (IoT) technology, massive amounts of sensed data need to be collected, processed, and analyzed to support various decision-making activities in industrial production. Machine learning methods represented by the Deep Neural Network (DNN) can automatically learn abstract representations of large-scale input data and have broad prospects in the IoT field. Real-time processing and analysis of sensed data is an important characteristic of the IoT; however, limited by the resources of IoT terminal devices, large-scale deep neural network models cannot run on them directly. To solve this problem, the related art deploys deep learning network models on abundant cloud computing resources: IoT sensed data is uploaded to the cloud over the network, processed and analyzed there, and the results are returned to the terminal. Although cloud computing can achieve this goal, the communication bandwidth between terminal devices and the cloud is limited and the communication latency is high, so the real-time requirements of many applications cannot be met, reducing task execution efficiency.
Summary
An object of the present application is to provide a task processing method that can improve task processing efficiency while solving the problem of limited network resources; another object of the present application is to provide a task processing apparatus, a task processing device, and a computer non-volatile readable storage medium, all of which have the above beneficial effects.
In a first aspect, the present application provides a task processing method, including:
acquiring task information of each pending task, and prioritizing the pending tasks according to the task information;
obtaining target pending tasks in descending order of priority, and acquiring service information of each service model, wherein each service model is deployed on an edge server;
processing the task information of the target pending task and each piece of service information with a preset decision network model to obtain valid service models and the revenue value of each valid service model;
processing the target pending task with the valid service model corresponding to the maximum revenue value.
In some embodiments, before prioritizing the pending tasks according to the task information, the method further includes:
calculating the resource consumption rate of the corresponding pending task according to the task information;
processing, with a local service model, the pending tasks whose resource consumption rate is lower than a preset threshold.
In some embodiments, the task information includes an inherent constraint time, an estimated waiting time, and an estimated execution time of the pending task, and prioritizing the pending tasks according to the task information includes:
calculating the grade evaluation value of the corresponding pending task according to the inherent constraint time, the estimated waiting time, and the estimated execution time;
prioritizing the pending tasks according to the magnitude of the grade evaluation values.
In some embodiments, calculating the grade evaluation value of the corresponding pending task according to the inherent constraint time, the estimated waiting time, and the estimated execution time includes:
calculating the inherent constraint time, the estimated waiting time, and the estimated execution time with a grade evaluation formula to obtain the grade evaluation value, wherein the grade evaluation formula is:
Figure PCTCN2022138453-appb-000001
where TC i denotes the inherent constraint time of pending task i, WTP i denotes the estimated waiting time of pending task i, EP i denotes the estimated execution time of pending task i, and Por i denotes the grade evaluation value of pending task i.
In some embodiments, the generation process of the preset decision network model includes:
acquiring sample data of a preset number of sample tasks, wherein the sample data includes task information of the corresponding sample task, service information of each service model at the moment the sample task is invoked, model information of the optimal service model that executes the sample task, and the revenue value of executing the sample task with the optimal service model;
constructing an initial decision network model, and training it on the sample data to obtain the preset decision network model.
In some embodiments, the task processing method further includes:
deploying the preset decision network model on the edge server.
In some embodiments, the generation process of each service model includes:
setting exit positions at specified output layers of an overall task processing model to obtain the service model corresponding to each such output layer.
In some embodiments, after processing the target pending task with the valid service model corresponding to the maximum revenue value, the method further includes:
receiving the task processing result fed back by the valid service model;
outputting the task processing result.
In a second aspect, the present application further discloses a task processing apparatus, including:
a task sorting module configured to acquire task information of each pending task and prioritize the pending tasks according to the task information;
a service information acquisition module configured to obtain target pending tasks in descending order of priority and acquire service information of each service model, wherein each service model is deployed on an edge server;
a model revenue calculation module configured to process the task information of the target pending task and each piece of service information with a preset decision network model to obtain valid service models and the revenue value of each valid service model;
a task processing module configured to process the target pending task with the valid service model corresponding to the maximum revenue value.
In some embodiments, the task processing apparatus further includes:
a local processing module configured to calculate, before the prioritization of the pending tasks according to the task information, the resource consumption rate of the corresponding pending task according to the task information, and to process, with a local service model, the pending tasks whose resource consumption rate is lower than a preset threshold.
In some embodiments, the task sorting module includes:
a grade evaluation value calculation unit configured to calculate the grade evaluation value of the corresponding pending task according to the inherent constraint time, the estimated waiting time, and the estimated execution time;
a priority sorting unit configured to prioritize the pending tasks according to the magnitude of the grade evaluation values.
In some embodiments, the grade evaluation value calculation unit is specifically configured to calculate the inherent constraint time, the estimated waiting time, and the estimated execution time with a grade evaluation formula to obtain the grade evaluation value, wherein the grade evaluation formula is:
Figure PCTCN2022138453-appb-000002
where TC i denotes the inherent constraint time of pending task i, WTP i denotes the estimated waiting time of pending task i, EP i denotes the estimated execution time of pending task i, and Por i denotes the grade evaluation value of pending task i.
In some embodiments, the task processing apparatus further includes:
a preset decision network model construction module configured to acquire sample data of a preset number of sample tasks, wherein the sample data includes task information of the corresponding sample task, service information of each service model at the moment the sample task is invoked, model information of the optimal service model that executes the sample task, and the revenue value of executing the sample task with the optimal service model; and to construct an initial decision network model and train it on the sample data to obtain the preset decision network model.
In some embodiments, the task processing apparatus further includes:
a preset decision network model deployment module configured to deploy the preset decision network model on the edge server.
In some embodiments, the task processing apparatus further includes:
a service model construction module configured to obtain the service model corresponding to each specified output layer by setting exit positions at the specified output layers of an overall task processing model.
In some embodiments, the task processing apparatus further includes:
a result feedback module configured to receive, after the target pending task has been processed by the valid service model corresponding to the maximum revenue value, the task processing result fed back by that valid service model, and to output the task processing result.
In a third aspect, the present application further discloses a task processing device, including:
a memory configured to store a computer program;
a processor configured to implement the steps of any one of the above task processing methods when executing the computer program.
In a fourth aspect, the present application further discloses a computer non-volatile readable storage medium storing a computer program which, when executed by a processor, implements the steps of any one of the above task processing methods.
The task processing method provided by the present application includes: acquiring task information of each pending task, and prioritizing the pending tasks according to the task information; obtaining target pending tasks in descending order of priority, and acquiring service information of each service model, wherein each service model is deployed on an edge server; processing the task information of the target pending task and each piece of service information with a preset decision network model to obtain valid service models and the revenue value of each valid service model; and processing the target pending task with the valid service model corresponding to the maximum revenue value.
By applying the technical solution provided by the present application, pending tasks are first processed in descending order of priority through prioritization; then, during processing, a preset decision network model dynamically assigns the optimal service model to each pending task, and each service model is deployed on the edge server. Edge-terminal collaborative task processing is thereby realized, achieving a dynamic balance between latency and inference accuracy during network operation and maximizing the utility of the edge network; that is, task processing efficiency can be improved while the problem of limited network resources is solved.
The task processing apparatus, task processing device, and computer non-volatile readable storage medium provided by the present application all have the above beneficial effects, which are not repeated here.
Brief description of the drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present application and, together with the specification, serve to explain the principles of the present application.
To describe the technical solutions in the prior art and in the embodiments of the present application more clearly, the drawings required in their description are briefly introduced below. The drawings described below are merely some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort, and such drawings also fall within the protection scope of the present application.
FIG. 1 is a schematic flowchart of a task processing method provided by the present application;
FIG. 2 is a schematic diagram of the principle of a preset decision network model provided by the present application;
FIG. 3 is a schematic structural diagram of a multi-service model provided by the present application;
FIG. 4 is a schematic deployment diagram of a multi-service model provided by the present application;
FIG. 5 is a schematic structural diagram of a task processing apparatus provided by the present application;
FIG. 6 is a schematic structural diagram of a task processing device provided by the present application.
Detailed description of the embodiments
The core of the present application is to provide a task processing method that can improve task processing efficiency while solving the problem of limited network resources; another core of the present application is to provide a task processing apparatus, a task processing device, and a computer non-volatile readable storage medium, which also have the above beneficial effects.
To describe the technical solutions in the embodiments of the present application more clearly and completely, they are introduced below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art without creative effort based on the embodiments of the present application fall within its protection scope.
An embodiment of the present application provides a task processing method.
Please refer to FIG. 1, which is a schematic flowchart of a task processing method provided by the present application; the method may include:
S101: acquiring task information of each pending task, and prioritizing the pending tasks according to the task information.
This step aims to prioritize the pending tasks based on their task information so that they can be processed in order of priority. Specifically, during task processing, the task information of each pending task can first be collected, and the pending tasks are then prioritized according to it, arranging them in descending order of priority.
In actual task processing, to facilitate task retrieval, all pending tasks may be stored in a task queue; of course, in the task queue the pending tasks are likewise arranged in descending order of priority.
The specific content of the task information does not affect the implementation of this technical solution and can be set by technicians according to the actual situation; for example, it may be the execution time of the corresponding pending task, the resource utilization during its execution, and so on, which is not limited in the present application.
The prioritization of pending tasks may be performed in real time or periodically at preset intervals, which is likewise not limited in the present application.
S102: obtaining target pending tasks in descending order of priority, and acquiring the service information of each service model, wherein each service model is deployed on an edge server.
This step aims to obtain the target pending task in descending order of priority and, at the same time, to acquire the service information of each service model, so that the target pending task can be processed based on this service information together with its task information.
A service model is a network model used for task processing. There are multiple service models with different accuracies, so tasks can be processed at the accuracy matching actual requirements. Moreover, all service models are deployed on the edge server, realizing edge-terminal collaborative task processing; compared with cloud-terminal collaboration, this implementation greatly shortens the information transmission path and significantly improves task processing efficiency.
The service information refers to the relevant information of the corresponding service model, including but not limited to the number of network layers, model accuracy, and the like. By combining the service information of each service model with the task information of the target pending task, the optimal service model most suitable for that task can be conveniently computed, and the target pending task is then processed based on that optimal service model.
S103: processing the task information of the target pending task and each piece of service information with a preset decision network model to obtain valid service models and the revenue value of each valid service model.
This step aims to select, from all service models, the valid service models suitable for processing the target pending task. Understandably, because pending tasks differ, not all service models deployed on the edge server are suitable for processing the current pending task; therefore, the valid service models can first be selected from all service models, and the optimal service model is then selected from among the valid ones.
The valid service models are obtained via the preset decision network. Specifically, after the target pending task is retrieved from the task queue and the service information of each service model on the edge server is acquired, both are input into the preset decision network model, which processes the task information of the target pending task and the service information of each service model; the corresponding output is the set of valid service models.
Conceivably, since a valid service model is one capable of processing the target pending task, there are generally several of them. To further determine the optimal service model, another branch can be set in the preset decision network model to calculate the revenue value of each valid service model, so that the optimal service model can be selected from the valid ones based on the revenue values.
The revenue value is the immediate reward of processing the target pending task with the current valid service model; the larger its value, the more suitable the model is for processing that task. Therefore, the valid service model corresponding to the maximum revenue value is the optimal service model.
S104: processing the target pending task with the valid service model corresponding to the maximum revenue value.
This step implements the task processing. As described above, the valid service model with the maximum revenue value is the optimal service model; therefore, the target pending task is simply processed by the valid service model corresponding to the maximum revenue value.
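The selection in steps S103 and S104 reduces to an argmax over the revenue values of the valid service models. A minimal Python sketch; the names `pick_optimal_model` and `valid_models` and the sample revenue values are illustrative assumptions, not from the patent:

```python
def pick_optimal_model(valid_models):
    """Return the valid service model with the maximum revenue value.

    `valid_models` maps a model identifier to its revenue value
    (the immediate reward of processing the target task with it).
    """
    if not valid_models:
        raise ValueError("decision network returned no valid service model")
    # The optimal service model is the one whose revenue value is largest.
    return max(valid_models, key=valid_models.get)

# Hypothetical revenue values produced by the decision network:
revenues = {"Ser_1": 0.42, "Ser_2": 0.77, "Ser_3": 0.61}
best = pick_optimal_model(revenues)  # "Ser_2", the maximum-revenue model
```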
It can be seen that the task processing method provided by the present application first processes the pending tasks in descending order of priority through prioritization, and then, during processing, dynamically assigns the optimal service model to each pending task with a preset decision network model; each service model is deployed on the edge server. Edge-terminal collaborative task processing is thereby realized, achieving a dynamic balance between latency and inference accuracy during network operation and maximizing edge network utility; that is, task processing efficiency can be improved while the problem of limited network resources is solved.
In an embodiment of the present application, before prioritizing the pending tasks according to the task information, the method may further include: calculating the resource consumption rate of the corresponding pending task according to the task information; and processing, with a local service model, the pending tasks whose resource consumption rate is lower than a preset threshold.
To process pending tasks quickly, before prioritizing them it can first be determined, according to each task's resource consumption rate, whether it is suitable for direct local processing. That is, pending tasks with a low resource consumption rate are processed locally, while pending tasks with a high resource consumption rate are sent to the service models on the edge server for processing, so that network resources are planned more reasonably and task processing efficiency is further improved.
Specifically, before prioritizing the pending tasks, the resource consumption rate of each can first be calculated from its task information; pending tasks whose rate is below the preset threshold are sent to the local service model for processing, while pending tasks whose rate exceeds the preset threshold are prioritized and stored in the task queue to await processing.
The specific value of the preset threshold is not unique and can be set by technicians according to the actual situation, which is not limited in the present application.
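The local-versus-edge split described above can be sketched as a simple threshold dispatch; the function and field names are illustrative assumptions:

```python
def dispatch(tasks, threshold):
    """Split pending tasks into locally processed and edge-queued sets.

    `tasks` maps a task id to its resource consumption rate; tasks whose
    rate is below the threshold run on the local service model, the rest
    are queued for the edge server's service models.
    """
    local = [t for t, rate in tasks.items() if rate < threshold]
    edge = [t for t, rate in tasks.items() if rate >= threshold]
    return local, edge

# Hypothetical rates: t1 is cheap enough to run locally, t2 is not.
local, edge = dispatch({"t1": 0.2, "t2": 0.9}, threshold=0.5)
```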
In an embodiment of the present application, the task information may include the inherent constraint time, the estimated waiting time, and the estimated execution time of the pending task, and prioritizing the pending tasks according to the task information may include: calculating the grade evaluation value of the corresponding pending task according to the inherent constraint time, the estimated waiting time, and the estimated execution time; and prioritizing the pending tasks according to the magnitude of the grade evaluation values.
This embodiment of the present application provides specific types of task information on which the prioritization is based. Specifically, the task information may include the inherent constraint time of the corresponding pending task (a preset time set in advance), the estimated waiting time (how long the task waits before execution), and the estimated execution time (how long executing the task takes). The grade evaluation value of each pending task can then first be calculated from this task information, and the pending tasks are prioritized according to the magnitudes of the grade evaluation values.
In an embodiment of the present application, calculating the grade evaluation value of the corresponding pending task according to the inherent constraint time, the estimated waiting time, and the estimated execution time may include: calculating the inherent constraint time, the estimated waiting time, and the estimated execution time with a grade evaluation formula to obtain the grade evaluation value, wherein the grade evaluation formula is:
Figure PCTCN2022138453-appb-000003
where TC i denotes the inherent constraint time of pending task i, WTP i denotes the estimated waiting time of pending task i, EP i denotes the estimated execution time of pending task i, and Por i denotes the grade evaluation value of pending task i.
This embodiment of the present application provides a grade evaluation formula for calculating the grade evaluation value; the smaller the value, the more urgent the current pending task, i.e., the higher its priority, and its position in the task queue should be advanced accordingly.
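The grade evaluation formula itself is only available as an image here, so the sketch below assumes a slack-style form, Por_i = TC_i - (WTP_i + EP_i), chosen purely because it matches the stated property that a smaller value means a more urgent task; the assumed formula and all names are illustrative:

```python
def grade(tc, wtp, ep):
    """Grade evaluation value of one pending task.

    Assumed slack form: inherent constraint time minus the sum of the
    estimated waiting time and estimated execution time. Less remaining
    slack gives a smaller value, i.e., a more urgent task.
    """
    return tc - (wtp + ep)

def prioritize(tasks):
    """Sort pending tasks ascending by grade value (most urgent first)."""
    return sorted(tasks, key=lambda t: grade(t["tc"], t["wtp"], t["ep"]))

# Hypothetical tasks: the second has less slack, so it is queued first.
queue = prioritize([
    {"tc": 10, "wtp": 2, "ep": 3},   # grade 5
    {"tc": 5, "wtp": 2, "ep": 2},    # grade 1
])
```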
In an embodiment of the present application, the generation process of the preset decision network model may include: acquiring sample data of a preset number of sample tasks, wherein the sample data includes task information of the corresponding sample task, service information of each service model at the moment the sample task is invoked, model information of the optimal service model that executes the sample task, and the revenue value of executing the sample task with the optimal service model; and constructing an initial decision network model and training it on the sample data to obtain the preset decision network model.
The present application provides a method for constructing the preset decision network model: an initial decision network model is trained on sample data to obtain a preset decision network model that meets the requirements. The sample data comprises the various related data of the corresponding sample task, including but not limited to the task information of the sample task, the service information of each service model when the sample task is invoked, the model information of the optimal service model suitable for executing the sample task, and the revenue value of processing the sample task with that optimal service model. Understandably, the specific value of the preset number does not affect the implementation of this technical solution and can be set by technicians according to the actual situation, which is not limited in the present application; however, the larger the value, the higher the model accuracy.
Based on this embodiment of the application, the construction process of the preset decision network model is as follows:
1. Acquire the system information, which is the sample data described above:
The scheduling of pending tasks by the decision network model can be regarded as a decision problem over an infinite horizon, and a problem description model based on the Markov decision process can be constructed. The system environment state S t includes the running state of each service model at the current stage and the task information of the pending tasks; the decision action A t represents scheduling a pending task onto a certain service model for processing; R t represents the system revenue (i.e., the revenue value) brought by executing the current decision.
2. Construct the decision objective function:
Since the purpose of the technical solution of the present application is to optimize the operating efficiency of deep neural network service models in an edge computing environment, namely the average accuracy of task processing results and the average task processing time, the decision objective function can be constructed as follows:
Figure PCTCN2022138453-appb-000004
where
Figure PCTCN2022138453-appb-000005
Figure PCTCN2022138453-appb-000006
denote, respectively, the average task processing delay and the average accuracy of task processing results of the current system; α and β denote the weight values of the two parts; t i and p i denote the execution time and result accuracy of task i under the current decision; and T denotes the total number of pending tasks in the system.
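The objective function is likewise given only as an image; the sketch below illustrates one plausible weighted combination of average delay and average accuracy consistent with the surrounding text (the sign convention, rewarding accuracy and penalizing delay, is an assumption, as are all names):

```python
def system_utility(times, accs, alpha, beta):
    """Weighted decision objective combining mean delay and mean accuracy.

    times[i] and accs[i] are the execution time and result accuracy of
    task i under the current decision; alpha and beta weight the two
    parts, and len(times) plays the role of T.
    """
    t = len(times)
    avg_delay = sum(times) / t   # average task processing delay
    avg_acc = sum(accs) / t      # average result accuracy
    # Assumed combination: higher accuracy is good, higher delay is bad.
    return beta * avg_acc - alpha * avg_delay

u = system_utility(times=[1.0, 3.0], accs=[0.8, 0.6], alpha=0.5, beta=1.0)
```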
3. Train the decision network model:
Based on a deep reinforcement learning algorithm, an action-value function represented by a deep neural network can be constructed, and the optimal decision network parameters are then obtained by updating the value network:
(1) Environment state input:
A decision network QN(ω) is constructed based on a deep neural network, where ω denotes the parameters of the QN network. The system environment parameters at time t,
Figure PCTCN2022138453-appb-000007
{TC i, Pr i, Ld i}, are taken as the input of QN, where
Figure PCTCN2022138453-appb-000008
denotes the current load state of the k-th service model, K denotes all service models of the edge server, and TC i, Pr i, and Ld i denote the time constraint, accuracy constraint, and task load size of the current pending task i, respectively. The output of the QN network is the action value corresponding to each decision action, i.e., the Q value.
(2) Select an action:
Using the ∈-greedy algorithm, the optimal action A t = k is selected from all decision actions according to the Q values, meaning that pending task i is scheduled onto service model k; the action is executed, configuring the corresponding service model for the pending task. Meanwhile, the edge server transitions to the next state S t+1, and the immediate reward is calculated:
Figure PCTCN2022138453-appb-000009
(3) Store information (including environment state information, action information, and reward information):
The information {S t, A t, R t+1, S t+1} of the previous step is stored in a replay buffer D; when the number of entries stored in D exceeds the sampling size M, the next step is executed.
(4) Update the learning network parameters:
M samples are randomly drawn from D; let j denote the j-th of the M samples, whose sample information can be expressed as
Figure PCTCN2022138453-appb-000010
The target Q value is calculated according to the following formula:
Figure PCTCN2022138453-appb-000011
where
Figure PCTCN2022138453-appb-000012
denotes the target action value corresponding to j, γ denotes the discount factor with a value in (0, 1), and a′ denotes the action corresponding to the maximum action value that the QN network can output in state S′ j.
The error is calculated according to the loss function L:
Figure PCTCN2022138453-appb-000013
The parameters ω of the QN network can then be updated by gradient back-propagation.
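Steps (1) to (4) above can be sketched as a minimal Q-learning loop. This is a simplified stand-in: a lookup table replaces the deep network QN(ω), the gradient step becomes a direct value update, and all names and hyperparameters are illustrative assumptions:

```python
import random
from collections import deque

class TinyQScheduler:
    """Minimal sketch of epsilon-greedy action choice, replay storage,
    and target-Q updates for scheduling tasks onto service models."""

    def __init__(self, n_actions, gamma=0.9, lr=0.1, eps=0.1, buf=64):
        self.q = {}                      # (state, action) -> Q value
        self.n_actions = n_actions       # one action per service model
        self.gamma, self.lr, self.eps = gamma, lr, eps
        self.replay = deque(maxlen=buf)  # replay buffer D

    def act(self, state):
        # epsilon-greedy over the Q values of all decision actions
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions),
                   key=lambda a: self.q.get((state, a), 0.0))

    def store(self, s, a, r, s_next):
        # store {S_t, A_t, R_t+1, S_t+1} in the buffer
        self.replay.append((s, a, r, s_next))

    def update(self, batch_size=8):
        if len(self.replay) < batch_size:
            return
        for s, a, r, s_next in random.sample(list(self.replay), batch_size):
            # target Q: r + gamma * max_a' Q(s', a')
            target = r + self.gamma * max(
                self.q.get((s_next, a2), 0.0)
                for a2 in range(self.n_actions))
            old = self.q.get((s, a), 0.0)
            # stand-in for the gradient step on the squared TD error
            self.q[(s, a)] = old + self.lr * (target - old)
```

With eps set to 0 the scheduler becomes purely greedy, mirroring how the trained decision network is used at run time.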
4. Output the decision network model:
Please refer to FIG. 2, a schematic diagram of the principle of a preset decision network model provided by the present application: the trained decision network model is deployed on the edge server. During operation, at every change moment, the current system environment state information is collected, all valid decision actions are output, and finally the decision action with the optimal value function is selected based on a greedy policy and executed.
In an embodiment of the present application, the task processing method may further include: deploying the preset decision network model on the edge server.
Specifically, the preset decision network model can likewise be deployed on the edge server, so that the edge server's network resources enable faster task allocation, further improving task processing efficiency while effectively reducing the occupation of local resources.
In an embodiment of the present application, the generation process of each service model may include: setting exit positions at specified output layers of an overall task processing model to obtain the service model corresponding to each such output layer.
This embodiment of the present application provides a method for constructing the service models. Specifically, a multi-precision deep learning model is first trained to obtain a complete deep learning model. Further, to satisfy pending tasks with different accuracy requirements, multiple exit positions can be set at different levels of the whole deep learning model. Conceivably, compared with running the complete model, inference at these exit positions is somewhat less accurate, but at runtime it occupies fewer computing resources, which is crucial for the long-term efficient operation of resource-constrained edge servers. Finally, the deep learning inference models contained at the different exit positions are deployed on the edge server in the form of services, generating service models of different accuracies. As shown in FIG. 3, a schematic structural diagram of a multi-service model provided by the present application, Ser k denotes the service model corresponding to the k-th exit position.
At the same time, the average inference accuracy at each exit position can be obtained according to the scale of the deep learning model and used as the service information of that position. In addition, the service models that the terminal device can bear can be deployed on the device side to realize local task processing.
To achieve fast and efficient model training, the complete deep learning model can be trained in the cloud and, after training, deployed to the edge server and terminal devices. Please refer to FIG. 4, a schematic deployment diagram of a multi-service model provided by the present application: for the deep learning model trained in the cloud, the layer at each exit position together with all preceding layers can be taken as a sub-network model (i.e., a service model) and packaged in the form of an edge service; these services are deployed to the edge server to await invocation by user requests. In addition, some service models with low computing resource consumption are deployed to terminal devices to realize local task processing.
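The sub-network construction just described (the layer at an exit position plus all preceding layers) can be sketched as follows; the plain lambda "layers" stand in for DNN layers, and the exit map and names are illustrative:

```python
class MultiExitModel:
    """Sketch of carving service models out of one overall model:
    Ser_k is the composition of all layers up to and including the
    layer at exit position k."""

    def __init__(self, layers, exits):
        self.layers = layers   # ordered layer functions of the full model
        self.exits = exits     # exit position k -> index of its layer

    def service(self, k):
        """Return Ser_k: a callable running layers up to exit k."""
        cut = self.exits[k] + 1
        def run(x):
            for layer in self.layers[:cut]:
                x = layer(x)
            return x
        return run

# Toy three-layer "model"; real layers would be DNN layers.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
model = MultiExitModel(layers, exits={1: 0, 2: 1, 3: 2})
ser1 = model.service(1)   # shallow exit: fast, less accurate
ser3 = model.service(3)   # deepest exit: the full model
```

Each returned callable is what would be packaged as one edge service; the shallow exits trade accuracy for lower compute cost.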
In an embodiment of the present application, after processing the target pending task with the valid service model corresponding to the maximum revenue value, the method may further include: receiving the task processing result fed back by the valid service model; and outputting the task processing result.
The task processing method provided by this embodiment of the present application can be used to feed back task processing results. Since each service model is deployed on the edge server, the edge server effectively performs the processing of each pending task. On this basis, after completing the processing and obtaining the task processing result, the edge server can feed the task processing result of the optimal service model back to the terminal device, which outputs and stores it locally, enabling technicians to learn the task processing result in a timely and effective manner.
The present application further provides a task processing apparatus. Please refer to FIG. 5, a schematic structural diagram of a task processing apparatus provided by the present application; the apparatus may include:
a task sorting module 1 configured to acquire task information of each pending task and prioritize the pending tasks according to the task information;
a service information acquisition module 2 configured to obtain target pending tasks in descending order of priority and acquire service information of each service model, wherein each service model is deployed on an edge server;
a model revenue calculation module 3 configured to process the task information of the target pending task and each piece of service information with a preset decision network model to obtain valid service models and the revenue value of each valid service model;
a task processing module 4 configured to process the target pending task with the valid service model corresponding to the maximum revenue value.
It can be seen that the task processing apparatus provided by this embodiment of the present application first processes the pending tasks in descending order of priority through prioritization, and then, during processing, dynamically assigns the optimal service model to each pending task with a preset decision network model; each service model is deployed on the edge server. Edge-terminal collaborative task processing is thereby realized, achieving a dynamic balance between latency and inference accuracy during network operation and maximizing edge network utility; that is, task processing efficiency can be improved while the problem of limited network resources is solved.
In an embodiment of the present application, the task processing apparatus may further include a local processing module configured to calculate, before the prioritization of the pending tasks according to the task information, the resource consumption rate of the corresponding pending task according to the task information, and to process, with a local service model, the pending tasks whose resource consumption rate is lower than a preset threshold.
In an embodiment of the present application, the above task sorting module 1 may include:
a grade evaluation value calculation unit configured to calculate the grade evaluation value of the corresponding pending task according to the inherent constraint time, the estimated waiting time, and the estimated execution time;
a priority sorting unit configured to prioritize the pending tasks according to the magnitude of the grade evaluation values.
In an embodiment of the present application, the above grade evaluation value calculation unit may be specifically configured to calculate the inherent constraint time, the estimated waiting time, and the estimated execution time with a grade evaluation formula to obtain the grade evaluation value, wherein the grade evaluation formula is:
Figure PCTCN2022138453-appb-000014
where TC i denotes the inherent constraint time of pending task i, WTP i denotes the estimated waiting time of pending task i, EP i denotes the estimated execution time of pending task i, and Por i denotes the grade evaluation value of pending task i.
In an embodiment of the present application, the task processing apparatus may further include a preset decision network model construction module configured to acquire sample data of a preset number of sample tasks, wherein the sample data includes task information of the corresponding sample task, service information of each service model at the moment the sample task is invoked, model information of the optimal service model that executes the sample task, and the revenue value of executing the sample task with the optimal service model; and to construct an initial decision network model and train it on the sample data to obtain the preset decision network model.
In an embodiment of the present application, the task processing apparatus may further include a preset decision network model deployment module configured to deploy the preset decision network model on the edge server.
In an embodiment of the present application, the task processing apparatus may further include a service model construction module configured to obtain the service model corresponding to each specified output layer by setting exit positions at the specified output layers of an overall task processing model.
In an embodiment of the present application, the task processing apparatus may further include a result feedback module configured to receive, after the target pending task has been processed by the valid service model corresponding to the maximum revenue value, the task processing result fed back by that valid service model, and to output the task processing result.
For the introduction of the apparatus provided by the present application, please refer to the above method embodiments; details are not repeated here.
The present application further provides a task processing device. Please refer to FIG. 6, a schematic structural diagram of a task processing device provided by the present application; the device may include:
a memory configured to store a computer program;
a processor configured to implement the steps of any one of the above task processing methods when executing the computer program.
As shown in FIG. 6, a schematic diagram of the composition of the task processing device, the device may include a processor 10, a memory 11, a communication interface 12, and a communication bus 13. The processor 10, the memory 11, and the communication interface 12 all communicate with one another through the communication bus 13.
In this embodiment of the present application, the processor 10 may be a central processing unit (CPU), an application-specific integrated circuit, a digital signal processor, a field-programmable gate array, another programmable logic device, or the like.
The processor 10 can call a program stored in the memory 11; specifically, the processor 10 can perform the operations in the embodiments of the task processing method.
The memory 11 is used to store one or more programs, which may include program code comprising computer operation instructions. In this embodiment of the present application, the memory 11 stores at least a program for realizing the following functions:
acquiring task information of each pending task, and prioritizing the pending tasks according to the task information;
obtaining target pending tasks in descending order of priority, and acquiring the service information of each service model, wherein each service model is deployed on the edge server;
processing the task information of the target pending task and each piece of service information with the preset decision network model to obtain valid service models and the revenue value of each valid service model;
processing the target pending task with the valid service model corresponding to the maximum revenue value.
In a possible implementation, the memory 11 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function, and the data storage area may store data created during use.
In addition, the memory 11 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device or another non-volatile solid-state storage device.
The communication interface 12 may be an interface of a communication module for connecting with other devices or systems.
Of course, it should be noted that the structure shown in FIG. 6 does not constitute a limitation on the task processing device in the embodiments of the present application; in practical applications, the task processing device may include more or fewer components than those shown in FIG. 6, or combine certain components.
The present application further provides a computer non-volatile readable storage medium storing a computer program which, when executed by a processor, implements the steps of any one of the above task processing methods.
The computer non-volatile readable storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
For the introduction of the computer non-volatile readable storage medium provided by the present application, please refer to the above method embodiments; details are not repeated here.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief; for relevant details, refer to the description of the method part.
Professionals may further realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functions. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Professionals may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
The technical solutions provided by the present application have been introduced in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application; the above descriptions of the embodiments are only intended to help understand the method of the present application and its core idea. It should be noted that, for those of ordinary skill in the art, several improvements and modifications can be made to the present application without departing from its principles, and these improvements and modifications also fall within the protection scope of the present application.

Claims (20)

  1. A task processing method, characterized by comprising:
    acquiring task information of each pending task, and prioritizing the pending tasks according to the task information;
    obtaining target pending tasks in descending order of priority, and acquiring service information of each service model, wherein each service model is deployed on an edge server;
    processing the task information of the target pending task and each piece of the service information with a preset decision network model to obtain valid service models and a revenue value of each valid service model; and
    processing the target pending task with the valid service model corresponding to the maximum revenue value.
  2. The task processing method according to claim 1, characterized in that, before prioritizing the pending tasks according to the task information, the method further comprises:
    calculating a resource consumption rate of the corresponding pending task according to the task information; and
    processing, with a local service model, the pending tasks whose resource consumption rate is lower than a preset threshold.
  3. The task processing method according to claim 2, characterized in that, after calculating the resource consumption rate of the corresponding pending task according to the task information, the method further comprises:
    prioritizing the pending tasks whose resource consumption rate exceeds the preset threshold, and storing them in a task queue in descending order of priority.
  4. The task processing method according to claim 1, characterized in that the task information comprises an inherent constraint time, an estimated waiting time, and an estimated execution time of the pending task, and prioritizing the pending tasks according to the task information comprises:
    calculating a grade evaluation value of the corresponding pending task according to the inherent constraint time, the estimated waiting time, and the estimated execution time; and
    prioritizing the pending tasks according to the magnitude of the grade evaluation values.
  5. The task processing method according to claim 4, characterized in that the correspondence between the grade evaluation value and the priority of the pending task is: the smaller the grade evaluation value of the pending task, the higher the priority of the pending task.
  6. The task processing method according to claim 4, characterized in that calculating the grade evaluation value of the corresponding pending task according to the inherent constraint time, the estimated waiting time, and the estimated execution time comprises:
    calculating the inherent constraint time, the estimated waiting time, and the estimated execution time with a grade evaluation formula to obtain the grade evaluation value, wherein the grade evaluation formula is:
    Figure PCTCN2022138453-appb-100001
    where TC i denotes the inherent constraint time of pending task i, WTP i denotes the estimated waiting time of pending task i, EP i denotes the estimated execution time of pending task i, and Por i denotes the grade evaluation value of pending task i.
  7. The task processing method according to claim 1, characterized in that the generation process of the preset decision network model comprises:
    acquiring sample data of a preset number of sample tasks, wherein the sample data comprises task information of the corresponding sample task, service information of each service model at the moment the sample task is invoked, model information of the optimal service model that executes the sample task, and a revenue value of executing the sample task with the optimal service model; and
    constructing an initial decision network model, and training it on the sample data to obtain the preset decision network model.
  8. The task processing method according to claim 7, characterized by further comprising:
    deploying the preset decision network model on the edge server.
  9. The task processing method according to any one of claims 1 to 8, characterized in that the generation process of each service model comprises:
    setting exit positions at specified output layers of an overall task processing model to obtain the service model corresponding to each of the output layers.
  10. The task processing method according to claim 9, characterized in that the method further comprises:
    calculating an average inference accuracy of the exit position according to the scale of the service model; and
    using the average inference accuracy as the service information of the exit position.
  11. The task processing method according to claim 1, characterized in that, after processing the target pending task with the valid service model corresponding to the maximum revenue value, the method further comprises:
    receiving a task processing result fed back by the valid service model; and
    outputting the task processing result.
  12. The task processing method according to claim 1, characterized in that acquiring task information of each pending task comprises:
    acquiring an execution time of each pending task and/or a resource utilization during execution of the task.
  13. The task processing method according to claim 1, characterized in that prioritizing the pending tasks according to the task information comprises:
    prioritizing the pending tasks in real time according to the task information;
    and/or,
    prioritizing the pending tasks at preset time intervals according to the task information.
  14. The task processing method according to claim 1, characterized in that the service model is a network model that processes the pending tasks; different service models have different service information; and the service information comprises the number of network layers and model accuracy.
  15. The task processing method according to claim 1, characterized in that the revenue value is a reward for processing the target pending task with the valid service model, and the method further comprises:
    taking the valid service model with the maximum revenue value as the optimal service model.
  16. The task processing method according to claim 1, characterized in that the estimated waiting time is the waiting time before the pending task is executed, and the estimated execution time is the time required to execute the pending task.
  17. A task processing apparatus, characterized by comprising:
    a task sorting module configured to acquire task information of each pending task and prioritize the pending tasks according to the task information;
    a service information acquisition module configured to obtain target pending tasks in descending order of priority and acquire service information of each service model, wherein each service model is deployed on an edge server;
    a model revenue calculation module configured to process the task information of the target pending task and each piece of the service information with a preset decision network model to obtain valid service models and a revenue value of each valid service model; and
    a task processing module configured to process the target pending task with the valid service model corresponding to the maximum revenue value.
  18. The task processing apparatus according to claim 17, characterized by further comprising:
    a local processing module configured to calculate, before the prioritization of the pending tasks according to the task information, a resource consumption rate of the corresponding pending task according to the task information, and to process, with a local service model, the pending tasks whose resource consumption rate is lower than a preset threshold.
  19. A task processing device, characterized by comprising:
    a memory configured to store a computer program; and
    a processor configured to implement the steps of the task processing method according to any one of claims 1 to 16 when executing the computer program.
  20. A computer non-volatile readable storage medium, characterized in that a computer program is stored thereon, and when the computer program is executed by a processor, the steps of the task processing method according to any one of claims 1 to 16 are implemented.
PCT/CN2022/138453 2021-12-29 2022-12-12 一种任务处理方法、装置及相关设备 WO2023124947A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111626248.5A CN114253735B (zh) 2021-12-29 2021-12-29 一种任务处理方法、装置及相关设备
CN202111626248.5 2021-12-29

Publications (1)

Publication Number Publication Date
WO2023124947A1 true WO2023124947A1 (zh) 2023-07-06

Family

ID=80798450

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/138453 WO2023124947A1 (zh) 2021-12-29 2022-12-12 一种任务处理方法、装置及相关设备

Country Status (2)

Country Link
CN (1) CN114253735B (zh)
WO (1) WO2023124947A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116909717A (zh) * 2023-09-12 2023-10-20 国能(北京)商务网络有限公司 一种任务调度方法
CN117033247A (zh) * 2023-10-07 2023-11-10 宜宾邦华智慧科技有限公司 一种手机、平板电脑搭载的验证方法和系统
CN117575113A (zh) * 2024-01-17 2024-02-20 南方电网数字电网研究院股份有限公司 基于马尔科夫链的边端协同任务处理方法、装置和设备
CN118215052A (zh) * 2024-03-19 2024-06-18 烟台泓威电子科技有限公司 一种基于时间戳的警务协同网络通信优化算法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114253735B (zh) * 2021-12-29 2024-01-16 苏州浪潮智能科技有限公司 一种任务处理方法、装置及相关设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190114202A1 (en) * 2017-10-13 2019-04-18 Beijing Baidu Netcom Science And Technology Co., Ltd. Task scheduling method and apparatus of artificial intelligence heterogeneous hardware, device and readable medium
CN110955463A (zh) * 2019-12-03 2020-04-03 天津大学 支持边缘计算的物联网多用户计算卸载方法
CN112905327A (zh) * 2021-03-03 2021-06-04 湖南商务职业技术学院 一种任务调度方法、边缘服务器、计算机介质及边云协同计算系统
CN113254178A (zh) * 2021-06-01 2021-08-13 苏州浪潮智能科技有限公司 一种任务调度方法、装置、电子设备及可读存储介质
CN114253735A (zh) * 2021-12-29 2022-03-29 苏州浪潮智能科技有限公司 一种任务处理方法、装置及相关设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110750342B (zh) * 2019-05-23 2020-10-09 北京嘀嘀无限科技发展有限公司 调度方法、装置、电子设备及可读存储介质
CN111400005A (zh) * 2020-03-13 2020-07-10 北京搜狐新媒体信息技术有限公司 一种数据处理方法、装置及电子设备
CN111427679B (zh) * 2020-03-25 2023-12-22 中国科学院自动化研究所 面向边缘计算的计算任务调度方法、系统、装置
CN113326126B (zh) * 2021-05-28 2024-04-05 湘潭大学 任务处理方法、任务调度方法、装置及计算机设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190114202A1 (en) * 2017-10-13 2019-04-18 Beijing Baidu Netcom Science And Technology Co., Ltd. Task scheduling method and apparatus of artificial intelligence heterogeneous hardware, device and readable medium
CN110955463A (zh) * 2019-12-03 2020-04-03 天津大学 支持边缘计算的物联网多用户计算卸载方法
CN112905327A (zh) * 2021-03-03 2021-06-04 湖南商务职业技术学院 一种任务调度方法、边缘服务器、计算机介质及边云协同计算系统
CN113254178A (zh) * 2021-06-01 2021-08-13 苏州浪潮智能科技有限公司 一种任务调度方法、装置、电子设备及可读存储介质
CN114253735A (zh) * 2021-12-29 2022-03-29 苏州浪潮智能科技有限公司 一种任务处理方法、装置及相关设备

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116909717A (zh) * 2023-09-12 2023-10-20 国能(北京)商务网络有限公司 一种任务调度方法
CN116909717B (zh) * 2023-09-12 2023-12-05 国能(北京)商务网络有限公司 一种任务调度方法
CN117033247A (zh) * 2023-10-07 2023-11-10 宜宾邦华智慧科技有限公司 一种手机、平板电脑搭载的验证方法和系统
CN117033247B (zh) * 2023-10-07 2023-12-12 宜宾邦华智慧科技有限公司 一种手机、平板电脑搭载的验证方法和系统
CN117575113A (zh) * 2024-01-17 2024-02-20 南方电网数字电网研究院股份有限公司 基于马尔科夫链的边端协同任务处理方法、装置和设备
CN117575113B (zh) * 2024-01-17 2024-05-03 南方电网数字电网研究院股份有限公司 基于马尔科夫链的边端协同任务处理方法、装置和设备
CN118215052A (zh) * 2024-03-19 2024-06-18 烟台泓威电子科技有限公司 一种基于时间戳的警务协同网络通信优化算法

Also Published As

Publication number Publication date
CN114253735B (zh) 2024-01-16
CN114253735A (zh) 2022-03-29

Similar Documents

Publication Publication Date Title
WO2023124947A1 (zh) 一种任务处理方法、装置及相关设备
WO2021197364A1 (zh) 一种用于服务的扩缩容的方法及相关设备
CN109561148A (zh) 边缘计算网络中基于有向无环图的分布式任务调度方法
CN109885397B (zh) 一种边缘计算环境中时延优化的负载任务迁移算法
US8387059B2 (en) Black-box performance control for high-volume throughput-centric systems
CN108965024A (zh) 一种5g网络切片基于预测的虚拟网络功能调度方法
CN114039918B (zh) 一种信息年龄优化方法、装置、计算机设备及存储介质
CN114564312A (zh) 一种基于自适应深度神经网络的云边端协同计算方法
CN115129463A (zh) 算力调度方法及装置、系统及存储介质
CN115033359A (zh) 一种基于时延控制的物联代理多任务调度方法和系统
CN117909044A (zh) 面向异构计算资源的深度强化学习协同调度方法及装置
CN116932198A (zh) 资源调度方法、装置、电子设备及可读存储介质
CN111585915A (zh) 长、短流量均衡传输方法、系统、存储介质、云服务器
CN113190342B (zh) 用于云-边协同网络的多应用细粒度卸载的方法与系统架构
CN117749796A (zh) 一种云边算力网络系统计算卸载方法及系统
CN116795553A (zh) 算力资源的调度方法及装置、存储介质及电子装置
CN116915869A (zh) 基于云边协同的时延敏感型智能服务快速响应方法
CN116302578A (zh) 一种QoS约束的流应用延迟确保方法及系统
CN116488344A (zh) 一种用于多类型电网设备量测数据的动态资源调度方法
CN116431326A (zh) 一种基于边缘计算和深度强化学习的多用户依赖性任务卸载方法
CN116109058A (zh) 一种基于深度强化学习的变电站巡视管理方法和装置
CN115827178A (zh) 边缘计算任务分配方法、装置、计算机设备及相关介质
CN117118836A (zh) 基于资源预测的服务功能链多阶段节能迁移方法
CN114866430A (zh) 边缘计算的算力预测方法、算力编排方法及系统
CN114301907A (zh) 云计算网络中的业务处理方法、系统、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22914202

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE