CN112988346B - Task processing method, device, equipment and storage medium - Google Patents

Task processing method, device, equipment and storage medium Download PDF

Info

Publication number
CN112988346B
CN112988346B CN202110174976.0A CN202110174976A
Authority
CN
China
Prior art keywords
task
drift
processed
edge computing
computing server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110174976.0A
Other languages
Chinese (zh)
Other versions
CN112988346A (en)
Inventor
李朝霞
康楠
邢鑫
成景山
李铭轩
李策
陈海波
时文丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Unicom Cloud Data Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Unicom Cloud Data Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd, Unicom Cloud Data Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202110174976.0A priority Critical patent/CN112988346B/en
Publication of CN112988346A publication Critical patent/CN112988346A/en
Application granted granted Critical
Publication of CN112988346B publication Critical patent/CN112988346B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Power Sources (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The method includes: receiving a task drift request sent by an edge computing server, where the task drift request carries a task to be processed; determining a target computing power resource device corresponding to the task to be processed; establishing a preset channel between the edge computing server and the target computing power resource device; and controlling, through the preset channel, the task to be processed to perform task drift, so that the target computing power resource device completes processing of the task to be processed. In other words, the embodiments of the application can allocate computing power resources to a task to be processed in the edge computing server, establish a channel between the edge computing server and those computing power resources, and process the task to be processed over that channel using the allocated computing power resources. This solves the problem that the edge computing server provides poor processing quality for delay-sensitive applications and enables the edge computing server to respond quickly to burst computing requests.

Description

Task processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of the internet of things, and in particular to a task processing method, device, equipment, and storage medium.
Background
With the continuous development of technology, internet of things and edge applications are becoming increasingly widespread. Many internet of things applications, such as virtual reality and autonomous driving, are delay sensitive.
In the related art, the computing power of an edge computing server is very limited, and it cannot respond quickly to every burst computing request. As a result, some delay-sensitive computing tasks may experience long queuing delays at the edge server; the queuing delay may even exceed the network delay from the user network to the remote cloud computing center, giving delay-sensitive applications a very poor user experience.
At present, there is no effective solution to this problem. Therefore, how to improve the processing quality of the edge computing server for delay-sensitive applications and respond quickly to burst computing requests is an urgent problem to be solved.
Disclosure of Invention
In order to solve the problems in the prior art, the application provides a task processing method, a task processing device, task processing equipment and a storage medium.
In a first aspect, an embodiment of the present application provides a task processing method, including the following steps:
receiving a task drift request sent by an edge computing server, wherein the task drift request carries a task to be processed;
determining target computing power resource equipment corresponding to the task to be processed;
establishing a preset channel between the edge computing server and the target computing power resource equipment;
and controlling the task to be processed to carry out task drift through the preset channel so as to enable the target computing power resource equipment to finish processing the task to be processed.
In one possible implementation manner, after a preset channel is established between the edge computing server and the target computing power resource device, the method further includes:
setting the priority of the preset channel according to the task to be processed;
and controlling the task to be processed to perform task drift through the preset channel, including:
and carrying out task drift processing on the task to be processed through the preset channel based on the priority.
In a possible implementation manner, the controlling, through the preset channel, the task to be processed to perform task drift includes:
and sending a drift starting instruction to the edge computing server, so that the edge computing server sends the task to be processed to the target computing power resource equipment through the preset channel according to the drift starting instruction.
In one possible implementation manner, after the sending a drift start instruction to the edge computing server, the method further includes:
receiving task drift progress and task drift logs reported by the edge computing server, and receiving equipment states reported by the target computing power resource equipment;
and if the task drift is judged to be stopped according to the task drift progress, the task drift log and/or the equipment state, a drift stopping instruction is sent to the edge computing server, so that the edge computing server stops sending the task to be processed through the preset channel according to the drift stopping instruction.
In a possible implementation manner, after the task to be processed is controlled to perform task drift through the preset channel, the method further includes:
obtaining a processing result of the target computing power resource equipment on the task to be processed;
and sending the processing result to the edge computing server and/or other preset modules.
In one possible implementation manner, after the obtaining the processing result of the target computing power resource device on the task to be processed, the method further includes:
and disconnecting the preset channel.
In a second aspect, an embodiment of the present application provides a task processing device, including:
the receiving module is used for receiving a task drift request sent by the edge computing server, wherein the task drift request carries a task to be processed;
the determining module is used for determining target computing power resource equipment corresponding to the task to be processed;
the establishing module is used for establishing a preset channel between the edge computing server and the target computing power resource equipment;
and the drifting module is used for controlling the task to be processed to drift through the preset channel so as to enable the target computing power resource equipment to finish processing the task to be processed.
In one possible implementation manner, the drift module is specifically configured to:
setting the priority of the preset channel according to the task to be processed;
and carrying out task drift processing on the task to be processed through the preset channel based on the priority.
In one possible implementation manner, the drift module is specifically configured to:
and sending a drift starting instruction to the edge computing server, so that the edge computing server sends the task to be processed to the target computing power resource equipment through the preset channel according to the drift starting instruction.
In one possible implementation, the drift module is further configured to:
receiving task drift progress and task drift logs reported by the edge computing server, and receiving equipment states reported by the target computing power resource equipment;
and if the task drift is judged to be stopped according to the task drift progress, the task drift log and/or the equipment state, a drift stopping instruction is sent to the edge computing server, so that the edge computing server stops sending the task to be processed through the preset channel according to the drift stopping instruction.
In a possible implementation manner, after the drift module controls the task to be processed to drift through the preset channel, the method further includes a post-processing module, configured to:
obtaining a processing result of the target computing power resource equipment on the task to be processed;
and sending the processing result to the edge computing server and/or other preset modules.
In one possible implementation, the post-processing module is further configured to:
and disconnecting the preset channel.
In a third aspect, an embodiment of the present application provides a task processing device, including:
a processor;
a memory; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor, the computer program comprising instructions for performing the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, the computer program causing a server to perform the method of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising computer instructions for performing the method of the first aspect by a processor.
According to the task processing method, device, equipment, and storage medium provided by the embodiments of the application, a task drift request sent by the edge computing server is received, where the task drift request carries a task to be processed; a target computing power resource device corresponding to the task to be processed is determined; a preset channel is established between the edge computing server and the target computing power resource device; and the task to be processed is controlled, through the preset channel, to perform task drift, so that the target computing power resource device completes processing of the task to be processed. In other words, computing power resources can be allocated to the task to be processed in the edge computing server, a channel can be established between the edge computing server and the computing power resources, and the task to be processed can then be processed over that channel by the allocated computing power resources. This solves the problem that the edge computing server provides poor processing quality for delay-sensitive applications, enables the edge computing server to respond quickly to burst computing requests, and meets practical application needs.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a task processing system architecture according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a task processing method according to an embodiment of the present application;
FIG. 3 is a flowchart of another task processing method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a task processing device according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of another task processing device according to an embodiment of the present disclosure;
FIG. 6A provides one possible basic hardware architecture for the task processing device described herein;
fig. 6B provides another possible basic hardware architecture of the task processing device described herein.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the present disclosure without inventive effort fall within the scope of the present disclosure.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the related art, the computing power of an edge computing server is very limited, and it cannot respond quickly to every burst computing request. As a result, some delay-sensitive computing tasks may experience long queuing delays at the edge server; the queuing delay may even exceed the network delay from the user network to the remote cloud computing center, giving delay-sensitive applications a very poor user experience.
However, there is no effective solution to the above problem. Therefore, how to improve the processing quality of the edge computing server for delay-sensitive applications and respond quickly to burst computing requests is an urgent problem to be solved.
In order to solve the above problem, the embodiments of the present application provide a task processing method, which can allocate computing power resources to a task to be processed in an edge computing server, establish a channel between the edge computing server and the computing power resources, and process the task to be processed over that channel using the allocated computing power resources, so as to solve the problem that the edge computing server provides poor processing quality for delay-sensitive applications and to enable the edge computing server to respond quickly to burst computing requests, thereby meeting actual application needs.
Optionally, the task processing method provided in the present application may be applied to a task processing system architecture schematic shown in fig. 1, where, as shown in fig. 1, the system may include at least one of a receiving device 101, a drifting device 102, and a display device 103.
In a specific implementation process, the receiving device 101 may be an input/output interface or a communication interface, and may be used to receive a task drift request sent by an edge computing server, where the task drift request carries information such as a task to be processed.
The drifting device 102 can allocate computing power resources to the task to be processed in the edge computing server, establish a channel between the edge computing server and the computing power resources, and process the task to be processed over that channel using the allocated computing power resources, which solves the problem that the edge computing server provides poor processing quality for delay-sensitive applications and enables the edge computing server to respond quickly to burst computing requests, thereby meeting actual application needs.
The display device 103 may be used to display the task to be processed, the computing resource, and the like.
The display device may also be a touch display screen for receiving user instructions while displaying the above content to enable interaction with a user.
It should be understood that the above-described devices may be implemented by a processor that reads and executes instructions in a memory, or may be implemented by a chip circuit.
The above system is only one exemplary system, and may be set according to application requirements when implemented.
It will be appreciated that the architecture illustrated by the embodiments of the present application does not constitute a particular limitation on the architecture of the task processing system. In other possible embodiments of the present application, the architecture may include more or fewer components than those illustrated, or some components may be combined, some components may be separated, or different component arrangements may be specifically determined according to the actual application scenario, and the present application is not limited herein. The components shown in fig. 1 may be implemented in hardware, software, or a combination of software and hardware.
In addition, the system architecture and the service scenario described in the embodiments of the present application are for more clearly describing the technical solution of the embodiments of the present application, and do not constitute a limitation on the technical solution provided in the embodiments of the present application, and as a person of ordinary skill in the art can know, with the evolution of the system architecture and the appearance of a new service scenario, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The following description of the technical solutions of the present application will take several embodiments as examples, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 provides a flowchart of a task processing method according to an embodiment of the present application. The task processing method may be performed by any apparatus capable of executing it, and the apparatus may be implemented by software and/or hardware. As shown in fig. 2, based on the system architecture shown in fig. 1, the task processing method provided in the embodiment of the present application may include the following steps:
s201: and receiving a task drift request sent by the edge computing server, wherein the task drift request carries a task to be processed.
Here, the task to be processed may be a delay sensitive task, or a sudden calculation request task, etc., which may be specifically determined according to an actual situation, which is not particularly limited in the embodiment of the present application.
Taking the system shown in fig. 1 as an example, the receiving device may receive the task drift request sent by the edge computing server and store it in a preset queue. The receiving device stores task drift requests in the preset queue in the order in which they are received, so that the drift device can subsequently detect whether data exists in the preset queue and, when it does, execute the subsequent task drift operation, which suits practical applications. A minimal sketch of this queue handling is given below.
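By way of illustration only, the following Python sketch shows one way such a receiving-order queue could be handled; the class, queue, and function names (TaskDriftRequest, preset_queue, handle_task_drift) are assumptions of this description and are not identifiers defined by the patent.

```python
# Minimal sketch of the preset queue handling described above.
# All names here are illustrative assumptions, not defined by the patent.
from collections import deque
from dataclasses import dataclass


@dataclass
class TaskDriftRequest:
    task_id: str
    payload: bytes           # the task to be processed
    task_type: str           # e.g. "delay_sensitive" or "burst_compute_request"


preset_queue: deque = deque()   # FIFO: requests kept in receiving order


def on_request_received(request: TaskDriftRequest) -> None:
    """Receiving device: enqueue each task drift request as it arrives."""
    preset_queue.append(request)


def handle_task_drift(request: TaskDriftRequest) -> None:
    """Placeholder for the subsequent drift operation (steps S202-S204)."""
    print(f"drifting task {request.task_id} ({request.task_type})")


def poll_and_drift() -> None:
    """Drift device: act only when data exists in the preset queue."""
    while preset_queue:
        handle_task_drift(preset_queue.popleft())
```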
S202: and determining the target computing power resource equipment corresponding to the task to be processed.
In this embodiment of the present application, after the receiving device receives the task drift request sent by the edge computing server, the drift device may determine the target computing power resource device according to the task to be processed carried in the task drift request. For example, the correspondence between tasks and computing power resource devices may be pre-stored in the drift device, so that the target computing power resource device corresponding to the task to be processed is determined from this correspondence.
The correspondence may be determined by the drift device from a number of known tasks and the computing power resources used to process those tasks; a minimal sketch of such a lookup is given below.
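For illustration only, the pre-stored correspondence could be modeled as a simple mapping; the task types and device identifiers below are invented examples and do not appear in the patent.

```python
# Assumed mapping from task type to target computing power resource device.
# Task types and device identifiers are illustrative only.
TASK_TO_DEVICE = {
    "delay_sensitive": "gpu-node-07",
    "burst_compute_request": "cpu-pool-02",
}


def determine_target_device(task_type: str) -> str:
    """Return the target computing power resource device for a task type."""
    try:
        return TASK_TO_DEVICE[task_type]
    except KeyError:
        raise ValueError(f"no computing power resource device registered for {task_type!r}")
```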
S203: and establishing a preset channel between the edge computing server and the target computing power resource equipment.
Here, the drift device may establish a software-defined networking (software defined network, SDN) channel between the edge computing server and the target computing power resource device, with a bandwidth determined according to the actual situation, so that the drift operation is performed through the SDN channel and data is transmitted at high speed and low delay.
The drift device may set a priority of the preset channel according to the task to be processed after the preset channel is established between the edge computing server and the target computing resource device, so that task drift processing is performed on the task to be processed through the preset channel based on the priority.
Channels corresponding to different tasks have different priorities; for example, the channels corresponding to delay-sensitive tasks or burst computing request tasks have a high priority.
In the embodiment of the present application, when the task to be processed is a delay-sensitive task or a burst computing request task, the priority of the preset channel is set to high, so that task drift processing of the task to be processed is subsequently given priority. This alleviates the poor processing quality of the edge computing server for delay-sensitive applications and enables the edge computing server to respond quickly to burst computing requests.
In addition, the correspondence between tasks and channel priorities may be pre-stored in the drift device; the priority of the channel corresponding to the task to be processed is then determined from this correspondence, the priority of the preset channel is set accordingly, and task drift processing is performed on the task to be processed through the preset channel based on that priority.
In this correspondence, for example, delay-sensitive tasks and burst computing request tasks map to channels with a high priority, as sketched below.
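By way of example only, the following sketch establishes the preset channel and applies a priority taken from such a pre-stored correspondence. The SdnController class is a placeholder for whatever SDN controller is actually deployed (the patent names no concrete controller), and the task types, priority labels, and default bandwidth are assumptions of this description.

```python
# Sketch of establishing the preset channel and setting its priority.
# SdnController stands in for a real SDN controller API (assumed, not
# specified by the patent); task types, priorities and bandwidth are examples.
TASK_PRIORITY = {
    "delay_sensitive": "high",
    "burst_compute_request": "high",
    "default": "normal",
}


class SdnController:
    """Placeholder controller: a real deployment would call its SDN API here."""

    def create_channel(self, src: str, dst: str, bandwidth_mbps: int) -> str:
        return f"chan:{src}->{dst}@{bandwidth_mbps}Mbps"

    def set_priority(self, channel: str, priority: str) -> None:
        print(f"{channel} priority set to {priority}")


def establish_preset_channel(controller: SdnController, edge_server: str,
                             target_device: str, task_type: str,
                             bandwidth_mbps: int = 1000) -> str:
    """Create the channel, then derive and apply its priority from the task type."""
    channel = controller.create_channel(edge_server, target_device, bandwidth_mbps)
    priority = TASK_PRIORITY.get(task_type, TASK_PRIORITY["default"])
    controller.set_priority(channel, priority)
    return channel
```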
S204: and controlling the task to be processed to carry out task drift through the preset channel so as to enable the target computing power resource equipment to finish processing the task to be processed.
The drift device may obtain the processing result of the target computing power resource device for the task to be processed after task drift has been performed on the task through the preset channel, and then send the processing result to the edge computing server and/or other preset modules, so that they can learn of the processing result in time, perform subsequent processing, and meet application requirements.
In addition, after the processing result of the target computing power resource device for the task to be processed is obtained, the drift device may disconnect the preset channel so that the computing power network can resume its normal service. A minimal sketch of this post-processing step is given below.
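As an illustration only, the post-processing step can be sketched with plain callables standing in for the device, server, and channel interfaces, which the patent does not specify; all parameter names here are assumptions.

```python
# Sketch of the post-processing step: obtain the result, forward it, then
# disconnect the preset channel. The callable parameters are assumptions
# standing in for the unspecified device/server/channel interfaces.
from typing import Callable, Iterable


def post_process(fetch_result: Callable[[], bytes],
                 deliver_to_edge: Callable[[bytes], None],
                 other_modules: Iterable[Callable[[bytes], None]],
                 disconnect_channel: Callable[[], None]) -> None:
    result = fetch_result()            # processing result from the target device
    deliver_to_edge(result)            # report back to the edge computing server
    for deliver in other_modules:      # and/or to other preset modules
        deliver(result)
    disconnect_channel()               # release the preset channel afterwards
```

For instance, post_process(lambda: b"done", print, [], lambda: print("preset channel disconnected")) exercises the whole path with dummy endpoints.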
According to the method, device, and system for processing a task provided by the embodiments of the present application, a task drift request sent by the edge computing server is received, where the task drift request carries a task to be processed; a target computing power resource device corresponding to the task to be processed is determined; a preset channel is established between the edge computing server and the target computing power resource device; and the task to be processed is controlled, through the preset channel, to perform task drift, so that the target computing power resource device completes processing of the task to be processed. In other words, computing power resources can be allocated to the task to be processed in the edge computing server, a channel can be established between the edge computing server and the computing power resources, and the task to be processed can be processed over that channel by the allocated computing power resources. This solves the problem that the edge computing server provides poor processing quality for delay-sensitive applications, enables the edge computing server to respond quickly to burst computing requests, and meets practical application needs.
In addition, in the embodiment of the present application, when the task to be processed is controlled to perform task drift through the preset channel, consideration is given to sending a drift start instruction to the edge computing server. After the drift start instruction is sent, the task drift progress and task drift log reported by the edge computing server and the device state reported by the target computing power resource device are received, and it is determined whether task drift needs to be stopped. Fig. 3 is a flow chart of another task processing method according to an embodiment of the present application. As shown in fig. 3, the method includes:
s301: and receiving a task drift request sent by the edge computing server, wherein the task drift request carries a task to be processed.
S302: and determining the target computing power resource equipment corresponding to the task to be processed.
S303: and establishing a preset channel between the edge computing server and the target computing power resource equipment.
Steps S301 to S303 are the same as steps S201 to S203 described above and are not repeated here.
S304: and sending a drift starting instruction to the edge computing server, so that the edge computing server sends the task to be processed to the target computing resource equipment through the preset channel according to the drift starting instruction, and the target computing resource equipment finishes processing the task to be processed.
When the drift device controls the task to be processed to perform task drift through the preset channel, it sends a drift start instruction to the edge computing server, so that the drift device can exchange information with the edge computing server in time. Based on this instruction, the edge computing server starts sending the task to be processed to the target computing power resource device through the preset channel, which avoids wasting resources and suits practical applications.
S305: and receiving the task drift progress and the task drift log reported by the edge computing server, and receiving the equipment state reported by the target computing power resource equipment.
S306: and if the task drift is judged to be stopped according to the task drift progress, the task drift log and/or the equipment state, a drift stopping instruction is sent to the edge computing server, so that the edge computing server stops sending the task to be processed through the preset channel according to the drift stopping instruction.
In this embodiment of the present application, after the drift device sends the drift start instruction to the edge computing server, the drift device receives a task drift progress and a task drift log reported by the edge computing server, and receives a device state reported by the target computing resource device, so as to determine whether to stop task drift based on the received information. If necessary, the drift device sends a drift stopping instruction to the edge computing server, and the edge computing server stops sending the task to be processed through the preset channel based on the instruction, so as to meet the application requirements of various application scenes.
The drift device may preset a stop-drift condition and determine that task drift needs to be stopped when the task drift progress, the task drift log, and/or the device state meet the preset stop-drift condition. Here, the preset stop-drift condition may include the task drift progress reaching a preset progress threshold, the task drift log containing preset contents, and/or the device state being a preset state, among others.
In addition, after the drift device sends the drift stop instruction to the edge computing server, it can continue to receive, in real time, the task drift progress and task drift log reported by the edge computing server and the device state reported by the target computing power resource device, and judge from them whether task drift still needs to remain stopped. If not, the drift device sends a drift resume instruction to the edge computing server, and the edge computing server resumes sending the task to be processed through the preset channel based on that instruction. A minimal sketch of this stop/resume decision is given after this paragraph.
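Purely as an example, the preset stop-drift condition and the resulting stop/resume decision could be expressed as follows; the progress threshold, log markers, and device states are illustrative assumptions, not values defined by the patent.

```python
# Sketch of the preset stop-drift condition checked against the reported task
# drift progress, task drift log and device state. All thresholds, markers and
# states below are illustrative assumptions.
PROGRESS_THRESHOLD = 1.0                        # e.g. drift already complete
LOG_STOP_MARKERS = ("ERROR", "CHANNEL_DOWN")    # preset log contents
STOP_DEVICE_STATES = {"overloaded", "offline"}  # preset device states


def should_stop_drift(progress: float, log_lines: list, device_state: str) -> bool:
    """True if progress, log and/or device state meet the stop-drift condition."""
    if progress >= PROGRESS_THRESHOLD:
        return True
    if any(marker in line for line in log_lines for marker in LOG_STOP_MARKERS):
        return True
    return device_state in STOP_DEVICE_STATES


def next_instruction(progress: float, log_lines: list, device_state: str,
                     currently_stopped: bool) -> str:
    """Instruction the drift device sends to the edge computing server."""
    if should_stop_drift(progress, log_lines, device_state):
        return "STOP_DRIFT"
    return "RESUME_DRIFT" if currently_stopped else "CONTINUE"
```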
In this embodiment of the present application, when the drift device controls the task to be processed to perform task drift through the preset channel, it sends a drift start instruction to the edge computing server so that it can exchange information with the edge computing server in time; based on this instruction, the edge computing server starts sending the task to be processed to the target computing power resource device through the preset channel, which avoids wasting resources. After sending the drift start instruction, the drift device receives the task drift progress and task drift log reported by the edge computing server and the device state reported by the target computing power resource device, and judges whether task drift needs to be stopped; if so, it sends a drift stop instruction to the edge computing server, and the edge computing server stops sending the task to be processed through the preset channel based on that instruction, thereby meeting the requirements of various application scenarios. In addition, the drift device can allocate computing power resources to the task to be processed in the edge computing server, establish a channel between the edge computing server and the computing power resources, and process the task to be processed over that channel using the allocated computing power resources, which solves the problem that the edge computing server provides poor processing quality for delay-sensitive applications and enables the edge computing server to respond quickly to burst computing requests.
Corresponding to the task processing method of the above embodiment, fig. 4 is a schematic structural diagram of the task processing device provided in the embodiment of the present application. For convenience of explanation, only the portions relevant to the embodiments of the present application are shown. As shown in fig. 4, the task processing device 40 includes: a receiving module 401, a determining module 402, an establishing module 403, and a drift module 404. The task processing device may be the task processing device itself, or a chip or integrated circuit that implements the functions of the task processing device. It should be noted that the division into the receiving module, the determining module, the establishing module, and the drift module is only a division of logical functions; these modules may be physically integrated or physically independent.
The receiving module 401 is configured to receive a task drift request sent by an edge computing server, where the task drift request carries a task to be processed.
A determining module 402, configured to determine a target computing power resource device corresponding to the task to be processed.
The establishing module 403 is configured to establish a preset channel between the edge computing server and the target computing power resource device.
And the drifting module 404 is configured to control, through the preset channel, the task to be processed to perform task drifting, so that the target computing power resource device completes processing of the task to be processed.
In one possible implementation, the drift module 404 is specifically configured to:
setting the priority of the preset channel according to the task to be processed;
and carrying out task drift processing on the task to be processed through the preset channel based on the priority.
In one possible implementation, the drift module 404 is specifically configured to:
and sending a drift starting instruction to the edge computing server, so that the edge computing server sends the task to be processed to the target computing power resource equipment through the preset channel according to the drift starting instruction.
In one possible implementation, the drift module 404 is further configured to:
receiving task drift progress and task drift logs reported by the edge computing server, and receiving equipment states reported by the target computing power resource equipment;
and if the task drift is judged to be stopped according to the task drift progress, the task drift log and/or the equipment state, a drift stopping instruction is sent to the edge computing server, so that the edge computing server stops sending the task to be processed through the preset channel according to the drift stopping instruction.
The device provided in the embodiment of the present application may be used to execute the technical solution of the above method embodiment; its implementation principle and technical effects are similar, and details are not repeated here.
Fig. 5 is a schematic structural diagram of another task processing device according to an embodiment of the present application, and on the basis of fig. 4, the task processing device 40 further includes: a post-processing module 405.
In a possible implementation manner, after the drift module 404 controls the task to be processed to perform task drift through the preset channel, the post-processing module 405 is configured to:
obtaining a processing result of the target computing power resource equipment on the task to be processed;
and sending the processing result to the edge computing server and/or other preset modules.
In one possible implementation, the post-processing module 405 is further configured to:
and disconnecting the preset channel.
The device provided in the embodiment of the present application may be used to execute the technical solution of the above method embodiment; its implementation principle and technical effects are similar, and details are not repeated here.
Optionally, fig. 6A and fig. 6B each schematically show a possible basic hardware architecture of the task processing device described in the present application.
Referring to fig. 6A and 6B, the task processing device includes at least one processor 601 and a communication interface 603. Further optionally, a memory 602 and a bus 604 may also be included.
The number of processors 601 may be one or more; fig. 6A and 6B illustrate only one processor 601. Optionally, the processor 601 may be a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), or a digital signal processor (Digital Signal Processor, DSP). If the task processing device has a plurality of processors 601, the types of the plurality of processors 601 may be different or the same. Optionally, the plurality of processors 601 of the task processing device may also be integrated as a multi-core processor.
Memory 602 stores computer instructions and data; the memory 602 may store the computer instructions and data required to implement the above-described task processing methods provided herein, for example, instructions for implementing the steps of those task processing methods. The memory 602 may be any one or any combination of the following storage media: nonvolatile memory (e.g., read-only memory (ROM), solid state drive (SSD), hard disk drive (HDD), optical disc) and volatile memory.
The communication interface 603 may provide information input/output for the at least one processor, and may also include any one or any combination of the following devices with network access functionality: a network interface (e.g., an Ethernet interface), a wireless network card, and the like.
Optionally, the communication interface 603 may also be used for data communication by the task processing device with other computing devices or terminals.
Further optionally, the bus 604 is represented by a bold line in fig. 6A and 6B. The bus 604 may connect the processor 601 with the memory 602 and the communication interface 603. Thus, through the bus 604, the processor 601 may access the memory 602 and may also interact with other computing devices or terminals using the communication interface 603.
In the present application, the task processing device executes the computer instructions in the memory 602, so that the task processing device implements the task processing method provided in the present application, or deploys the task processing apparatus described above.
From a logical functional partitioning perspective, as illustrated in fig. 6A, the memory 602 may include a receiving module 401, a determining module 402, a building module 403, and a drift module 404. The inclusion herein is not limited to a physical structure, and may involve only the functions of the receiving module, determining module, establishing module, and drift module, respectively, when the instructions stored in the memory are executed.
For example, as shown in FIG. 6B, the memory 602 may also include a post-processing module 405. The inclusion herein is not limited to physical structures, but rather involves only the functionality of the post-processing module when the instructions stored in memory are executed.
In addition, the task processing device described above may be implemented in hardware as a hardware module or as a circuit unit, in addition to the software as in fig. 6A and 6B described above.
The present application provides a computer-readable storage medium storing a computer program that causes a server to execute the above-described task processing method provided by the present application.
The present application provides a computer program product comprising computer instructions for execution by a processor of the above-described task processing method provided by the present application.
The present application provides a chip comprising at least one processor and a communication interface providing information input and/or output for the at least one processor. Further, the chip may also include at least one memory for storing computer instructions. The at least one processor is configured to invoke and execute the computer instructions to perform the task processing method provided herein.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.

Claims (6)

1. A method of task processing, comprising:
receiving a task drift request sent by an edge computing server, wherein the task drift request carries a task to be processed; the task to be processed comprises a delay sensitive task or a sudden calculation request task;
determining target computing power resource equipment corresponding to the task to be processed;
establishing a preset channel between the edge computing server and the target computing power resource equipment;
controlling the task to be processed to carry out task drift through the preset channel so as to enable the target computing power resource equipment to finish processing the task to be processed;
after a preset channel is established between the edge computing server and the target computing power resource device, the method further comprises the following steps:
setting the priority of the preset channel according to the task to be processed;
and controlling the task to be processed to perform task drift through the preset channel, including:
based on the priority, sending a drift starting instruction to the edge computing server, so that the edge computing server sends the task to be processed to the target computing resource equipment through the preset channel according to the drift starting instruction;
after the sending the drift start instruction to the edge computing server, the method further includes:
receiving task drift progress and task drift logs reported by the edge computing server, and receiving equipment states reported by the target computing power resource equipment;
and if the task drift is judged to be stopped according to the task drift progress, the task drift log and/or the equipment state, a drift stopping instruction is sent to the edge computing server, so that the edge computing server stops sending the task to be processed through the preset channel according to the drift stopping instruction.
2. The method according to claim 1, further comprising, after said controlling the task to be processed to perform task drift through the preset channel:
obtaining a processing result of the target computing power resource equipment on the task to be processed;
and sending the processing result to the edge computing server and/or other preset modules.
3. The method according to claim 2, further comprising, after the obtaining the processing result of the target computing power resource device on the task to be processed:
and disconnecting the preset channel.
4. A task processing device, comprising:
the receiving module is used for receiving a task drift request sent by the edge computing server, wherein the task drift request carries a task to be processed; the task to be processed comprises a delay sensitive task or a sudden calculation request task;
the determining module is used for determining target computing power resource equipment corresponding to the task to be processed;
the establishing module is used for establishing a preset channel between the edge computing server and the target computing power resource equipment;
the drifting module is used for controlling the task to be processed to drift through the preset channel so as to enable the target computing power resource equipment to finish processing the task to be processed;
the drift module is specifically configured to:
setting the priority of the preset channel according to the task to be processed;
performing task drift processing on the task to be processed through the preset channel based on the priority;
the drift module is specifically configured to:
sending a drift starting instruction to the edge computing server, so that the edge computing server sends the task to be processed to the target computing power resource equipment through the preset channel according to the drift starting instruction;
the drift module is further configured to:
receiving task drift progress and task drift logs reported by the edge computing server, and receiving equipment states reported by the target computing power resource equipment;
and if the task drift is judged to be stopped according to the task drift progress, the task drift log and/or the equipment state, a drift stopping instruction is sent to the edge computing server, so that the edge computing server stops sending the task to be processed through the preset channel according to the drift stopping instruction.
5. A task processing device, characterized by comprising:
a processor;
a memory; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor, the computer program comprising instructions for performing the method of any of claims 1-3.
6. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which causes a server to perform the method of any one of claims 1-3.
CN202110174976.0A 2021-02-07 2021-02-07 Task processing method, device, equipment and storage medium Active CN112988346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110174976.0A CN112988346B (en) 2021-02-07 2021-02-07 Task processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110174976.0A CN112988346B (en) 2021-02-07 2021-02-07 Task processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112988346A CN112988346A (en) 2021-06-18
CN112988346B true CN112988346B (en) 2024-02-23

Family

ID=76347952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110174976.0A Active CN112988346B (en) 2021-02-07 2021-02-07 Task processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112988346B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641124B (en) * 2021-08-06 2023-03-10 珠海格力电器股份有限公司 Calculation force distribution method and device, controller and building control system
CN114900860B (en) * 2022-05-05 2024-04-02 中国联合网络通信集团有限公司 Edge computing method and device for mobile terminal, edge computing server and medium
CN115587103A (en) * 2022-12-07 2023-01-10 杭州华橙软件技术有限公司 Algorithm resource planning method, device, terminal and computer readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108494612A (en) * 2018-01-19 2018-09-04 西安电子科技大学 Network system providing mobile edge computing services and service method thereof
CN108683613A (en) * 2018-05-10 2018-10-19 Oppo广东移动通信有限公司 Resource scheduling method, apparatus, and computer storage medium
CN110062026A (en) * 2019-03-15 2019-07-26 重庆邮电大学 Joint optimization scheme for resource allocation and computation offloading in mobile edge computing networks
CN110381159A (en) * 2019-07-26 2019-10-25 中国联合网络通信集团有限公司 Task processing method and system
CN110460635A (en) * 2019-07-04 2019-11-15 华南理工大学 Edge offloading method and device for unmanned driving
CN110856183A (en) * 2019-11-18 2020-02-28 南京航空航天大学 Edge server deployment method based on heterogeneous load complementation and application
CN111625354A (en) * 2020-05-19 2020-09-04 南京乐贤智能科技有限公司 Computing power orchestration method for edge computing devices and related devices
CN111641891A (en) * 2020-04-16 2020-09-08 北京邮电大学 Task peer-to-peer offloading method and device in a multi-access edge computing system
WO2020216135A1 (en) * 2019-04-25 2020-10-29 南京邮电大学 Multi-user multi-MEC task offloading resource scheduling method based on edge-end collaboration

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108494612A (en) * 2018-01-19 2018-09-04 西安电子科技大学 Network system providing mobile edge computing services and service method thereof
CN108683613A (en) * 2018-05-10 2018-10-19 Oppo广东移动通信有限公司 Resource scheduling method, apparatus, and computer storage medium
CN110062026A (en) * 2019-03-15 2019-07-26 重庆邮电大学 Joint optimization scheme for resource allocation and computation offloading in mobile edge computing networks
WO2020216135A1 (en) * 2019-04-25 2020-10-29 南京邮电大学 Multi-user multi-MEC task offloading resource scheduling method based on edge-end collaboration
CN110460635A (en) * 2019-07-04 2019-11-15 华南理工大学 Edge offloading method and device for unmanned driving
CN110381159A (en) * 2019-07-26 2019-10-25 中国联合网络通信集团有限公司 Task processing method and system
CN110856183A (en) * 2019-11-18 2020-02-28 南京航空航天大学 Edge server deployment method based on heterogeneous load complementation and application
CN111641891A (en) * 2020-04-16 2020-09-08 北京邮电大学 Task peer-to-peer offloading method and device in a multi-access edge computing system
CN111625354A (en) * 2020-05-19 2020-09-04 南京乐贤智能科技有限公司 Computing power orchestration method for edge computing devices and related devices

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Task Scheduling and Resource Allocation in Mobile Edge Computing; Cao Pu; China Master's Theses Full-text Database, Information Science and Technology Series; I136-904 *
Chen Min. Artificial Intelligence Communication Theory and Methods. Huazhong University of Science and Technology Press, 2020, p. 60. *

Also Published As

Publication number Publication date
CN112988346A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN112988346B (en) Task processing method, device, equipment and storage medium
CN107025205B (en) Method and equipment for training model in distributed system
US9244817B2 (en) Remote debugging in a cloud computing environment
US10120705B2 (en) Method for implementing GPU virtualization and related apparatus, and system
US20180210752A1 (en) Accelerator virtualization method and apparatus, and centralized resource manager
CN112035238B (en) Task scheduling processing method and device, cluster system and readable storage medium
US8862538B2 (en) Maintaining a network connection of a workload during transfer
CN103699428A (en) Method and computer device for affinity binding of interrupts of virtual network interface card
US9092260B1 (en) Sync point coordination providing high throughput job processing across distributed virtual infrastructure
CN109656646B (en) Remote desktop control method, device, equipment and virtualization chip
CN109828848A (en) Platform services cloud server and its multi-user operation method
US11513830B2 (en) Introspection into workloads running within virtual machines
CN116069493A (en) Data processing method, device, equipment and readable storage medium
CN108829516B (en) Resource virtualization scheduling method for graphic processor
CN114296953A (en) Multi-cloud heterogeneous system and task processing method
Mangal et al. Flexible cloud computing by integrating public-private clouds using openstack
CN116188240B (en) GPU virtualization method and device for container and electronic equipment
CN114327846A (en) Cluster capacity expansion method and device, electronic equipment and computer readable storage medium
CN116662009A (en) GPU resource allocation method and device, electronic equipment and storage medium
CN113821174B (en) Storage processing method, storage processing device, network card equipment and storage medium
CN115827148A (en) Resource management method and device, electronic equipment and storage medium
CN111258715B (en) Multi-operating system rendering processing method and device
CN115328609A (en) Cloud desktop data processing method and system
CN112968812A (en) Network performance testing method, device, equipment and storage medium
US9996373B1 (en) Avoiding overloading of network adapters in virtual environments

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant