CN112817753A - Task processing method and device, storage medium and electronic device - Google Patents

Task processing method and device, storage medium and electronic device Download PDF

Info

Publication number
CN112817753A
CN112817753A (Application CN202110084621.2A)
Authority
CN
China
Prior art keywords
network
resources
sharing
computing
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110084621.2A
Other languages
Chinese (zh)
Inventor
徐益标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110084621.2A
Publication of CN112817753A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5055 Allocation of resources to service a request, the resource being a machine, considering software capabilities, i.e. software resources associated or available to the machine
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5072 Grid computing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the invention provides a task processing method and device, a storage medium and an electronic device, wherein the method comprises: when the current time is within a computing power resource sharing time period and a target task is a non-real-time task, sharing N computing power resources in a first network to a second network, wherein the target task belongs to the second network, N is a natural number greater than or equal to 1, and the N computing power resources are in an idle state; and processing the target task by using the N computing power resources. The invention solves the problem of adjusting computing power resources in the related art, makes full use of computing power resources, and improves the efficiency of their time-sharing multiplexing.

Description

Task processing method and device, storage medium and electronic device
Technical Field
The embodiment of the invention relates to the field of computers, in particular to a task processing method and device, a storage medium and an electronic device.
Background
In a picture and video analysis system, different service modules require different types of computing power resources. In current mainstream picture and video analysis systems, the quantity of each type of computing power resource is usually fixed at deployment time according to estimated service volumes; if the service volumes of the modules change significantly afterwards, the computing power resources needed by some services become insufficient while those of other services sit idle for long periods. Moreover, adjusting the quantities of the various computing power resources is time-consuming and labor-intensive and greatly increases operation and maintenance costs.
In view of the above problems, no effective solution has been proposed in the related art.
Disclosure of Invention
The embodiment of the invention provides a task processing method and device, a storage medium and an electronic device, and aims to at least solve the problem of adjusting computing resources in the related art.
According to an embodiment of the present invention, there is provided a task processing method, including: when the current time is within a computing power resource sharing time period and a target task is a non-real-time task, sharing N computing power resources in a first network to a second network, wherein the target task belongs to the second network, N is a natural number greater than or equal to 1, and the N computing power resources are in an idle state; and processing the target task by using the N computing power resources.
According to another embodiment of the present invention, there is provided a task processing apparatus, including: a first sharing module, configured to share N computing power resources in a first network to a second network when the current time is within a computing power resource sharing time period and a target task is a non-real-time task, wherein the target task belongs to the second network, N is a natural number greater than or equal to 1, and the N computing power resources are in an idle state; and a first processing module, configured to process the target task by using the N computing power resources.
In an exemplary embodiment, the apparatus further includes: the first determining module is configured to determine, before N computing resources in a first network are shared to a second network, a sharing configuration of M computing resources in the first network when a current time is within a computing resource sharing time period and a target task belongs to a non-real-time task, where the sharing configuration includes a maximum sharing rate of the M computing resources and a sharing time period of the M computing resources, and M is greater than or equal to N.
In an exemplary embodiment, the first determining module includes: a first setting unit, configured to set a computation resource sharing rate of the first network; a first determining unit, configured to determine a product of the M computation power resources and the computation power resource sharing rate as a maximum sharing rate of the M computation power resources.
In an exemplary embodiment, the first determining module includes: a second determining unit, configured to determine the real-time task analysis amount of the first network in each time period; and a second setting unit, configured to set a time period in which the real-time task analysis amount is smaller than a preset analysis amount as the sharing time period of the M computing power resources.
In an exemplary embodiment, the apparatus further includes: a first conversion module, configured to convert some real-time tasks in the second network into non-real-time tasks to obtain the target task when the computing power resources for executing real-time tasks in the second network are insufficient within a preset time period, before the N computing power resources in the first network are shared to the second network in the case that the current time is within the computing power resource sharing time period and the target task is a non-real-time task.
In an exemplary embodiment, the first sharing module includes: a third determining unit, configured to determine a load rate of each computational resource in the first network in the computational resource sharing time period, so as to determine the N computational resources; a fourth determining unit, configured to determine K computational resources required by the target task in the second network, where K is an integer greater than or equal to 0; and a first sharing unit configured to share the N computation resources with the second network when K is less than or equal to N.
In an exemplary embodiment, the apparatus further includes: a recording module, configured to record the serial numbers of the N computing power resources after the N computing power resources in the first network are shared to the second network when the current time is within the computing power resource sharing time period and the target task is a non-real-time task; and a rollback module, configured to roll back the N computing power resources to the first network according to the serial numbers of the N computing power resources.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, N computing power resources in a first network are shared to a second network when the current time is within the computing power resource sharing time period and a target task is a non-real-time task, wherein the target task belongs to the second network, N is a natural number greater than or equal to 1, and the N computing power resources are in an idle state; the target task is then processed by using the N computing power resources. In this way computing power resources can be adjusted between different networks, which solves the problem of adjusting computing power resources in the related art, makes full use of computing power resources, and improves the efficiency of their time-sharing multiplexing.
Drawings
Fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a task processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of processing tasks according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a computing power resource pool according to an embodiment of the invention;
FIG. 4 is a schematic diagram of computational resource allocation according to an embodiment of the present invention;
FIG. 5 is a diagram of a tidal sharing configuration according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of non-real-time task processing according to an embodiment of the invention;
FIG. 7 is a flow chart of tidal dispatch according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of computing power resource conversion according to an embodiment of the invention;
fig. 9 is a block diagram of a structure of a processing device of a task according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking an example of the present invention running on a mobile terminal, fig. 1 is a block diagram of a hardware structure of the mobile terminal of a task processing method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used for storing computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the processing methods of the tasks in the embodiments of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the methods described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In the present embodiment, a method for processing a task is provided, and fig. 2 is a flowchart of a method for processing a task according to an embodiment of the present invention, where as shown in fig. 2, the flowchart includes the following steps:
step S202, under the condition that the current time is in the computing power resource sharing time period and the target task belongs to a non-real-time task, sharing N computing power resources in a first network to a second network, wherein the target task belongs to the second network, N is a natural number greater than or equal to 1, and the N computing power resources are in an idle state;
and step S204, processing the target task by using the N computing resources.
The execution subject of the above steps may be a server, etc., but is not limited thereto.
The embodiment includes, but is not limited to, application in a scene where a video or a picture is analyzed, for example, recognition of a face image acquired in traffic monitoring.
In the present embodiment, the target task includes, but is not limited to, the analysis of videos or pictures, for example recognizing a face in a picture or tracking a pedestrian trajectory in a video.
In this embodiment, the first network may include computing power resources for processing pictures, and the second network may include computing power resources for processing videos.
Through the steps, N computing power resources in a first network are shared to a second network under the condition that the current time is within the computing power resource sharing time period and a target task belongs to a non-real-time task, wherein the target task belongs to the second network, N is a natural number greater than or equal to 1, and the N computing power resources are in an idle state; and processing the target task by using the N computing resources. The adjustment of computing resources between different networks can be realized. Therefore, the problem of adjusting the computing resources in the related art can be solved, and the effects of fully utilizing the computing resources and improving the time-sharing multiplexing efficiency of the computing resources are achieved.
In an exemplary embodiment, in a case that the current time is within the computing power resource sharing time period and the target task belongs to the non-real-time task, before sharing the N computing power resources in the first network to the second network, the method further includes:
and S1, determining the sharing configuration of the M computing resources in the first network, wherein the sharing configuration comprises the maximum sharing rate of the M computing resources and the sharing time period of the M computing resources, and M is greater than or equal to N.
In one exemplary embodiment, determining a shared configuration of M computational resources in a first network comprises:
s1, setting the computing power resource sharing rate of the first network;
and S2, determining the product of the M computing resources and the sharing rate of the computing resources as the maximum sharing rate of the M computing resources.
In this embodiment, the maximum sharing rate of the M computing power resources indicates how many of the M computing power resources may be used for sharing, and its value range is (0%-100%). For example, if M computing power resources are allocated under network A and the sharing rate is set to R%, the maximum number of sharable computing power resources under the network is N = Round(M × R%). This configuration item is generally set in combination with the actual amount of analysis tasks.
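As a minimal sketch (the function name and signature below are illustrative, not defined by the patent), the sharable upper limit N = Round(M × R%) could be computed as follows:

```python
def max_sharable_resources(allocated_count: int, sharing_rate_percent: float) -> int:
    """Maximum number of computing power resources a network may share.

    allocated_count: M, the computing power resources allocated to the network.
    sharing_rate_percent: R, the configured sharing rate in percent (0-100).
    Implements N = Round(M * R%).
    """
    if not 0 <= sharing_rate_percent <= 100:
        raise ValueError("sharing rate must be between 0 and 100 percent")
    return round(allocated_count * sharing_rate_percent / 100)

# Example: 10 resources under network A with a 30% sharing rate -> at most 3 sharable.
assert max_sharable_resources(10, 30) == 3
```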
In one exemplary embodiment, determining a shared configuration of M computational resources in a first network comprises:
s1, determining the real-time task analysis amount of the first network in each time period;
and S2, setting the time period in which the real-time task analysis amount is less than the preset analysis amount as the sharing time period of the M computing power resources.
In this embodiment, the sharing time period is the time within which the M computing power resources may be shared. It is a time interval consisting of a start time startTime and an end time endTime, in the format [HH:mm], accurate to the minute. For example, if the sharing time period configured for network A is [09:00-21:00], the resources under that network are available for sharing from 09:00 to 21:00 every day. startTime may also be set later than endTime, e.g. [21:00-09:00], meaning that from 21:00 on day T to 09:00 on day T+1 the computing power resources under network A are available for sharing.
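A minimal sketch of this time-window check, assuming the [HH:mm] format above (the helper name is hypothetical and not part of the patent):

```python
from datetime import datetime, time

def in_sharing_period(now: datetime, start: str, end: str) -> bool:
    """Return True if `now` falls inside the sharing period [start-end].

    start and end are "HH:mm" strings; when start is later than end, the
    period wraps past midnight (e.g. 21:00 on day T to 09:00 on day T+1).
    """
    start_t = time.fromisoformat(start)
    end_t = time.fromisoformat(end)
    current = now.time()
    if start_t <= end_t:
        return start_t <= current < end_t
    return current >= start_t or current < end_t  # period wraps past midnight

# [21:00-09:00] covers 23:30 of day T but not 12:00.
assert in_sharing_period(datetime(2021, 1, 21, 23, 30), "21:00", "09:00")
assert not in_sharing_period(datetime(2021, 1, 21, 12, 0), "21:00", "09:00")
```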
In an exemplary embodiment, in a case that the current time is within the computing power resource sharing time period and the target task belongs to the non-real-time task, before sharing the N computing power resources in the first network to the second network, the method further includes:
and S1, under the condition that computing resources for executing the real-time tasks in the second network are insufficient within a preset time period, converting part of the real-time tasks in the second network resources into non-real-time tasks to obtain target tasks.
In this embodiment, when there are too many real-time tasks and computing power resources are insufficient, some real-time tasks cannot obtain computing power resources for analysis and need to be converted into non-real-time tasks. Take real-time video analysis as an example: the number of analysis tasks that the video-analysis computing power can execute at the same time is limited, and the number of channels to be analyzed in an actual production environment is often greater than that capability. The real-time video of some channels can therefore be converted into offline video and submitted to the system as non-real-time tasks that wait in a queue. After computing power resources are shared, the picture-analysis computing power is converted into video-analysis computing power and the offline video tasks in the queue are analyzed, thereby achieving time-sharing multiplexing of resources.
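The queuing of converted tasks might be sketched as below; the class, channel identifiers and priority values are assumptions for illustration (the priority-based queuing strategy itself is described later in this document):

```python
import heapq
import itertools

class NonRealTimeTaskQueue:
    """Priority queue of non-real-time (offline video) analysis tasks.

    Higher-priority tasks are popped first; ties keep submission order. This is
    a hypothetical illustration of queuing converted tasks, not an interface
    defined by the patent.
    """

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit_offline_video(self, channel_id: str, priority: int = 0) -> None:
        # The real-time video of `channel_id` is recorded as offline video and
        # queued for later analysis on shared computing power resources.
        heapq.heappush(self._heap, (-priority, next(self._counter), channel_id))

    def next_task(self):
        # Pop the highest-priority queued offline video task, if any.
        return heapq.heappop(self._heap)[2] if self._heap else None

queue = NonRealTimeTaskQueue()
queue.submit_offline_video("channel-17", priority=2)
queue.submit_offline_video("channel-05", priority=5)
assert queue.next_task() == "channel-05"  # highest priority is analyzed first
```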
In one exemplary embodiment, in a case that the current time is within the computing power resource sharing time period and the target task belongs to the non-real-time task, sharing N computing power resources in the first network to the second network includes:
s1, determining the load rate of each computational resource in the first network in the computational resource sharing time period to determine N computational resources;
s2, K computing power resources needed by the target task in the second network are determined, wherein K is an integer greater than or equal to 0;
and S3, sharing the N computing resources to the second network when K is less than or equal to N.
In this embodiment, the load utilization rate of each computing power resource is collected, and whether tidal computing power conversion can be performed is determined from the utilization rates. Specifically, the minimum number of computing power resources required by the actual services of the current network is calculated as Ceil(Sum(loadValue) / {threshold}), where loadValue is the utilization rate of each computing power resource and {threshold} is a variable chosen according to the actual situation. If this number is less than the actual number of computing power resources, tidal computing power conversion can be performed.
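A minimal sketch of this load-based check, under the reconstruction of the formula given above (the 80% threshold in the example is an assumed value):

```python
import math

def min_required_resources(load_values, threshold):
    """Minimum computing power resources the current network still needs.

    load_values: utilization rate of each computing power resource (percent).
    threshold: the per-resource utilization variable chosen from the actual
    situation. Implements Ceil(Sum(loadValue) / {threshold}).
    """
    return math.ceil(sum(load_values) / threshold)

# Three resources loaded at 20%, 10% and 5% against an 80% threshold need only
# ceil(35 / 80) = 1 resource, so the other two are candidates for sharing.
assert min_required_resources([20, 10, 5], 80) == 1
```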
In an exemplary embodiment, after sharing the N computational resources in the first network to the second network in a case where the current time is within the computational resource sharing time period and the target task belongs to the non-real-time task, the method further includes:
s1, recording the serial numbers of N computing resources;
and S2, rolling back the N force calculation resources to the first network according to the serial numbers of the N force calculation resources.
In this embodiment, if the current time is outside the time range of the sharing configuration or there is no non-real-time task, the computing power resource conversion needs to be rolled back. Specifically, the rollback is performed according to the computing power resource serial numbers recorded during the conversion, and includes rolling back the computing power resources and closing the non-real-time tasks.
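A sketch of the rollback step; all object methods below are hypothetical stand-ins for the operations just described, not an API defined by the patent:

```python
def rollback_shared_resources(shared_serials, first_network, second_network):
    """Roll previously shared computing power resources back to the first network.

    `shared_serials` holds the serial numbers recorded at conversion time. The
    network objects and their methods are hypothetical placeholders that only
    illustrate the rollback and non-real-time-task shutdown described above.
    """
    for serial in list(shared_serials):
        second_network.stop_non_real_time_tasks(serial)  # close tasks running on it
        second_network.release(serial)                   # detach from the second network
        first_network.reattach(serial)                   # return to the first network
        shared_serials.remove(serial)                    # no longer needs rolling back
```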
The invention is illustrated below with reference to specific examples:
in this embodiment, the computing power resource is a set of hardware and software for business computing, such as a human face image (video) analysis computing power, a structured image (video) analysis operator, and the like.
Tidal scheduling refers to scheduling strategies corresponding to different task volumes in different time periods, for example, when the face traffic is large in the daytime, more face analysis operators are needed, and when the face traffic is small in the evening, the face analysis computational power is converted into structural video computational power for analyzing the video recording task.
Real-time tasks, i.e., tasks requiring all-weather, real-time analysis, such as real-time analysis of facial and vehicle structured pictures.
The non-real-time task, namely, the task without time requirement and with low instantaneity, can be processed in a delayed way, such as a scheduled video recording task.
The multi-domain method means that computing resources are actually deployed in different physical machine rooms (cabinets), and the computing resources in different places need to be divided according to the concept of the domain.
As noted in the background, in a picture and video analysis system the quantity of each type of computing power resource is usually fixed at deployment time according to estimated service volumes, so later changes in service volume leave some services short of computing power while others sit idle, and manually adjusting the resource quantities is time-consuming and greatly increases operation and maintenance costs.
The embodiment comprises the following steps:
s1, allocating computing resources, pooling computing resources of all the nanotubes, collecting resource information through the resource information collection module, and counting the total amount of information of hardware resources, such as a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU), currently used number, and available number of the hardware resources, as shown in fig. 3. The computing resources in the computing resource pool are allocated to different service networks according to service requirements, and the resources in the different networks are used for processing different services, such as a face service network, a structured service network, a vehicle service network, and the like, as shown in fig. 4.
S2, setting the sharing configuration of the computing power resources of each network: after the computing power resources are allocated, the resources in each network begin to process their respective services. It may then turn out that network A has few analysis tasks and low resource utilization while network B has many analysis tasks and insufficient computing power; at this point computing power resource sharing is needed. Sharing means that computing power resources in one service network can be dynamically converted into computing power resources of another service network. To decide how to perform computing power sharing, when to start sharing scheduling, and how to determine the number of sharable resources, the sharing configuration of each network must be set.
The sharing configuration comprises:
upper limit of sharable resource: that is, how many resources in the service network can be used for sharing, the value range is (0% -100%), that is, M computational power resources are allocated under the a network, if the sharing rate is set to R%, the maximum sharable computational power resource number under the network, that is, N ═ Round (M × R%), and the configuration item generally determines the sharing rate in combination with the actual analysis task amount.
Sharing time: the time within which the computing power resources of the network may be shared. The sharing time is a time interval consisting of a start time and an end time, in the format [HH:mm], accurate to the minute. For example, if the sharing time period configured for network A is [09:00-21:00], the resources under that network are available for sharing from 09:00 to 21:00 every day. startTime may also be set later than endTime, e.g. [21:00-09:00], meaning that from 21:00 on day T to 09:00 on day T+1 the computing power resources under network A are available for sharing, as shown in FIG. 5.
S3, creating non-real-time tasks: when there are too many real-time tasks and computing power resources are insufficient, some real-time tasks cannot obtain computing power resources for analysis and need to be converted into non-real-time tasks. Take real-time video analysis as an example: the number of analysis tasks that the video-analysis computing power can execute at the same time is limited, and the number of channels to be analyzed in an actual production environment is often greater than that capability. The real-time video of some channels can therefore be converted into offline video and submitted to the system as non-real-time tasks that wait in a queue. After the computing power resources are shared, the picture-analysis computing power is converted into video-analysis computing power and the offline video tasks in the queue are analyzed, thereby achieving time-sharing multiplexing of resources.
S4, performing tidal scheduling of computational resources based on the shared configuration and the non-real-time task, as shown in fig. 7, comprising the steps of:
S701: after the sharing configuration is set, a timer in the system scans the sharing configuration and the non-real-time tasks at regular intervals; the interval is set to x minutes, for example 5 or 10 minutes;
S702-S703: during a scan, it is judged whether the current time is within the sharing time and whether a non-real-time task exists; if both conditions are met, tidal computing power conversion is performed; if either condition is not met, tidal computing power conversion is not performed and the rollback of the computing power resource conversion is performed instead;
S704: before tidal computing power conversion is performed, the following condition must also be evaluated: the load utilization rate of each computing power resource is collected;
S705: whether tidal computing power conversion can be performed is judged from the utilization rates. Specifically, the minimum number of computing power resources required by the actual services of the current network is calculated as Ceil(Sum(loadValue) / {threshold}), where loadValue is the utilization rate of each computing power resource and {threshold} is a variable chosen according to the actual situation. If this number is less than the actual number of computing power resources, tidal computing power conversion can be performed; if it is greater than the actual number, the computing power resources previously converted by tidal scheduling need to be rolled back to guarantee the real-time services;
At least one computing power resource is reserved in each domain, because each domain is assigned its own real-time analysis tasks; this ensures that real-time tasks can still execute normally while tidal computing power sharing conversion takes place;
S706: whether the number of computing power resources currently shared by tidal scheduling exceeds the sharing rate is judged from the sharing rate of the tidal sharing configuration, calculated as {number of computing power resources converted by tidal scheduling} / {number of computing power resources allocated under the network}. If the sharing rate is exceeded, tidal computing power conversion is not performed and previously converted computing power resources are rolled back until the sharing-rate limit is satisfied; if it is not exceeded, tidal computing power conversion is allowed;
S707: when all the conditions are met, tidal computing power conversion is performed, as shown in fig. 8. For example, the computing power resource numbered A in the face network is stopped, converted into the structured network, and started there as the computing power resource numbered B, and the converted computing power resource numbers are recorded for subsequent rollback. At the same time, the non-real-time tasks from the preceding steps are issued to computing power resource B for analysis and computation.
The above conditions may be added or removed according to the actual service scenario. If the current time is outside the time range of the tidal sharing configuration or no non-real-time task exists, the computing power resource conversion needs to be rolled back. Specifically, the rollback is performed according to the computing power resource numbers recorded during the conversion, and includes rolling back the computing power resources and closing the non-real-time tasks.
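Pulling steps S701-S707 together, one timer scan of the tidal scheduler might look like the sketch below. Every helper it calls (the cfg object, the network methods, the task queue, and rollback_shared_resources from the earlier sketch) is a hypothetical stand-in for the corresponding step, not an API defined by the patent:

```python
import math

SCAN_INTERVAL_MINUTES = 5  # the "x minutes" between timer scans (S701)

def tidal_scan(cfg, first_net, second_net, task_queue, shared_serials):
    """One timer-driven scan of the tidal scheduler (S702-S707)."""
    # S702-S703: both conditions must hold, otherwise roll back any conversion.
    if not (cfg.in_sharing_period() and task_queue.has_tasks()):
        rollback_shared_resources(shared_serials, first_net, second_net)
        return

    # S704-S705: collect per-resource load and compute the minimum still needed,
    # always keeping at least one resource per domain for real-time tasks.
    loads = first_net.collect_load_values()
    needed = max(math.ceil(sum(loads) / cfg.threshold), 1)
    idle = first_net.resource_count() - needed

    # S706: do not exceed the configured sharing rate.
    limit = round(first_net.resource_count() * cfg.sharing_rate / 100)
    if idle <= 0 or len(shared_serials) >= limit:
        return

    # S707: convert one idle resource, record its number, dispatch a queued task.
    serial = first_net.stop_idle_resource()
    second_net.start_resource(serial)
    shared_serials.append(serial)
    second_net.dispatch(task_queue.next_task(), serial)
```

The system would simply repeat tidal_scan every SCAN_INTERVAL_MINUTES; the rollback branch covers both leaving the sharing window and running out of non-real-time tasks.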
In summary, according to the different real-time service volumes in different time periods, idle resources are used to process non-real-time tasks, and when the real-time service volume increases, the resources occupied by non-real-time tasks are returned to the real-time tasks. Real-time tasks are converted into non-real-time, pre-scheduled tasks; the non-real-time tasks support a priority-based queuing strategy, so that high-priority tasks are executed first and a queued task can immediately be assigned a computing power resource as soon as one is released. Together these two points ensure that computing power resources are fully utilized and time-sharing multiplexing of computing power resources is achieved.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a task processing device is further provided, and the task processing device is used for implementing the foregoing embodiments and preferred embodiments, and details are not described again after the description is given. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 9 is a block diagram of a structure of a processing apparatus of a task according to an embodiment of the present invention, as shown in fig. 9, the apparatus including:
the first sharing module 92 is configured to share N computational power resources in the first network to the second network when the current time is within the computational power resource sharing time period and the target task belongs to a non-real-time task, where the target task belongs to the second network, N is a natural number greater than or equal to 1, and the N computational power resources are in an idle state;
and a first processing module 94, configured to process the target task by using the N computational resources.
In an exemplary embodiment, the apparatus further includes: the first determining module is configured to determine, before N computing resources in a first network are shared to a second network, a sharing configuration of M computing resources in the first network when a current time is within a computing resource sharing time period and a target task belongs to a non-real-time task, where the sharing configuration includes a maximum sharing rate of the M computing resources and a sharing time period of the M computing resources, and M is greater than or equal to N.
In an exemplary embodiment, the first determining module includes: a first setting unit, configured to set a computation resource sharing rate of the first network; a first determining unit, configured to determine a product of the M computation power resources and the computation power resource sharing rate as a maximum sharing rate of the M computation power resources.
In an exemplary embodiment, the first determining module includes: a second determining unit, configured to determine the real-time task analysis amount of the first network in each time period; and a second setting unit, configured to set a time period in which the real-time task analysis amount is smaller than a preset analysis amount as the sharing time period of the M computing power resources.
In an exemplary embodiment, the apparatus further includes: a first conversion module, configured to convert some real-time tasks in the second network into non-real-time tasks to obtain the target task when the computing power resources for executing real-time tasks in the second network are insufficient within a preset time period, before the N computing power resources in the first network are shared to the second network in the case that the current time is within the computing power resource sharing time period and the target task is a non-real-time task.
In an exemplary embodiment, the first sharing module includes: a third determining unit, configured to determine a load rate of each computational resource in the first network in the computational resource sharing time period, so as to determine the N computational resources; a fourth determining unit, configured to determine K computational resources required by the target task in the second network, where K is an integer greater than or equal to 0; and a first sharing unit configured to share the N computation resources with the second network when K is less than or equal to N.
In an exemplary embodiment, the apparatus further includes: a recording module, configured to record the serial numbers of the N computing power resources after the N computing power resources in the first network are shared to the second network when the current time is within the computing power resource sharing time period and the target task is a non-real-time task; and a rollback module, configured to roll back the N computing power resources to the first network according to the serial numbers of the N computing power resources.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, when the current time is within the computing power resource sharing time period and the target task belongs to a non-real-time task, sharing N computing power resources in the first network to a second network, wherein the target task belongs to the second network, N is a natural number greater than or equal to 1, and the N computing power resources are in an idle state;
and S2, processing the target task by using the N computing resources.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
In an exemplary embodiment, the processor may be configured to execute the following steps by a computer program:
s1, when the current time is within the computing power resource sharing time period and the target task belongs to a non-real-time task, sharing N computing power resources in the first network to a second network, wherein the target task belongs to the second network, N is a natural number greater than or equal to 1, and the N computing power resources are in an idle state;
and S2, processing the target task by using the N computing resources.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the modules or steps of the invention described above may be implemented with a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. They may be implemented with program code executable by the computing devices, so that they can be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be fabricated separately as individual integrated circuit modules, or several of them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for processing a task, comprising:
under the condition that the current time is within the computing power resource sharing time period and a target task belongs to a non-real-time task, sharing N computing power resources in a first network to a second network, wherein the target task belongs to the second network, N is a natural number greater than or equal to 1, and the N computing power resources are in an idle state;
and processing the target task by utilizing the N computing resources.
2. The method of claim 1, wherein before sharing the N computational power resources in the first network to the second network in a case where the current time is within the computational power resource sharing time period and the target task belongs to a non-real-time task, the method further comprises:
determining a sharing configuration of M computing power resources in the first network, wherein the sharing configuration comprises a maximum sharing rate of the M computing power resources and a sharing time period of the M computing power resources, and M is greater than or equal to N.
3. The method of claim 2, wherein determining a shared configuration of M computational resources in the first network comprises:
setting a computing resource sharing rate of the first network;
determining a product between the M computing resources and the computing resource sharing rate as a maximum sharing rate of the M computing resources.
4. The method of claim 2, wherein determining a shared configuration of M computational resources in the first network comprises:
determining real-time task analysis amount of the first network in each time period;
and setting the time period in which the real-time task analysis amount is less than the preset analysis amount as the sharing time period of the M computational power resources.
5. The method of claim 1, wherein before sharing the N computational power resources in the first network to the second network in a case where the current time is within the computational power resource sharing time period and the target task belongs to a non-real-time task, the method further comprises:
and under the condition that computing resources for executing the real-time tasks in the second network are insufficient within a preset time period, converting part of the real-time tasks in the second network resources into non-real-time tasks to obtain the target tasks.
6. The method of claim 1, wherein in the case that the current time is within the computing power resource sharing time period and the target task belongs to a non-real-time task, sharing N computing power resources in the first network to the second network comprises:
determining a load rate of each computing resource in the first network within the computing resource sharing time period to determine the N computing resources;
determining K computing resources required by the target task in the second network, wherein K is an integer greater than or equal to 0;
sharing the N computing resources into the second network if the K is less than or equal to the N.
7. The method of claim 1, wherein after sharing the N computational power resources in the first network to the second network if the current time is within the computational power resource sharing time period and the target task belongs to the non-real-time task, the method further comprises:
recording the serial numbers of the N computing resources;
and rolling back the N computing power resources to the first network according to the serial numbers of the N computing power resources.
8. A task processing apparatus, comprising:
the system comprises a first sharing module, a second sharing module and a first resource sharing module, wherein the first sharing module is used for sharing N computing resources in a first network to a second network under the condition that the current time is within a computing resource sharing time period and a target task belongs to a non-real-time task, the target task belongs to the second network, N is a natural number which is greater than or equal to 1, and the N computing resources are in an idle state;
and the first processing module is used for processing the target task by utilizing the N computing resources.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 7 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 7.
CN202110084621.2A 2021-01-21 2021-01-21 Task processing method and device, storage medium and electronic device Pending CN112817753A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110084621.2A CN112817753A (en) 2021-01-21 2021-01-21 Task processing method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110084621.2A CN112817753A (en) 2021-01-21 2021-01-21 Task processing method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN112817753A true CN112817753A (en) 2021-05-18

Family

ID=75858660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110084621.2A Pending CN112817753A (en) 2021-01-21 2021-01-21 Task processing method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN112817753A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807539A (en) * 2021-09-06 2021-12-17 北狐数字科技(上海)有限公司 High multiplexing method, system, medium and terminal for machine learning and graph computing power
CN116886935A (en) * 2023-09-08 2023-10-13 中移(杭州)信息技术有限公司 Coding calculation force sharing method, device and equipment
CN116886935B (en) * 2023-09-08 2023-12-26 中移(杭州)信息技术有限公司 Coding calculation force sharing method, device and equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination