CN116909758B - Processing method and device of calculation task and electronic equipment - Google Patents


Info

Publication number
CN116909758B
CN116909758B (application CN202311178976.3A)
Authority
CN
China
Prior art keywords
power
computing
network
task
information
Prior art date
Legal status: Active
Application number
CN202311178976.3A
Other languages
Chinese (zh)
Other versions
CN116909758A (en)
Inventor
王亚平 (Wang Yaping)
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Suzhou Software Technology Co Ltd
Priority to CN202311178976.3A
Publication of CN116909758A
Application granted
Publication of CN116909758B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/505Clust
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/508Monitor
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a method and an apparatus for processing computing power tasks, and an electronic device, in the field of data processing. The method comprises: first, acquiring load information corresponding to each second computing power network's execution of computing power tasks sent from a first computing power network; determining, from the second computing power networks and according to the load information, a target computing power network for executing a target computing power task, where the target computing power task is a computing power task in the first computing power network; and then sending the target computing power task to the target computing power network for execution. Based on the kafka system and the load information between the first computing power network and each second computing power network, the method reasonably distributes the computing power tasks of the first computing power network and can improve the success rate of task execution.

Description

Processing method and device of calculation task and electronic equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method and an apparatus for processing a computing task, and an electronic device.
Background
A computing power network can accurately and efficiently execute very large-scale tasks.
At present, different computing power networks differ greatly and their computing power is limited. When too many computing power tasks are executed, the network's load becomes excessive, which increases resource contention, lengthens task execution time, and can even cause tasks to fail, affecting the network's quality of service.
Disclosure of Invention
In view of this, the present application provides a method, an apparatus, and an electronic device for processing computing power tasks, mainly aiming to address the following technical problems: different computing power networks differ greatly and their computing power is limited, so when too many computing power tasks are executed the network becomes overloaded, resource contention increases, task execution time grows, tasks may even fail, and the network's quality of service suffers.
In a first aspect, the present application provides a method for processing a computing task, including:
acquiring load information corresponding to each second computing power network's execution of computing power tasks sent from the first computing power network;
determining, from the second computing power networks and according to the load information, a target computing power network for executing a target computing power task, wherein the target computing power task is a computing power task in the first computing power network;
and sending the target computing power task to the target computing power network for execution.
In a second aspect, the present application provides a processing apparatus for a computing task, including:
the acquisition module is configured to acquire load information corresponding to the execution of the calculation tasks sent from the first calculation network to each second calculation network respectively;
a determining module configured to determine, from the respective second computing networks, a target computing network that performs a target computing task according to the load information, wherein the target computing task is a computing task in the first computing network;
and the execution module is configured to send the target computing power task to the target computing power network for execution.
In a third aspect, the present application provides a system for processing a computing task, including: a plurality of computing networks;
the plurality of computing power networks are respectively configured with kafka-connectors, and the kafka-connectors are used for collecting load information and computing power tasks of the plurality of computing power networks and storing the load information and computing power tasks into a kafka cluster;
and the first computing network in the plurality of computing networks sends the target computing task to the target computing network in each second computing network for execution through kafka-stream according to the load information corresponding to each second computing network collected by kafka-connector.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
In a fifth aspect, the present application provides an electronic device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, the processor implementing the method of the first aspect when executing the computer program.
By means of the above technical scheme, this application provides a processing method, apparatus, and electronic device for computing power tasks: first, the load information corresponding to each second computing power network's execution of computing power tasks sent from the first computing power network is acquired; a target computing power network for executing a target computing power task is then determined from the second computing power networks according to the load information, where the target computing power task is a task in the first computing power network; finally, the target computing power task is sent to the target computing power network for execution. Compared with the prior art, the computing power tasks of the first computing power network can be reasonably distributed according to the load information between the first computing power network and each second computing power network, improving the success rate of task execution and the quality of service of the computing power networks.
The foregoing description is only an overview of the technical solutions of the present application, and may be implemented according to the content of the specification in order to make the technical means of the present application more clearly understood, and in order to make the above-mentioned and other objects, features and advantages of the present application more clearly understood, the following detailed description of the present application will be given.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a method for processing a computing task according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for processing a computing task according to an embodiment of the present application;
FIG. 3 illustrates a functional schematic of an example provided by an embodiment of the present application;
FIG. 4 illustrates an exemplary architecture diagram provided by embodiments of the present application;
FIG. 5 illustrates a functional schematic of an example provided by an embodiment of the present application;
FIG. 6 illustrates a functional schematic of an example provided by an embodiment of the present application;
fig. 7 is a schematic structural diagram of a device for processing a computing task according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
In order to address the technical problems that current computing power networks differ greatly, their computing power is limited, and executing too many computing power tasks overloads the network, increases resource contention, lengthens task execution time, can even cause task failures, and affects the network's quality of service, this embodiment provides a method for processing computing power tasks. As shown in Fig. 1, the method includes:
and 101, acquiring load information corresponding to the execution of the power calculation tasks sent from the first power calculation network to each second power calculation network.
A computing network is a new type of information infrastructure that allocates and flexibly schedules computing resources, storage resources, and network resources as needed among clouds, networks, edges, according to business needs.
In this embodiment of the application, the first computing power network may be any one of a number of different computing power networks — specifically, any network that needs to send computing power tasks to others — and the second computing power networks are the computing power networks other than the first.
The execution subject of the present embodiment may be a processing apparatus or device for computing a power task, and may be configured on the end side of the first computing network.
In the embodiment of the application, the load information corresponding to a second computing power network's execution of computing power tasks sent from the first computing power network can be used to evaluate the load condition of that second computing power network when it executes tasks sent from the first computing power network.
By acquiring this load information for each second computing power network, it can be measured to which second computing power network a computing power task of the first computing power network should be sent for execution.
Optionally, the load information corresponding to each second computing power network's execution of tasks sent from the first computing power network may be a set of load information, including the load information corresponding to network A (a second computing power network), the load information corresponding to network B, the load information corresponding to network C, and so on.
Step 102: determining, from the second computing power networks and according to the load information, a target computing power network for executing the target computing power task.
The target power calculation task is a power calculation task in the first power calculation network.
In an embodiment of the present application, the target computing power task may be a computing power task that failed to execute or timed out in the first computing power network. Correspondingly, the target computing power network is the second computing power network determined by the scheduling algorithm, from among the second computing power networks and according to the load information obtained in step 101, to be most suitable for processing the target task.
Step 103: sending the target computing power task to the target computing power network for execution.
In some embodiments, the result of the target computing power network performing the target computing power task may also be returned to the first computing power network, or directly to a requestor of the target computing power task, or the like.
Compared with the prior art, by applying the technical scheme of the embodiment, the computing power tasks in the first computing power network can be reasonably distributed according to the load information between the first computing power network and each second computing power network, the success rate of executing the computing power tasks can be improved, and the service quality of the computing power network can be further improved.
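Steps 101 to 103 can be sketched in code as follows — a minimal illustration only, not the patent's implementation; the network names and the `get_load` and `send` callables are hypothetical stand-ins for the kafka-based machinery described later:

```python
def process_task(task, second_networks, get_load, send):
    """Pick the second computing power network with the smallest load and dispatch the task to it."""
    loads = {net: get_load(net) for net in second_networks}   # step 101: gather load information
    target = min(loads, key=loads.get)                        # step 102: choose the least-loaded network
    send(target, task)                                        # step 103: send the task for execution
    return target
```

For example, with loads {"B": 1.0, "C": 2.0, "D": 3.0}, the task is dispatched to network B.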
To further illustrate the implementation of the method of this embodiment, this embodiment provides a specific method as shown in fig. 2, which includes:
step 201, distance information between the first power computing network and each second power computing network is obtained, and the first power computing network sends historical record information of the message to each second power computing network.
Optionally, the distance information between the first computing power network and the second computing power networks may be a set including the distance between the first computing power network and network A (a second computing power network), the distance between the first computing power network and network B, the distance between the first computing power network and network C, and so on.
In an embodiment of the present application, the history information may include the historical number of times the first computing power network successfully sent messages to a second computing power network.
Accordingly, the history information of messages sent by the first computing power network to the second computing power networks may include a data set of the historical success counts of messages sent to network A (a second computing power network), to network B, to network C, and so on.
Optionally, taking the first power computing network as the power computing network a, each second power computing network includes the power computing network B, the power computing network C, and the power computing network D, and the task of executing overtime or executing failure in the power computing network a is taken as the task a as an example.
For example, the distance X1 between computing power network A and network B, the distance X2 between network A and network C, and the distance X3 between network A and network D are obtained respectively.
Alternatively, the history information may include a historical number of successes for the first power network to send messages to the second power network.
Correspondingly, the historical success counts are obtained respectively: N1 for messages sent from computing power network A to network B, N2 for messages from network A to network C, and N3 for messages from network A to network D.
In this way, according to the distance information between the first computing power network and each second computing power network, and the history information of messages sent by the first computing power network to each second computing power network, the load information corresponding to each second computing power network's execution of tasks sent from the first computing power network can be analyzed. This may specifically be performed as shown in steps 202 to 203.
Step 202, according to the attribute information and the real-time status information of each second computing power network, obtaining the corresponding base load of each second computing power network, and the message transmission time from the first computing power network to each second computing power network.
Optionally, step 202 may specifically include: and determining the corresponding basic load of each second computing power network according to the processor information, the memory information and the hard disk information of each second computing power network and combining the message transmission speed of the first computing power network to each second computing power network.
Optionally, step 202 specifically further includes: and carrying out weighted summation calculation based on the processor information, the memory information, the hard disk information and the message transmission speed of the second computing power network to obtain a corresponding basic load of the second computing power network.
Optionally, a kafka-broker is deployed in each of the different computing power networks.
Optionally, step 202 specifically further includes: sending a detection message through the kafka-broker of the first computing power network to acquire the attribute information and real-time status information of each second computing power network.
Optionally, step 202 specifically further includes: and acquiring the target calculation task through the kafka-connector of the first calculation network.
In the embodiment of the present application, a functional schematic of the kafka-connector is shown in Fig. 3. The kafka-connector serves as a collection component between different message middleware: one end is defined as the source and the other as the sink, and its function is to transmit data from the source end to the sink (the kafka cluster) in key-value form. If the data volume is large, multiple kafka-connectors may be configured to improve concurrency. The corresponding key-value distributed storage has the advantages of fast queries, large storage capacity, and high-concurrency support; it is well suited to primary-key queries but cannot perform complex conditional queries.
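As an illustration only, a Kafka Connect source connector is registered with a configuration like the sketch below. The connector name, file path, and topic are hypothetical placeholders, and the built-in file source connector is used purely as a stand-in — a real deployment would pick a connector class matching its message middleware:

```python
# Hypothetical Kafka Connect source-connector configuration (all names are placeholders).
connector_config = {
    "name": "power-network-a-source",
    "config": {
        # Kafka's bundled file source connector, used here only as a stand-in.
        "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
        "tasks.max": "4",                    # several tasks for higher concurrency, as the text notes
        "file": "/var/log/power-tasks.log",  # hypothetical source of load/task records
        "topic": "power-network-load",       # hypothetical topic in the kafka cluster (the sink)
    },
}
```

Such a configuration would typically be POSTed as JSON to the Kafka Connect REST endpoint of the cluster.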
Alternatively, a schematic architecture of kafka is shown in Fig. 4. kafka is a high-throughput distributed publish-subscribe messaging system that can handle all the action-stream data of consumers in a website. Such actions (web browsing, searches, and other user actions) are a key factor in many social functions on the modern web. Because of throughput requirements, these data are typically handled via log processing and log aggregation.
In an embodiment of the present application, a functional schematic of the kafka-broker is shown in Fig. 5. kafka can serve as a temporary storage platform for messages or data, with each instance acting as a kafka-broker and traffic balanced among multiple brokers. Each topic in a broker is stored as a separate directory; the number of partitions per topic can be configured to ensure load balancing, and each partition can be configured with a certain number of replicas to ensure that data is not lost.
Optionally, the attribute information and real-time status information of a second computing power network may include the network's central processing unit (CPU) processing speed, memory size, network transmission speed, disk storage rate, and message transmission time.
In this embodiment of the present application, the base load of a second computing power network is first obtained through the scheduling algorithm, as shown in Equation 1:

F = w1*S_cpu + w2*S_net + w3*S_mem + w4*S_disk (Equation 1)

In Equation 1, S_cpu, S_net, S_mem, and S_disk are the CPU processing speed, network transmission speed, memory size, and disk storage speed respectively, and w1, w2, w3, and w4 are the corresponding weights, with w1 + w2 + w3 + w4 = 1.
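The weighted-sum base load of formula one can be sketched directly; the default equal weights and the assumption that the four attribute values have been normalized to a common scale are illustrative choices, not specified by the patent:

```python
def base_load(cpu_speed, net_speed, mem_size, disk_speed,
              weights=(0.25, 0.25, 0.25, 0.25)):
    """Formula one: weighted sum of the four attributes; the weights must sum to 1."""
    w1, w2, w3, w4 = weights
    assert abs(w1 + w2 + w3 + w4 - 1.0) < 1e-9, "weights must sum to 1"
    return w1 * cpu_speed + w2 * net_speed + w3 * mem_size + w4 * disk_speed
```

For example, `base_load(1.0, 1.0, 1.0, 1.0)` returns 1.0 with the default weights.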
For example, based on step 201, by sending detection messages through the kafka-broker of computing power network A to networks B, C, and D respectively, the attribute information and real-time status information of networks B, C, and D can be obtained, and Equation 1 yields the base load F1 of network B, the base load F2 of network C, and the base load F3 of network D.
Correspondingly, the message transmission times can be obtained: T1 from network A to network B, T2 from network A to network C, and T3 from network A to network D.
Step 203, determining load information according to the distance information and the history information and combining the base load and the message transmission time.
Optionally, the history information includes: the first power network sends a historical number of successes of the message to the second power network.
Optionally, step 203 may specifically include: multiplying the historical success times by the base load, dividing the product by the distance information, and multiplying the product by the message transmission time to obtain the load information.
In the embodiment of the present application, the scheduling algorithm may determine the load information as shown in Equation 2:

L_ij = TS_ij * P(S_j) / D_ij * L(T_ij) (Equation 2)

In Equation 2, L_ij represents the load information between computing power network i and network j, TS_ij the historical number of messages successfully sent from network i to network j, P(S_j) the base load of network j, D_ij the distance between network i and network j, and L(T_ij) the message transmission time from network i to network j.
For example, based on step 202 and Equation 2, the load information between network A and network B is M1 = N1*F1/X1*T1; correspondingly, the load information between network A and network C is M2 = N2*F2/X2*T2, and between network A and network D it is M3 = N3*F3/X3*T3.
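Formula two translates directly into code; the sample numbers below are made up purely to show the left-to-right evaluation order (multiply, divide, multiply) and to produce an ordering M1 < M2 < M3 like the example in the text:

```python
def load_info(success_count, base_load, distance, transmit_time):
    """Formula two: L_ij = TS_ij * P(S_j) / D_ij * L(T_ij), evaluated left to right."""
    return success_count * base_load / distance * transmit_time

# Illustrative, entirely hypothetical values for networks B, C, and D.
M1 = load_info(10, 0.5, 5.0, 2.0)   # network B
M2 = load_info(8, 0.9, 3.0, 4.0)    # network C
M3 = load_info(6, 1.0, 2.0, 6.0)    # network D
```

With these values M1 is the smallest, so network B would be chosen as the target in step 204.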
Optionally, the scheduling algorithm shows that the CPU processing speed, memory size, network transmission speed, and disk storage speed of a computing power network are all factors in optimal scheduling, and the distances between the computing power networks are a factor as well. Meanwhile, the message transmission time is an important scheduling factor for different networks: the longer the transmission time, the greater the impact on computing power, although if necessary a message can be sent while the network is idle. The historical number of successfully sent messages is the final factor, taken as a record of the network's past performance; if the combined factors are too poor, computation takes too long and some tasks will fail on certain networks.
Step 204: determining, from the second computing power networks and according to the load information, a target computing power network for executing the target computing power task.
The target power calculation task is a power calculation task in the first power calculation network.
For example, based on step 203, the load information values M1, M2, and M3 are compared; if M1 < M2 < M3, computing power network B, which has the minimal load information, is determined to be the target computing power network.
Step 205, the target computing power task is sent to the target computing power network to be executed.
Optionally, step 205 may specifically include: the target power calculation task is sent to the target power calculation network to be executed through kafka-stream.
Optionally, step 205 specifically further includes: determining other power calculation networks different from the target power calculation network from each second power calculation network according to the load information if the target power calculation network fails to execute the target power calculation task; and sending the target computing power task to other computing power networks for execution.
Optionally, the target computing task is a computing task in the first computing network that fails to execute.
In the embodiment of the present application, a functional schematic of kafka-stream is shown in Fig. 6. kafka-stream, kafka's built-in stream-processing library, can forward data messages to different computing power network centers, and supports both KTable and KStream for forwarding messages. KTable is a batch mode with high throughput; KStream is a streaming mode with high real-time performance. kafka-stream instances form a mapping with kafka's partitions, so the concurrency of kafka-stream can be configured to further improve the message sending rate. In addition, kafka-stream can temporarily store tasks that fail or time out in RocksDB and reschedule them to other computing power networks in real time.
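The failover behaviour described here — dispatch to the lowest-load network first, then reschedule a failed or timed-out task to the next candidate — can be sketched as follows; `send` is a hypothetical callable returning True on success, standing in for the kafka-stream forwarding:

```python
def dispatch_with_failover(task, load_by_network, send):
    """Try candidate networks in ascending order of load; return the first that succeeds."""
    for name in sorted(load_by_network, key=load_by_network.get):
        if send(name, task):       # hypothetical delivery/execution attempt
            return name
    return None                    # every candidate failed; the task stays buffered for retry
```

With loads {"B": 1.0, "C": 2.0, "D": 3.0}, network B is tried first; if it fails, the task moves on to C, mirroring the example with task a below.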
For example, based on step 204, the target computing power task a is sent via kafka-stream to computing power network B for execution; if network B fails to execute task a or times out, computing power network A can, according to the scheduling result M1 < M2 < M3, send task a to computing power network C for execution.
Optionally, if computing power network B fails to execute task a or times out, network B may instead be treated as the first computing power network, and the scheduling computation re-run to determine a target computing power network for executing task a.
Existing computing power networks differ greatly and have limited computing power; when too many tasks run and the load is excessive, resource contention inevitably increases, task execution time grows, and tasks may even fail, affecting the networks' quality of service. The prevailing remedy is to statically pre-allocate resources to multiple tenants, distribute the tenants by layer, and then statically arrange the different computing tasks. But each tenant's resources cannot be estimated accurately: if the allocated resources are insufficient for a task's execution requirements, its execution time is prolonged; if resources are over-allocated, they are wasted. During execution, if one task occupies a large share of data transmission for a long time, new tasks cannot start normally; and if a task fails, it can only be retried manually, which greatly increases inter-network traffic and can even disturb tasks already in progress. Since computing power tasks typically run for a long time (from a few hours to several days), the end result can be complete paralysis between core computing power networks.
To solve the above problem, this embodiment provides a processing system for a computing task, including: a plurality of computing networks;
the power computing system comprises a plurality of power computing networks, a plurality of power computing networks and a power computing system, wherein the power computing networks are respectively provided with a kafka-connector, and the kafka-connector is used for collecting load information and power computing tasks of the power computing networks and storing the load information and the power computing tasks into a kafka cluster; and the first power network in the plurality of power networks sends the target power calculation task to the target power calculation network in each second power network for execution through the kafka-stream according to the load information corresponding to each second power network collected by the kafka-connector.
For example, with on-demand resource allocation, each time a task is submitted it is scheduled to a particular computing power network according to the traffic configuration acquired by the kafka-connector; if that network is currently busy, the task can be temporarily dumped to kafka for staging. This addresses the uneven resource distribution and resource waste of independently deployed computing power networks, allocating resources to each task on demand to ensure successful execution. Using the resource loads collected by the kafka-connector, a scheduling algorithm can reasonably distribute computing tasks across the different core computing power networks. The high throughput and low latency of kafka are exploited: failed tasks are forwarded to other computing power networks in real time via kafka-stream, transparently to the client, which only needs to attend to the task execution result.
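As a minimal sketch of this on-demand dispatch decision (the function name `pick_network`, the load dictionary, and the busy threshold are illustrative assumptions, not details fixed by this embodiment):

```python
def pick_network(loads, busy_threshold):
    """Choose the least-loaded computing power network for a newly
    submitted task; return None when every network is busy, meaning
    the task should be dumped to the kafka cluster for staging."""
    candidate = min(loads, key=loads.get)
    if loads[candidate] >= busy_threshold:
        return None  # all networks busy: stage the task in kafka
    return candidate
```

When `pick_network` returns `None`, the task would sit in kafka until the kafka-connector reports a network whose load has dropped below the threshold.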
In this embodiment of the application, the weights may further be computed in real time by artificial-intelligence and big-data algorithms, or determined by comparing software architecture models, that is, by checking whether different architecture models adopt the same components, the same transmission models, and the same flows.
Compared with the prior art, applying the technical scheme of this embodiment unifies the task-transmission process across different computing power network platforms through a message-middleware model (kafka) underlying the computing network, fully exploits kafka's high throughput and low latency, and reduces the impact on computing tasks. The loads and tasks of the different platforms are collected by the kafka-connector and dumped to the kafka cluster; kafka-stream then transmits the tasks to the different platforms in real time or on a schedule, enhancing the timeliness of task transmission and achieving load balancing. By jointly considering the historical success count, distance information, message-sending duration, and so on, each computing power network can process network information faster and more evenly. The scheduling algorithm determines another computing power network able to execute a timed-out or failed computing task and sends the task there for execution, which solves the uneven resource allocation and resource waste of independently deployed computing power networks and allocates resources to each task on demand, ensuring successful execution.
To illustrate the specific implementation procedure of the present embodiment, the following specific application examples are given, but not limited thereto:
All computing tasks flow in through the kafka-connector, which decides, according to the delay configuration, whether to synchronize them to other core computing power networks: tasks requiring real-time forwarding are forwarded directly to the computing power network platform; if that network is busy, the task goes to the kafka cluster for staging.
The kafka cluster serves as temporary storage for tasks; it also acts as a staging pool that gathers the traffic of the different computing power network platforms collected periodically by the kafka-connector.
kafka-stream is responsible for dumping tasks to the other computing power network platforms; the number of concurrent tasks is configured through a KTable or KStream, which guarantees real-time transmission while also improving dump speed and throughput. Its other function is rescheduling failed tasks into another computing power network to continue execution until they succeed.
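The reschedule-until-success behavior attributed to kafka-stream can be sketched as a plain loop (a simplification: `execute` and the ordered candidate list are assumptions, and a real implementation would forward the task through Kafka Streams rather than call the networks directly):

```python
def run_with_failover(task, candidates, execute):
    """Try `task` on each candidate network in scheduling order.
    `execute(network, task)` returns True on success and False on
    failure or timeout.  Returns the network that succeeded, or
    None when all candidates are exhausted (task stays staged)."""
    for network in candidates:
        if execute(network, task):
            return network
    return None
```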
Further, as a specific implementation of the method shown in fig. 1 and fig. 2, the present embodiment provides a processing apparatus for a computing task, as shown in fig. 7, where the apparatus includes: an acquisition module 31, a determination module 32, and an execution module 33.
an obtaining module 31, configured to obtain load information corresponding to sending computing tasks from the first computing power network to the respective second computing power networks for execution;
a determining module 32, configured to determine, from the respective second computing power networks and according to the load information, a target computing power network for executing a target computing task, where the target computing task is a computing task in the first computing power network;
an execution module 33, configured to send the target computing task to the target computing power network for execution.
In some examples of this embodiment, the obtaining module 31 is specifically configured to derive the load information from the distance information between the first computing power network and each second computing power network and from the history information of messages sent by the first computing power network to each second computing power network.
In some examples of this embodiment, the obtaining module 31 is further configured to obtain, according to the attribute information and the real-time status information of the second computing power networks, a base load corresponding to the second computing power networks, and a message transmission time of the first computing power network to the second computing power networks, respectively; correspondingly, the obtaining module 31 is specifically configured to determine the load information according to the distance information and the history information, and combine the base load and the message transmission time.
In some examples of this embodiment, the obtaining module 31 is specifically further configured to determine, according to the processor information, the memory information, and the hard disk information of each second computing power network, in combination with the message transmission speed from the first computing power network to each second computing power network, a corresponding base load of each second computing power network.
In some examples of this embodiment, the obtaining module 31 is specifically configured to perform weighted summation calculation based on the processor information, the memory information, the hard disk information and the message transmission speed of the second computing power network, so as to obtain a base load corresponding to the second computing power network.
In some examples of this embodiment, the history information includes: the historical success count of messages sent by the first computing power network to the second computing power network; correspondingly, the obtaining module 31 is specifically further configured to multiply the historical success count by the base load, divide the product by the distance information, and then multiply by the message transmission time to obtain the load information.
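Putting the two steps together, the metric works out to base_load = a weighted sum of the resource figures, and load = history_success × base_load ÷ distance × transmission_time. A sketch under assumed weights (the patent fixes neither concrete weight values nor the scale of the resource figures):

```python
def base_load(cpu, mem, disk, speed, weights=(0.3, 0.3, 0.2, 0.2)):
    """Weighted sum of processor, memory, hard-disk and
    message-transmission-speed figures for a second network."""
    w_cpu, w_mem, w_disk, w_speed = weights
    return w_cpu * cpu + w_mem * mem + w_disk * disk + w_speed * speed

def load_info(history_success, base, distance, transmit_time):
    """history_success * base_load / distance * transmit_time."""
    return history_success * base / distance * transmit_time
```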
In some examples of this embodiment, a kafka-broker is configured in each of the different computing power networks; correspondingly, the obtaining module 31 is specifically further configured to send a probe message through the kafka-broker of the first computing power network to acquire the attribute information and real-time status information of each second computing power network.
In some examples of this embodiment, the obtaining module 31 is specifically further configured to obtain the target computing force task through a kafka-connector of the first computing force network; accordingly, the execution module 33 is specifically configured to send the target computing power task to the target computing power network for execution by kafka-stream.
The execution module 33 is further configured to determine, from the respective second computing power networks and according to the load information, another computing power network different from the target computing power network if the target computing power network fails to execute the target computing task, and to send the target computing task to that other computing power network for execution.
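A sketch of this fallback selection (names are illustrative; it also assumes that a smaller load value marks a better candidate, a ranking direction the embodiment does not state explicitly):

```python
def next_network(loads, failed):
    """Among the second networks' load info, pick the best candidate
    that has not already failed the task; None if none remain."""
    remaining = {n: v for n, v in loads.items() if n not in failed}
    if not remaining:
        return None  # no alternative network; task stays staged
    return min(remaining, key=remaining.get)
```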
In some examples of this embodiment, the target computing task is a computing task in the first computing network that failed to execute.
It should be noted that, for other corresponding descriptions of each functional unit related to the processing device for computing the power task provided in this embodiment, reference may be made to corresponding descriptions in fig. 1 and fig. 2, and no further description is given here.
Based on the above-described methods shown in fig. 1 to 6, correspondingly, the present embodiment further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the above-described methods shown in fig. 1 to 6.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.), and includes several instructions for causing a computer device (may be a personal computer, a server, or a network device, etc.) to perform the method of each implementation scenario of the present application.
Based on the method shown in fig. 1 to 6 and the virtual device embodiment shown in fig. 7, in order to achieve the above object, the embodiment of the present application further provides an electronic device, such as a personal computer, a server, a notebook computer, a smart phone, a smart robot, and other smart terminals, where the device includes a storage medium and a processor; a storage medium storing a computer program; a processor for executing a computer program to implement the method as described above and shown in fig. 1 to 6.
Optionally, the entity device may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and so on. The user interface may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be appreciated by those skilled in the art that the physical device structure provided in this embodiment is not limiting; the device may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may also include an operating system, a network communication module. The operating system is a program that manages the physical device hardware and software resources described above, supporting the execution of information handling programs and other software and/or programs. The network communication module is used for realizing communication among all components in the storage medium and communication with other hardware and software in the information processing entity equipment.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the application to enable one skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A method of processing a computing task, the method comprising:
acquiring load information corresponding to sending computing tasks from a first computing power network to respective second computing power networks for execution, wherein the acquiring comprises: obtaining, according to attribute information and real-time status information of each second computing power network, a base load corresponding to that second computing power network, and a message transmission time from the first computing power network to each second computing power network; and multiplying the historical success count of messages sent by the first computing power network to the second computing power network by the base load, dividing the obtained product by the distance information between the first computing power network and each second computing power network, and then multiplying by the message transmission time to obtain the load information;
determining a target power computing network for executing a target power computing task from each second power computing network according to the load information, wherein the target power computing task is a power computing task in the first power computing network;
and sending the target computing power task to the target computing power network for execution.
2. The method of claim 1, wherein the acquiring load information corresponding to the sending of the computing power tasks from the first computing power network to the respective second computing power networks, respectively, comprises:
and analyzing the load information according to the distance information between the first power computing network and each second power computing network and the history record information of the messages sent by the first power computing network to each second power computing network.
3. The method according to claim 1, wherein the obtaining the corresponding base load of each second computing power network according to the attribute information and the real-time status information of each second computing power network includes:
and determining the corresponding basic load of each second power computing network according to the processor information, the memory information and the hard disk information of each second power computing network and combining the message transmission speed of the first power computing network to each second power computing network.
4. The method of claim 3, wherein determining the corresponding base load of each second computing power network according to the processor information, the memory information, and the hard disk information of each second computing power network, in combination with the message transmission speed from the first computing power network to each second computing power network, comprises:
and carrying out weighted summation calculation based on the processor information, the memory information, the hard disk information and the message transmission speed of the second computing power network to obtain a corresponding basic load of the second computing power network.
5. The method of claim 1, wherein a kafka-broker is configured in each of the different computing power networks;
before the base load corresponding to each second power network is obtained according to the attribute information and the real-time state information of each second power network, and the message transmission time from the first power network to each second power network is respectively obtained, the method further comprises:
and sending a probe message through the kafka-broker of the first computing power network to acquire the attribute information and real-time status information of each second computing power network.
6. The method of claim 5, wherein the method further comprises:
acquiring the target computing task through a kafka-connector of the first computing power network;
the sending the target computing power task to the target computing power network for execution comprises the following steps:
and transmitting the target computing power task to the target computing power network for execution through kafka-stream.
7. The method of claim 1, wherein after sending the target computing power task to the target computing power network for execution, the method further comprises:
determining other computing power networks different from the target computing power network from the second computing power networks according to the load information if the target computing power network fails to execute the target computing power task;
and sending the target computing power task to the other computing power network for execution.
8. The method of claim 1, wherein the target computing task is a computing task in the first computing network that failed to execute.
9. A system for processing a computing task, comprising: a plurality of computing networks;
the plurality of computing power networks are respectively configured with kafka-connectors, and the kafka-connectors are used for collecting load information and computing power tasks of the plurality of computing power networks and storing the load information and computing power tasks into a kafka cluster;
according to the load information corresponding to each second computing network collected by the kafka-connector, a first computing network in the plurality of computing networks sends a target computing task to the target computing network in each second computing network to be executed through the kafka-stream;
the process of obtaining the load information comprises: obtaining, according to attribute information and real-time status information of each second computing power network, a base load corresponding to that second computing power network, and a message transmission time from the first computing power network to each second computing power network; and multiplying the historical success count of messages sent by the first computing power network to the second computing power network by the base load, dividing the obtained product by the distance information between the first computing power network and each second computing power network, and then multiplying by the message transmission time to obtain the load information.
10. A computing task processing device, comprising:
the acquisition module is configured to acquire load information corresponding to the execution of the computing power tasks sent from the first computing power network to each second computing power network, and comprises the following steps: acquiring the corresponding basic load of each second power computing network and the message transmission time from the first power computing network to each second power computing network according to the attribute information and the real-time state information of each second power computing network; multiplying the historical success times of the first power network sending the message to the second power network by the basic load, dividing the obtained product by the distance information between the first power network and each second power network, and multiplying the obtained product by the message transmission time to obtain the load information;
a determining module configured to determine, from the respective second computing networks, a target computing network that performs a target computing task according to the load information, wherein the target computing task is a computing task in the first computing network;
and the execution module is configured to send the target computing power task to the target computing power network for execution.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any one of claims 1 to 8.
12. An electronic device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 8 when executing the computer program.
CN202311178976.3A 2023-09-13 2023-09-13 Processing method and device of calculation task and electronic equipment Active CN116909758B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311178976.3A CN116909758B (en) 2023-09-13 2023-09-13 Processing method and device of calculation task and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311178976.3A CN116909758B (en) 2023-09-13 2023-09-13 Processing method and device of calculation task and electronic equipment

Publications (2)

Publication Number Publication Date
CN116909758A CN116909758A (en) 2023-10-20
CN116909758B true CN116909758B (en) 2024-01-26

Family

ID=88355075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311178976.3A Active CN116909758B (en) 2023-09-13 2023-09-13 Processing method and device of calculation task and electronic equipment

Country Status (1)

Country Link
CN (1) CN116909758B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021190482A1 (en) * 2020-03-27 2021-09-30 中国移动通信有限公司研究院 Computing power processing network system and computing power processing method
CN114968573A (en) * 2022-05-24 2022-08-30 中国联合网络通信集团有限公司 Computing resource scheduling method and device and computer readable storage medium
CN115658282A (en) * 2022-08-18 2023-01-31 江苏腾威云天科技有限公司 Server computing power management distribution method, system, network device and storage medium
CN115714774A (en) * 2021-08-18 2023-02-24 维沃移动通信有限公司 Calculation force request, calculation force distribution and calculation force execution method, terminal and network side equipment
CN115766884A (en) * 2022-11-08 2023-03-07 网络通信与安全紫金山实验室 Computing task processing method, device, equipment and medium
CN116016221A (en) * 2023-01-05 2023-04-25 中国联合网络通信集团有限公司 Service processing method, device and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021190482A1 (en) * 2020-03-27 2021-09-30 中国移动通信有限公司研究院 Computing power processing network system and computing power processing method
CN115714774A (en) * 2021-08-18 2023-02-24 维沃移动通信有限公司 Calculation force request, calculation force distribution and calculation force execution method, terminal and network side equipment
CN114968573A (en) * 2022-05-24 2022-08-30 中国联合网络通信集团有限公司 Computing resource scheduling method and device and computer readable storage medium
CN115658282A (en) * 2022-08-18 2023-01-31 江苏腾威云天科技有限公司 Server computing power management distribution method, system, network device and storage medium
CN115766884A (en) * 2022-11-08 2023-03-07 网络通信与安全紫金山实验室 Computing task processing method, device, equipment and medium
CN116016221A (en) * 2023-01-05 2023-04-25 中国联合网络通信集团有限公司 Service processing method, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
From edge computing to the computing power network; Yu Qinglin; Industrial Science and Technology Innovation (03); full text *
Research on ubiquitous deterministic networks for computing-power-matched scheduling; Cai Yueping; Li Tianchi; Information and Communications Technologies (04); full text *

Also Published As

Publication number Publication date
CN116909758A (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN110049130B (en) Service deployment and task scheduling method and device based on edge computing
US20200137151A1 (en) Load balancing engine, client, distributed computing system, and load balancing method
US7631034B1 (en) Optimizing node selection when handling client requests for a distributed file system (DFS) based on a dynamically determined performance index
JP4421637B2 (en) Distributed scheduling of subtask processors
Ge et al. GA-based task scheduler for the cloud computing systems
US20200364608A1 (en) Communicating in a federated learning environment
WO2021159638A1 (en) Method, apparatus and device for scheduling cluster queue resources, and storage medium
US20100281482A1 (en) Application efficiency engine
US10783002B1 (en) Cost determination of a service call
US20090282413A1 (en) Scalable Scheduling of Tasks in Heterogeneous Systems
CN107515784B (en) Method and equipment for calculating resources in distributed system
US8782659B2 (en) Allocation of processing tasks between processing resources
EP2977898B1 (en) Task allocation in a computing environment
CN105491150A (en) Load balance processing method based on time sequence and system
CN114780244A (en) Container cloud resource elastic allocation method and device, computer equipment and medium
Li et al. Replica-aware task scheduling and load balanced cache placement for delay reduction in multi-cloud environment
CA2631255A1 (en) Scalable scheduling of tasks in heterogeneous systems
US20220318065A1 (en) Managing computer workloads across distributed computing clusters
Kanagasubaraja et al. Energy optimization algorithm to reduce power consumption in cloud data center
US9501321B1 (en) Weighted service requests throttling
CN107045452B (en) Virtual machine scheduling method and device
CN116909758B (en) Processing method and device of calculation task and electronic equipment
CN116700920A (en) Cloud primary hybrid deployment cluster resource scheduling method and device
CN110750350A (en) Large resource scheduling method, system, device and readable storage medium
CN115981871A (en) GPU resource scheduling method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant