CN113342535A - Task data distribution method and device - Google Patents

Task data distribution method and device

Info

Publication number
CN113342535A
CN113342535A
Authority
CN
China
Prior art keywords
data
task
server
updating
task data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110743652.4A
Other languages
Chinese (zh)
Inventor
许璟亮
周逢源
廖鸿存
周魁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202110743652.4A
Publication of CN113342535A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides a task data distribution method and device. It belongs to the technical field of artificial intelligence and can be applied in the financial field or other fields. The task data distribution method comprises: acquiring current server data and inputting it into a task allocation model created based on historical server data to obtain a task data allocation scheme; and distributing the current task data to the corresponding servers for processing according to that scheme. The update of the task allocation model depends on the server operation data analysis result, which is obtained by analyzing the servers to which the current task data has been allocated. The invention can effectively achieve dynamic, reasonable task allocation and improve device resource utilization as well as the processing efficiency and success rate of task data.

Description

Task data distribution method and device
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a task data distribution method and device.
Background
In quantitative trading and backtesting scenarios, both quantitative strategy execution and backtest task execution require tasks to be allocated across the task execution servers (containers). Currently, commonly used allocation methods include LRU (Least Recently Used) allocation and random allocation.
These methods can assign tasks, but they cannot sense the resource state of the servers or the processing requirements of the task data (such as memory, CPU, and storage). As a result, task processing carries a certain probability of failure, and system resources are partly wasted.
Disclosure of Invention
The embodiments of the present invention mainly aim to provide a task data allocation method and apparatus that effectively achieve dynamic, reasonable task allocation and improve device resource utilization as well as the processing efficiency and success rate of task data.
In order to achieve the above object, an embodiment of the present invention provides a task data allocation method, including:
acquiring current server data, and inputting the current server data into a task allocation model created based on historical server data to obtain a task data allocation scheme;
distributing the current task data to a corresponding server for processing according to the task data distribution scheme;
the updating of the task allocation model depends on the analysis result of the server operation data, and the analysis result of the server operation data is obtained by analyzing the server allocated with the current task data.
An embodiment of the present invention further provides a task data allocation apparatus, including:
the distribution scheme module is used for acquiring current server data, inputting the current server data into a task distribution model created based on historical server data, and obtaining a task data distribution scheme;
the task data distribution module is used for distributing the current task data to the corresponding servers for processing according to the task data distribution scheme;
the updating of the task allocation model depends on the analysis result of the server operation data, and the analysis result of the server operation data is obtained by analyzing the server allocated with the current task data.
The embodiment of the invention also provides a computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the steps of the task data distribution method when executing the computer program.
The embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the task data distribution method are implemented.
With the task data distribution method and device of the embodiment of the invention, the current server data is first input into the task allocation model to obtain a task data allocation scheme, and the current task data is then distributed to the corresponding servers for processing according to that scheme. The update of the task allocation model depends on the server operation data analysis result, which is obtained by analyzing the servers to which the current task data has been distributed. Dynamic, reasonable task allocation can thus be effectively achieved, improving device resource utilization and the processing efficiency and success rate of task data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a flow chart of a task data distribution method in an embodiment of the invention;
FIG. 2 is a flow chart of obtaining server operational data analysis results in an embodiment of the present invention;
FIG. 3 is a flow diagram of creating a task assignment model based on historical server data in an embodiment of the invention;
FIG. 4 is a schematic diagram of a neural network architecture in an embodiment of the present invention;
FIG. 5 is a block diagram showing the construction of a task data distribution apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a task data distribution device according to another embodiment of the present invention;
fig. 7 is a block diagram of a computer device in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
In view of the fact that the prior art cannot sense the resource state of the servers or the processing requirements of the task data, so that task processing has a certain failure probability and system resources are partly wasted, embodiments of the present invention provide a task data allocation method and apparatus.
Reinforcement learning is a branch of machine learning. Compared with the classic supervised and unsupervised learning problems, its defining characteristic is learning through interaction: the agent continuously learns from the rewards or penalties it obtains while interacting with the environment, and thereby adapts to that environment. Because this paradigm closely resembles the way humans acquire knowledge, reinforcement learning is regarded as an important path toward general AI. The present invention will be described in detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a task data distribution method according to an embodiment of the present invention. As shown in fig. 1, the task data allocation method includes:
s101: and acquiring current server data, and inputting the current server data into a task allocation model created based on historical server data to obtain a task data allocation scheme.
The current server data comprises the current CPU usage, current memory usage, current average task-processing time, current throughput, and current task execution success rate.
The task data distribution scheme gives each server's weight for the tasks. For example, when there are three servers in total, the output task data allocation scheme may be [0.6, 0.2, 0.2].
S102: and distributing the current task data to the corresponding server for processing according to the task data distribution scheme.
For example, when the task data allocation scheme is [0.6, 0.2, 0.2], the task data is allocated to the first, second, and third servers for processing in proportions of 60%, 20%, and 20%, respectively.
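The proportional dispatch described above can be sketched as follows. This is a minimal illustration: the helper function and the choice of splitting by task count are assumptions, not specified by the patent.

```python
def allocate_tasks(tasks, weights):
    """Split a batch of tasks across servers according to the allocation
    scheme weights, e.g. [0.6, 0.2, 0.2] for three servers.
    (Hypothetical helper; splitting by task count is an assumption.)"""
    assert abs(sum(weights) - 1.0) < 1e-6, "weights must sum to 1"
    n = len(tasks)
    buckets = []
    start = 0
    for j, w in enumerate(weights):
        # The last server takes the remainder so every task is assigned.
        end = n if j == len(weights) - 1 else start + round(w * n)
        buckets.append(tasks[start:end])
        start = end
    return buckets
```

With ten tasks and the scheme [0.6, 0.2, 0.2], the three servers receive six, two, and two tasks respectively.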
The updating of the task allocation model depends on the analysis result of the server operation data, and the analysis result of the server operation data is obtained by analyzing the server allocated with the current task data.
Fig. 2 is a flowchart of obtaining the server operation data analysis result according to an embodiment of the present invention. As shown in fig. 2, analyzing the servers to which the current task data has been allocated to obtain the server operation data analysis result includes:
s201: and acquiring each operation data of the server distributed with the current task data.
In specific implementation, each piece of running data of the servers to which the current task data has been allocated can be monitored by a real-time data monitoring device. The running data includes the real-time CPU usage, real-time CPU spike (glitch) data, real-time memory usage, real-time memory spike data, real-time average task-processing time, real-time throughput, and real-time task execution success rate. The real-time CPU spike data is derived from the monitored real-time CPU running data, and the real-time memory spike data from the monitored real-time memory running data.
S202: and obtaining the analysis result of the server operation data according to the comparison result of each operation data and each corresponding operation threshold value.
Each operation threshold can be dynamically adjusted based on the server running data acquired in real time. The operation thresholds include a CPU usage threshold, a CPU spike threshold, a memory usage threshold, a memory spike threshold, an average task-processing-time threshold, a throughput threshold, and a task execution success rate threshold.
Each piece of running data is compared with its corresponding operation threshold; when any running data of any server exceeds its threshold range, that server is determined to be abnormal. When any server remains abnormal continuously within a preset time range, an abnormal server operation data analysis result is output.
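The threshold comparison and the sustained-anomaly rule can be sketched as follows. The threshold values and function names are hypothetical; the patent names the metrics but publishes no concrete numbers.

```python
# Hypothetical threshold values; the patent names the metrics but does
# not publish concrete numbers.
THRESHOLDS = {
    "cpu_usage": 0.90,          # CPU usage threshold
    "memory_usage": 0.85,       # memory usage threshold
    "avg_task_time_ms": 500.0,  # average task-processing-time threshold
}

def check_server(metrics):
    """S202 comparison: a server is abnormal if any monitored running
    datum exceeds its corresponding operation threshold."""
    return any(metrics.get(key, 0) > limit for key, limit in THRESHOLDS.items())

def sustained_abnormal(history, window):
    """Only report an abnormal analysis result when the server stays
    abnormal for `window` consecutive samples (the preset time range)."""
    if len(history) < window:
        return False
    return all(check_server(sample) for sample in history[-window:])
```

A single spike is thus ignored; only a server that stays over threshold for the whole window triggers the abnormal analysis result.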
In one embodiment, the updating of the task allocation model in dependence on the server operation data analysis result comprises:
determining a model updating time period according to a server operation data analysis result obtained in a preset time period; and determining server data in the model updating time period as server updating data, and updating the task distribution model according to the server updating data.
In specific implementation, when any server remains abnormal continuously within the preset time range, the time point following the preset period is taken as the end of the model update period, and the model update period is determined from that time point and a preset server-data extraction range. The server data within the model update period (including CPU usage, memory usage, average task-processing time, throughput, task execution success rate, and the like) is taken as the server update data, and the task allocation model is retrained based on it.
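The window determination can be sketched as follows. The one-hour step between time points and the parameter names are illustrative assumptions; the patent fixes no concrete durations.

```python
from datetime import datetime, timedelta

def model_update_window(anomaly_time, extraction_range_hours):
    """Determine the model update time period: it ends at the time point
    following the period in which the anomaly persisted and reaches back
    over a preset server-data extraction range.  The one-hour step and
    the parameters are illustrative assumptions."""
    end = anomaly_time + timedelta(hours=1)   # next time point after the preset period
    start = end - timedelta(hours=extraction_range_hours)
    return start, end

def select_update_data(records, start, end):
    """Server data falling inside the window becomes the server update
    data used to retrain the task allocation model."""
    return [r for r in records if start <= r["timestamp"] < end]
```

The selected records would then be fed back into the training procedure described below for Fig. 3.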
The execution subject of the task data allocation method shown in fig. 1 may be a computer. As can be seen from fig. 1, the method first inputs the current server data into the task allocation model to obtain a task data allocation scheme, and then allocates the current task data to the corresponding servers for processing according to that scheme. The update of the task allocation model depends on the server operation data analysis result, which is obtained by analyzing the servers to which the current task data has been allocated. The method can therefore effectively achieve dynamic, reasonable task allocation and improve device resource utilization as well as the processing efficiency and success rate of task data.
FIG. 3 is a flow diagram of creating a task assignment model based on historical server data in an embodiment of the invention. As shown in FIG. 3, creating a task assignment model based on historical server data includes:
the following iterative process is performed:
s301: and determining a prediction task data distribution scheme according to the historical server data and the neural network model parameters.
Table 1 is a historical server data table (the table itself appears only as an image in the original filing and is omitted here). As shown in Table 1, the historical server data includes historical CPU usage, historical memory usage, historical average task-processing time, historical throughput, and historical task execution success rate.
Before being input into the neural network model, the historical server data needs to be formatted. The formatted historical server data includes:
the historical CPU usage C_{i,j} of the j-th server at the i-th time point; the historical memory usage M_{i,j} of the j-th server at the i-th time point; the historical average task-processing time T_i at the i-th time point; the historical throughput R_i at the i-th time point; and the historical task execution success rate S_{i,j} of the j-th server at the i-th time point, where i = 1, ..., t and j = 1, ..., m, t being the number of historical time points and m the number of servers.
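As an illustration, the formatted fields can be assembled into a single feature matrix. The row layout below is an assumption made for the example; the patent names the fields but not their arrangement.

```python
def format_history(cpu, mem, task_time, throughput, success):
    """Assemble the formatted historical server data into one feature
    matrix of t rows and 3*m + 2 columns: per server C_{i,j}, M_{i,j},
    S_{i,j}, plus the global T_i and R_i for each time point.
    cpu/mem/success are t x m nested lists; task_time/throughput are
    length-t lists.  (The layout is an illustrative choice.)"""
    t = len(task_time)
    m = len(cpu[0])
    rows = []
    for i in range(t):
        row = []
        for j in range(m):
            row += [cpu[i][j], mem[i][j], success[i][j]]
        row += [task_time[i], throughput[i]]
        rows.append(row)
    return rows
```

Each row then describes one historical time point across all m servers and can be fed to the model as one training sample.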
Fig. 4 is a schematic diagram of a neural network structure in an embodiment of the present invention. As shown in fig. 4, S1 is the first convolution process, S2 the first back-propagation process, S3 the second convolution process, S4 the second back-propagation process, S5 the matrix adjustment process, and S6 the fully connected process. In S301, local features are extracted layer by layer by a hierarchical convolutional neural network, combined by a fully connected network, and the final result is obtained through a Softmax function.
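As one small piece of this pipeline, the final Softmax step that turns the network's outputs into allocation weights can be illustrated as follows. This is only a sketch of that step, not the patent's full convolutional network.

```python
import math

def softmax(logits):
    """Softmax turns the fully connected layer's outputs into allocation
    weights that are positive and sum to 1 -- the form of the task data
    allocation scheme in S101, e.g. [0.6, 0.2, 0.2]."""
    mx = max(logits)                        # subtract the max for numerical stability
    exps = [math.exp(x - mx) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

Whichever server's logit is largest receives the largest share of the tasks, and the weights always form a valid proportion.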
S302: and determining a task loss function according to the actual task data distribution scheme corresponding to the predicted task data distribution scheme.
S303: judging whether the current iteration times are equal to the preset iteration times or not;
s304: and when the current iteration times are equal to the preset iteration times, creating a task distribution model according to the neural network model parameters.
S305: and when the current iteration times are not equal to the preset iteration times, updating the neural network model parameters according to the task loss function, and continuously executing the iteration processing.
The specific process of the embodiment of the invention is as follows:
1. Determine a predicted task data allocation scheme according to the historical server data and the neural network model parameters.
2. Determine a task loss function according to the actual task data allocation scheme corresponding to the predicted scheme.
3. When the current iteration count equals the preset iteration count, create the task allocation model from the neural network model parameters; otherwise, update the neural network model parameters according to the task loss function and return to step 1.
4. Acquire the current server data and input it into the task allocation model to obtain a task data allocation scheme.
5. Allocate the current task data to the corresponding servers for processing according to the scheme.
6. Acquire each piece of running data of the servers to which the current task data has been allocated, and obtain the server operation data analysis result by comparing each piece of running data with its corresponding threshold.
7. Determine the model update time period according to the server operation data analysis results acquired within the preset time period, take the server data within that period as the server update data, and update the task allocation model accordingly.
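Steps 1 to 3 above, creating the task allocation model, can be sketched as follows. A linear model with softmax output and a cross-entropy loss stands in for the patent's convolutional network; this is a simplification for illustration, not the claimed architecture.

```python
import math

def train_task_allocation_model(history, actual_schemes, max_iters, lr=0.01):
    """Sketch of steps 1-3: iteratively predict an allocation scheme,
    measure it against the actual scheme, and update the parameters
    until the preset iteration count is reached.  A linear-softmax model
    with cross-entropy loss stands in for the patent's convolutional
    network (an illustrative simplification)."""
    n_feat = len(history[0])
    n_srv = len(actual_schemes[0])
    params = [[0.0] * n_feat for _ in range(n_srv)]  # neural network model parameters

    def predict(x):
        # Step 1: predicted task data allocation scheme (softmax weights).
        logits = [sum(w * f for w, f in zip(row, x)) for row in params]
        mx = max(logits)
        exps = [math.exp(v - mx) for v in logits]
        total = sum(exps)
        return [e / total for e in exps]

    for _ in range(max_iters):                       # step 3: preset iteration count
        for x, y in zip(history, actual_schemes):
            p = predict(x)
            # Step 2: cross-entropy loss against the actual scheme; its
            # gradient with respect to each logit is simply p_k - y_k.
            err = [pk - yk for pk, yk in zip(p, y)]
            for k in range(n_srv):
                for f in range(n_feat):
                    params[k][f] -= lr * err[k] * x[f]
    return predict
```

After training, the returned predictor maps current server data to an allocation scheme, as in steps 4 and 5.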
To sum up, the task data allocation method of the embodiment of the present invention first inputs the current server data into the task allocation model to obtain a task data allocation scheme, and then allocates the current task data to the corresponding servers for processing according to that scheme. The update of the task allocation model depends on the server operation data analysis result, which is obtained by analyzing the servers to which the current task data has been allocated. This effectively achieves dynamic, reasonable task allocation and improves device resource utilization, the processing efficiency and success rate of task data, and the throughput of the whole system.
Based on the same inventive concept, the embodiment of the invention also provides a task data distribution device, and as the principle of solving the problems of the device is similar to the task data distribution method, the implementation of the device can refer to the implementation of the method, and repeated parts are not described again.
Fig. 5 is a block diagram showing the structure of a task data distribution device according to an embodiment of the present invention. FIG. 6 is a schematic diagram of a task data distribution device according to another embodiment of the present invention. As shown in fig. 5 to 6, the task data distributing apparatus includes:
the distribution scheme module is used for acquiring current server data, inputting the current server data into a task distribution model created based on historical server data, and obtaining a task data distribution scheme;
the task data distribution module is used for distributing the current task data to the corresponding servers for processing according to the task data distribution scheme;
the updating of the task allocation model depends on the analysis result of the server operation data, and the analysis result of the server operation data is obtained by analyzing the server allocated with the current task data.
In one embodiment, the apparatus further comprises a task allocation model creation module for performing the following iterative process:
determining a prediction task data distribution scheme according to the historical server data and the neural network model parameters;
determining a task loss function according to an actual task data distribution scheme corresponding to the predicted task data distribution scheme;
and when the current iteration times are equal to the preset iteration times, establishing a task distribution model according to the neural network model parameters, otherwise, updating the neural network model parameters according to the task loss function, and continuously executing the iteration processing.
In one embodiment, the apparatus further comprises:
the operation data acquisition module is used for acquiring each operation data of the server which is distributed with the current task data;
and the analysis result module is used for obtaining the analysis result of the server operation data according to the comparison result of each operation data and each corresponding operation threshold value.
In one embodiment, the apparatus further comprises:
the updating time period module is used for determining a model updating time period according to the analysis result of the server operation data acquired in the preset time period;
and the model updating module is used for determining the server data in the model updating time period as server updating data and updating the task distribution model according to the server updating data.
As shown in fig. 6, in practical applications, the task data distribution device includes a data management device, a neural network model device, a task scheduling adjustment device, a processing efficiency evaluation device, a real-time data monitoring device, and a model feedback device.
The data management device comprises a distribution scheme module which is used for managing current server data and historical server data and inputting the current server data or the historical server data into the neural network model device.
The neural network model device comprises the distribution scheme module, the task allocation model creation module, and the model update module, and is used for extracting local features layer by layer with a hierarchical convolutional neural network, combining them with a fully connected network, and obtaining the final result.
The task scheduling adjusting device comprises a task data distribution module which is used for outputting a task data distribution scheme for task distribution and continuously iteratively adjusting along with the model.
The processing efficiency evaluation device comprises an analysis result module which is used for analyzing and evaluating the whole operation condition of the current system server.
The real-time data monitoring device comprises an operation data acquisition module which is used for acquiring each operation data of the server which is distributed with the current task data.
The model feedback device comprises an updating time period module and a model updating module.
To sum up, the task data distribution device of the embodiment of the present invention first inputs the current server data into the task allocation model to obtain a task data allocation scheme, and then distributes the current task data to the corresponding servers for processing according to that scheme. The update of the task allocation model depends on the server operation data analysis result, which is obtained by analyzing the servers to which the current task data has been distributed. This effectively achieves dynamic, reasonable task allocation and improves device resource utilization, the processing efficiency and success rate of task data, and the throughput of the whole system.
The embodiment of the invention also provides a specific implementation mode of computer equipment, which can realize all the steps in the task data distribution method in the embodiment. Fig. 7 is a block diagram of a computer device in an embodiment of the present invention, and referring to fig. 7, the computer device specifically includes the following:
a processor 701 and a memory 702.
The processor 701 is configured to call a computer program stored in the memory 702. When executing the computer program, the processor implements all the steps of the task data allocation method in the above embodiments, for example:
acquiring current server data, and inputting the current server data into a task allocation model created based on historical server data to obtain a task data allocation scheme;
distributing the current task data to a corresponding server for processing according to the task data distribution scheme;
the updating of the task allocation model depends on the analysis result of the server operation data, and the analysis result of the server operation data is obtained by analyzing the server allocated with the current task data.
To sum up, the computer device of the embodiment of the present invention first inputs the current server data into the task allocation model to obtain a task data allocation scheme, and then allocates the current task data to the corresponding servers for processing according to that scheme. The update of the task allocation model depends on the server operation data analysis result, which is obtained by analyzing the servers to which the current task data has been allocated. This effectively achieves dynamic, reasonable task allocation and improves device resource utilization, the processing efficiency and success rate of task data, and the throughput of the whole system.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements all the steps of the task data allocation method in the foregoing embodiment, for example:
acquiring current server data, and inputting the current server data into a task allocation model created based on historical server data to obtain a task data allocation scheme;
distributing the current task data to a corresponding server for processing according to the task data distribution scheme;
the updating of the task allocation model depends on the analysis result of the server operation data, and the analysis result of the server operation data is obtained by analyzing the server allocated with the current task data.
To sum up, with the computer-readable storage medium of the embodiment of the present invention, the current server data is first input into the task allocation model to obtain a task data allocation scheme, and the current task data is then allocated to the corresponding servers for processing according to that scheme. The update of the task allocation model depends on the server operation data analysis result, which is obtained by analyzing the servers to which the current task data has been allocated. This effectively achieves dynamic, reasonable task allocation and improves device resource utilization, the processing efficiency and success rate of task data, and the throughput of the whole system.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Those of skill in the art will further appreciate that the various illustrative logical blocks, units, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate the interchangeability of hardware and software, various illustrative components, elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks, or elements, or devices described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions described above in connection with the embodiments of the invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media that facilitate transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general-purpose or special-purpose computer. For example, such computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be read by a general-purpose or special-purpose computer or processor. Additionally, any connection is properly termed a computer-readable medium; thus, if the software is transmitted from a website, server, or other remote source via a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wirelessly, e.g., by infrared, radio, or microwave, then that cable or wireless technology is included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, DVD, floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included in the computer-readable medium.

Claims (10)

1. A method for distributing task data, comprising:
acquiring current server data, and inputting the current server data into a task allocation model established based on historical server data to obtain a task data allocation scheme;
distributing the current task data to a corresponding server for processing according to the task data distribution scheme;
wherein the updating of the task allocation model depends on the server operation data analysis result, and the server operation data analysis result is obtained by analyzing the server to which the current task data is allocated.
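The flow of claim 1 can be sketched as follows. This is a minimal illustrative stand-in, not the patented implementation: the patent's model is trained on historical server data, whereas here a least-loaded heuristic (`predict_allocation`) and all names (`server_data`, `tasks`, the `load` metric) are assumptions made for illustration.

```python
# Hypothetical sketch of claim 1: current server data goes into an allocation
# model, whose output maps each item of current task data to a server.
# A least-loaded heuristic stands in for the trained task allocation model.

def predict_allocation(server_data, tasks):
    """Toy stand-in for the task allocation model: pick the least-loaded server."""
    scheme = {}
    for task in tasks:
        target = min(server_data, key=lambda s: server_data[s]["load"])
        scheme[task] = target
        server_data[target]["load"] += 1  # account for the newly assigned task
    return scheme

server_data = {"srv-a": {"load": 3}, "srv-b": {"load": 1}}
scheme = predict_allocation(server_data, ["t1", "t2", "t3"])
print(scheme)
```

Each task would then be dispatched to `scheme[task]` for processing, after which the servers' operation data feeds back into model updates as described in claims 3 and 4.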
2. The task data distribution method according to claim 1, wherein creating the task allocation model based on historical server data comprises:
the following iterative process is performed:
determining a prediction task data distribution scheme according to the historical server data and the neural network model parameters;
determining a task loss function according to an actual task data distribution scheme corresponding to the predicted task data distribution scheme;
and when the current iteration count equals the preset iteration count, creating the task allocation model according to the neural network model parameters; otherwise, updating the neural network model parameters according to the task loss function and continuing the iterative process.
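The iterative process of claim 2 has the familiar shape of gradient-based training: predict a scheme from historical data and model parameters, compute a loss against the actual scheme, and update parameters until a preset iteration count is reached. The sketch below uses a single linear layer as a stand-in for the neural network; the data, learning rate, and all variable names are assumptions, not details from the patent.

```python
import numpy as np

# Illustrative loop matching claim 2: predict -> loss -> parameter update,
# stopping after a preset number of iterations.

rng = np.random.default_rng(0)
X = rng.random((32, 4))                      # historical server data (32 samples, 4 features)
y = X @ np.array([0.5, -0.2, 0.1, 0.3])      # "actual" allocation targets

w = np.zeros(4)                              # neural network model parameters
preset_iterations = 500
lr = 0.2
for _ in range(preset_iterations):
    pred = X @ w                             # predicted task data allocation scheme
    loss = np.mean((pred - y) ** 2)          # task loss function
    grad = 2 * X.T @ (pred - y) / len(y)
    w -= lr * grad                           # update parameters from the loss
# after the preset iteration count, w defines the created allocation model
print(f"final loss: {loss:.2e}")
```

In the claimed method the "actual task data distribution scheme" would come from recorded historical allocations rather than a synthetic target.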
3. The task data distribution method according to claim 1, wherein analyzing the server to which the current task data has been distributed to obtain the server operation data analysis result includes:
acquiring each item of operation data of the server to which the current task data has been distributed;
and obtaining the server operation data analysis result according to the result of comparing each item of operation data with its corresponding operation threshold.
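The threshold comparison of claim 3 can be sketched as below. The specific metrics (`cpu`, `memory`, `latency_ms`), their threshold values, and the "overloaded" interpretation are illustrative assumptions; the patent only specifies comparing each item of operation data against a corresponding threshold.

```python
# Hedged sketch of claim 3: compare each operation metric of the servers that
# received task data against its operation threshold; any breach is recorded
# in the analysis result.

thresholds = {"cpu": 0.85, "memory": 0.90, "latency_ms": 200}

def analyze(operation_data):
    result = {}
    for server, metrics in operation_data.items():
        breached = [m for m, v in metrics.items() if v > thresholds[m]]
        result[server] = {"overloaded": bool(breached), "breached": breached}
    return result

ops = {
    "srv-a": {"cpu": 0.91, "memory": 0.40, "latency_ms": 120},
    "srv-b": {"cpu": 0.50, "memory": 0.60, "latency_ms": 90},
}
report = analyze(ops)
print(report["srv-a"], report["srv-b"])
```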
4. The task data distribution method according to claim 1, wherein updating the task allocation model depending on the server operation data analysis result comprises:
determining a model updating time period according to a server operation data analysis result obtained in a preset time period;
and determining server data in a model updating time period as server updating data, and updating the task distribution model according to the server updating data.
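The update flow of claim 4 might look like the following sketch: analysis results collected over a preset time period decide whether (and over which window) the model should be retrained. The trigger condition (a ratio of overload flags) and all names are hypothetical; the patent does not specify how the update time period is derived from the analysis results.

```python
# Sketch of claim 4: from analysis results gathered in a preset period,
# decide a model-update time period; server data inside that period then
# serves as the update data for retraining the allocation model.

def choose_update_window(analysis_results, trigger_ratio=0.3):
    """analysis_results: list of (timestamp, overloaded: bool) within the preset period."""
    overloaded = sum(1 for _, flag in analysis_results if flag)
    if overloaded / len(analysis_results) >= trigger_ratio:
        start = min(t for t, _ in analysis_results)
        end = max(t for t, _ in analysis_results)
        return (start, end)   # server data in [start, end] becomes update data
    return None               # no retraining needed for this period

window = [(100, False), (110, True), (120, True), (130, False)]
print(choose_update_window(window))
```

A `None` result would leave the current model in place until the next preset period is evaluated.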
5. A task data distribution apparatus, comprising:
the distribution scheme module is used for acquiring current server data, inputting the current server data into a task distribution model created based on historical server data, and obtaining a task data distribution scheme;
the task data distribution module is used for distributing the current task data to the corresponding servers for processing according to the task data distribution scheme;
wherein the updating of the task allocation model depends on the server operation data analysis result, and the server operation data analysis result is obtained by analyzing the server to which the current task data is allocated.
6. The task data distribution apparatus according to claim 5, further comprising: a task assignment model creation module for performing the following iterative process:
determining a prediction task data distribution scheme according to the historical server data and the neural network model parameters;
determining a task loss function according to an actual task data distribution scheme corresponding to the predicted task data distribution scheme;
and when the current iteration count equals the preset iteration count, creating the task allocation model according to the neural network model parameters; otherwise, updating the neural network model parameters according to the task loss function and continuing the iterative process.
7. The task data distribution apparatus according to claim 5, further comprising:
the operation data acquisition module is used for acquiring each item of operation data of the server to which the current task data has been distributed;
and the analysis result module is used for obtaining the server operation data analysis result according to the result of comparing each item of operation data with its corresponding operation threshold.
8. The task data distribution apparatus according to claim 5, further comprising:
the updating time period module is used for determining a model updating time period according to the analysis result of the server operation data acquired in the preset time period;
and the model updating module is used for determining the server data in the model updating time period as server updating data and updating the task distribution model according to the server updating data.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the task data distribution method according to any one of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the task data distribution method of any one of claims 1 to 4.
CN202110743652.4A 2021-06-30 2021-06-30 Task data distribution method and device Pending CN113342535A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110743652.4A CN113342535A (en) 2021-06-30 2021-06-30 Task data distribution method and device


Publications (1)

Publication Number Publication Date
CN113342535A true CN113342535A (en) 2021-09-03

Family

ID=77482044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110743652.4A Pending CN113342535A (en) 2021-06-30 2021-06-30 Task data distribution method and device

Country Status (1)

Country Link
CN (1) CN113342535A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109656702A (en) * 2018-12-20 2019-04-19 西安电子科技大学 A kind of across data center network method for scheduling task based on intensified learning
US20190258985A1 (en) * 2018-02-16 2019-08-22 Accenture Global Solutions Limited Utilizing a machine learning model and natural language processing to manage and allocate tasks
CN110941486A (en) * 2019-11-26 2020-03-31 苏州思必驰信息科技有限公司 Task management method and device, electronic equipment and computer readable storage medium
CN111679912A (en) * 2020-06-08 2020-09-18 广州汇量信息科技有限公司 Load balancing method and device of server, storage medium and equipment
CN111861159A (en) * 2020-07-03 2020-10-30 武汉实为信息技术股份有限公司 Task allocation method based on reinforcement learning
CN112990624A (en) * 2019-12-13 2021-06-18 顺丰科技有限公司 Task allocation method, device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination