CN114237886A - Task processing method and device, computer equipment and storage medium - Google Patents

Task processing method and device, computer equipment and storage medium

Info

Publication number
CN114237886A
CN114237886A
Authority
CN
China
Prior art keywords
server
task
tasks
target
servers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111535039.XA
Other languages
Chinese (zh)
Inventor
张建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Securities Co Ltd
Original Assignee
Ping An Securities Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Securities Co Ltd filed Critical Ping An Securities Co Ltd
Priority to CN202111535039.XA priority Critical patent/CN114237886A/en
Publication of CN114237886A publication Critical patent/CN114237886A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2023Failover techniques
    • G06F11/2025Failover techniques using centralised failover control functionality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/547Messaging middleware
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Abstract

The application relates to the technical field of artificial intelligence, and provides a task processing method, a task processing device, a computer device and a storage medium, wherein the method comprises the following steps: performing anomaly analysis on all servers contained in a server cluster to determine an abnormal server, and marking the servers other than the abnormal server as normal servers; determining a target server from all the normal servers; acquiring tasks to be processed from a data source through the target server; performing deduplication processing on all the tasks to be processed based on a caching tool to obtain processed target tasks; and storing the target tasks into a preset message queue, and distributing all the target tasks in the message queue to each normal server in the server cluster through message middleware. The method and the device can ensure normal processing of tasks, improve task processing efficiency, and improve the intelligence and accuracy of task processing. The method and the device can also be applied to the field of blockchain, and data such as the target tasks can be stored on a blockchain.

Description

Task processing method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a task processing method and device, computer equipment and a storage medium.
Background
In the prior art, large-batch task processing is mainly handled by single-machine deployment. With the development of services and the continuous expansion of data scale, a task batching scheme deployed on a single machine has the following problems: when a large number of tasks are executed, a single server executes them all, so the task execution speed is slow, the execution time is long, and the efficiency is low; moreover, if the server fails, the execution of all tasks on that server is affected and all of them fail, which wastes system resources and means that normal processing of the tasks cannot be guaranteed.
Disclosure of Invention
The application mainly aims to provide a task processing method, a task processing device, computer equipment and a storage medium, and aims to solve the technical problems that the existing large-batch task processing mode is slow in task execution speed, long in task execution time and low in efficiency, and normal processing of tasks cannot be guaranteed.
The application provides a task processing method, which comprises the following steps:
performing anomaly analysis on all servers contained in a preset server cluster to determine an abnormal server in the server cluster, and marking the servers except the abnormal server in the server cluster as normal servers; wherein the server cluster comprises a plurality of servers;
determining a target server from all the normal servers;
acquiring a task to be processed from a preset data source through the target server; wherein there are a plurality of tasks to be processed;
performing duplicate removal processing on all the tasks to be processed based on a preset cache tool to obtain processed target tasks;
and storing the target tasks into a preset message queue, distributing all the target tasks in the message queue to each normal server in the server cluster through a message middleware, and processing the received target tasks through each normal server.
Optionally, the step of performing anomaly analysis on all servers included in a preset server cluster to determine an anomalous server in the server cluster includes:
acquiring operation parameters of each server and acquiring a preset parameter threshold; the preset parameter threshold is a comprehensive threshold corresponding to all the servers; respectively carrying out numerical comparison processing on the operation parameter of each server and the preset parameter threshold value, screening out a first server of which the operation parameter meets the preset parameter threshold value from all the servers, and marking the servers except the first server as a second server;
acquiring operation parameters of the second servers, and generating operation anomaly probability values of the second servers based on the operation parameters of the second servers and a preset anomaly identification model;
acquiring a preset probability threshold; the preset probability threshold is a comprehensive threshold corresponding to all the servers;
respectively carrying out numerical comparison processing on the operation abnormal probability value of each second server and the preset probability threshold value, and screening out a third server of which the operation abnormal probability value is smaller than the preset probability threshold value from all the second servers;
and taking the first server and the third server as the abnormal servers.
Optionally, the step of obtaining the operation parameters of each second server, and generating an operation anomaly probability value of each second server based on the operation parameters of each second server and a preset anomaly identification model includes:
calculating an average value of the operating parameters of all the second servers; and,
calculating the variance of the operating parameters of all the second servers;
acquiring appointed operation parameters of a first appointed server; the first designated server is any one of all the second servers;
and inputting the average value, the variance and the specified operation parameters into the abnormality recognition model, and generating an operation abnormality probability value corresponding to the first specified server through the abnormality recognition model.
Optionally, the step of performing deduplication processing on all the tasks to be processed based on a preset caching tool to obtain a processed target task includes:
step A: sequencing all the tasks to be processed according to the sequence of the acquisition time of each task to be processed from front to back to obtain a corresponding sequencing result;
and B: acquiring a first task of a 1 st order in the ordering result, and extracting a first task field contained in the first task;
and C: splicing all the first task fields to obtain corresponding first spliced fields, storing the first spliced fields into the cache tool, and setting first timeout time for the first spliced fields;
step D: acquiring a second task of the ith order in the ordering result, splicing all second task fields contained in the second task to obtain corresponding second spliced fields, and judging whether a first designated field which is the same as the second spliced fields exists in the cache tool or not; wherein i is an integer and the initial value of i is 2;
step E: if the first designated field does not exist, storing the second spliced field into the cache tool, and setting a second timeout time for the second spliced field, otherwise, not storing the second spliced field;
step F: repeating the step D-step E until i is equal to n so as to complete the processing of all tasks contained in the sequencing result and acquire all target splicing fields stored in the cache tool; wherein n is the number of all the tasks to be processed;
step G: and screening out the tasks corresponding to the target splicing field from all the tasks to be processed to obtain the target task.
Optionally, the step of storing the target task into a preset message queue, and distributing all the target tasks in the message queue to each of the normal servers in the server cluster through a message middleware includes:
acquiring acquisition time information, processing timeliness information and importance information of each target task;
generating a processing sequence of each target task based on the acquisition time information, the processing timeliness information and the importance information;
storing all the target tasks into the message queue according to the processing sequence;
distributing all the target tasks in the message queue to each normal server in the server cluster according to the processing sequence through the message middleware.
Optionally, the step of generating a processing order of each target task based on the acquisition time information, the processing aging information, and the importance information includes:
acquiring a first preset weight corresponding to the acquisition time information, a second preset weight corresponding to the processing timeliness information and a third preset weight corresponding to the importance information;
calculating a priority value of each target task through a preset calculation formula based on the acquisition time information, the processing timeliness information, the importance information, the first preset weight, the second preset weight and the third preset weight;
sequencing all the target tasks according to the sequence of the priority values from large to small to obtain target sequencing information corresponding to all the target tasks;
and generating a processing sequence of each target task based on all the target sequencing information.
Optionally, after the step of storing the target task into a preset message queue and allocating all the target tasks in the message queue to each of the normal servers in the server cluster through a message middleware, the method includes:
receiving a task processing result which is obtained after the received appointed task is processed and sent by a second appointed server through the message middleware; the second designated server is any one of all the normal servers, and the designated task is any one of all the target tasks received by the designated server;
if the task processing result is normal, acquiring a specified splicing field corresponding to the specified task in the cache tool, and adjusting a specified timeout time corresponding to the specified splicing field so that the adjusted specified timeout time is greater than the original specified timeout time;
if the task processing result is abnormal, deleting the specified splicing field corresponding to the specified task from the cache tool;
and placing the specified task in a preset abnormal task list.
The present application also provides a task processing device, including:
the first determining module is used for performing anomaly analysis on all servers contained in a preset server cluster to determine an abnormal server in the server cluster, and marking the servers except the abnormal server in the server cluster as normal servers; wherein the server cluster comprises a plurality of servers;
the second determining module is used for determining a target server from all the normal servers;
the acquisition module is used for acquiring the tasks to be processed from a preset data source through the target server; wherein there are a plurality of tasks to be processed;
the first processing module is used for performing duplicate removal processing on all the tasks to be processed based on a preset cache tool to obtain processed target tasks;
and the second processing module is used for storing the target tasks into a preset message queue, distributing all the target tasks in the message queue to each normal server in the server cluster through a message middleware, and processing the received target tasks through each normal server.
The present application further provides a computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above method when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method.
The task processing method, the task processing device, the computer equipment and the storage medium have the following beneficial effects:
the task processing method, the task processing device, the computer equipment and the storage medium provided by the application carry out exception analysis on all servers contained in a preset server cluster and determine an exception server in the server cluster before processing a task to be processed, then a target server is determined from all normal servers in the server cluster except the abnormal server, and the target server acquires the tasks to be processed from the preset data source, then performs deduplication processing on all the tasks to be processed based on a preset cache module to obtain the processed target tasks, and finally stores the target tasks into a preset message queue, and distributing all target tasks in the message queue to each normal server in the server cluster through message middleware so as to process the received target tasks through each normal server. According to the method and the system, the normal server in the server cluster is used for processing the task to be processed, the normal server distributes the task and can execute the task at the same time, the maximum use of resources can be achieved, and the processing efficiency of the task is improved. By only using the normal server to process the task, the situation that the task cannot be normally processed due to the fact that the abnormal server is used to process the task is avoided, the operation stability of the server cluster is effectively improved, the normal processing of the task is guaranteed, and the use experience of a user is improved. Before task processing, whether repeated tasks exist in all tasks to be processed is checked based on the use of a cache tool, so that the repeated processing of all the tasks to be processed can be accurately and quickly completed, repeated execution of the same task is effectively avoided, useless power consumption of the device is reduced, and the intelligence and accuracy of task processing are improved.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a task processing method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a task processing device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The embodiment of the application can acquire and process related data based on an artificial intelligence technology. Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
Referring to fig. 1, a task processing method according to an embodiment of the present application includes:
s10: performing anomaly analysis on all servers contained in a preset server cluster to determine an abnormal server in the server cluster, and marking the servers except the abnormal server in the server cluster as normal servers; wherein the server cluster comprises a plurality of servers;
s20: determining a target server from all the normal servers;
s30: acquiring a task to be processed from a preset data source through the target server; wherein there are a plurality of tasks to be processed;
s40: performing duplicate removal processing on all the tasks to be processed based on a preset cache tool to obtain processed target tasks;
s50: and storing the target tasks into a preset message queue, distributing all the target tasks in the message queue to each normal server in the server cluster through a message middleware, and processing the received target tasks through each normal server.
As described in the above steps S10 to S50, the execution subject of this method embodiment is a task processing device. In practical applications, the task processing device may be implemented by a virtual device, such as software code, or by a physical device in which the relevant execution code is written or integrated, and it may perform human-computer interaction with a user through a keyboard, a mouse, a remote controller, a touch panel, or a voice-control device. The task processing device in this embodiment can ensure normal processing of tasks, improve task processing efficiency, and improve the intelligence and accuracy of task processing. Specifically, anomaly analysis is first performed on all servers included in a preset server cluster to determine the abnormal server in the server cluster, and the servers in the server cluster other than the abnormal server are marked as normal servers. The server cluster comprises a plurality of servers. The specific implementation process of performing anomaly analysis on all servers included in the preset server cluster to determine the abnormal server will be further described in the subsequent specific embodiments and is not detailed here.
A target server is then determined from all the normal servers. The election may be implemented with a zookeeper distributed lock, which elects one server from all the normal servers contained in the server cluster as the master, namely the target server. The target server acts as the data producer for tasks and is responsible for reading the corresponding task data from the data source that stores it and then putting the task data into the message queue. The other normal servers in the server cluster do not act as data producers; only when the target server becomes abnormal and the zookeeper distributed lock triggers re-election of the master do the other normal servers have a chance to be elected master. When the target server fails, the zookeeper distributed lock automatically triggers master re-election for fault tolerance, so the batch task processing of the whole server cluster is not affected and normal processing of the tasks is guaranteed. The tasks to be processed are then acquired from a preset data source through the target server; there are a plurality of tasks to be processed. Additionally, the data source can be a related business system, such as a tape-out business system, and a task to be processed may be a drop-off task corresponding to a customer's drop-off plan. Each task to be processed may be composed of a plurality of fields.
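By way of illustration only, the following is a minimal sketch of how the master election described above could be implemented with a ZooKeeper distributed lock using the Python kazoo client; the connection address, election path and server identifier are assumed placeholders and do not form part of the disclosed embodiment.

```python
# Illustrative sketch only: electing the target server (master) via ZooKeeper.
# Host address, election path and identifier are assumed placeholders.
from kazoo.client import KazooClient

def run_as_master():
    # Only the elected target server runs this: it reads pending tasks from
    # the data source and puts them into the message queue as the producer.
    print("elected as target server; start producing tasks")

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Every normal server joins the same election. kazoo blocks until this node
# wins; if the current master fails, the election re-runs automatically, so
# task production fails over to another normal server.
election = zk.Election("/task-cluster/master-election", identifier="server-1")
election.run(run_as_master)
```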
All the tasks to be processed are then deduplicated based on a preset caching tool to obtain the processed target tasks. The caching tool may specifically be a redis cache. After all the tasks to be processed are obtained, the task fields contained in each task to be processed can be spliced to generate a spliced field corresponding to each task, and whether duplicate tasks exist among all the tasks to be processed is checked based on the spliced fields and the caching tool; deduplication of all the tasks to be processed can thus be completed accurately and quickly, effectively avoiding repeated execution of the same task. The specific implementation of the deduplication processing based on the preset caching tool is further described in the subsequent specific embodiments and is not detailed here. Finally, the target tasks are stored into a preset message queue, all the target tasks in the message queue are distributed to each normal server in the server cluster through message middleware, and each normal server processes the target tasks it receives. The message middleware may be RabbitMQ. After the tasks are placed in the message queue, RabbitMQ distributes them to the corresponding consumers, namely all the normal servers in the server cluster, according to the order of the tasks in the queue, and no consumer is assigned a duplicate task. In addition, when the processing speed needs to be increased, the server cluster can be expanded; a newly added server triggers master re-election during startup and at the same time automatically connects to the message queue, becoming a new consumer that automatically participates in consuming tasks.
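As a concrete illustration of putting target tasks into the message queue and letting every normal server consume them without duplication, the following sketch uses the Python pika client for RabbitMQ; the queue name, host and sample payloads are assumed placeholders rather than part of the disclosed embodiment.

```python
# Illustrative sketch only: distributing target tasks through RabbitMQ.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="target_tasks", durable=True)

# Producer side (the target server): enqueue each deduplicated target task.
for task in [{"task_id": "T1"}, {"task_id": "T2"}]:
    channel.basic_publish(
        exchange="",
        routing_key="target_tasks",
        body=json.dumps(task),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )

# Consumer side (each normal server): prefetch_count=1 gives fair dispatch,
# and each message is delivered to exactly one consumer, so no server
# receives a duplicate task.
def handle_task(ch, method, properties, body):
    print("processing", json.loads(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue="target_tasks", on_message_callback=handle_task)
# channel.start_consuming()  # blocking consume loop, started on each server
```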
In this embodiment, before the tasks to be processed are handled, anomaly analysis is performed on all servers included in a preset server cluster to determine the abnormal server in the cluster; a target server is then determined from all the normal servers other than the abnormal server, the tasks to be processed are obtained from a preset data source through the target server, all the tasks to be processed are deduplicated based on a preset caching tool to obtain the processed target tasks, the target tasks are stored into a preset message queue, and all the target tasks in the message queue are distributed to each normal server in the server cluster through message middleware so that each normal server processes the target tasks it receives. In this embodiment, the tasks to be processed are handled by the normal servers in the server cluster; the normal servers can distribute tasks and execute tasks at the same time, so resources are used to the fullest and task processing efficiency is improved. By using only normal servers to process tasks, the situation where a task cannot be processed normally because it was assigned to an abnormal server is avoided, the operating stability of the server cluster is effectively improved, normal processing of tasks is guaranteed, and the user experience is improved. Before task processing, the caching tool is used to check whether duplicate tasks exist among all the tasks to be processed, so that deduplication of all the tasks to be processed can be completed accurately and quickly, repeated execution of the same task is effectively avoided, useless power consumption of the device is reduced, and the intelligence and accuracy of task processing are improved.
Further, in an embodiment of the present application, the step S10 includes:
s100: acquiring operation parameters of each server and acquiring a preset parameter threshold; the preset parameter threshold is a comprehensive threshold corresponding to all the servers;
s101: respectively carrying out numerical comparison processing on the operation parameter of each server and the preset parameter threshold value, screening out a first server of which the operation parameter meets the preset parameter threshold value from all the servers, and marking the servers except the first server as a second server;
s102: acquiring operation parameters of the second servers, and generating operation anomaly probability values of the second servers based on the operation parameters of the second servers and a preset anomaly identification model;
s103: acquiring a preset probability threshold; the preset probability threshold is a comprehensive threshold corresponding to all the servers;
s104: respectively carrying out numerical comparison processing on the operation abnormal probability value of each second server and the preset probability threshold value, and screening out a third server of which the operation abnormal probability value is smaller than the preset probability threshold value from all the second servers;
s105: and taking the first server and the third server as the abnormal servers.
As described in the foregoing steps S100 to S105, the step of performing anomaly analysis on all servers included in a preset server cluster to determine the abnormal server in the server cluster may specifically include the following. First, the operating parameters of each server are obtained, together with a preset parameter threshold; the preset parameter threshold is a comprehensive threshold that applies to all the servers. The operating parameters may include only a task processing error rate, or only a task processing timeout rate; correspondingly, the preset parameter threshold may comprise an error threshold or a timeout threshold. A monitoring service records, at each moment, the total number of tasks processed by a server, the number of task processing errors, and the task processing durations; the totals over a given time period are then counted with a sliding-window algorithm, and the task processing error rate and the task processing timeout rate are determined from them. The task processing error rate equals the number of task processing errors divided by the total number of tasks of the server. The number of timed-out tasks can be obtained from the task processing durations and a preset standard task processing duration, and the task processing timeout rate equals the number of timed-out tasks divided by the total number of tasks of the server; if a task's processing duration is longer than the standard task processing duration, the task is regarded as timed out. The operating parameters may of course also include both a task processing error rate and a task processing timeout rate; in that case the two parameters are judged separately, and as long as one of them meets its corresponding preset parameter threshold, the server is regarded as an abnormal server. Next, the operating parameter of each server is numerically compared with the preset parameter threshold, the first servers whose operating parameters meet the preset parameter threshold are screened out from all the servers, and the servers other than the first servers are marked as second servers. In a specific implementation, each server in the server cluster is examined in turn: whether the operating parameter of the current server meets the parameter threshold is judged, and if so, the current server is marked as an abnormal server; the next server then becomes the new current server, and the judgment continues until all servers in the cluster have been judged. Meeting the parameter threshold may mean that the operating parameter is greater than the threshold, or less than it, and so on.
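For the first-pass screening just described, the following minimal sketch compares each server's task processing error rate and timeout rate against preset thresholds; the field names and threshold values are assumed for illustration only.

```python
# Illustrative sketch only: first-pass screening of servers by operating
# parameters. Threshold values and counter fields are assumed placeholders.
PRESET_ERROR_RATE_THRESHOLD = 0.05    # task processing error rate threshold
PRESET_TIMEOUT_RATE_THRESHOLD = 0.10  # task processing timeout rate threshold

def screen_servers(servers):
    """Split servers into first servers (a parameter meets its threshold,
    marked abnormal) and second servers (passed on to the anomaly model)."""
    first_servers, second_servers = [], []
    for s in servers:
        error_rate = s["error_count"] / s["total_tasks"]
        timeout_rate = s["timeout_count"] / s["total_tasks"]
        if (error_rate > PRESET_ERROR_RATE_THRESHOLD
                or timeout_rate > PRESET_TIMEOUT_RATE_THRESHOLD):
            first_servers.append(s)
        else:
            second_servers.append(s)
    return first_servers, second_servers

servers = [
    {"name": "srv-1", "total_tasks": 1000, "error_count": 2, "timeout_count": 30},
    {"name": "srv-2", "total_tasks": 1000, "error_count": 80, "timeout_count": 10},
]
first, second = screen_servers(servers)  # srv-2 is screened out as a first server
```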
Next, the operating parameters of the second servers are obtained, and an operation-anomaly probability value is generated for each second server based on its operating parameters and a preset anomaly identification model. The preset anomaly identification model is a probability distribution model; specifically, it may be any one of a Gaussian model, a Poisson distribution model, or a Bernoulli distribution model, and is preferably a Gaussian model. The anomaly identification model generally represents the approximate distribution of the operating parameters of the servers in the server cluster; its undetermined parameters can be fitted from the operating parameters of all the servers in the cluster. The operating parameter of the current server is then input into the anomaly identification model determined by those fitted parameters, which outputs the probability value that the current server operates normally; since the normal-operation probability and the abnormal-operation probability sum to 1, the required operation-anomaly probability value is obtained as 1 minus the normal-operation probability value. A preset probability threshold is then obtained; the preset probability threshold is a comprehensive threshold that applies to all the servers, its value is not specifically limited, and it can be set according to actual requirements. Finally, the operation-anomaly probability value of each second server is numerically compared with the preset probability threshold, the third servers whose operation-anomaly probability value is smaller than the preset probability threshold are screened out from all the second servers, and the first servers and the third servers are taken as the abnormal servers. In this embodiment, for each server in the server cluster, if the operating parameter of a first server meets the corresponding parameter threshold, that first server is taken as an abnormal server. On this basis, the preset anomaly identification model is used to perform anomaly prediction on the second servers other than the first servers, and the corresponding third servers are screened out from the second servers as abnormal servers. The servers in the cluster thus undergo two rounds of anomaly screening, and all servers that meet the conditions in either round are determined to be abnormal servers, which improves the accuracy of judging server anomalies, ensures to a certain extent the quality of the normal servers in the cluster, avoids the situation where tasks cannot be processed normally because they were assigned to abnormal servers, effectively improves the operating stability of the server cluster, guarantees normal processing of the tasks, and improves the user experience.
Further, in an embodiment of the application, the step S102 includes:
s1020: calculating an average value of the operating parameters of all the second servers; and,
s1021: calculating the variance of the operating parameters of all the second servers;
s1022: acquiring appointed operation parameters of a first appointed server; the first designated server is any one of all the second servers;
s1023: and inputting the average value, the variance and the specified operation parameters into the abnormality recognition model, and generating an operation abnormality probability value corresponding to the first specified server through the abnormality recognition model.
As described in the foregoing steps S1020 to S1023, the step of obtaining the operating parameters of each second server and generating the operation-anomaly probability value of each second server based on those operating parameters and a preset anomaly identification model may specifically include the following. First, the average value of the operating parameters of all the second servers is calculated: the operating parameters of all the second servers are added and averaged to obtain the average value. The variance of the operating parameters of all the second servers is also calculated; it can be obtained through a preset formula from the operating parameters of all the second servers and their average value. The preset formula is specifically:
σ² = (1/p) · Σᵢ₌₁ᵖ (xᵢ − μ)²
where σ² is the variance of the operating parameters of all the second servers, p is the total number of the second servers, xᵢ represents the operating parameter of the i-th second server, and μ is the average value of the operating parameters of all the second servers. The specified operating parameter of the first specified server is then obtained; the first specified server is any one of all the second servers. Finally, the average value, the variance and the specified operating parameter are input into the anomaly identification model, and the operation-anomaly probability value corresponding to the first specified server is generated through the anomaly identification model. Specifically, the anomaly identification model is a Gaussian model, whose formula is as follows:
f(x) = (1 / (√(2π) · σ)) · exp(−(x − μ)² / (2σ²))
In the formula of the Gaussian model, x represents the specified operating parameter of the first specified server, μ is the average value of the operating parameters of all the second servers, and σ² is the variance of the operating parameters of all the second servers. The average value μ, the variance σ² and the specified operating parameter of the first specified server are substituted into the formula of the Gaussian model, which outputs the probability value that the first specified server operates normally; subtracting this value from 1 gives the probability value that the first specified server operates abnormally. In this embodiment, after the operating parameters of each second server are obtained, the corresponding average value and variance can be calculated from them, and the average value, the variance and the specified operating parameter of the first specified server are then input into the anomaly identification model, so that the operation-anomaly probability value corresponding to the first specified server can be generated quickly and accurately through the model. This makes it possible to subsequently screen out, from all the second servers, the third servers whose operation-anomaly probability value is smaller than the preset probability threshold and take them as abnormal servers, effectively improving the accuracy of determining server anomalies.
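The following sketch applies the variance and Gaussian-model formulas given above to one second server; the sample operating parameters are assumed values, and the final step of taking 1 minus the model output as the anomaly probability follows the description in this embodiment.

```python
# Illustrative sketch only: computing the operation-anomaly probability value
# of a specified second server with the Gaussian model described above.
import math

def anomaly_probability(params, x):
    """params: operating parameters of all second servers;
    x: the specified operating parameter of the first specified server."""
    p = len(params)
    mu = sum(params) / p                               # average value
    sigma2 = sum((xi - mu) ** 2 for xi in params) / p  # variance
    # Gaussian model output, treated in this embodiment as the probability
    # that the server operates normally.
    normal = (1.0 / math.sqrt(2 * math.pi * sigma2)) * math.exp(-((x - mu) ** 2) / (2 * sigma2))
    return 1.0 - normal                                # operation-anomaly probability

params = [0.02, 0.03, 0.025, 0.04]  # e.g. error rates of the second servers (assumed)
print(anomaly_probability(params, 0.035))
```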
Further, in an embodiment of the present application, the step S40 includes:
s400: step A: sequencing all the tasks to be processed according to the sequence of the acquisition time of each task to be processed from front to back to obtain a corresponding sequencing result;
s401: and B: acquiring a first task of a 1 st order in the ordering result, and extracting a first task field contained in the first task;
s402: and C: splicing all the first task fields to obtain corresponding first spliced fields, storing the first spliced fields into the cache tool, and setting first timeout time for the first spliced fields;
s403: step D: acquiring a second task of the ith order in the ordering result, splicing all second task fields contained in the second task to obtain corresponding second spliced fields, and judging whether a first designated field which is the same as the second spliced fields exists in the cache tool or not; wherein i is an integer and the initial value of i is 2;
s404: step E: if the first designated field does not exist, storing the second spliced field into the cache tool, and setting a second timeout time for the second spliced field, otherwise, not storing the second spliced field;
s405: step F: repeating the step D-step E until i is equal to n so as to complete the processing of all tasks contained in the sequencing result and acquire all target splicing fields stored in the cache tool; wherein n is the number of all the tasks to be processed;
s406: step G: and screening out the tasks corresponding to the target splicing field from all the tasks to be processed to obtain the target task.
As described in the foregoing steps S400 to S406, the step of performing deduplication processing on all the tasks to be processed based on the preset caching tool to obtain the processed target tasks may specifically include the following. Step A: all the tasks to be processed are sorted from earliest to latest acquisition time to obtain a corresponding sorting result; the acquisition time refers to the time at which the target server acquired each task to be processed. Step B: the task ranked first in the sorting result is acquired, and the first task fields contained in it are extracted. A task field may refer to all fields contained in a task to be processed, or to the fields at specified positions in the task; if fields at specified positions are used, the field extraction for every task to be processed must be performed at the same specified positions. Step C: all the first task fields are spliced to obtain a corresponding first spliced field, the first spliced field is stored in the caching tool, and a first timeout time is set for it. The caching tool may specifically be a redis cache. If a spliced field has been stored in the caching tool for longer than its timeout time, it is regarded as invalid information and removed from the caching tool, so that an identical field received later can be stored in the caching tool again. For any current task, if the spliced field corresponding to the task already exists in the caching tool, the task is regarded as a duplicate task: its spliced field is not stored in the caching tool again, and the task can be eliminated, which prevents multiple identical tasks from being processed repeatedly and improves the intelligence and accuracy of task processing. Step D: the task ranked i-th in the sorting result is acquired, all the second task fields contained in it are spliced to obtain a corresponding second spliced field, and whether a first specified field identical to the second spliced field exists in the caching tool is judged; i is an integer with an initial value of 2. Step E: if no such first specified field exists, the second spliced field is stored in the caching tool and a second timeout time is set for it; otherwise the second spliced field is not stored. Step F: steps D and E are repeated until i equals n, so that all tasks contained in the sorting result are processed and all target spliced fields stored in the caching tool are obtained; n is the number of all the tasks to be processed. The target spliced fields are the mutually distinct field data obtained after the spliced-field-based deduplication of all the tasks to be processed. For example, the processing when i = 3 may include: the task ranked third in the sorting result is acquired, all the third task fields contained in it are spliced to obtain a corresponding third spliced field, and whether a second specified field identical to the third spliced field exists in the caching tool is judged.
If no such second specified field exists, the third spliced field is stored in the caching tool and a third timeout time is set for it; otherwise the third spliced field is not stored. Step G: the tasks corresponding to the target spliced fields are screened out from all the tasks to be processed to obtain the target tasks. In this embodiment, after all the tasks to be processed are obtained, the task fields contained in each task are spliced to generate the spliced field corresponding to that task, and whether duplicate tasks exist among all the tasks to be processed is checked based on the spliced fields and the caching tool, so that deduplication of all the tasks to be processed can be completed accurately and quickly, repeated execution of the same task is effectively avoided, useless power consumption of the device is reduced, and the intelligence and accuracy of task processing are improved.
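A minimal sketch of the spliced-field deduplication with a redis cache is given below; the key prefix, timeout value and field layout are assumed placeholders. Using SET with NX and EX performs the existence check and the timeout setting of steps D and E in a single operation.

```python
# Illustrative sketch only: deduplicating pending tasks with a redis cache.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
TIMEOUT_SECONDS = 3600  # timeout time set for each spliced field (assumed)

def deduplicate(pending_tasks):
    """pending_tasks: list of dicts already sorted by acquisition time.
    Returns the target tasks whose spliced field was not seen before."""
    target_tasks = []
    for task in pending_tasks:
        # Splice the task fields into one key (steps B and D above).
        spliced = "|".join(str(task[field]) for field in sorted(task))
        # SET ... NX succeeds only if the key does not already exist (step E);
        # EX sets the timeout so stale spliced fields expire automatically.
        if r.set("task:" + spliced, 1, nx=True, ex=TIMEOUT_SECONDS):
            target_tasks.append(task)
    return target_tasks
```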
Further, in an embodiment of the present application, the step S50 includes:
s500: acquiring acquisition time information, processing timeliness information and importance information of each target task;
s501: generating a processing sequence of each target task based on the acquisition time information, the processing timeliness information and the importance information;
s502: storing all the target tasks into the message queue according to the processing sequence;
s503: distributing all the target tasks in the message queue to each normal server in the server cluster according to the processing sequence through the message middleware.
As described in the foregoing steps S500 to S503, the step of storing the target tasks into a preset message queue and distributing all the target tasks in the message queue to each of the normal servers in the server cluster through message middleware may specifically include the following. First, the acquisition time information, processing timeliness information and importance information of each target task are obtained. The acquisition time information refers to the time at which the target server acquired each target task. The processing timeliness information of each task can be set according to when each business department needs the task's result for its follow-up work: the more urgent that follow-up need, the earlier the processing deadline of the corresponding task, that is, the smaller the value of the processing timeliness information. The operation result of each target task is generally used by a business department for subsequent task processing, and the more important that subsequent task is, the higher the importance of the corresponding target task. In general, historical data can be analysed: a convolutional neural network model is trained with a large number of sample tasks and sample importance information to obtain a task importance recognition model, and the importance information of each target task is obtained by inputting the target task into that model. The processing order of each target task is then generated based on the acquisition time information, the processing timeliness information and the importance information; the specific implementation of this step is further described in the subsequent specific embodiments and is not detailed here. All the target tasks are then stored into the message queue according to the processing order. Finally, all the target tasks in the message queue are distributed to each normal server in the server cluster through the message middleware according to the processing order.
After all the target tasks have been stored into the message queue in processing order, the process of distributing them to each normal server in the server cluster may include the following. First, each normal server is numbered according to its processing performance: the higher the processing performance, the smaller the number, so the normal server with the highest processing performance is numbered 1 and the normal server with the second highest processing performance is numbered 2. The order in which the normal servers receive target tasks is then determined from their numbers: the smaller the number, the earlier the server receives a task; for example, the normal server numbered 1 receives and processes the first target task in the message queue, and the normal server numbered 2 receives and processes the second target task. Each normal server receives only one task at a time; when there are more target tasks than normal servers, the remaining target tasks are distributed again by server number after the first round has been distributed, until all target tasks have been distributed to the normal servers in the cluster. For example, if there are 6 target tasks in the message queue and 5 normal servers (server 1, server 2, server 3, server 4 and server 5, where the numeral after "server" is the server's number), the first target task in the message queue is allocated to server 1, the second to server 2, the third to server 3, the fourth to server 4, and the fifth to server 5. After this first round of allocation, the sixth target task remains; it is then allocated to server 1 according to the correspondence between server numbers and the task-receiving order, completing the allocation of all target tasks in the message queue. In this embodiment, after the target tasks are screened out through deduplication, the processing order of each target task is generated based on its acquisition time information, processing timeliness information and importance information, all the target tasks are stored into the message queue in that order, and the target tasks in the message queue are distributed to each normal server in the server cluster through the message middleware according to the processing order, so that target tasks acquired earlier, with more urgent deadlines and higher importance are processed first. This effectively improves the intelligence of target task processing and allows the subsequent task processing of each business department to proceed smoothly.
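The round-by-round allocation in the 6-task, 5-server example above can be expressed as the following sketch; server names and task identifiers are illustrative only.

```python
# Illustrative sketch only: allocating ordered target tasks to numbered
# normal servers round by round.
def assign_round_robin(target_tasks, normal_servers):
    """target_tasks: already sorted by processing order;
    normal_servers: sorted by number (1 = highest processing performance)."""
    assignment = {server: [] for server in normal_servers}
    for index, task in enumerate(target_tasks):
        server = normal_servers[index % len(normal_servers)]
        assignment[server].append(task)
    return assignment

tasks = ["task-1", "task-2", "task-3", "task-4", "task-5", "task-6"]
servers = ["server 1", "server 2", "server 3", "server 4", "server 5"]
print(assign_round_robin(tasks, servers))
# The sixth task wraps around to server 1, matching the example above.
```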
Further, in an embodiment of the present application, the step S501 includes:
s5010: acquiring a first preset weight corresponding to the acquisition time information, a second preset weight corresponding to the processing timeliness information and a third preset weight corresponding to the importance information;
s5011: calculating a priority value of each target task through a preset calculation formula based on the acquisition time information, the processing timeliness information, the importance information, the first preset weight, the second preset weight and the third preset weight;
s5012: sequencing all the target tasks according to the sequence of the priority values from large to small to obtain target sequencing information corresponding to all the target tasks;
s5013: and generating a processing sequence of each target task based on all the target sequencing information.
As described in the above steps S5010 to S5013, the step of generating the processing order of each target task based on the acquisition time information, the processing timeliness information and the importance information may specifically include the following. First, a first preset weight corresponding to the acquisition time information, a second preset weight corresponding to the processing timeliness information and a third preset weight corresponding to the importance information are obtained. The values of the three preset weights are not specifically limited and can be set according to actual use requirements; preferably, the second preset weight is greater than the third preset weight, the third preset weight is greater than the first preset weight, and the three weights sum to 1. The priority value of each target task is then calculated through a preset calculation formula based on the acquisition time information, the processing timeliness information, the importance information and the three preset weights. In the preset calculation formula, S is the priority value of the target task, M is the acquisition time information, a is the first preset weight, N is the processing timeliness information, b is the second preset weight, O is the importance information, and c is the third preset weight. The larger the priority value S calculated by the preset formula, the more urgent the need for the corresponding target task and the higher its importance; the processing order of the target tasks is therefore obtained by sorting the priority values S from large to small, and the larger the priority value S, the earlier the corresponding target task is processed. All the target tasks are then sorted by priority value from large to small to obtain the target sorting information corresponding to all the target tasks, and finally the processing order of each target task is generated based on all the target sorting information. In this embodiment, after all the tasks to be processed have been deduplicated to screen out the target tasks, the processing order of each target task is generated based on its acquisition time information, processing timeliness information and importance information, all the target tasks are stored into the message queue in that order, and the target tasks in the message queue are distributed to each normal server in the server cluster through the message middleware according to the processing order, so that target tasks acquired earlier, with more urgent deadlines and higher importance are processed first. This effectively improves the intelligence of target task processing and allows the subsequent task processing of each business department to proceed smoothly.
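The exact algebraic form of the preset calculation formula is not reproduced in this translation; the sketch below therefore assumes a simple weighted sum S = a·M + b·N + c·O over normalized scores, which is consistent with the stated roles of the three weights but is an assumption rather than the disclosed formula.

```python
# Illustrative sketch only: ranking target tasks by priority value.
# The weighted-sum form and the normalized score inputs are assumptions.
A_WEIGHT = 0.2  # first preset weight  (acquisition time information)
B_WEIGHT = 0.5  # second preset weight (processing timeliness information)
C_WEIGHT = 0.3  # third preset weight  (importance information); a + b + c = 1

def priority_value(m_score, n_score, o_score):
    # Assumed form S = a*M + b*N + c*O, where larger scores stand for earlier
    # acquisition, more urgent timeliness and higher importance.
    return A_WEIGHT * m_score + B_WEIGHT * n_score + C_WEIGHT * o_score

tasks = [
    {"id": "T1", "m": 0.9, "n": 0.4, "o": 0.7},
    {"id": "T2", "m": 0.5, "n": 0.9, "o": 0.8},
]
# Sort by priority value from large to small to obtain the processing order.
processing_order = sorted(
    tasks,
    key=lambda t: priority_value(t["m"], t["n"], t["o"]),
    reverse=True,
)
```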
Further, in an embodiment of the present application, after the step S50, the method includes:
s510: receiving, through the message middleware, a task processing result sent by a second designated server after the second designated server processes a received designated task; the second designated server is any one of all the normal servers, and the designated task is any one of all the target tasks received by the second designated server;
s511: if the task processing result is processing normal, acquiring a specified splicing field corresponding to the designated task in the cache tool, and adjusting a specified timeout time corresponding to the specified splicing field so that the adjusted specified timeout time is greater than the specified timeout time before adjustment;
s512: if the task processing result is processing exception, deleting the specified splicing field corresponding to the designated task from the cache tool;
s513: and placing the designated task in a preset abnormal task list.
As described in the foregoing steps S510 to S513, after the step of storing the target tasks in a preset message queue and distributing all the target tasks in the message queue to each of the normal servers in the server cluster through a message middleware, the method includes: firstly, a task processing result sent by a second designated server after it processes a received designated task is received through the message middleware. The second designated server is any one of all the normal servers, and the designated task is any one of all the target tasks received by the second designated server. The task processing result may be processing normal or processing exception. If the task processing result is processing normal, the specified splicing field corresponding to the designated task in the cache tool is acquired, and the specified timeout time corresponding to the specified splicing field is adjusted so that the adjusted specified timeout time is greater than the specified timeout time before adjustment. If the task processing result is processing exception, the specified splicing field corresponding to the designated task is deleted from the cache tool, and the designated task is placed in a preset abnormal task list. Placing the designated task in the preset abnormal task list makes it possible to quickly query the abnormal tasks that have occurred from the abnormal task list, so that subsequent processing of these abnormal tasks can be arranged in time. In this embodiment, when the task processing result of the designated task is processing normal, the specified timeout time corresponding to the specified splicing field is intelligently adjusted, so that the timeout time of the normally processed designated task is prolonged and repeated processing of a designated task that has already been processed normally is better avoided. In addition, when the task processing result of the designated task is processing exception, the specified splicing field corresponding to the designated task is intelligently deleted from the caching tool, so that the specified splicing field corresponding to the designated task can be stored in the caching tool again later, and the designated task with the processing exception can be smoothly executed again subsequently.
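Purely as a hedged sketch of steps S510 to S513, the snippet below stands in a tiny in-memory cache for the caching tool (a production system would more likely rely on something like key expiry in Redis), extends the timeout of a normally processed task, and deletes the splicing field and records the task in an abnormal task list on a processing exception. The class name, the result strings "normal"/"exception" and the one-hour extension are assumptions, not part of the embodiment.

```python
import time


class CacheTool:
    """Minimal in-memory stand-in for the caching tool used by the embodiment."""

    def __init__(self):
        self._expiry: dict[str, float] = {}  # splicing field -> absolute expiry timestamp

    def exists(self, field: str) -> bool:
        deadline = self._expiry.get(field)
        return deadline is not None and deadline > time.time()

    def put(self, field: str, timeout_seconds: float) -> None:
        self._expiry[field] = time.time() + timeout_seconds

    def extend(self, field: str, extra_seconds: float) -> None:
        if field in self._expiry:
            self._expiry[field] += extra_seconds

    def delete(self, field: str) -> None:
        self._expiry.pop(field, None)


abnormal_task_list: list[str] = []


def on_task_result(cache: CacheTool, splicing_field: str, task_id: str, result: str) -> None:
    """React to the processing result reported through the middleware (steps S511-S513)."""
    if result == "normal":
        # Lengthen the specified timeout time so the finished task is not re-queued.
        cache.extend(splicing_field, extra_seconds=3600)
    else:  # "exception"
        # Remove the splicing field so the task can pass deduplication and run again later,
        # and record it in the abnormal task list for follow-up.
        cache.delete(splicing_field)
        abnormal_task_list.append(task_id)


if __name__ == "__main__":
    cache = CacheTool()
    cache.put("dept=ops|payload=report", timeout_seconds=600)
    on_task_result(cache, "dept=ops|payload=report", "t1", "normal")
    on_task_result(cache, "dept=ops|payload=report", "t2", "exception")
    print(abnormal_task_list)  # ['t2']
```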
The task processing method in the embodiment of the present application may also be applied to the blockchain field, for example, data such as the target tasks may be stored on a blockchain. By storing and managing the target tasks with a blockchain, the security and tamper resistance of the target tasks can be effectively ensured.
A blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated with one another by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The blockchain underlying platform may include processing modules such as user management, basic service, intelligent contract and operation monitoring. The user management module is responsible for identity information management of all blockchain participants, including maintenance of public and private key generation (account management), key management, and maintenance of the correspondence between a user's real identity and the blockchain address (authority management); with authorization, it supervises and audits the transaction conditions of certain real identities and provides rule configuration for risk control (risk-control audit). The basic service module is deployed on all blockchain node devices and used for verifying the validity of service requests and recording valid requests to storage after consensus is completed; for a new service request, the basic service first performs interface adaptation, parsing and authentication processing (interface adaptation), then encrypts the service information through a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger after encryption (network communication), and records and stores it. The intelligent contract module is responsible for contract registration and issuance, contract triggering and contract execution; developers can define contract logic through a certain programming language and issue it to the blockchain (contract registration), and, according to the logic of the contract clauses, a contract can be triggered by a key or by other events and executed to complete the contract logic, while the module also provides functions for upgrading and cancelling contracts. The operation monitoring module is mainly responsible for deployment, configuration modification, contract setting and cloud adaptation during product release, and for visual output of real-time states during product operation, such as alarms, monitoring of network conditions, and monitoring of node device health status.
Referring to fig. 2, an embodiment of the present application further provides a task processing apparatus, including:
the first determining module 1 is used for performing exception analysis on all servers contained in a preset server cluster to determine an exception server in the server cluster, and marking the servers except the exception server in the server cluster as normal servers; wherein the number of the servers comprises a plurality;
the second determining module 2 is used for determining a target server from all the normal servers;
the acquisition module 3 is used for acquiring the tasks to be processed from a preset data source through the target server; wherein the number of the tasks to be processed comprises a plurality of tasks;
the first processing module 4 is configured to perform deduplication processing on all the to-be-processed tasks based on a preset caching tool to obtain processed target tasks;
and the second processing module 5 is configured to store the target task into a preset message queue, and distribute all the target tasks in the message queue to each of the normal servers in the server cluster through a message middleware, so as to process the received target task through each of the normal servers.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the task processing method in the foregoing embodiment one to one, and are not described herein again.
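To show how the five modules of fig. 2 fit together, here is a hedged sketch that wires them as injected callables in their calling order; the class name, the parameter names and the trivial lambdas in the demo are invented for illustration only and merely stand in for the real module logic described above.

```python
from typing import Callable


class TaskProcessingApparatus:
    """Composes the five modules of fig. 2; each module is injected as a callable,
    since the embodiment fixes only the interfaces, not the concrete code."""

    def __init__(
        self,
        first_determining: Callable[[list[str]], list[str]],    # servers -> normal servers
        second_determining: Callable[[list[str]], str],         # normal servers -> target server
        acquisition: Callable[[str], list[dict]],                # target server -> tasks to be processed
        first_processing: Callable[[list[dict]], list[dict]],   # tasks -> deduplicated target tasks
        second_processing: Callable[[list[dict], list[str]], dict],  # target tasks + normal servers -> assignment
    ):
        self.first_determining = first_determining
        self.second_determining = second_determining
        self.acquisition = acquisition
        self.first_processing = first_processing
        self.second_processing = second_processing

    def run(self, cluster: list[str]) -> dict:
        normal = self.first_determining(cluster)
        target_server = self.second_determining(normal)
        pending = self.acquisition(target_server)
        target_tasks = self.first_processing(pending)
        return self.second_processing(target_tasks, normal)


if __name__ == "__main__":
    apparatus = TaskProcessingApparatus(
        first_determining=lambda servers: [s for s in servers if s != "srv-3"],  # pretend srv-3 is abnormal
        second_determining=lambda normal: normal[0],
        acquisition=lambda target: [{"id": "t1"}, {"id": "t1"}, {"id": "t2"}],
        first_processing=lambda tasks: list({t["id"]: t for t in tasks}.values()),
        second_processing=lambda tasks, normal: {s: [] for s in normal} | {"srv-1": tasks},
    )
    print(apparatus.run(["srv-1", "srv-2", "srv-3"]))
```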
Further, in an embodiment of the present application, the first determining module 1 includes:
the first acquisition unit is used for acquiring the operation parameters of each server and acquiring a preset parameter threshold; the preset parameter threshold is a comprehensive threshold corresponding to all the servers;
the first screening unit is used for respectively carrying out numerical comparison processing on the operation parameters of each server and the preset parameter threshold value, screening out a first server of which the operation parameters meet the preset parameter threshold value from all the servers, and marking the servers except the first server as a second server;
the first generating unit is used for acquiring the operation parameters of the second servers and generating the operation abnormal probability value of each second server based on the operation parameters of each second server and a preset abnormality recognition model;
the second acquisition unit is used for acquiring a preset probability threshold; the preset probability threshold is a comprehensive threshold corresponding to all the servers;
the second screening unit is used for respectively carrying out numerical comparison processing on the operation abnormal probability value of each second server and the preset probability threshold value, and screening out a third server of which the operation abnormal probability value is smaller than the preset probability threshold value from all the second servers;
a determining unit, configured to use the first server and the third server as the abnormal server.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the task processing method in the foregoing embodiment one to one, and are not described herein again.
Further, in an embodiment of the present application, the first generating unit includes:
the first calculating subunit is used for calculating the average value of the operating parameters of all the second servers; and the number of the first and second groups,
the second calculating subunit is used for calculating the variance of the operating parameters of all the second servers;
the first acquisition subunit is used for acquiring the specified operation parameters of the first designated server; the first designated server is any one of all the second servers;
and the first generation subunit is used for inputting the average value, the variance and the specified operation parameters into the abnormality recognition model, and generating an operation abnormality probability value corresponding to the first specified server through the abnormality recognition model.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the task processing method in the foregoing embodiment one to one, and are not described herein again.
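The embodiment does not disclose the internal form of the abnormality recognition model, only that it consumes the average value, the variance and the specified operation parameter of a second server. The sketch below is one plausible stand-in that scores a server by how far its parameter deviates from the cluster mean, measured in standard deviations and mapped into [0, 1) with a Gaussian-style function; the CPU-load example values and the scoring function itself are assumptions, not the patented model.

```python
import math


def mean(values: list[float]) -> float:
    return sum(values) / len(values)


def variance(values: list[float]) -> float:
    m = mean(values)
    return sum((v - m) ** 2 for v in values) / len(values)


def anomaly_probability(avg: float, var: float, specified_parameter: float) -> float:
    """Map the deviation of one second server's parameter from the cluster mean
    into a value in [0, 1): the further from the mean, the closer to 1."""
    std = math.sqrt(var)
    if std == 0.0:
        return 0.0
    z = abs(specified_parameter - avg) / std
    # Probability mass of a standard normal lying within |z| of the mean.
    return math.erf(z / math.sqrt(2.0))


if __name__ == "__main__":
    cpu_loads = [0.42, 0.45, 0.40, 0.95]  # operating parameters of the second servers (illustrative)
    avg, var = mean(cpu_loads), variance(cpu_loads)
    for load in cpu_loads:
        print(load, round(anomaly_probability(avg, var, load), 3))
```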
Further, in an embodiment of the present application, the first processing module 4 includes:
the sequencing unit is used for sequencing all the tasks to be processed according to the sequence of the acquisition time of each task to be processed from front to back to obtain a corresponding sequencing result;
the extraction unit is used for acquiring a first task of the 1 st bit in the sequencing result and extracting a first task field contained in the first task;
the first processing unit is used for splicing all the first task fields to obtain corresponding first spliced fields, storing the first spliced fields into the cache tool and setting first timeout time for the first spliced fields;
the judging unit is used for acquiring the second task of the ith order in the ordering result, splicing all second task fields contained in the second task to obtain corresponding second spliced fields, and judging whether a first designated field which is the same as the second spliced field exists in the cache tool or not; wherein i is an integer and the initial value of i is 2;
the second processing unit is used for storing the second spliced field into the cache tool if the first specified field does not exist, and setting a second timeout time for the second spliced field, otherwise, the second spliced field is not stored;
a third obtaining unit, configured to make i equal to i + 1, and repeatedly execute the judging unit to the second processing unit until i is equal to n, so as to complete processing of all tasks contained in the sequencing result and acquire all target splicing fields stored in the cache tool; wherein n is the number of all the tasks to be processed;
and the third screening unit is used for screening out the tasks corresponding to the target splicing field from all the tasks to be processed to obtain the target tasks.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the task processing method in the foregoing embodiment one to one, and are not described herein again.
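A minimal sketch of the deduplication flow carried out by the units above, assuming an in-memory dictionary with expiry timestamps in place of the caching tool and a "|"-joined string of task fields as the splicing field; the field layout, the separator and the 600-second timeout are illustrative assumptions only.

```python
import time
from typing import Any


def splice_fields(task: dict[str, Any]) -> str:
    """Join the task fields into one splicing field used as the deduplication key."""
    return "|".join(str(task[k]) for k in sorted(task) if k != "acquired_at")


def deduplicate(tasks: list[dict[str, Any]], timeout_seconds: float = 600.0) -> list[dict[str, Any]]:
    """Sort by acquisition time, keep the first occurrence of each splicing field,
    and drop later duplicates whose splicing field is still cached."""
    cache: dict[str, float] = {}          # splicing field -> expiry timestamp
    targets: list[dict[str, Any]] = []
    now = time.time()
    for task in sorted(tasks, key=lambda t: t["acquired_at"]):
        field = splice_fields(task)
        expiry = cache.get(field)
        if expiry is not None and expiry > now:
            continue                      # an identical splicing field is already cached
        cache[field] = now + timeout_seconds
        targets.append(task)
    return targets


if __name__ == "__main__":
    pending = [
        {"acquired_at": 1.0, "dept": "ops", "payload": "report"},
        {"acquired_at": 2.0, "dept": "ops", "payload": "report"},   # duplicate of the first task
        {"acquired_at": 3.0, "dept": "risk", "payload": "audit"},
    ]
    print([t["payload"] for t in deduplicate(pending)])  # ['report', 'audit']
```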
Further, in an embodiment of the present application, the second processing module 5 includes:
the fourth acquisition unit is used for acquiring the acquisition time information, the processing timeliness information and the importance information of each target task;
the second generation unit is used for generating a processing sequence of each target task based on the acquisition time information, the processing timeliness information and the importance information;
the storage unit is used for storing all the target tasks into the message queue according to the processing sequence;
and the distribution unit is used for distributing all the target tasks in the message queue to each normal server in the server cluster according to the processing sequence through the message middleware.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the task processing method in the foregoing embodiment one to one, and are not described herein again.
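The embodiment names a message queue and a message middleware but fixes neither the product nor the distribution strategy. As an assumption-laden sketch of the storage unit and the distribution unit, the snippet below uses a plain FIFO queue and hands the queued tasks to the normal servers round-robin; any strategy that reaches every normal server would equally fit the description.

```python
from collections import deque
from itertools import cycle


def enqueue_in_order(ordered_tasks: list[str]) -> deque[str]:
    """Store the target tasks into the message queue in their processing order."""
    return deque(ordered_tasks)


def distribute(queue: deque[str], normal_servers: list[str]) -> dict[str, list[str]]:
    """Hand the queued tasks to the normal servers; round-robin is an assumption,
    the embodiment only requires that every normal server receives target tasks."""
    assignment: dict[str, list[str]] = {s: [] for s in normal_servers}
    servers = cycle(normal_servers)
    while queue:
        assignment[next(servers)].append(queue.popleft())
    return assignment


if __name__ == "__main__":
    q = enqueue_in_order(["t2", "t3", "t1"])          # already sorted by priority value
    print(distribute(q, ["server-a", "server-b"]))
```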
Further, in an embodiment of the application, the second generating unit includes:
the second acquiring subunit is configured to acquire a first preset weight corresponding to the acquisition time information, a second preset weight corresponding to the processing timeliness information, and a third preset weight corresponding to the importance information;
the third calculation subunit is configured to calculate, based on the acquisition time information, the processing timeliness information, the importance information, the first preset weight, the second preset weight and the third preset weight, a priority value of each of the target tasks through a preset calculation formula;
the sequencing subunit is used for sequencing all the target tasks according to the sequence of the priority values from large to small to obtain target sequencing information corresponding to all the target tasks;
and the second generation subunit is used for generating the processing sequence of each target task based on all the target sorting information.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the task processing method in the foregoing embodiment one to one, and are not described herein again.
Further, in an embodiment of the present application, the task processing device includes:
the receiving module is used for receiving, through the message middleware, a task processing result sent by the second designated server after the second designated server processes a received designated task; the second designated server is any one of all the normal servers, and the designated task is any one of all the target tasks received by the second designated server;
the adjusting module is used for, if the task processing result is processing normal, acquiring the specified splicing field corresponding to the designated task in the cache tool and adjusting the specified timeout time corresponding to the specified splicing field so that the adjusted specified timeout time is greater than the specified timeout time before adjustment;
the deleting module is used for deleting the specified splicing field corresponding to the designated task from the cache tool if the task processing result is processing exception;
and the placement module is used for placing the designated task in a preset abnormal task list.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the task processing method in the foregoing embodiment one to one, and are not described herein again.
Referring to fig. 3, an embodiment of the present application further provides a computer device, which may be a server and whose internal structure may be as shown in fig. 3. The computer device comprises a processor, a memory, a network interface, a display screen, an input device and a database which are connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device comprises a storage medium and an internal memory; the storage medium stores an operating system, a computer program and a database, and the internal memory provides an environment for the operating system and the computer program in the storage medium to run. The database of the computer device is used for storing the tasks to be processed and the target tasks. The network interface of the computer device is used for communicating with an external terminal through a network connection. The display screen of the computer device is an image-and-text output device used for converting digital signals into optical signals so that characters and figures can be displayed on its screen. The input device of the computer device is the main device for information exchange between the computer and a user or other equipment, and is used for transmitting data, instructions, certain flag information and the like to the computer. The computer program is executed by the processor to implement a task processing method.
The processor executes the steps of the task processing method:
performing anomaly analysis on all servers contained in a preset server cluster to determine an abnormal server in the server cluster, and marking the servers except the abnormal server in the server cluster as normal servers; wherein the number of the servers comprises a plurality;
determining a target server from all the normal servers;
acquiring a task to be processed from a preset data source through the target server; wherein the number of the tasks to be processed comprises a plurality of tasks;
performing duplicate removal processing on all the tasks to be processed based on a preset cache tool to obtain processed target tasks;
and storing the target tasks into a preset message queue, distributing all the target tasks in the message queue to each normal server in the server cluster through a message middleware, and processing the received target tasks through each normal server.
Those skilled in the art will appreciate that the structure shown in fig. 3 is only a block diagram of a part of the structure related to the present application, and does not constitute a limitation to the apparatus and the computer device to which the present application is applied.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where when the computer program is executed by a processor, the computer program implements a task processing method, and specifically:
performing anomaly analysis on all servers contained in a preset server cluster to determine an abnormal server in the server cluster, and marking the servers except the abnormal server in the server cluster as normal servers; wherein the number of the servers comprises a plurality;
determining a target server from all the normal servers;
acquiring a task to be processed from a preset data source through the target server; wherein the number of the tasks to be processed comprises a plurality of tasks;
performing duplicate removal processing on all the tasks to be processed based on a preset cache tool to obtain processed target tasks;
and storing the target tasks into a preset message queue, distributing all the target tasks in the message queue to each normal server in the server cluster through a message middleware, and processing the received target tasks through each normal server.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. Any reference to memory, storage, database, or other medium provided herein and used in the examples may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double-rate SDRAM (SSRSDRAM), Enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A task processing method, comprising:
performing anomaly analysis on all servers contained in a preset server cluster to determine an abnormal server in the server cluster, and marking the servers except the abnormal server in the server cluster as normal servers; wherein the number of the servers comprises a plurality;
determining a target server from all the normal servers;
acquiring a task to be processed from a preset data source through the target server; wherein the number of the tasks to be processed comprises a plurality of tasks;
performing duplicate removal processing on all the tasks to be processed based on a preset cache tool to obtain processed target tasks;
and storing the target tasks into a preset message queue, distributing all the target tasks in the message queue to each normal server in the server cluster through a message middleware, and processing the received target tasks through each normal server.
2. The task processing method according to claim 1, wherein the step of performing anomaly analysis on all servers included in a preset server cluster to determine an anomalous server in the server cluster includes:
acquiring operation parameters of each server and acquiring a preset parameter threshold; the preset parameter threshold is a comprehensive threshold corresponding to all the servers;
respectively carrying out numerical comparison processing on the operation parameter of each server and the preset parameter threshold value, screening out a first server of which the operation parameter meets the preset parameter threshold value from all the servers, and marking the servers except the first server as a second server;
acquiring operation parameters of the second servers, and generating operation anomaly probability values of the second servers based on the operation parameters of the second servers and a preset anomaly identification model;
acquiring a preset probability threshold; the preset probability threshold is a comprehensive threshold corresponding to all the servers;
respectively carrying out numerical comparison processing on the operation abnormal probability value of each second server and the preset probability threshold value, and screening out a third server of which the operation abnormal probability value is smaller than the preset probability threshold value from all the second servers;
and taking the first server and the third server as the abnormal servers.
3. The task processing method according to claim 2, wherein the step of obtaining the operation parameters of each of the second servers and generating the operation abnormality probability value of each of the second servers based on the operation parameters of each of the second servers and a preset abnormality recognition model includes:
calculating an average value of the operating parameters of all the second servers; and the number of the first and second groups,
calculating the variance of the operating parameters of all the second servers;
acquiring specified operation parameters of a first designated server; the first designated server is any one of all the second servers;
and inputting the average value, the variance and the specified operation parameters into the abnormality recognition model, and generating an operation abnormality probability value corresponding to the first specified server through the abnormality recognition model.
4. The task processing method according to claim 1, wherein the step of performing deduplication processing on all the to-be-processed tasks based on a preset caching tool to obtain processed target tasks includes:
step A: sequencing all the tasks to be processed according to the sequence of the acquisition time of each task to be processed from front to back to obtain a corresponding sequencing result;
and B: acquiring a first task of a 1 st order in the ordering result, and extracting a first task field contained in the first task;
and C: splicing all the first task fields to obtain corresponding first spliced fields, storing the first spliced fields into the cache tool, and setting first timeout time for the first spliced fields;
step D: acquiring a second task of the ith order in the ordering result, splicing all second task fields contained in the second task to obtain corresponding second spliced fields, and judging whether a first designated field which is the same as the second spliced fields exists in the cache tool or not; wherein i is an integer and the initial value of i is 2;
step E: if the first designated field does not exist, storing the second spliced field into the cache tool, and setting a second timeout time for the second spliced field, otherwise, not storing the second spliced field;
step F: repeating the step D-step E until i is equal to n so as to complete the processing of all tasks contained in the sequencing result and acquire all target splicing fields stored in the cache tool; wherein n is the number of all the tasks to be processed;
step G: and screening out the tasks corresponding to the target splicing field from all the tasks to be processed to obtain the target task.
5. The task processing method according to claim 1, wherein the step of storing the target task in a preset message queue, and distributing all the target tasks in the message queue to each of the normal servers in the server cluster through a message middleware comprises:
acquiring acquisition time information, processing timeliness information and importance information of each target task;
generating a processing sequence of each target task based on the acquisition time information, the processing timeliness information and the importance information;
storing all the target tasks into the message queue according to the processing sequence;
distributing all the target tasks in the message queue to each normal server in the server cluster according to the processing sequence through the message middleware.
6. The task processing method according to claim 5, wherein the step of generating the processing order of each of the target tasks based on the acquisition time information, the processing aging information, and the importance information includes:
acquiring a first preset weight corresponding to the acquisition time information, a second preset weight corresponding to the processing timeliness information and a third preset weight corresponding to the importance information;
calculating a priority value of each target task through a preset calculation formula based on the acquisition time information, the processing timeliness information, the importance information, the first preset weight, the second preset weight and the third preset weight;
sequencing all the target tasks according to the sequence of the priority values from large to small to obtain target sequencing information corresponding to all the target tasks;
and generating a processing sequence of each target task based on all the target sequencing information.
7. The task processing method according to claim 1, wherein the step of storing the target task in a preset message queue, and distributing all the target tasks in the message queue to each of the normal servers in the server cluster through a message middleware is followed by:
receiving, through the message middleware, a task processing result sent by a second designated server after the second designated server processes a received designated task; the second designated server is any one of all the normal servers, and the designated task is any one of all the target tasks received by the second designated server;
if the task processing result is processing normal, acquiring a specified splicing field corresponding to the designated task in the cache tool, and adjusting a specified timeout time corresponding to the specified splicing field so that the adjusted specified timeout time is greater than the specified timeout time before adjustment;
if the task processing result is processing exception, deleting the specified splicing field corresponding to the designated task from the cache tool;
and placing the designated task in a preset abnormal task list.
8. A task processing apparatus, comprising:
the first determining module is used for performing exception analysis on all servers contained in a preset server cluster to determine an exception server in the server cluster, and marking the servers except the exception server in the server cluster as normal servers; wherein the number of the servers comprises a plurality;
the second determining module is used for determining a target server from all the normal servers;
the acquisition module is used for acquiring the tasks to be processed from a preset data source through the target server; wherein the number of the tasks to be processed comprises a plurality of tasks;
the first processing module is used for performing duplicate removal processing on all the tasks to be processed based on a preset cache tool to obtain processed target tasks;
and the second processing module is used for storing the target tasks into a preset message queue, distributing all the target tasks in the message queue to each normal server in the server cluster through a message middleware, and processing the received target tasks through each normal server.
9. A computer device comprising a memory and a processor, the memory having stored therein a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202111535039.XA 2021-12-15 2021-12-15 Task processing method and device, computer equipment and storage medium Pending CN114237886A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111535039.XA CN114237886A (en) 2021-12-15 2021-12-15 Task processing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111535039.XA CN114237886A (en) 2021-12-15 2021-12-15 Task processing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114237886A true CN114237886A (en) 2022-03-25

Family

ID=80756383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111535039.XA Pending CN114237886A (en) 2021-12-15 2021-12-15 Task processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114237886A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827157A (en) * 2022-04-12 2022-07-29 北京云思智学科技有限公司 Cluster task processing method, device and system, electronic equipment and readable medium
CN115225636A (en) * 2022-07-12 2022-10-21 深圳壹账通智能科技有限公司 Request processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination