CN116302450B - Batch processing method and device for tasks, computer equipment and storage medium - Google Patents

Batch processing method and device for tasks, computer equipment and storage medium

Info

Publication number
CN116302450B
CN116302450B
Authority
CN
China
Prior art keywords
task
processing
service
tasks
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310558590.9A
Other languages
Chinese (zh)
Other versions
CN116302450A (en)
Inventor
谢清泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianhai Huanrong Lianyi Information Technology Service Co Ltd
Original Assignee
Shenzhen Qianhai Huanrong Lianyi Information Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhai Huanrong Lianyi Information Technology Service Co Ltd
Priority to CN202310558590.9A
Publication of CN116302450A
Application granted
Publication of CN116302450B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The application discloses a batch processing method and device for tasks, a computer device, and a storage medium, comprising the following steps: responding to a task processing instruction, polling a queue of batch processing tasks through a domain management service, and acquiring at least one task in the queue; acquiring processing resources of a plurality of distributed work services; distributing the tasks according to the processing resources so as to distribute at least one task acquired by polling to at least one distributed work service; receiving processing feedback information of the distributed work services on the tasks; and updating the processing information of the tasks in the queue according to the processing feedback information. By configuring global task scheduling, the distributed work services are scheduled to process each task, which improves the efficiency of batch task processing.

Description

Batch processing method and device for tasks, computer equipment and storage medium
Technical Field
The present application relates to the field of task processing, and in particular, to a method and apparatus for batch processing of tasks, a computer device, and a storage medium.
Background
As application architectures continue to develop and evolve, the request pressure on applications gradually increases and the data volume keeps growing, so that a single-point application service can hardly meet performance and iteration requirements. In order to serve the massive number of users on the Internet, request tasks need to be processed in batches. However, current batch task processing usually handles single tasks one by one and lacks a global view, so the processing efficiency of batch tasks is low and cannot meet the ever-growing task demands of applications.
Disclosure of Invention
The invention aims to provide a batch processing method and device for tasks, computer equipment and a computer storage medium, so as to at least solve the problem of low task processing efficiency in the current batch task processing.
In order to solve the technical problems, the invention provides a batch processing method of tasks, comprising the following steps:
responding to a task processing instruction, polling a queue of batch processing tasks through a domain management service, and acquiring at least one task in the queue, wherein the queue of batch processing tasks is stored at a server;
acquiring processing resources of a plurality of distributed working services;
distributing the tasks according to the processing resources to distribute at least one task acquired by polling to at least one distributed work service; wherein if the processing of one task is associated with another task, then configured to distribute the associated two or more tasks to the same distributed work service;
receiving processing feedback information of the distributed work service to the task;
and updating the processing information of the tasks in the queue according to the processing feedback information.
Optionally, the responding to the task processing instruction, polling a queue of batch processing tasks through a domain management service, and before obtaining at least one task in the queue, further includes:
When any one of the distributed work services receives a task, writing the task into a queue of the batch processing task, wherein the distributed work service is configured to only write the task into the queue of the batch processing task and is configured to not read the task in the queue of the batch processing task;
when any one of the distributed work services receives a task processing instruction, the task processing instruction is sent to the domain management service so as to respond to the task processing instruction through the domain management service.
Optionally, the method further comprises:
acquiring operation information of a management service to be selected;
and judging the availability of the to-be-selected management service according to the operation information, and selecting the to-be-selected management service with the availability meeting the preset condition as the domain management service.
Optionally, before the obtaining the processing resources of the plurality of distributed working services, the method further includes:
acquiring distributed work services registered in the domain management service;
judging whether the distributed working service is online or not;
the distributed work service currently in an online state is determined to be an available distributed work service.
Optionally, the distributing the task according to the processing resource to distribute the at least one task acquired by polling to at least one distributed work service further includes:
And sending connection information to the available distributed work service so that the available distributed work service can close the automatic offline operation in response to the connection information.
Optionally, the distributing the task according to the processing resource includes:
acquiring access characters of the task;
splitting the existing data according to the access character to acquire the pointing data of the task;
distributing the pointing data and the task to the distributed work service.
Optionally, the distributing the task according to the processing resource to distribute the at least one task acquired by polling to at least one distributed work service further includes:
calculating the matching degree of the task processing resources of the distributed work service and the task;
and distributing the tasks according to the matching degree, and distributing at least one task acquired by polling to at least one distributed work service.
In order to solve the above technical problem, an embodiment of the present invention further provides a batch processing device for tasks, including:
the task response module is used for responding to a task processing instruction, polling a queue of batch processing tasks through a domain management service and acquiring at least one task in the queue, wherein the queue of the batch processing tasks is stored at a server;
The resource acquisition module is used for acquiring processing resources of a plurality of distributed work services;
the task scheduling module is used for distributing the tasks according to the processing resources so as to distribute at least one task acquired by polling to at least one distributed work service; wherein if the processing of one task is associated with another task, then configured to distribute the associated two or more tasks to the same distributed work service;
the task feedback module is used for receiving the processing feedback information of the distributed work service on the task;
and the task updating module is used for updating the processing information of the tasks in the queue according to the processing feedback information.
Optionally, the task response module is further configured to:
when any one of the distributed work services receives a task, writing the task into a queue of the batch processing task, wherein the distributed work service is configured to only write the task into the queue of the batch processing task and is configured to not read the task in the queue of the batch processing task;
when any one of the distributed work services receives a task processing instruction, the task processing instruction is sent to the domain management service so as to respond to the task processing instruction through the domain management service.
Optionally, the device further includes a service election module, configured to:
acquiring operation information of a management service to be selected;
and judging the availability of the to-be-selected management service according to the operation information, and selecting the to-be-selected management service with the availability meeting the preset condition as the domain management service.
Optionally, the resource acquisition module is further configured to:
acquiring distributed work services registered in the domain management service;
judging whether the distributed working service is online or not;
the distributed work service currently in an online state is determined to be an available distributed work service.
Optionally, the task scheduling module is further configured to:
and sending connection information to the available distributed work service so that the available distributed work service can close the automatic offline operation in response to the connection information.
Optionally, the task scheduling module is further configured to:
acquiring access characters of the task;
splitting the existing data according to the access character to acquire the pointing data of the task;
distributing the pointing data and the task to the distributed work service.
Optionally, the task scheduling module is further configured to:
Calculating the matching degree of the task processing resources of the distributed work service and the task;
and distributing the tasks according to the matching degree, and distributing at least one task acquired by polling to at least one distributed work service.
In order to solve the above technical problem, an embodiment of the present invention further provides a computer device, including a memory and a processor, where the memory stores computer readable instructions, and when the computer readable instructions are executed by the processor, the processor is caused to execute the steps of the batch processing method of the task.
To solve the above technical problem, an embodiment of the present invention further provides a storage medium storing computer readable instructions, where the computer readable instructions when executed by one or more processors cause the one or more processors to perform the steps of the batch processing method for tasks described above.
The beneficial effects of the invention are: the method comprises the steps of responding to a task processing instruction, polling a queue of batch processing tasks through a domain management service, and acquiring at least one task in the queue; acquiring processing resources of a plurality of distributed working services; distributing the tasks according to the processing resources to distribute at least one task acquired by polling to at least one distributed work service; receiving processing feedback information of the distributed work service to the task; and updating the processing information of the tasks in the queue according to the processing feedback information, polling the queue of batch processing tasks through the configuration domain management service, distributing the tasks contained in the queue to the work service distributed in a distributed mode, distributing and processing a plurality of tasks in the queue of batch processing tasks from the global, and processing corresponding tasks through scheduling a plurality of distributed work services, so that the resource utilization rate of each distributed work service is improved, and the processing efficiency of the tasks is improved.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flow diagram of a method for batch processing tasks according to an embodiment of the application;
FIG. 2 is a schematic diagram of the basic structure of a batch processing apparatus for tasks according to an embodiment of the application;
FIG. 3 is a block diagram showing the basic structure of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by those skilled in the art, a "terminal" as used herein includes both devices of a wireless signal receiver that have only wireless signal receivers without transmitting capabilities and devices of receiving and transmitting hardware that have devices capable of performing two-way communications over a two-way communications link. Such a device may include: a cellular or other communication device having a single-line display or a multi-line display or a cellular or other communication device without a multi-line display; a PCS (Personal Communications Service, personal communication system) that may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant ) that can include a radio frequency receiver, pager, internet/intranet access, web browser, notepad, calendar and/or GPS (Global Positioning System ) receiver; a conventional laptop and/or palmtop computer or other appliance that has and/or includes a radio frequency receiver. As used herein, a "terminal" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion, to operate at any other location(s) on earth and/or in space. The "terminal" used herein may also be a communication terminal, a network access terminal, a music/video playing terminal, for example, a PDA, a MID (Mobile Internet Device ) and/or a mobile phone with music/video playing function, and may also be a smart tv, a set-top box, etc.
The application refers to hardware such as a server, a client, a service node, and the like, which essentially is an electronic device with personal computer and other functions, and is a hardware device with necessary components disclosed by von neumann principles such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, an output device, and the like, wherein a computer program is stored in the memory, and the central processing unit calls the program stored in the memory to run, executes instructions in the program, and interacts with the input and output devices, thereby completing specific functions.
It should be noted that the concept of the present application, called "server", is equally applicable to the case of server clusters. The servers should be logically partitioned, physically separate from each other but interface-callable, or integrated into a physical computer or group of computers, according to network deployment principles understood by those skilled in the art. Those skilled in the art will appreciate this variation and should not be construed as limiting the implementation of the network deployment approach of the present application.
One or more technical features of the present application, unless specified in the clear, may be deployed either on a server for implementation and the client remotely invokes an online service interface provided by the acquisition server for implementation of the access, or may be deployed and run directly on the client for implementation of the access.
The neural network model cited or possibly cited in the application can be deployed on a remote server and can be used for implementing remote call on a client, or can be deployed on a client with sufficient equipment capability for direct call, unless specified by plaintext, and in some embodiments, when the neural network model runs on the client, the corresponding intelligence can be obtained through migration learning so as to reduce the requirement on the running resources of the hardware of the client and avoid excessively occupying the running resources of the hardware of the client.
The various data related to the present application, unless specified in the plain text, may be stored either remotely in a server or in a local terminal device, as long as it is suitable for being invoked by the technical solution of the present application.
Those skilled in the art will appreciate that: although the various methods of the present application are described based on the same concepts so as to be common to each other, the methods may be performed independently of each other unless specifically indicated otherwise. Similarly, for the various embodiments disclosed herein, all concepts described herein are presented based on the same general inventive concept, and thus, concepts described herein with respect to the same general inventive concept, and concepts that are merely convenient and appropriately modified, although different, should be interpreted as equivalents.
The various embodiments of the present application to be disclosed herein, unless the plain text indicates a mutually exclusive relationship with each other, the technical features related to the various embodiments may be cross-combined to flexibly construct a new embodiment as long as such combination does not depart from the inventive spirit of the present application and can satisfy the needs in the art or solve the deficiencies in the prior art. This variant will be known to the person skilled in the art.
Referring to fig. 1, fig. 1 is a basic flow chart of a batch processing method of tasks according to the present embodiment.
As shown in fig. 1, the method includes:
s1100, responding to a task processing instruction, polling a queue of batch processing tasks through a domain management service, and acquiring at least one task in the queue;
in this embodiment, a service model for task processing is configured for an application. In an actual application scenario, the service model is deployed at the server side to respond to requests received by the application, and is also deployed at the application side to receive requests from the application side, so that the batch tasks received by the application are processed through the service model deployed at both the server side and the application side. The task processing logic of the application is to store tasks in a task queue and to extract one or more tasks from the task queue for processing. When a task processing instruction is received, the queue of batch processing tasks is polled through the domain management service in response to the task processing instruction, and at least one task in the queue is acquired. The domain management service is a service for task scheduling and task distribution; after the task processing instruction is received, the elected domain management service polls the task queue to acquire one or more tasks in it, wherein the task queue is defined as the queue of batch processing tasks.
It should be noted that, the domain management service of this embodiment is deployed at a server, where the task processing instruction may be generated by an application end receiving a task processing request, and then the application sends the task processing instruction generated by the received task processing request to the server, and the server responds to the task processing instruction, polls a queue of batch processing tasks through the domain management service that is distributed, and obtains at least one task in the queue.
It should be noted that the domain management service and the working service described below are distributed, and the domain management service and the working service configure different resources, and can each complete corresponding instruction responses, where the domain management service is configured to distribute and schedule tasks, and the working service is configured to respond and process the tasks.
It should be noted that, the queue of the batch processing task is stored in the server, and only the domain management service can call the corresponding polling command interface to poll the queue of the batch processing task to obtain at least one task in the queue.
It should be noted that the task processing instructions may be manually triggered, timed task triggered, or otherwise triggered.
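As an illustrative sketch only (the names below, such as DomainManagementService, BatchTaskQueue and Task, are hypothetical and not part of the original disclosure), step S1100 can be pictured as the domain management service pulling a bounded number of pending tasks from the server-side queue whenever a task processing instruction arrives:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    payload: dict
    status: str = "PENDING"   # PENDING -> DISPATCHED -> DONE / FAILED

class BatchTaskQueue:
    """Server-side queue of batch processing tasks."""
    def __init__(self):
        self._tasks = deque()

    def write(self, task: Task) -> None:
        # Distributed work services may only write tasks into the queue.
        self._tasks.append(task)

    def poll(self, limit: int = 10) -> list:
        # Only the domain management service calls this polling interface.
        polled = []
        while self._tasks and len(polled) < limit:
            polled.append(self._tasks.popleft())
        return polled

class DomainManagementService:
    def __init__(self, queue: BatchTaskQueue):
        self.queue = queue

    def on_task_processing_instruction(self) -> list:
        """S1100: respond to the instruction by polling the queue."""
        tasks = self.queue.poll()
        for task in tasks:
            task.status = "DISPATCHED"
        return tasks
```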
S1200, obtaining processing resources of a plurality of distributed work services;
after a task processing instruction is responded, a domain management service polls a queue of batch processing tasks to obtain at least one task in the queue, processing resources of a plurality of distributed work services are obtained, the work services and the domain management service are distributed, the work services are defined as distributed work services, the processing resources comprise hardware resources of the distributed work services such as CPU (Central processing Unit), GPU (graphics processing Unit), graphics card, memory and other resources in corresponding hardware, and also comprise software resources of the distributed work services, and the processing resources can characterize the availability of the distributed work services to task processing.
It should be noted that the processing resources include both fixed parts and dynamically changing parts: the hardware resources themselves, such as the CPU, GPU, graphics card and memory in the corresponding hardware, are fixed, while the current usage of the CPU, GPU, graphics card and memory in the hardware, as well as the usage of the software resources of the distributed work service, changes dynamically.
It should be noted that, the processing resources of the distributed working service may be transmitted to the server in real time or in a timed manner, so that the server can timely acquire the processing resources of the distributed working service.
It should be noted that there may be one or more distributed work services.
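The processing-resource report described in S1200 can be sketched, purely for illustration, as a small record that each distributed work service pushes to the server in real time or on a timer; the field names and the weighting in the availability score are assumptions, not part of the disclosure:

```python
import time
from dataclasses import dataclass

@dataclass
class ProcessingResources:
    """Snapshot of a distributed work service's processing resources."""
    worker_id: str
    cpu_cores: int          # fixed hardware resource
    memory_mb: int          # fixed hardware resource
    cpu_usage: float        # dynamic: fraction of CPU currently in use
    memory_usage: float     # dynamic: fraction of memory currently in use
    reported_at: float = 0.0

    def availability(self) -> float:
        """Rough score of how available this work service is for new tasks."""
        return 0.5 * (1.0 - self.cpu_usage) + 0.5 * (1.0 - self.memory_usage)

def report(worker_id: str, cpu_usage: float, memory_usage: float) -> ProcessingResources:
    # Each distributed work service sends such a report to the server,
    # either in real time or on a timer, so the server stays up to date.
    return ProcessingResources(worker_id, cpu_cores=8, memory_mb=16384,
                               cpu_usage=cpu_usage, memory_usage=memory_usage,
                               reported_at=time.time())

print(report("worker-a", cpu_usage=0.4, memory_usage=0.6).availability())  # 0.5
```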
S1300, distributing the tasks according to the processing resources so as to distribute at least one task acquired by polling to at least one distributed work service;
after processing resources of a plurality of distributed work services are acquired, the tasks are distributed according to the processing resources, so that at least one task acquired by polling is distributed to at least one distributed work service. Distributing corresponding one or more tasks to distributed work services with different processing resources, distributing the tasks to the corresponding distributed work services, and after the distributed work services receive the tasks, calling the processing resources of the distributed work services to process according to the received tasks so as to process one or more tasks contained in a queue for batch task processing.
It should be noted that, in distributing the tasks according to the processing resources to distribute at least one task acquired by polling to at least one distributed work service, in order to improve the resource utilization rate of the distributed work service and improve the processing efficiency of the task, the tasks are configured to be evenly distributed to the corresponding one or more distributed work services, so that each task can be processed in time.
It is noted that if the processing of one task is associated with another task, it is configured to distribute the associated two or more tasks to the same distributed work service.
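A minimal sketch of the distribution step S1300, assuming a simple round-robin for even spreading and a hypothetical 'group' key that marks tasks whose processing is associated (both are illustrative assumptions; the disclosure does not prescribe these names):

```python
from collections import defaultdict

def distribute(tasks, workers):
    """Spread tasks evenly over work services, keeping associated tasks
    (same 'group' key) on the same distributed work service.
    """
    assignment = defaultdict(list)          # worker id -> assigned tasks
    group_owner = {}                        # group key -> worker id
    for i, task in enumerate(tasks):
        group = task.get("group")
        if group is not None and group in group_owner:
            worker = group_owner[group]     # associated tasks share a worker
        else:
            worker = workers[i % len(workers)]  # otherwise distribute evenly
            if group is not None:
                group_owner[group] = worker
        assignment[worker].append(task)
    return dict(assignment)

# Example: t1 and t3 are associated, so both land on the same work service.
print(distribute(
    [{"task_id": "t1", "group": "g"}, {"task_id": "t2"}, {"task_id": "t3", "group": "g"}],
    ["worker-a", "worker-b"]))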
S1400, receiving processing feedback information of the distributed work service on the task;
after the tasks are distributed according to the processing resources to distribute at least one task acquired by polling to at least one distributed working service, the distributed working service calls local resources to process the tasks to obtain processing results of the tasks, and then the distributed working service feeds back the results obtained by processing the tasks to a service end, particularly to a domain management service, so that processing feedback information of the distributed working service on the tasks is received.
It should be noted that, the processing feedback information of the task includes information of successfully processing the task and failing to process the task, when the distributed work service successfully processes the assigned task, the processing result of the task is fed back to the domain management service; and when the distributed work service cannot successfully process the distributed tasks, generating processing feedback information of the task processing failure, and then sending the processing feedback information of the tasks to the domain management service.
S1500, updating the processing information of the tasks in the queue according to the processing feedback information.
After the processing feedback information of the distributed work service on the task is received, the processing information of the task in the queue is updated according to the processing feedback information, so as to update the task processing progress in the queue of batch processing tasks. If the processing feedback information indicates that the task was processed successfully, the result of the task is updated in the queue of batch processing tasks and fed back to the requester.
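Steps S1400 and S1500 can be sketched as follows; the feedback record and the queue representation are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class ProcessingFeedback:
    task_id: str
    success: bool
    result: object = None      # present when the task was processed successfully
    error: str = ""            # reason when processing failed

def update_queue(queue_state: dict, feedback: ProcessingFeedback) -> None:
    """S1400/S1500 sketch: apply a work service's feedback to the queue.

    ``queue_state`` maps task_id -> task record (a plain dict here);
    in the disclosure the queue lives at the server side.
    """
    record = queue_state[feedback.task_id]
    if feedback.success:
        record["status"] = "DONE"
        record["result"] = feedback.result   # result can then be fed back to the requester
    else:
        record["status"] = "FAILED"
        record["error"] = feedback.error     # e.g. so the task can be retried or reported

queue_state = {"t1": {"status": "DISPATCHED"}}
update_queue(queue_state, ProcessingFeedback("t1", success=True, result={"rows": 42}))
print(queue_state)
```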
In the above embodiment, at least one task in a queue of polling batch processing tasks is obtained by a domain management service in response to a task processing instruction; acquiring processing resources of a plurality of distributed working services; distributing the tasks according to the processing resources to distribute at least one task acquired by polling to at least one distributed work service; receiving processing feedback information of the distributed work service to the task; and updating the processing information of the tasks in the queue according to the processing feedback information, polling the queue of batch processing tasks through the configuration domain management service, distributing the tasks contained in the queue to the work service distributed in a distributed mode, distributing and processing a plurality of tasks in the queue of batch processing tasks from the global, and processing corresponding tasks through scheduling a plurality of distributed work services, so that the resource utilization rate of each distributed work service is improved, and the processing efficiency of the tasks is improved.
In some embodiments, S1100, in response to the task processing instruction, further includes, before the polling, by the domain management service, a queue of batch processing tasks, and obtaining at least one task in the queue:
s1101, when any one of the distributed work services receives a task, writing the task into a queue of the batch processing task;
in one embodiment, before the queue of batch processing tasks is polled through the domain management service in response to a task processing instruction and at least one task in the queue is acquired, whenever any one of the distributed work services receives a task, the task is written into the queue of batch processing tasks. The distributed work services may be distributed services deployed at the server side; any one of the distributed work services can receive a task and, after receiving it, writes the task into the queue of batch processing tasks.
It should be noted that the distributed work service is configured to only write tasks into the queue of the batch processing task, and is configured to not read the tasks in the queue of the batch processing task, so that the logic disorder of task reading is avoided, and the order of task processing is ensured.
S1102, when any one of the distributed work services receives a task processing instruction, the task processing instruction is sent to the domain management service so as to respond to the task processing instruction through the domain management service;
any one of the distributed working services can receive the task processing instruction, when any one of the distributed working services receives the task processing instruction, the distributed working service cannot read the tasks in the queue, the task processing instruction needs to be sent to the domain management service, then the domain management service responds to the task processing instruction, and the tasks are scheduled and distributed based on the domain management service, so that the order of processing each task in the queue of the batch processing tasks is ensured.
According to the method, the distributed work service is configured to write logic of the tasks and response logic of the received task processing instructions, the tasks in the queues of the batch processing tasks are uniformly scheduled and distributed by the domain management service, so that the ordering of the task processing in the queues of the batch processing tasks is guaranteed, the task processing is scheduled from the whole world, and the task processing efficiency is improved.
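A minimal sketch of the write-only behaviour described in S1101 and S1102 (class names are hypothetical): the work service appends received tasks to the queue and forwards task processing instructions to the domain management service, which is the only party that reads the queue:

```python
class WorkService:
    """A distributed work service only writes tasks into the batch queue
    and forwards task processing instructions to the domain management
    service; it never reads the queue itself."""
    def __init__(self, queue, domain_manager):
        self.queue = queue
        self.domain_manager = domain_manager

    def on_task_received(self, task) -> None:
        self.queue.append(task)            # write-only access to the queue

    def on_task_processing_instruction(self) -> None:
        # The work service cannot read the queue, so scheduling is
        # delegated to the domain management service.
        self.domain_manager.handle_instruction()

class DomainManager:
    def __init__(self, queue):
        self.queue = queue

    def handle_instruction(self) -> None:
        while self.queue:                  # only the manager reads the queue
            task = self.queue.pop(0)
            print("dispatching", task)

queue = []
manager = DomainManager(queue)
worker = WorkService(queue, manager)
worker.on_task_received({"task_id": "t1"})
worker.on_task_processing_instruction()
```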
In some embodiments, the method further comprises:
S1103, acquiring operation information of the management service to be selected;
in this embodiment, in the configuration logic of the server, the domain management service is determined by selecting a preset rule, and the selection of the domain management service is dynamic, and in the process of selecting the domain management service, operation information of the to-be-selected management service is first obtained, where the operation information includes hardware resources, hardware operation information, software resources, and software operation information.
The management service to be selected may include a distributed work service, that is, the roles of the domain management service and the distributed work service are not fixed, and the distributed work service may also be selected as the domain management service, so as to schedule and distribute the task, where the preset rule tends to improve the efficiency of scheduling and distributing the task by the domain management service.
S1104, judging the availability of the to-be-selected management service according to the operation information, and selecting the to-be-selected management service with the availability meeting the preset condition as a domain management service;
after the operation information of the to-be-selected management services is obtained, the availability of each to-be-selected management service is judged according to the operation information, and a to-be-selected management service whose availability meets the preset condition is selected as the domain management service. In one implementation, if the hardware resources contained in the operation information of a to-be-selected management service meet the condition, the current resource occupancy rate of that to-be-selected management service is obtained; when the resource occupancy rate is lower than a preset value, the operation information of the to-be-selected management service is determined to meet the high-availability condition, and the to-be-selected management service is selected as the domain management service. In another implementation, the hardware resources of the plurality of to-be-selected management services are compared and the to-be-selected management service with the most hardware resources is selected; the resource occupancy rates of the candidates are then compared, and the to-be-selected management service with the lowest resource occupancy rate is selected as the domain management service, thereby configuring a highly available domain management service.
According to the method, one or more of the plurality of to-be-selected management services are determined to serve as the domain management services in a preset rule election mode, so that the domain management services have high availability to schedule and distribute tasks, and the task scheduling and distributing efficiency is improved.
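An illustrative election sketch for S1103 and S1104, assuming the operation information is summarised by a hardware score and a resource occupancy rate (these keys and the threshold are assumptions, not part of the disclosure):

```python
def elect_domain_manager(candidates, max_occupancy=0.5):
    """Pick the domain management service from candidate services by availability.

    ``candidates`` maps service id -> operation info with hypothetical keys
    'hardware_score' (bigger is better) and 'occupancy' (0..1).
    """
    # Keep only candidates whose resource occupancy is low enough.
    available = {sid: info for sid, info in candidates.items()
                 if info["occupancy"] < max_occupancy}
    if not available:
        return None
    # Prefer the most hardware resources, then the lowest occupancy.
    return max(available,
               key=lambda sid: (available[sid]["hardware_score"],
                                -available[sid]["occupancy"]))

print(elect_domain_manager({
    "svc-a": {"hardware_score": 8, "occupancy": 0.30},
    "svc-b": {"hardware_score": 8, "occupancy": 0.10},
    "svc-c": {"hardware_score": 16, "occupancy": 0.80},   # too busy, excluded
}))  # -> svc-b
```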
In some embodiments, before S1200 obtains the processing resources of the several distributed working services, the method further includes:
s1210, acquiring a distributed work service registered in the domain management service;
before the processing resources of a plurality of distributed work services are acquired, since the number of domain management services can be one or more and the distributed work services managed by each domain management service are different, when a domain management service schedules and distributes tasks, the distributed work services registered in that domain management service are acquired; that is, the distributed work services are registered with a designated domain management service, and a domain management service can only call the work services registered with it.
S1211, judging whether the distributed work service is online;
since the distributed work services do not work online in real time, after the distributed work services registered in the domain management service are acquired, whether each distributed work service is online needs to be judged. The online state of a distributed work service can be determined through a designated flag; when the designated flag indicates the online state, the distributed work service is determined to be in the online state.
S1212, determining the distributed work service currently in the online state as an available distributed work service.
After judging whether the distributed work service is online or not, if the distributed work service registered in the domain management service is online, determining the distributed work service currently in the online state as an available distributed work service, wherein the available distributed work service can be used for scheduling and distributing subsequent tasks, so that the distributed work service distributed by each task is ensured to be in the registered online state.
According to the method and the device, the distributed work service registered to the domain management service is judged, and the online state of the distributed work service is judged, so that the domain management service can schedule and distribute tasks to the distributed work service registered to the domain management service and in the online state, the distributed work service distributed to each task can be correspondingly processed, and the success rate of task scheduling and configuration processing is improved.
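The filtering described in S1210 through S1212 can be sketched as follows; the registration table and online-flag map are illustrative stand-ins for whatever registry the server actually keeps:

```python
def available_work_services(domain_manager_id, registrations, online_flags):
    """From the work services registered with this domain management
    service, keep those currently in the online state.

    ``registrations`` maps worker id -> the domain manager id it registered
    with; ``online_flags`` maps worker id -> bool. Names are illustrative.
    """
    registered = [wid for wid, mid in registrations.items()
                  if mid == domain_manager_id]
    return [wid for wid in registered if online_flags.get(wid, False)]

print(available_work_services(
    "dm-1",
    {"w1": "dm-1", "w2": "dm-1", "w3": "dm-2"},
    {"w1": True, "w2": False}))   # -> ['w1']
```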
In some embodiments, the step of distributing the task according to the processing resource in order to distribute the at least one task acquired by polling to at least one distributed work service further includes:
S1311, sending connection information to the available distributed working service, so that the available distributed working service closes an automatic offline operation in response to the connection information.
And distributing the tasks according to the processing resources so as to distribute at least one task acquired by polling to at least one distributed work service, and sending connection information to the available distributed work service; the domain management service sends the connection information to the distributed work service so as to keep the connection between the domain management service and the distributed work service, so that the available distributed work service can close the automatic offline operation in response to the connection information.
It may be pointed out that when the distributed work service does not receive the connection information, it may automatically go offline after a preset period, so as to save the resource consumption of the distributed work service.
According to the method, the connection information is sent to the available distributed working service, so that the connection between the domain management service and the distributed working service is maintained, the available distributed working service responds to the connection information to close the automatic offline operation, and the success rate of task processing is improved.
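A sketch of S1311, assuming the automatic offline operation is a simple idle timeout that connection information from the domain management service resets (the timeout value and method names are assumptions):

```python
import time

class WorkServiceConnection:
    """A work service goes offline automatically when it has not received
    connection information for ``idle_limit`` seconds; receiving connection
    information from the domain management service cancels that countdown."""
    def __init__(self, idle_limit: float = 30.0):
        self.idle_limit = idle_limit
        self.last_connection = time.monotonic()
        self.online = True

    def on_connection_info(self) -> None:
        # Connection information keeps the link alive, closing the
        # automatic offline operation for another idle period.
        self.last_connection = time.monotonic()

    def tick(self) -> None:
        # Called periodically by the work service itself.
        if self.online and time.monotonic() - self.last_connection > self.idle_limit:
            self.online = False          # auto-offline to save resources

conn = WorkServiceConnection(idle_limit=30.0)
conn.on_connection_info()                # domain management service pings the worker
conn.tick()
print(conn.online)                       # still online
```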
In some embodiments, S1300 distributes the task according to the processing resource, including:
s1321, acquiring access characters of the task;
in the process of distributing the task according to the processing resource, firstly, access characters of the task are acquired, and the access characters identify data which the task needs to access.
S1322, splitting the existing data according to the access character to acquire the pointing data of the task;
after the access characters of the task are acquired, the existing data are split according to the access characters to acquire the pointing data of the task. Since the domain management service needs to schedule and distribute tasks globally, the existing data are split in order to reduce the access pressure caused by each task accessing the database.
S1323, distributing the pointing data and the task to the distributed work service.
When the existing data are split according to the access characters and the pointing data of the task are obtained, the pointing data and the task are distributed to the distributed work service, so that the distributed work service only needs to access the corresponding pointing data, which reduces the access and query of a large number of distributed work services to the database, lowers the resource occupancy rate, and further improves the processing efficiency of the task.
According to the method, the access character of the task is obtained, the existing data are split according to the access character, the pointing data of the task are obtained, the pointing data and the task are distributed to the distributed work service, so that the distributed work service can only access the corresponding pointing data, access and inquiry of a large amount of distributed work service to a database are reduced, the occupancy rate of resources is reduced, and further the processing efficiency of the task is improved.
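A sketch of S1321 through S1323, under the assumption that the access character behaves like a key prefix identifying the data the task needs to access; only the matching slice (the pointing data) is shipped with the task:

```python
def split_pointing_data(existing_data: dict, access_character: str) -> dict:
    """Split the existing data by the access character (treated here as a
    key prefix) to obtain the task's pointing data, so that the work
    service never has to query the full database."""
    return {key: value for key, value in existing_data.items()
            if key.startswith(access_character)}

existing_data = {"ord:1001": {"amount": 10}, "ord:1002": {"amount": 7},
                 "usr:2001": {"name": "a"}}
task = {"task_id": "t1", "access_character": "ord:"}
pointing_data = split_pointing_data(existing_data, task["access_character"])
# Dispatch both the task and its pointing data to the chosen work service.
print(task["task_id"], pointing_data)
```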
In some embodiments, the step of distributing the task according to the processing resource in order to distribute the at least one task acquired by polling to at least one distributed work service further includes:
s1331, calculating the matching degree of the task processing resources of the distributed work service and the task;
when the tasks are distributed according to the processing resources so as to distribute at least one task acquired by polling to at least one distributed work service, the matching degree between the task processing resources of the distributed work services and the tasks is calculated. Since different distributed work services are good at processing different types of tasks, the matching degree between the task processing resources of each distributed work service and each task is calculated, and based on the matching degree, each task is distributed to the distributed work service most suitable for processing it.
S1332, distributing the tasks according to the matching degree, and distributing at least one task acquired by polling to at least one distributed work service.
After the matching degree between the task processing resources of the distributed work services and the tasks is calculated, the tasks are distributed according to the matching degree, and at least one task acquired by polling is distributed to at least one distributed work service, so that each distributed work service can process the tasks it is good at, or process one or more tasks according to its own resources.
According to the method, the matching degree between the task processing resources of the distributed work services and the tasks is calculated, the tasks are distributed according to the matching degree, and at least one task acquired by polling is distributed to at least one distributed work service, so that each distributed work service can process the tasks it is good at, or process one or more tasks according to its own resources, which improves the task processing efficiency.
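A sketch of the matching-based distribution in S1331 and S1332; the scoring rule combining task-type affinity and remaining capacity is an assumption for illustration only:

```python
def matching_degree(worker_resources: dict, task: dict) -> float:
    """Score how well a work service's task processing resources match a task."""
    type_score = 1.0 if task["type"] in worker_resources["good_at"] else 0.0
    capacity_score = 1.0 - worker_resources["occupancy"]
    return 0.7 * type_score + 0.3 * capacity_score

def distribute_by_matching(tasks, workers):
    """Give each task to the work service with the highest matching degree."""
    return {task["task_id"]: max(workers,
                                 key=lambda wid: matching_degree(workers[wid], task))
            for task in tasks}

workers = {"w1": {"good_at": {"io"}, "occupancy": 0.2},
           "w2": {"good_at": {"cpu"}, "occupancy": 0.5}}
tasks = [{"task_id": "t1", "type": "cpu"}, {"task_id": "t2", "type": "io"}]
print(distribute_by_matching(tasks, workers))   # {'t1': 'w2', 't2': 'w1'}
```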
Referring to fig. 2 specifically, fig. 2 is a schematic diagram of a basic structure of a batch processing apparatus for tasks in this embodiment.
As shown in fig. 2, a batch processing apparatus for a task includes: a task response module 1100, a resource acquisition module 1200, a task scheduling module 1300, a task feedback module 1400, and a task update module 1500. The task response module 1100 is configured to respond to a task processing instruction, poll a queue of batch processing tasks through a domain management service, and obtain at least one task in the queue; a resource acquisition module 1200, configured to acquire processing resources of a plurality of distributed working services; a task scheduling module 1300, configured to distribute the task according to the processing resource, so as to distribute at least one task acquired by polling to at least one distributed work service; a task feedback module 1400, configured to receive processing feedback information of the distributed work service on the task; and the task updating module 1500 is configured to update the processing information of the task in the queue according to the processing feedback information.
The task batch processing device responds to the task processing instruction and polls a queue of batch processing tasks through the domain management service to acquire at least one task in the queue; acquiring processing resources of a plurality of distributed working services; distributing the tasks according to the processing resources to distribute at least one task acquired by polling to at least one distributed work service; receiving processing feedback information of the distributed work service to the task; and updating the processing information of the tasks in the queue according to the processing feedback information, polling the queue of batch processing tasks through the configuration domain management service, distributing the tasks contained in the queue to the work service distributed in a distributed mode, distributing and processing a plurality of tasks in the queue of batch processing tasks from the global, and processing corresponding tasks through scheduling a plurality of distributed work services, so that the resource utilization rate of each distributed work service is improved, and the processing efficiency of the tasks is improved.
Optionally, the task response module is further configured to:
when any one of the distributed work services receives a task, writing the task into a queue of the batch processing task;
when any one of the distributed work services receives a task processing instruction, the task processing instruction is sent to the domain management service so as to respond to the task processing instruction through the domain management service.
Optionally, the device further includes a service election module, configured to:
acquiring operation information of a management service to be selected;
and judging the availability of the to-be-selected management service according to the operation information, and selecting the to-be-selected management service with the availability meeting the preset condition as the domain management service.
Optionally, the resource acquisition module is further configured to:
acquiring distributed work services registered in the domain management service;
judging whether the distributed working service is online or not;
the distributed work service currently in an online state is determined to be an available distributed work service.
Optionally, the task scheduling module is further configured to:
and sending connection information to the available distributed work service so that the available distributed work service can close the automatic offline operation in response to the connection information.
Optionally, the task scheduling module is further configured to:
acquiring access characters of the task;
splitting the existing data according to the access character to acquire the pointing data of the task;
distributing the pointing data and the task to the distributed work service.
Optionally, the task scheduling module is further configured to:
calculating the matching degree of the task processing resources of the distributed work service and the task;
and distributing the tasks according to the matching degree, and distributing at least one task acquired by polling to at least one distributed work service.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 3, fig. 3 is a basic structural block diagram of a computer device according to the present embodiment.
As shown in fig. 3, the internal structure of the computer device is schematically shown. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium of the computer device stores an operating system, a database and computer readable instructions; the database can store a control information sequence, and the computer readable instructions, when executed by the processor, can enable the processor to implement a batch processing method of tasks. The processor of the computer device is used to provide computing and control capabilities, supporting the operation of the entire computer device. The memory of the computer device may have stored therein computer readable instructions that, when executed by the processor, cause the processor to perform a batch processing method of a task. The network interface of the computer device is used for communicating with a connected terminal. It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of some of the structures associated with the present inventive arrangements and does not limit the computer device to which the present inventive arrangements may be applied; a particular computer device may include more or fewer components than shown, or combine some of the components, or have a different arrangement of components.
The processor in this embodiment is configured to execute specific functions of the task response module 1100, the resource acquisition module 1200, the task scheduling module 1300, the task feedback module 1400, and the task update module 1500 in fig. 2, and the memory stores program codes and various types of data required for executing the foregoing modules. The network interface is used for data transmission between the user terminal or the server. The memory in this embodiment stores program codes and data required for executing all the sub-modules in the batch processing device for tasks, and the server can call the program codes and data of the server to execute the functions of all the sub-modules.
The method comprises the steps that a computer device obtains at least one task in a queue of polling batch processing tasks through domain management service in response to task processing instructions; acquiring processing resources of a plurality of distributed working services; distributing the tasks according to the processing resources to distribute at least one task acquired by polling to at least one distributed work service; receiving processing feedback information of the distributed work service to the task; and updating the processing information of the tasks in the queue according to the processing feedback information, polling the queue of batch processing tasks through the configuration domain management service, distributing the tasks contained in the queue to the work service distributed in a distributed mode, distributing and processing a plurality of tasks in the queue of batch processing tasks from the global, and processing corresponding tasks through scheduling a plurality of distributed work services, so that the resource utilization rate of each distributed work service is improved, and the processing efficiency of the tasks is improved.
The application also provides a storage medium storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of a batch processing method of any of the embodiment tasks described above.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored in a computer-readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a random access Memory (Random Access Memory, RAM).
Those of skill in the art will appreciate that the various operations, methods, steps in the flow, acts, schemes, and alternatives discussed in the present application may be alternated, altered, combined, or eliminated. Further, other steps, means, or steps in a process having various operations, methods, or procedures discussed herein may be alternated, altered, rearranged, disassembled, combined, or eliminated. Further, steps, measures, schemes in the prior art with various operations, methods, flows disclosed in the present application may also be alternated, altered, rearranged, decomposed, combined, or deleted.
The foregoing is only a partial embodiment of the present application, and it should be noted that it will be apparent to those skilled in the art that modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.

Claims (9)

1. A method for batch processing of tasks, comprising:
responding to a task processing instruction, polling a queue of batch processing tasks through a domain management service, and acquiring at least one task in the queue, wherein the queue of batch processing tasks is stored on a server;
acquiring processing resources of a plurality of distributed work services;
distributing the tasks according to the processing resources so as to distribute the at least one task acquired by polling to at least one distributed work service, wherein, if the processing of one task is associated with another task, the two or more associated tasks are distributed to the same distributed work service;
receiving processing feedback information of the distributed work services on the tasks;
updating processing information of the tasks in the queue according to the processing feedback information;
wherein, before the responding to the task processing instruction, polling the queue of batch processing tasks through the domain management service, and acquiring at least one task in the queue, the method further comprises:
when any one of the distributed work services receives a task, writing the task into the queue of batch processing tasks, wherein the distributed work services are configured only to write tasks into the queue of batch processing tasks and not to read tasks from the queue of batch processing tasks;
when any one of the distributed work services receives a task processing instruction, sending the task processing instruction to the domain management service so as to respond to the task processing instruction through the domain management service.
2. The method for batch processing of tasks according to claim 1, characterized in that the method further comprises:
acquiring operation information of a candidate management service;
and judging the availability of the candidate management service according to the operation information, and selecting a candidate management service whose availability meets a preset condition as the domain management service.
3. The method for batch processing of tasks according to claim 1, further comprising, before the acquiring of the processing resources of the plurality of distributed work services:
acquiring the distributed work services registered in the domain management service;
judging whether each of the distributed work services is online;
and determining the distributed work services currently in an online state as available distributed work services.
4. The method for batch processing of tasks according to claim 3, wherein the distributing of the tasks according to the processing resources to distribute the at least one task acquired by polling to at least one distributed work service further comprises:
sending connection information to the available distributed work services, so that the available distributed work services close their automatic offline operation in response to the connection information.
5. The method for batch processing of tasks according to claim 1, wherein the distributing of the tasks according to the processing resources comprises:
acquiring access characters of the task;
splitting existing data according to the access characters to acquire the pointing data of the task;
and distributing the pointing data together with the task to the distributed work service.
6. The method for batch processing of tasks according to claim 1, wherein the distributing of the tasks according to the processing resources to distribute the at least one task acquired by polling to at least one distributed work service further comprises:
calculating a matching degree between the task processing resources of the distributed work services and the task;
and distributing the tasks according to the matching degree, so as to distribute the at least one task acquired by polling to at least one distributed work service.
7. A batch processing apparatus for tasks, comprising:
a task response module, configured to respond to a task processing instruction, poll a queue of batch processing tasks through a domain management service, and acquire at least one task in the queue, wherein the queue of batch processing tasks is stored on a server;
a resource acquisition module, configured to acquire processing resources of a plurality of distributed work services;
a task scheduling module, configured to distribute the tasks according to the processing resources so as to distribute the at least one task acquired by polling to at least one distributed work service, wherein, if the processing of one task is associated with another task, the two or more associated tasks are distributed to the same distributed work service;
a task feedback module, configured to receive processing feedback information of the distributed work services on the tasks;
a task updating module, configured to update processing information of the tasks in the queue according to the processing feedback information;
wherein, before the queue of batch processing tasks is polled through the domain management service in response to the task processing instruction to acquire at least one task in the queue:
when any one of the distributed work services receives a task, the task is written into the queue of batch processing tasks, wherein the distributed work services are configured only to write tasks into the queue of batch processing tasks and not to read tasks from the queue of batch processing tasks;
when any one of the distributed work services receives a task processing instruction, the task processing instruction is sent to the domain management service so as to respond to the task processing instruction through the domain management service.
8. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the method for batch processing of tasks according to any one of claims 1 to 6.
9. A storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method for batch processing of tasks according to any one of claims 1 to 6.
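The service-management side of claims 2 to 4 above can be illustrated with a short sketch. The following Python code is a hypothetical illustration only; the names (ManagementCandidate, RegisteredWorker) and the 0.99 availability threshold are assumptions introduced for clarity, not part of the claimed implementation. It elects the domain management service from candidate services by availability, filters the registered work services down to those currently online, and sends them connection information so that they close their automatic offline operation.

# Hypothetical sketch of claims 2-4: electing the domain management service and
# selecting online work services (all names and thresholds are illustrative).
from dataclasses import dataclass

@dataclass
class ManagementCandidate:
    name: str
    uptime_ratio: float   # operation information reported by the candidate
    reachable: bool

@dataclass
class RegisteredWorker:
    name: str
    online: bool
    auto_offline: bool = True

def elect_domain_manager(candidates, min_uptime=0.99):
    # Claim 2: pick a candidate whose availability meets the preset condition.
    usable = [c for c in candidates if c.reachable and c.uptime_ratio >= min_uptime]
    if not usable:
        raise RuntimeError("no candidate management service meets the preset condition")
    return max(usable, key=lambda c: c.uptime_ratio)

def available_workers(registered):
    # Claim 3: keep only the registered work services that are currently online.
    return [w for w in registered if w.online]

def send_connection_info(workers):
    # Claim 4: workers that receive connection information close their automatic offline operation.
    for w in workers:
        w.auto_offline = False

if __name__ == "__main__":
    manager = elect_domain_manager([ManagementCandidate("m1", 0.995, True),
                                    ManagementCandidate("m2", 0.90, True)])
    workers = available_workers([RegisteredWorker("w1", online=True),
                                 RegisteredWorker("w2", online=False)])
    send_connection_info(workers)
    print(manager.name, [w.name for w in workers])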
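The splitting step of claim 5 can likewise be sketched. The code below is a hypothetical reading in which the "access characters" are the data keys a task will touch and the "pointing data" is the slice of the existing data selected by those keys; the function names and the dictionary layout are assumptions made purely for illustration.

# Hypothetical sketch of claim 5: select only the data a task points to
# (interpreting "access characters" as data keys is an assumption).
def split_pointing_data(existing_data, access_chars):
    # Split the existing data by the task's access characters and return its pointing data.
    return {key: existing_data[key] for key in access_chars if key in existing_data}

def distribute_with_pointing_data(task, existing_data):
    # Bundle the task with its pointing data before sending it to a distributed work service.
    pointing = split_pointing_data(existing_data, task["access_chars"])
    return {"task": task, "pointing_data": pointing}

if __name__ == "__main__":
    data = {"acct:001": {"balance": 10}, "acct:002": {"balance": 25}}
    payload = distribute_with_pointing_data({"task_id": "t1", "access_chars": ["acct:002"]}, data)
    print(payload)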
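Claim 6 distributes tasks by a matching degree between a work service's processing resources and the task. The claim does not define the metric, so the weighted score below is purely an assumed example; the weights and field names are illustrative, not part of the claimed method.

# Hypothetical sketch of claim 6: score each work service against a task and
# dispatch to the best match (the scoring formula itself is an assumption).
def matching_degree(worker, task, w_cpu=0.5, w_mem=0.3, w_queue=0.2):
    # Weighted match of free CPU, free memory, and backlog against the task's needs.
    cpu_fit = min(worker["free_cpu"] / max(task["cpu"], 1e-9), 1.0)
    mem_fit = min(worker["free_mem"] / max(task["mem"], 1e-9), 1.0)
    backlog_penalty = 1.0 / (1.0 + worker["queued"])
    return w_cpu * cpu_fit + w_mem * mem_fit + w_queue * backlog_penalty

def pick_worker(workers, task):
    # Distribute the task to the work service with the highest matching degree.
    return max(workers, key=lambda w: matching_degree(w, task))

if __name__ == "__main__":
    workers = [{"name": "w1", "free_cpu": 2.0, "free_mem": 4.0, "queued": 3},
               {"name": "w2", "free_cpu": 1.0, "free_mem": 8.0, "queued": 0}]
    task = {"task_id": "t1", "cpu": 1.0, "mem": 2.0}
    print(pick_worker(workers, task)["name"])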
CN202310558590.9A 2023-05-18 2023-05-18 Batch processing method and device for tasks, computer equipment and storage medium Active CN116302450B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310558590.9A CN116302450B (en) 2023-05-18 2023-05-18 Batch processing method and device for tasks, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310558590.9A CN116302450B (en) 2023-05-18 2023-05-18 Batch processing method and device for tasks, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116302450A (en) 2023-06-23
CN116302450B (en) 2023-09-01

Family

ID=86781897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310558590.9A Active CN116302450B (en) 2023-05-18 2023-05-18 Batch processing method and device for tasks, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116302450B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101957780A (en) * 2010-08-17 2011-01-26 中国电子科技集团公司第二十八研究所 Resource state information-based grid task scheduling processor and grid task scheduling processing method
US8195739B2 (en) * 2002-02-04 2012-06-05 Tibco Software Inc. Adaptive polling
US9785691B2 (en) * 2005-09-09 2017-10-10 Open Invention Network, Llc Method and apparatus for sequencing transactions globally in a distributed database cluster
CN111324445A (en) * 2018-12-14 2020-06-23 中国科学院深圳先进技术研究院 Task scheduling simulation system
CN113608891A (en) * 2021-07-19 2021-11-05 上海浦东发展银行股份有限公司 Distributed batch processing system, method, computer device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8789058B2 (en) * 2011-03-25 2014-07-22 Oracle International Corporation System and method for supporting batch job management in a distributed transaction system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8195739B2 (en) * 2002-02-04 2012-06-05 Tibco Software Inc. Adaptive polling
US9785691B2 (en) * 2005-09-09 2017-10-10 Open Invention Network, Llc Method and apparatus for sequencing transactions globally in a distributed database cluster
CN101957780A (en) * 2010-08-17 2011-01-26 中国电子科技集团公司第二十八研究所 Resource state information-based grid task scheduling processor and grid task scheduling processing method
CN111324445A (en) * 2018-12-14 2020-06-23 中国科学院深圳先进技术研究院 Task scheduling simulation system
CN113608891A (en) * 2021-07-19 2021-11-05 上海浦东发展银行股份有限公司 Distributed batch processing system, method, computer device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a Distributed Vulnerability Detection Technique; Deng Chao; China Master's Theses Full-text Database, Information Science and Technology Series (Monthly), No. 03; I139-185 *

Also Published As

Publication number Publication date
CN116302450A (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN109725988B (en) Task scheduling method and device
CN114610474B (en) Multi-strategy job scheduling method and system under heterogeneous supercomputing environment
CN101645022A (en) Work scheduling management system and method for a plurality of colonies
CN113094141A (en) Page display method and device, electronic equipment and storage medium
CN112860396B (en) GPU scheduling method and system based on distributed deep learning
CN113204425A (en) Method and device for process management internal thread, electronic equipment and storage medium
US20210326170A1 (en) Method to set up and tear down cloud environments based on a schedule obtained from one or more hosted calendars
CN114205366A (en) Cross-platform data synchronization method and device, equipment, medium and product thereof
CN116302450B (en) Batch processing method and device for tasks, computer equipment and storage medium
CN111813529B (en) Data processing method, device, electronic equipment and storage medium
CN112860742A (en) Centralized rule engine service calling and controlling method, device, equipment and medium
CN116721007B (en) Task control method, system and device, electronic equipment and storage medium
CN113326025A (en) Single cluster remote continuous release method and device
CN114048258A (en) Live broadcast data scheduling and accessing method and device, equipment, medium and product thereof
WO2021227642A1 (en) Low-power-consumption distributed invocation method, device and apparatus.
CN115378937A (en) Distributed concurrency method, device and equipment for tasks and readable storage medium
CN114238585A (en) Query method and device based on 5G message, computer equipment and storage medium
CN113098960A (en) Service operation method, device, server and storage medium
CN114443262A (en) Computing resource management method, device, equipment and system
CN114928608B (en) Method, device, equipment and storage medium for processing multimedia resources
CN112887393A (en) Access entry pushing and display control method, device, equipment and medium
CN115828006A (en) Page loading method and device, computer equipment and storage medium
US20230018479A1 (en) Method, system, medium, and server for operation management of electronic devices
CN114257582A (en) Batch job processing method, distributed system and batch job processing architecture
CN115794841A (en) Data updating method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant