CN117785423A - Batch processing method and device based on double locking and related products - Google Patents
- Publication number: CN117785423A
- Application number: CN202311848052.XA
- Authority: CN (China)
- Prior art keywords: task, batch processing, locking, device, task queue
- Legal status: Pending (assumed by Google Patents; not a legal conclusion)
Abstract
The present application provides a batch processing method and apparatus based on double locking, and related products. The method comprises the following steps: performing a first locking on a task queue so that the task queue can be accessed only by a first device; querying the task queue by using the first device, wherein the same preset interval elapses between every two adjacent queries of the task queue; when the first device finds that a batch processing task exists in the task queue, performing a second locking on a database corresponding to the batch processing task so that the database can be accessed only by a second device; and processing the batch processing task by using the second device based on the database corresponding to the batch processing task. Through the double locking mechanism, a plurality of servers can process batch processing tasks simultaneously, which improves the processing efficiency of batch processing tasks.
Description
Technical Field
The present disclosure relates to the field of finance or data processing technology, and in particular, to a batch processing method and apparatus based on double locking, and related products.
Background
With the continuous development of technology, many fields are moving toward informatization and digitization. In the financial field, there are resource management systems through which a user can freely manage the resources they hold; for example, a user can put resources into a product or take resources out of a product. The operations a user performs on their resources through the resource management system are sent to a resource holder in the form of tasks; in one possible implementation, the resource holder may be a bank that holds the resources for the user. The resource holder may process the tasks sent by all users in batches at a particular time.
In the prior art, batch processing uses a single server: one server processes the different nodes of a batch processing task in turn. Processing batch processing tasks with one server, however, is inefficient.
Therefore, how to improve the efficiency of processing batch processing tasks is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above problems, the present application provides a batch processing method and apparatus based on double locking, and related products, to solve the problem of low efficiency of batch processing tasks in the prior art.
The application provides a batch processing method based on double locking, which comprises the following steps:
performing a first locking on a task queue so that the task queue can be accessed only by a first device;
querying the task queue by using the first device, wherein the same preset interval elapses between every two adjacent queries of the task queue;
when the first device finds that a batch processing task exists in the task queue, performing a second locking on a database corresponding to the batch processing task so that the database corresponding to the batch processing task can be accessed only by a second device;
and processing the batch processing task based on the database corresponding to the batch processing task by using the second device.
In one possible implementation manner, when the first device finds that a batch processing task exists in the task queue, performing the second locking on the database corresponding to the batch processing task so that the database corresponding to the batch processing task can be accessed only by the second device comprises:
when a node of a batch processing task to be started exists in the task queue, performing the second locking on the database corresponding to the node so that the database corresponding to the node of the batch processing task can be accessed only by the second device;
the processing the batch processing task based on the database corresponding to the batch processing task by using the second device comprises:
processing the node of the batch processing task based on the database corresponding to the node by using the second device.
In one possible implementation manner, when a node of a batch processing task to be started exists in the task queue, performing the second locking on the database corresponding to the node so that the database corresponding to the node of the batch processing task can be accessed only by the second device comprises:
when a plurality of nodes of batch processing tasks to be started exist in the task queue, performing the second locking on the databases corresponding to the plurality of nodes so that the databases corresponding to the plurality of nodes can be accessed only by their corresponding second devices.
In one possible implementation, performing the first locking on the task queue so that the task queue can be accessed only by the first device comprises:
performing the first locking on a plurality of task queues so that each of the plurality of task queues can be accessed only by its corresponding first device;
the querying the task queue by using the first device, wherein the same preset interval elapses between every two adjacent queries of the task queue, comprises:
accessing, through the corresponding first device, each task queue by using the correspondence between the plurality of task queues and the first devices, wherein the same preset interval elapses between every two adjacent queries of the task queues.
The present application also provides a batch processing apparatus based on double locking, which comprises the following modules:
a first locking module, configured to perform a first locking on a task queue so that the task queue can be accessed only by a first device;
a query module, configured to query the task queue by using the first device, wherein the same preset interval elapses between every two adjacent queries of the task queue;
a second locking module, configured to, when the first device finds that a batch processing task exists in the task queue, perform a second locking on a database corresponding to the batch processing task so that the database corresponding to the batch processing task can be accessed only by a second device;
and a batch processing module, configured to process the batch processing task based on the database corresponding to the batch processing task by using the second device.
In one possible implementation manner, the second locking module is specifically configured to:
when a node of a batch processing task to be started exists in the task queue, perform the second locking on the database corresponding to the node so that the database corresponding to the node of the batch processing task can be accessed only by the second device;
the batch processing module is specifically configured to:
process the node of the batch processing task based on the database corresponding to the node by using the second device.
In one possible implementation manner, the second locking module is specifically configured to:
when a plurality of nodes of batch processing tasks to be started exist in the task queue, perform the second locking on the databases corresponding to the plurality of nodes so that the databases corresponding to the plurality of nodes can be accessed only by their corresponding second devices.
In one possible implementation manner, the first locking module is specifically configured to:
perform the first locking on a plurality of task queues so that each of the plurality of task queues can be accessed only by its corresponding first device;
the query module is specifically configured to:
access, through the corresponding first device, each task queue by using the correspondence between the plurality of task queues and the first devices, wherein the same preset interval elapses between every two adjacent queries of the task queues.
The application also provides an electronic device comprising a processor and a memory:
the memory is used for storing a computer program and transmitting the computer program to the processor;
the processor is configured to execute the steps of the double-locking-based batch processing method described above according to instructions in the computer program.
The present application also provides a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by an electronic device, implements the steps of the double-locking-based batch processing method described above.
Compared with the prior art, the application has the following beneficial effects:
the method provided by the application performs double locking when processing batch processing tasks. First locking the task queue allows the task queue to be accessed only by the first device. By locking the first task queue, the task queue cannot be queried by a plurality of devices, and the task queue can only be queried by the first device, so that when a plurality of task queues and a plurality of devices exist, the task queue cannot be queried by the plurality of devices to query the same task queue. And performing second locking on the databases corresponding to the batch processing tasks so that the databases corresponding to the batch processing tasks can only be accessed by the second equipment. By performing the second locking, the corresponding database is only accessible to the second device when processing the batch task. If the batch processing task has a plurality of corresponding databases, each database can only be accessed by the corresponding second device. The double locking mechanism ensures that confusion does not occur when a plurality of servers are utilized to process one batch of processing tasks, and the efficiency of utilizing a plurality of servers to process one batch of processing tasks is higher.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of a double-locking-based batch processing method provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a first locking structure according to an embodiment of the present application;
FIG. 3 is a logic implementation diagram of a double-locking-based batch processing method according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a double-locking-based batch processing apparatus according to an embodiment of the present application.
Detailed Description
As described above, prior art batch processing uses a single server: one server processes the different nodes of a batch processing task.
It has been found through research that if a plurality of servers are used to process a batch processing task simultaneously, a task node may be processed more than once. For example, suppose a batch processing task is processed by several servers: a first server processes the data reporting node while a second server processes the clearing and posting node; after the first server finishes the data reporting node, it may also process the clearing and posting node. The first server then processes the clearing and posting node once and the second server processes it again, so the clearing and posting node is processed twice and the batch processing task cannot be completed normally. In the financial field, batch processing tasks often involve many nodes, and given the cost of servers, a dedicated server cannot be provided for every batch task node; the number of servers available to process a batch processing task is in most cases smaller than the number of its nodes. When processing a batch processing task, any server may process any task node. If multiple servers are allowed to process the batch task simultaneously without coordination, the same task node may be processed multiple times, which may cause the batch processing task to fail. Because of this, the prior art uses a single server to perform a batch processing task, but a single server performs batch processing tasks inefficiently. Therefore, how to improve the efficiency of processing batch processing tasks is a technical problem that needs to be solved by those skilled in the art.
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. Based on the embodiments herein, all other embodiments that a person of ordinary skill in the art could obtain without making any inventive effort are within the scope of protection of the present application.
It can be appreciated that the method provided in the present application may be applied to a processing device that performs the first locking on a task queue, for example a terminal device or a server capable of performing the first locking on a task queue. The method may be executed independently by a terminal device or a server, or it may be applied to a network scenario in which a terminal device and a server communicate and be executed by the terminal device and the server in cooperation. The terminal device may be a computer, a mobile phone, or similar equipment. The server may be an application server or a Web server, and in actual deployment it may be an independent server or a cluster server.
Fig. 1 is a flowchart of the double-locking-based batch processing method provided in the present application, which includes the following steps:
s101: first locking the task queue allows the task queue to be accessed only by the first device.
The task queue stores related information of batch processing tasks, and the task queue can be a database or a data table. The processing device performs a first lock on the task queue such that the task queue is only accessible to the first device. The first device may be a server in one possible implementation. In another possible implementation, the first device may be a processor deployed in a server for accessing a task queue.
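The patent does not prescribe how the first locking is implemented. As a minimal sketch, assuming a shared, database-backed lock table (the table queue_lock and its columns are hypothetical), the first locking could be realized as an atomic compare-and-set in SQL so that only one first device holds a given task queue at a time:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/** Minimal sketch of the "first locking": a device claims a task queue by
 *  atomically flipping a lock flag in a (hypothetical) queue_lock table. */
public class FirstLock {

    /** Returns true if this device now exclusively holds the queue. */
    public static boolean tryFirstLock(Connection conn, String queueId, String deviceId)
            throws SQLException {
        // Atomic compare-and-set: succeeds only if the queue is currently unlocked.
        String sql = "UPDATE queue_lock SET holder = ?, locked = 1 "
                   + "WHERE queue_id = ? AND locked = 0";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, deviceId);
            ps.setString(2, queueId);
            return ps.executeUpdate() == 1;   // exactly one row updated => lock acquired
        }
    }

    /** Releases the first lock after the device finishes querying the queue. */
    public static void releaseFirstLock(Connection conn, String queueId, String deviceId)
            throws SQLException {
        String sql = "UPDATE queue_lock SET holder = NULL, locked = 0 "
                   + "WHERE queue_id = ? AND holder = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, queueId);
            ps.setString(2, deviceId);
            ps.executeUpdate();
        }
    }
}
```

A first device that fails to acquire the lock simply skips that queue and retries at its next polling interval, which matches the behaviour described for the first devices in the examples below.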
In one possible implementation, there may be multiple task queues, and the information about different nodes of a batch processing task may be stored in different task queues. For example, suppose there are four task queues: task queue 1, task queue 2, task queue 3, and task queue 4, and the batch processing task has ten task nodes. The information about task nodes 1 and 2 is stored in task queue 1; the information about task nodes 3 and 4 is stored in task queue 2; the information about task nodes 5 and 6 is stored in task queue 3; and the information about task nodes 7, 8, 9, and 10 is stored in task queue 4. The processing device may perform a first locking on each task queue so that it can be accessed only by its corresponding first device.
For example, while first device A is accessing task queue 1, the processing device may perform a first locking on task queue 1 so that task queue 1 is accessible only to first device A. While first device A is accessing task queue 1, first device B cannot access task queue 1 because of the locking mechanism. After first device A finishes accessing task queue 1, the processing device performs a first unlocking on task queue 1, after which task queue 1 can be accessed by first device B.
In one possible implementation, first device A accesses task queue 1 and first device B accesses task queue 2; the processing device performs a first locking on task queue 1 and on task queue 2 so that task queue 1 can be accessed only by first device A and task queue 2 only by first device B. Suppose first device A finishes with task queue 1 and then tries to access task queue 2 before first device B has accessed it; because task queue 2 is first-locked, it can be accessed only by first device B, so first device A's access fails. First device A then attempts to access task queue 3, and the processing device performs a first locking on task queue 3. When first device B later finishes with task queue 2 and tries to access task queue 3, its access fails because of the first locking on task queue 3, and first device B moves on to other task queues.
The method provided by the present application performs the first locking on the task queue so that the task queue can be accessed only by the first device. The first locking prevents a task queue from being queried by several devices at once: each task queue can be queried only by its first device, so that when there are multiple task queues and multiple devices, no two devices query the same task queue. Because the first locking avoids multiple devices accessing the same task queue, several devices can query different task queues in parallel, which is more efficient than the prior art in which a single device accesses the task queues.
S102: and inquiring the task queues by using the first equipment, wherein the same preset time passes between every two adjacent inquiry task queues.
In one possible implementation, the first device is a processor with timing function deployed in the server for accessing the task queue, the processor with timing function querying the task queue every predetermined time interval. In one possible implementation, the preset time interval may be five seconds.
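As an illustrative sketch of this timed querying (not taken from the patent; TaskQueueClient is a hypothetical interface), the fixed interval between adjacent queries can be enforced with a standard scheduled executor:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Minimal sketch of the timed querying in S102: the first device polls the
 *  task queue with the same fixed interval (five seconds here) between
 *  every two adjacent queries. */
public class QueuePoller {

    public interface TaskQueueClient {
        void queryOnce();   // performs one query of the task queue
    }

    public static ScheduledExecutorService startPolling(TaskQueueClient client) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // scheduleWithFixedDelay keeps the preset interval between the end of
        // one query and the start of the next.
        scheduler.scheduleWithFixedDelay(client::queryOnce, 0, 5, TimeUnit.SECONDS);
        return scheduler;
    }
}
```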
In one possible implementation, the processing device may access, through each first device, its corresponding task queue by using the correspondence between the plurality of task queues and the first devices, wherein the same preset interval elapses between every two adjacent queries of the task queues.
S103: when the first device inquires that the batch processing task exists in the task queue, the second locking is carried out on the database corresponding to the batch processing task, so that the database corresponding to the batch processing task can only be accessed by the second device.
The task queue stores related information of batch processing tasks, wherein the related information of batch processing tasks comprises different nodes of the batch processing tasks and storage positions of data required to be processed by the different nodes, and the related data of the batch processing tasks are stored in a database in one possible implementation mode.
When the first device inquires that the batch processing task exists in the task queue, the processing device can perform second locking on the database corresponding to the batch processing task according to the related information of the batch processing task inquired by the first device, so that the database corresponding to the batch processing task can only be accessed by the second device. The first device is used for accessing the task queue and controlling the starting processing time of the batch processing task; the second device is for processing batch processing tasks.
In one possible implementation, when a node of a batch processing task to be started exists in the task queue, a second locking is performed on the database corresponding to that node, so that the database corresponding to the node of the batch processing task can be accessed only by the second device.
For example, suppose a batch processing task has ten nodes, each node processes different data, and the data to be processed by each node is stored in a corresponding database. When the first node of the batch processing task to be started exists in the task queue, the processing device performs a second locking on the first database, which corresponds to the first node and stores that node's data, so that the first database can be accessed only by second device A. In this possible implementation, the processing device uses second device A to process the first node based on the first database.
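The second locking can likewise be sketched in several ways. One assumption-laden option (the table db_lock and column db_name are hypothetical, and a transaction-scoped row lock stands in for whatever locking the implementation actually uses) is to hold a row lock on a per-database lock row for the duration of the node's processing:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/** Minimal sketch of the "second locking": before a second device processes a
 *  task node, it claims the node's database by holding a row lock for the whole
 *  transaction. */
public class SecondLock {

    /** Locks the database that stores the node's data; the lock is held until
     *  the transaction commits or rolls back. Returns true if the lock row exists. */
    public static boolean lockNodeDatabase(Connection conn, String dbName) throws SQLException {
        conn.setAutoCommit(false);                     // lock lives for this transaction
        String sql = "SELECT db_name FROM db_lock WHERE db_name = ? FOR UPDATE";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, dbName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();                      // row locked; other devices block here
            }
        }
    }
}
```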
The processing device performs the second locking on the databases corresponding to the batch processing tasks so that each such database can be accessed only by its corresponding second device. With the second locking in place, each database can be accessed only by its corresponding second device while the different nodes of the batch processing task are processed. Locking the databases ensures that no confusion occurs when a batch processing task is processed with multiple servers, and processing a batch processing task with multiple servers is more efficient.
S104: and processing the batch processing task based on the database corresponding to the batch processing task by using the second equipment.
The processing device processes the batch processing task based on the database corresponding to the batch processing task by using the second device. Because the database corresponding to the batch processing task is subjected to the second locking, only the corresponding server can process the batch processing task based on the database corresponding to the batch processing task.
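Putting S101-S104 together, the sketch below shows one possible worker loop under the same hypothetical helpers introduced above (FirstLock and SecondLock). For readability it collapses the first-device and second-device roles into a single worker, whereas the patent allows them to be separate devices; findPendingNodeDatabase and processNode are placeholder stubs:

```java
/** End-to-end sketch of steps S101-S104, reusing the hypothetical FirstLock and
 *  SecondLock helpers sketched earlier. */
public class DualLockBatchWorker {

    public void runOnce(java.sql.Connection conn, String queueId, String deviceId) throws Exception {
        // S101: first locking -- only this device may query the task queue now.
        if (!FirstLock.tryFirstLock(conn, queueId, deviceId)) {
            return;   // another first device holds the queue; retry at the next interval
        }
        try {
            // S102/S103: query the queue; if a pending batch task node is found,
            // second-lock the database that holds the node's data.
            String nodeDb = findPendingNodeDatabase(conn, queueId);            // hypothetical query
            if (nodeDb != null && SecondLock.lockNodeDatabase(conn, nodeDb)) {
                // S104: process the node against its now-exclusive database.
                processNode(conn, nodeDb);                                     // hypothetical processing
                conn.commit();                                                 // also releases the second lock
            }
        } catch (Exception e) {
            if (!conn.getAutoCommit()) {
                conn.rollback();                                               // releases the second lock on failure
            }
            throw e;
        } finally {
            conn.setAutoCommit(true);                                          // restore default mode
            FirstLock.releaseFirstLock(conn, queueId, deviceId);               // release the first lock
        }
    }

    private String findPendingNodeDatabase(java.sql.Connection conn, String queueId) { return null; }

    private void processNode(java.sql.Connection conn, String dbName) { }
}
```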
The method provided by the present application performs double locking when processing batch processing tasks. The first locking is performed on the task queue so that the task queue can be accessed only by the first device; each task queue can therefore be queried only by its first device, so that when there are multiple task queues and multiple devices, no two devices query the same task queue. The second locking is performed on the databases corresponding to the batch processing tasks so that each such database can be accessed only by the second device; while a batch processing task is being processed, its database is accessible only to the corresponding second device, and if the task corresponds to several databases, each database can be accessed only by its own second device. The double locking mechanism ensures that no confusion occurs when multiple servers process one batch processing task, and processing one batch processing task with multiple servers is more efficient.
Fig. 2 is a schematic structural diagram of the first locking structure provided in the present application. The plug-in layer contains a plurality of auto-activation processors (Auto Active Processor), and the first device described above may be such an auto-activation processor. When the auto-activation processors invoke the task queues, they first lock them: while a task queue is being invoked by one auto-activation processor, the other auto-activation processors cannot invoke it. The task queue in the figure is backed by multiple thread pools. The task queue holds a number of batch processing tasks to be processed; in the figure these include a quotation batch processing task, a data reporting batch task, a clearing-and-posting batch task, a date rollover batch task, and so on. Each batch task may have a corresponding number: in Fig. 2 the quotation batch task is numbered T200203HSA-pt, the data reporting batch task T200105HSA-pt, the clearing-and-posting batch task T200013HSA-pt, and the date rollover batch task T200016HSA-pt. The task queue may control when the batch tasks start executing; it does so by invoking the corresponding batch tasks in the business processing logic layer.
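For concreteness, the task numbers listed in Fig. 2 can be modelled as a small catalog; the enum names below are paraphrases of the figure labels, not identifiers from the patent:

```java
import java.util.Map;

/** Sketch of the task-queue entries shown in Fig. 2: each pending batch task
 *  carries the number given in the figure. */
public final class BatchTaskCatalog {

    public enum BatchTaskType { QUOTATION, DATA_REPORTING, CLEARING_AND_POSTING, DATE_ROLLOVER }

    /** Task numbers as listed in Fig. 2. */
    public static final Map<BatchTaskType, String> TASK_NUMBERS = Map.of(
            BatchTaskType.QUOTATION,            "T200203HSA-pt",
            BatchTaskType.DATA_REPORTING,       "T200105HSA-pt",
            BatchTaskType.CLEARING_AND_POSTING, "T200013HSA-pt",
            BatchTaskType.DATE_ROLLOVER,        "T200016HSA-pt");
}
```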
Fig. 3 is a logic implementation diagram of the double-locking-based batch processing method provided in the present application; it comprises a plug-in layer, a task queue layer, and a business logic layer. After the plug-in layer starts its flow, the timing plug-in is started. As mentioned above, the resource holder processes all tasks sent by users in batches at a specific time; the timing plug-in provides this timing, and the automatic clearing processor, started on schedule, starts the first device by means of the timing plug-in and the timed task thread pool. In one possible implementation, the specific time may be 17:00 each day: the timing plug-in accesses the timed task thread pool at 17:00 every day, and the first device starts the task queue service at that time.
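As a sketch of such a daily trigger (the 17:00 start time comes from the example above; startTaskQueueService is a hypothetical hook, and the real timing plug-in may work differently):

```java
import java.time.Duration;
import java.time.LocalTime;
import java.time.ZonedDateTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch of the timing plug-in: start the task queue service at 17:00 each day. */
public class DailyTrigger {

    public static void scheduleDaily(Runnable startTaskQueueService) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        ZonedDateTime now = ZonedDateTime.now();
        ZonedDateTime next = now.with(LocalTime.of(17, 0));
        if (!next.isAfter(now)) {
            next = next.plusDays(1);          // 17:00 already passed today -> fire tomorrow
        }
        long initialDelay = Duration.between(now, next).toSeconds();
        // Fire once at the next 17:00, then every 24 hours.
        scheduler.scheduleAtFixedRate(startTaskQueueService, initialDelay,
                TimeUnit.DAYS.toSeconds(1), TimeUnit.SECONDS);
    }
}
```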
After the task queue layer starts the task queue service, the task queue is first-locked when a first device invokes it. Tasks in the task queue that are to start batch processing may be distributed by a distributor either to the business orchestration plug-in or to be discarded: tasks that need to be processed are distributed to the business orchestration plug-in, and tasks that do not need to be executed are discarded by the distributor. The business orchestration plug-in places batch processing tasks into the task queue. Based on the signal in the task queue, the processor invokes the batch processing service and begins executing the batch processing task; at all other times the results are discarded. The signal to start executing a batch processing task is issued manually.
After the batch processing service is started in the business logic layer, the batch processing task is executed by the second device, and the second device performs the second locking on the corresponding database while executing the batch processing task.
The present application also provides a schematic structural diagram of a dual-locking-based batch processing apparatus as shown in fig. 4, where the dual-locking-based batch processing apparatus 400 includes the following modules:
a first locking module 401, configured to perform a first locking on a task queue so that the task queue can be accessed only by a first device;
a query module 402, configured to query the task queue by using the first device, wherein the same preset interval elapses between every two adjacent queries of the task queue;
a second locking module 403, configured to, when the first device finds that a batch processing task exists in the task queue, perform a second locking on the database corresponding to the batch processing task so that the database corresponding to the batch processing task can be accessed only by a second device;
and a batch processing module 404, configured to process the batch processing task based on the database corresponding to the batch processing task by using the second device.
In one possible implementation manner, the second locking module is specifically configured to:
when a node of a batch processing task to be started exists in the task queue, perform the second locking on the database corresponding to the node so that the database corresponding to the node of the batch processing task can be accessed only by the second device;
the batch processing module is specifically configured to:
process the node of the batch processing task based on the database corresponding to the node by using the second device.
In one possible implementation manner, the second locking module is specifically configured to:
when a plurality of nodes of batch processing tasks to be started exist in the task queue, perform the second locking on the databases corresponding to the plurality of nodes so that the databases corresponding to the plurality of nodes can be accessed only by their corresponding second devices.
In one possible implementation manner, the first locking module is specifically configured to:
perform the first locking on a plurality of task queues so that each of the plurality of task queues can be accessed only by its corresponding first device;
the query module is specifically configured to:
access, through the corresponding first device, each task queue by using the correspondence between the plurality of task queues and the first devices, wherein the same preset interval elapses between every two adjacent queries of the task queues.
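As a skeleton sketch of the four modules described above (interface and method names are illustrative only, not taken from the patent):

```java
/** Skeleton sketch of the four modules of the double-locking batch processing
 *  apparatus; method names are illustrative. */
public interface DualLockBatchModules {

    interface FirstLockingModule {            // first locking on the task queue
        boolean lockQueue(String queueId, String firstDeviceId);
    }

    interface QueryModule {                   // timed querying of the task queue
        String queryQueue(String queueId);    // returns a pending batch task id, or null
    }

    interface SecondLockingModule {           // second locking on the task's database
        boolean lockDatabase(String batchTaskId, String secondDeviceId);
    }

    interface BatchProcessingModule {         // processing against the locked database
        void process(String batchTaskId, String secondDeviceId);
    }
}
```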
An embodiment of the present application also provides a double-locking-based batch processing device comprising a memory and a processor, wherein the memory is used to store instructions or code and the processor is used to execute the instructions or code, so that the device performs the steps of the double-locking-based batch processing method described above.
In practical applications, the computer-readable storage medium may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium.
The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and the identical or similar parts of the embodiments may be referred to one another. In particular, the apparatus embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the description of the method embodiments. The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the solution without inventive effort.
The foregoing is merely one specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A batch processing method based on double locking, comprising:
performing a first locking on a task queue so that the task queue can be accessed only by a first device;
querying the task queue by using the first device, wherein the same preset interval elapses between every two adjacent queries of the task queue;
when the first device finds that a batch processing task exists in the task queue, performing a second locking on a database corresponding to the batch processing task so that the database corresponding to the batch processing task can be accessed only by a second device; and
processing the batch processing task based on the database corresponding to the batch processing task by using the second device.
2. The method of claim 1, wherein when the first device finds that a batch processing task exists in the task queue, performing the second locking on the database corresponding to the batch processing task so that the database corresponding to the batch processing task can be accessed only by the second device comprises:
when a node of a batch processing task to be started exists in the task queue, performing the second locking on the database corresponding to the node so that the database corresponding to the node of the batch processing task can be accessed only by the second device;
and processing the batch processing task based on the database corresponding to the batch processing task by using the second device comprises:
processing the node of the batch processing task based on the database corresponding to the node by using the second device.
3. The method of claim 2, wherein when a node of a batch processing task to be started exists in the task queue, performing the second locking on the database corresponding to the node so that the database corresponding to the node of the batch processing task can be accessed only by the second device comprises:
when a plurality of nodes of batch processing tasks to be started exist in the task queue, performing the second locking on the databases corresponding to the plurality of nodes so that the databases corresponding to the plurality of nodes can be accessed only by their corresponding second devices.
4. The method of claim 1, wherein performing the first locking on the task queue so that the task queue can be accessed only by the first device comprises:
performing the first locking on a plurality of task queues so that each of the plurality of task queues can be accessed only by its corresponding first device;
and querying the task queue by using the first device, wherein the same preset interval elapses between every two adjacent queries of the task queue, comprises:
accessing, through the corresponding first device, each task queue by using the correspondence between the plurality of task queues and the first devices, wherein the same preset interval elapses between every two adjacent queries of the task queues.
5. A batch processing apparatus based on double locking, comprising:
a first locking module, configured to perform a first locking on a task queue so that the task queue can be accessed only by a first device;
a query module, configured to query the task queue by using the first device, wherein the same preset interval elapses between every two adjacent queries of the task queue;
a second locking module, configured to, when the first device finds that a batch processing task exists in the task queue, perform a second locking on a database corresponding to the batch processing task so that the database corresponding to the batch processing task can be accessed only by a second device; and
a batch processing module, configured to process the batch processing task based on the database corresponding to the batch processing task by using the second device.
6. The apparatus of claim 5, wherein the second locking module is specifically configured to:
when a node of a batch processing task to be started exists in the task queue, perform the second locking on the database corresponding to the node so that the database corresponding to the node of the batch processing task can be accessed only by the second device;
and the batch processing module is specifically configured to:
process the node of the batch processing task based on the database corresponding to the node by using the second device.
7. The apparatus of claim 6, wherein the second locking module is specifically configured to:
when a plurality of nodes of batch processing tasks to be started exist in the task queue, perform the second locking on the databases corresponding to the plurality of nodes so that the databases corresponding to the plurality of nodes can be accessed only by their corresponding second devices.
8. The apparatus of claim 5, wherein the first locking module is specifically configured to:
perform the first locking on a plurality of task queues so that each of the plurality of task queues can be accessed only by its corresponding first device;
and the query module is specifically configured to:
access, through the corresponding first device, each task queue by using the correspondence between the plurality of task queues and the first devices, wherein the same preset interval elapses between every two adjacent queries of the task queues.
9. An electronic device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the batch processing method based on double locking according to any one of claims 1 to 4.
10. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the batch processing method based on double locking according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202311848052.XA | 2023-12-28 | 2023-12-28 | Batch processing method and device based on double locking and related products
Publications (1)
Publication Number | Publication Date
---|---
CN117785423A | 2024-03-29
Family
ID=90390587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202311848052.XA | Batch processing method and device based on double locking and related products | 2023-12-28 | 2023-12-28
Country Status (1)
Country | Link
---|---
CN | CN117785423A
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |