CN116775231A - Service processing method, service processing device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116775231A
CN116775231A
Authority
CN
China
Prior art keywords
target
delay
delay queue
distributed lock
business operation
Prior art date
Legal status
Pending
Application number
CN202210228549.0A
Other languages
Chinese (zh)
Inventor
赵亮
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202210228549.0A
Publication of CN116775231A
Legal status: Pending

Landscapes

  • Information Retrieval; Database Structures and File-System Structures Therefor

Abstract

The present disclosure provides a service processing method, including: in response to a target thread successfully acquiring a distributed lock, acquiring a target business operation loaded in the target thread, wherein the target business operation is configured with a priority level; comparing the priority level of the target business operation with a preset level; when the priority level is lower than the preset level, recording the target business operation using a first delay queue corresponding to the distributed lock to obtain a second delay queue; controlling the target thread to release the distributed lock; and invoking a delay processing policy to process the second delay queue, thereby processing the target business operation. The present disclosure further provides a service processing apparatus, an electronic device, a readable storage medium, and a computer program product.

Description

Service processing method, service processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and more particularly, to a service processing method, a service processing apparatus, an electronic device, a readable storage medium, and a computer program product.
Background
With the expansion of enterprise business, distributed systems are used in more and more enterprise service systems. In a distributed system, a distributed lock is a mechanism for controlling synchronized access to a shared resource across different hosts, ensuring the atomicity of operations on the resource such as reading, updating, and saving.
In the course of realizing the disclosed concept, the inventor found at least the following problem in the related art: when a large volume of business operations causes concurrency problems, the distributed locks adopted in the related art cannot ensure that business operations execute in an order consistent with the business execution logic.
Disclosure of Invention
In view of this, the present disclosure provides a service processing method, a service processing apparatus, an electronic device, a readable storage medium, and a computer program product.
One aspect of the present disclosure provides a service processing method, including: in response to a target thread successfully acquiring a distributed lock, acquiring the target business operation loaded in the target thread, wherein the target business operation is configured with a priority level; comparing the priority level of the target business operation with a preset level; when the priority level is lower than the preset level, recording the target business operation using a first delay queue corresponding to the distributed lock to obtain a second delay queue; controlling the target thread to release the distributed lock; and invoking a delay processing policy to process the second delay queue, thereby processing the target business operation.
According to an embodiment of the present disclosure, the above method further includes: controlling the target thread to execute the target business operation when the priority level is equal to or higher than the preset level; and controlling the target thread to release the distributed lock once the target business operation has been executed.
According to an embodiment of the present disclosure, recording the target business operation using a first delay queue corresponding to the distributed lock to obtain a second delay queue includes: acquiring the first delay queue from a cache; generating a target business element based on an identification of the target business operation and a delay timestamp, wherein the delay timestamp is the sum of the current timestamp and a first preset duration, and the first preset duration is related to the priority level; inserting the target business element into the first delay queue according to the delay timestamp to obtain the second delay queue; and storing the second delay queue in the cache.
According to an embodiment of the present disclosure, invoking the delay processing policy to process the second delay queue so as to process the target business operation includes: controlling the target thread to sleep for a second preset duration; after the target thread wakes, locking in response to a distributed lock request from the target thread; in response to the target thread successfully acquiring the distributed lock, acquiring the second delay queue from the cache; performing a dequeue operation on the second delay queue to obtain the first business element of the second delay queue and a third delay queue; controlling the target thread to execute the target business operation when the first business element is the target business element; storing the third delay queue in the cache; and controlling the target thread to release the distributed lock once the target business operation has been executed.
According to an embodiment of the present disclosure, the above method further includes: storing the second delay queue in the cache when the first business element is not the target business element; controlling the target thread to release the distributed lock; and invoking the delay processing policy again to process the second delay queue.
According to an embodiment of the present disclosure, the above method further includes: in response to the target thread failing to acquire the distributed lock within a third preset duration, sending the user feedback information indicating that processing of the target business operation failed.
Another aspect of the present disclosure provides a service processing apparatus, including: an acquisition module configured to, in response to a target thread successfully acquiring a distributed lock, acquire the target business operation loaded in the target thread, wherein the target business operation is configured with a priority level; a comparison module configured to compare the priority level of the target business operation with a preset level; a recording module configured to, when the priority level is lower than the preset level, record the target business operation using a first delay queue corresponding to the distributed lock to obtain a second delay queue; an unlocking module configured to control the target thread to release the distributed lock; and a first processing module configured to invoke a delay processing policy to process the second delay queue so as to process the target business operation.
Another aspect of the present disclosure provides an electronic device, comprising: one or more processors; and a memory for storing one or more instructions that, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, are configured to implement a method as described above.
Another aspect of the present disclosure provides a computer program product comprising computer executable instructions which, when executed, are for implementing a method as described above.
According to embodiments of the present disclosure, after the target thread acquires the distributed lock, the priority level of the target business operation to be executed in the target thread can be compared with a preset level. If the comparison indicates that the priority level of the target business operation is low, the target business operation can be added to a delay queue so that its processing is delayed. By this means, business operations with higher priority levels are processed first when concurrency problems arise, which at least partially overcomes the technical problem that the distributed locks adopted in the related art cannot ensure that business operations execute in an order consistent with the business execution logic, and effectively guarantees the correctness of the business execution logic and the normal operation of the business system.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments thereof with reference to the accompanying drawings in which:
fig. 1 schematically illustrates an exemplary system architecture to which a business processing method may be applied according to an embodiment of the present disclosure.
Fig. 2 schematically shows a flow chart of a service processing method according to an embodiment of the present disclosure.
Fig. 3 schematically illustrates a flow chart of a business processing method according to another embodiment of the present disclosure.
Fig. 4 schematically illustrates a schematic diagram of a process flow of a deferred processing policy in accordance with an embodiment of the present disclosure.
Fig. 5 schematically illustrates a schematic diagram of a delay queue data store logic according to an embodiment of the present disclosure.
Fig. 6 schematically shows a block diagram of a service processing apparatus according to an embodiment of the present disclosure.
Fig. 7 schematically illustrates a block diagram of an electronic device adapted to implement a business processing method according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where an expression like "at least one of A, B and C, etc." is used, it should generally be interpreted according to its ordinary meaning as understood by those skilled in the art (e.g., "a system having at least one of A, B and C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together). Where an expression like "at least one of A, B or C, etc." is used, it should likewise be interpreted according to its ordinary meaning as understood by those skilled in the art (e.g., "a system having at least one of A, B or C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together).
In the related art, implementations of distributed locks include implementations based on database locks, based on a cache (e.g., Redis), and based on ZooKeeper, among others. The distributed locks realized in these three ways are all fair, competitively acquired locks, i.e., locks are granted on a first-come, first-served basis.
However, in scenarios with a large volume of business operations, the business system may receive locking requests from different threads for the same resource at the same moment, and a distributed lock implemented as in the related art cannot arrange the business operations of different threads into a reasonable execution order, so the business execution logic may go wrong or business operations may fail. For example, for the inventory of a given commodity during a major sales event, an order-placement operation that deducts inventory and an order-cancellation operation that restores inventory may both arrive within 1 ms; if the execution order of these business operations is not controlled, the order-placement deduction may fail due to insufficient inventory.
In view of this, embodiments of the present disclosure provide an implementation scheme for a priority-aware distributed lock: when different business operations contend for the same distributed lock, different operation types are assigned priorities, ensuring that, at the same moment, high-priority operations are executed before low-priority operations.
In particular, embodiments of the present disclosure provide a service processing method, a service processing apparatus, an electronic device, a readable storage medium, and a computer program product. The method includes: in response to a target thread successfully acquiring a distributed lock, acquiring a target business operation loaded in the target thread, wherein the target business operation is configured with a priority level; comparing the priority level of the target business operation with a preset level; when the priority level is lower than the preset level, recording the target business operation using a first delay queue corresponding to the distributed lock to obtain a second delay queue; controlling the target thread to release the distributed lock; and invoking a delay processing policy to process the second delay queue, thereby processing the target business operation.
In the technical solution of the present disclosure, the collection, storage, and use of any personal information involved comply with relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
Fig. 1 schematically illustrates an exemplary system architecture to which a business processing method may be applied according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105.
The terminal devices 101, 102, 103 may be a variety of electronic devices including, but not limited to, smartphones, tablets, laptop portable computers, desktop computers, and the like.
Various client applications may be installed on the terminal devices 101, 102, 103, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients and/or social platform software, etc.
The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired and/or wireless communication links, and the like.
Server 105 may be a server providing various services, for example, server 105 may be a host in a distributed system, a processor of the host may implement multi-threaded parallel processing, and each thread may process business operations generated by received user operations; the server 105 may also return the processing result or feedback information of the service operation to the terminal devices 101, 102, 103 through the network 104.
It should be noted that, the service processing method provided in the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the service processing apparatus provided in the embodiments of the present disclosure may be generally disposed in the server 105. The service processing method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the service processing apparatus provided by the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the service processing method provided by the embodiment of the present disclosure may be performed by the terminal device 101, 102, or 103, or may be performed by another terminal device different from the terminal device 101, 102, or 103. Accordingly, the service processing apparatus provided by the embodiments of the present disclosure may also be provided in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
For example, an operation performed by a user in a client application on any one of the terminal devices 101, 102, or 103 (for example, but not limited to, the terminal device 101) may generate a business operation, and the terminal device 101 may locally perform the business processing method provided by the embodiments of the present disclosure to process the business operation. Alternatively, the service operation may be sent to other terminal devices, servers, or server clusters through the network 104, and the service processing method provided by the embodiments of the present disclosure is performed by the other terminal devices, servers, or server clusters that receive the service operation.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flow chart of a service processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S205.
In operation S201, in response to the target thread successfully acquiring the distributed lock, a target business operation loaded in the target thread is acquired, wherein the target business operation is configured with a priority level.
In operation S202, the priority level of the target business operation is compared with a preset level.
In operation S203, in the case that the priority level is lower than the preset level, the target service operation is recorded using the first delay queue corresponding to the distributed lock, and a second delay queue is obtained.
In operation S204, the target thread is controlled to release the distributed lock.
In operation S205, a delay processing policy is invoked to process the second delay queue to process the target business operation.
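Operations S201 to S205 can be sketched in code. The following Python sketch is illustrative only and is not the patented implementation: the in-process `SimpleLock`, the cache dictionary, the preset level, and all names are assumptions introduced for the example (a real system would use a genuine distributed lock, e.g., one built on Redis).

```python
import time

PRESET_LEVEL = 5  # assumed: the preset level is the highest priority level


class SimpleLock:
    """Toy single-process stand-in for a distributed lock (illustration only)."""

    def __init__(self):
        self.owner = None

    def acquire(self, thread_id):
        if self.owner is None:
            self.owner = thread_id
            return True
        return False

    def release(self, thread_id):
        if self.owner == thread_id:
            self.owner = None


def handle_operation(lock, cache, thread_id, op_id, priority):
    # S201: the target thread must hold the distributed lock first.
    if not lock.acquire(thread_id):
        return "lock_failed"
    # S202/S203: compare the priority with the preset level; record a
    # low-priority operation in the delay queue corresponding to the lock.
    if priority < PRESET_LEVEL:
        queue = cache.get("delay_queue", [])
        delay_ts = time.time() + (PRESET_LEVEL - priority) * 0.001
        queue.append((delay_ts, op_id))
        queue.sort()                    # second delay queue, ordered by delay timestamp
        cache["delay_queue"] = queue
        lock.release(thread_id)         # S204: release the distributed lock
        return "deferred"               # S205: a delay processing policy runs later
    # Priority at or above the preset level: execute immediately, then release.
    result = f"executed:{op_id}"
    lock.release(thread_id)
    return result
```

Note the design choice mirrored from the disclosure: a low-priority operation gives up the lock immediately instead of running, so a concurrent high-priority operation can acquire the lock and execute first.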
The implementation of the distributed lock is not limited herein, and may be implemented using, for example, a cache of a business system, according to embodiments of the present disclosure.
According to the embodiment of the disclosure, each resource of the service system is provided with a corresponding distributed lock, the same resource can be called by a plurality of service operations, and a plurality of threads can access the same resource at the same time.
According to embodiments of the present disclosure, after a target thread successfully acquires a distributed lock, the distributed lock does not respond to locking requests of other threads.
According to embodiments of the present disclosure, the priority level may be set by a developer according to a specific business operation type. The priority level may be represented as a numeric type, a character type, or a boolean type, the specific representation of which is not limited herein.
According to embodiments of the present disclosure, the preset level may be set according to a specific service scenario, for example, may be set to the highest level among all priority levels.
According to embodiments of the present disclosure, the delay queues may be implemented by serializing data stored in a cache.
According to embodiments of the present disclosure, a plurality of business operations may be recorded in a delay queue.
According to embodiments of the present disclosure, a delay queue may establish an association of the delay queue with a distributed lock associated with a resource by establishing the association with the resource.
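As a sketch of the storage scheme described above, the delay queue can be serialized (here as JSON) and stored in the cache under a key derived from the resource identifier, which ties the queue to that resource and therefore to its distributed lock. The key format and function names are assumptions for illustration, and a plain dictionary stands in for the cache.

```python
import json


def save_queue(cache, resource_id, queue):
    # Serialize the delay queue and store it under a key tied to the resource;
    # the same resource identifier also identifies the associated distributed lock.
    cache[f"delay_queue:{resource_id}"] = json.dumps(queue)


def load_queue(cache, resource_id):
    # Deserialize the stored delay queue; an absent key means an empty queue.
    raw = cache.get(f"delay_queue:{resource_id}")
    return json.loads(raw) if raw is not None else []
```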
According to an embodiment of the present disclosure, the delay processing policy may be a policy employed when performing the business operations recorded in the delay queue in order.
According to the embodiment of the disclosure, after the target thread acquires the distributed lock, the priority level of the target business operation to be executed in the target thread can be compared with a preset level; in the case where the comparison result indicates that the priority level of the target business operation is low, the target business operation may be added to the delay queue to delay the processing of the target business operation. By the technical means, when the concurrent problem is encountered, the service operation with higher priority level can be processed preferentially, so that the technical problem that the execution sequence of the service operation cannot be ensured to be in accordance with the service execution logic by the distributed lock adopted in the related technology is at least partially overcome, and the correctness of the service execution logic and the normal operation of a service system are effectively ensured.
The method shown in fig. 2 is further described below with reference to fig. 3-5 in conjunction with the exemplary embodiment.
Fig. 3 schematically illustrates a flow chart of a business processing method according to another embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S301 to S311.
It should be noted that, unless the description specifies an execution order between different operations, or the technical implementation requires one, the operations in the embodiments of the disclosure may be executed in a different order, and multiple operations may also be executed simultaneously.
In operation S301, a distributed lock request for a target thread is received.
In operation S302, it is determined whether the target thread successfully acquires the distributed lock. In case it is determined that the target thread fails to acquire the distributed lock, operation S303 is performed; in the case where it is determined that the target thread successfully acquires the distributed lock, operation S305 is performed.
In operation S303, it is determined whether the lock-acquisition time exceeds a third preset time. If it exceeds the third preset time, operation S304 is performed; otherwise, execution returns to operation S302.
In operation S304, the current distributed lock request is denied.
In operation S305, it is determined whether the priority of the target business operation is lower than a preset level. In case it is determined that the priority of the target service operation is lower than the preset level, performing operation S306; in case it is determined that the priority of the target business operation is higher than or equal to the preset level, operation S311 is performed.
In operation S306, a delay queue is acquired from the cache.
In operation S307, a target business element is generated.
In operation S308, the target traffic element is inserted into the delay queue.
In operation S309, a delay queue is stored.
In operation S310, the target thread is controlled to release the distributed lock.
In operation S311, the target thread is controlled to perform the target business operation. After operation S311 completes, operation S310 is performed.
According to embodiments of the present disclosure, the priority level of a business operation is related to the type and application scenario of the business operation. For example, in an online shopping scenario, in order to ensure normal sales of goods, the priority level of business operations that increase the inventory of goods needs to be higher than the priority level of business operations that decrease the inventory of goods.
According to an embodiment of the present disclosure, when the priority level of the target business operation is higher than or equal to the preset level, the target business operation may be executed directly; the target thread is then controlled to perform the target business operation and to release the distributed lock after the operation completes.
According to an embodiment of the present disclosure, the third preset time may be set according to the specific business scenario, which is not limited here. For example, the third preset time may be set to the longest business execution time obtained from statistics; if the target thread fails to acquire the distributed lock within this time, the distributed lock may be considered deadlocked, in which case feedback information indicating that processing of the target business operation failed may be sent to the user, and a deadlock error report may be sent to maintenance personnel. As another example, the third preset time may be set to the average business execution time obtained from statistics; if the target thread fails to acquire the distributed lock within this time, the distributed lock request of the target thread may be rejected.
In accordance with embodiments of the present disclosure, in the event that a target thread's distributed lock request is denied, the target thread may sleep for a period of time and initiate the distributed lock request again.
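A sleep-and-retry loop of the kind just described can be sketched as follows. The retry count and sleep interval are illustrative assumptions (the disclosure does not fix them), and a local `threading.Lock` stands in for the distributed lock.

```python
import threading
import time


def acquire_with_retry(lock, retries=3, sleep_s=0.001):
    # After a denied request, sleep briefly and request the lock again;
    # give up once the retry budget is exhausted.
    for _ in range(retries):
        if lock.acquire(blocking=False):
            return True
        time.sleep(sleep_s)
    return False
```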
According to an embodiment of the present disclosure, when generating the target business element, the identification and priority level of the target business operation may first be obtained and the current timestamp calculated from the system time; the duration to delay is then determined from the priority level as the first preset duration, which is added to the current timestamp to obtain the delay timestamp; finally, the target business element is generated from the identification of the target business operation and the delay timestamp.
According to an embodiment of the present disclosure, the value of the first preset duration may depend on the priority level. For example, business operations may be assigned priority levels 1 to 5 from low to high, and the first preset duration of the target business operation may be set to the difference between the highest priority level and the priority level of the target business operation, multiplied by a unit time value, which may be set to 1 ms.
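Under the example scheme above (levels 1 to 5, 1 ms unit), the delay timestamp works out as follows; the function and parameter names are hypothetical.

```python
def delay_timestamp(now_ms, priority, highest_level=5, unit_ms=1):
    # First preset duration = (highest level - operation priority) * unit,
    # so higher-priority operations receive shorter delays and surface earlier.
    return now_ms + (highest_level - priority) * unit_ms
```

For instance, at a current timestamp of 1000 ms, a level-5 operation is delayed by 0 ms and a level-1 operation by 4 ms, so the higher-priority element dequeues first.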
According to the embodiment of the disclosure, the distributed lock based on the priority can be realized by introducing the delay queue, so that the running robustness of the service system is effectively improved.
Fig. 4 schematically illustrates a schematic diagram of a process flow of a deferred processing policy in accordance with an embodiment of the present disclosure.
As shown in fig. 4, the process flow includes operations S401 to S411.
In operation S401, the target thread is controlled to sleep for a second preset time.
In operation S402, a distributed lock request for a target thread is received.
In operation S403, it is determined whether the target thread successfully acquires the distributed lock. In the case that it is determined that the target thread has not successfully acquired the distributed lock, operation S404 is performed; in the case where it is determined that the target thread successfully acquires the distributed lock, operation S405 is performed.
In operation S404, it is determined whether the lock-acquisition time exceeds a third preset time. If it exceeds the third preset time, execution returns to operation S401; otherwise, execution returns to operation S403.
In operation S405, a delay queue is acquired from a cache.
In operation S406, a first service element is acquired from the delay queue.
In operation S407, it is determined whether the first service element is a target service element; in case it is determined that the first service element is the target service element, performing operation S408; in case it is determined that the first service element is not the target service element, operation S410 is performed back.
In operation S408, the target thread is controlled to perform the target business operation.
In operation S409, the delay queue is stored in the cache, and the target thread is controlled to release the distributed lock.
In operation S410, the first service element is added to the delay queue, and the delay queue is stored in the cache.
In operation S411, the target thread is controlled to release the distributed lock; after operation S411 is completed, the flow returns to operation S401.
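The loop formed by operations S401 to S411 can be sketched in a simplified single-process model, as below. The Cache class, the boolean standing in for the distributed lock, and all identifiers are illustrative assumptions; a real deployment would use an actual distributed lock service and shared cache.

```python
import heapq
import time

class Cache:
    """Toy stand-in for the shared cache holding the delay queue."""
    def __init__(self):
        self.delay_queue = []   # heap of (delay_timestamp, operation_id)
        self.locked = False     # stand-in for the distributed lock

def delay_policy(cache, target_id, now, sleep_s=0.002, max_rounds=10):
    for _ in range(max_rounds):
        time.sleep(sleep_s)                       # S401: sleep second preset time
        if cache.locked:                          # S402-S404: lock not acquired
            continue                              # (timeout handling omitted)
        cache.locked = True                       # S403: lock acquired
        q = cache.delay_queue                     # S405: delay queue from cache
        head = heapq.heappop(q) if q and q[0][0] <= now else None  # S406
        if head is not None:
            if head[1] == target_id:              # S407/S408: our element, execute
                cache.locked = False              # S409: store queue, release lock
                return f"executed {head[1]}"
            heapq.heappush(q, head)               # S410: not ours, put it back
        cache.locked = False                      # S411: release, loop to S401
        now += 1                                  # model time advancing
    return None
```

The condition `q[0][0] <= now` models the timestamp gate described later for the dequeue operation: an element is returned only once its delay timestamp has passed.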
According to an embodiment of the present disclosure, invoking the delay processing policy to process the second delay queue to process the target business operation may specifically include: controlling the target thread to sleep for a second preset time; after the target thread finishes dormancy, responding to the distributed lock request of the target thread to lock the target thread; responding to the target thread to successfully acquire the distributed lock, and acquiring a second delay queue from the cache; dequeuing operation is carried out on the second delay queue, and a first service element and a third delay queue in the second delay queue are obtained; under the condition that the first business element is a target business element, controlling the target thread to execute target business operation; storing the third delay queue in a cache; and controlling the target thread to release the distributed lock under the condition that the target service operation is executed.
According to the embodiment of the present disclosure, the second preset time may be set according to a specific service scenario, for example, may be set to 2ms, 5ms, etc., which is not limited herein.
According to embodiments of the present disclosure, the dequeue operation may be implemented by, for example, a poll method.
According to an embodiment of the disclosure, in a case where the first service element is not the target service element, the method further includes: storing the second delay queue in the cache; controlling the target thread to release the distributed lock; and invoking the delay processing policy again to process the second delay queue.
Fig. 5 schematically illustrates a schematic diagram of a delay queue data store logic according to an embodiment of the present disclosure.
As shown in fig. 5, the service operation A and the service operation B may be service operations respectively loaded in the thread A and the thread B, and the thread A and the thread B initiate a distributed lock request to the service system at the same time, that is, at the time characterized by the timestamp 1110111.
According to the embodiment of the disclosure, after receiving the distributed lock requests, the service system determines a delay time according to the priority levels of the service operation A and the service operation B, and determines a delay time stamp according to the delay time and the current time stamp. According to the priority levels, the delay time set for the service operation A is 1ms, and the delay time set for the service operation B is 2ms. Thereafter, a service element may be generated from the delay time stamp and inserted into the delay queue in ascending order of the delay time stamp.
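A minimal model of this enqueue step, using the timestamps from the example, may look as follows; the tuple layout and the function name are assumptions, and a heap keeps the queue ordered by delay timestamp:

```python
import heapq

def enqueue(delay_queue, op_id, current_ts, delay):
    """Build a business element and insert it ordered by its delay timestamp."""
    delay_ts = current_ts + delay                  # delay timestamp
    heapq.heappush(delay_queue, (delay_ts, op_id))
    return delay_ts

delay_queue = []
ts_a = enqueue(delay_queue, "A", 1110111, 1)   # service operation A: delay 1ms
ts_b = enqueue(delay_queue, "B", 1110111, 2)   # service operation B: delay 2ms
```

The head of the queue is then the element with the smallest delay timestamp, here A at 1110112.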
According to the embodiment of the disclosure, when the dequeue operation is executed, the timestamp at which the dequeue operation is executed is compared with the delay timestamp configured in the service element, and the dequeue operation can acquire the first service element from the delay queue only when the timestamp is greater than or equal to the delay timestamp; otherwise, the dequeue operation returns a null value. That is, if the dequeue operation is performed at the time characterized by the timestamp 1110111, the return value of the operation is NULL; if the dequeue operation is performed at the time characterized by the timestamp 1110112, the return value of the operation is A, i.e., the identifier of the service operation A.
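This gate can be modelled with a poll-style helper; the name poll echoes the dequeue method example given earlier, but this Python version is only an illustrative assumption:

```python
import heapq

def poll(delay_queue, now):
    """Return the head element's identifier if its delay timestamp has
    passed; otherwise return a null value, mirroring the gate above."""
    if delay_queue and delay_queue[0][0] <= now:
        return heapq.heappop(delay_queue)[1]
    return None   # element not yet due -> NULL
```

With elements A (due at 1110112) and B (due at 1110113) queued, polling at 1110111 yields None, while polling at 1110112 yields A.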
According to embodiments of the present disclosure, business operations characterized by individual business elements in a delay queue may be performed sequentially, and the business operations are typically performed by threads that load the business operations.
According to the embodiment of the disclosure, after the distributed lock is released, at the time characterized by the timestamp 1110222, a thread C loading the service operation C initiates a distributed lock request to the service system, and the delay timestamp configured in the service element generated for the service operation C is 1110223. At the time characterized by the timestamp 1110223, the thread C successfully acquires the distributed lock, but the return value obtained through the dequeue operation is B, i.e., the identifier of the service operation B; at this point, the thread C may directly release the distributed lock and then perform the sleep and re-lock operations.
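The thread C scenario can be replayed with the same toy queue model; all timestamp values follow the Fig. 5 example, and the element layout is an assumption. At timestamp 1110223 the head of the queue is still B, so C's own element is not returned and B is pushed back before C releases the lock and retries.

```python
import heapq

def poll_element(delay_queue, now):
    """Pop and return the full head element if it is due, else None."""
    if delay_queue and delay_queue[0][0] <= now:
        return heapq.heappop(delay_queue)
    return None

delay_queue = [(1110113, "B"), (1110223, "C")]
heapq.heapify(delay_queue)

head = poll_element(delay_queue, 1110223)   # thread C polls while holding the lock
if head is not None and head[1] != "C":     # head is B, not C's element
    heapq.heappush(delay_queue, head)       # put B back; C releases and retries
```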
Fig. 6 schematically shows a block diagram of a service processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the service processing apparatus 600 includes an acquisition module 610, a comparison module 620, a recording module 630, an unlocking module 640, and a first processing module 650.
The acquisition module 610 is configured to acquire the target business operation loaded in the target thread in response to the target thread successfully acquiring the distributed lock, where the target business operation is configured with a priority level.
The comparison module 620 is configured to compare the priority level of the target service operation with a preset level.
The recording module 630 is configured to record the target service operation using the first delay queue corresponding to the distributed lock to obtain a second delay queue when the priority level is lower than the preset level.
The unlocking module 640 is configured to control the target thread to release the distributed lock.
The first processing module 650 is configured to invoke a delay processing policy to process the second delay queue to process the target business operation.
According to the embodiment of the disclosure, after the target thread acquires the distributed lock, the priority level of the target business operation to be executed in the target thread can be compared with a preset level; in the case where the comparison result indicates that the priority level of the target business operation is low, the target business operation may be added to the delay queue to delay the processing of the target business operation. By this technical means, when a concurrency problem is encountered, the business operation with a higher priority level can be processed preferentially, so that the technical problem that the distributed lock adopted in the related art cannot ensure that the execution order of business operations accords with the business execution logic is at least partially overcome, and the correctness of the business execution logic and the normal operation of the service system are effectively ensured.
According to an embodiment of the present disclosure, the apparatus 600 further comprises a second processing module and a third processing module.
The second processing module is configured to control the target thread to execute the target business operation when the priority level is equal to or higher than the preset level.
The third processing module is configured to control the target thread to release the distributed lock when the target business operation has been executed.
According to an embodiment of the present disclosure, the recording module 630 includes a first recording unit, a second recording unit, a third recording unit, and a fourth recording unit.
The first recording unit is configured to acquire the first delay queue from a cache.
The second recording unit is configured to generate a target business element based on the identification of the target business operation and a delay time stamp, where the delay time stamp is characterized by the sum of the current time stamp and a first preset time, and the first preset time is related to the priority level.
The third recording unit is configured to insert the target business element into the first delay queue based on the delay time stamp to obtain the second delay queue.
The fourth recording unit is configured to store the second delay queue into the cache.
According to an embodiment of the present disclosure, the first processing module 650 includes a first processing unit, a second processing unit, a third processing unit, a fourth processing unit, a fifth processing unit, a sixth processing unit, and a seventh processing unit.
The first processing unit is configured to control the target thread to sleep for a second preset time.
The second processing unit is configured to respond to a distributed lock request of the target thread after the target thread finishes dormancy, so as to lock the target thread.
The third processing unit is configured to acquire the second delay queue from the cache in response to the target thread successfully acquiring the distributed lock.
The fourth processing unit is configured to perform a dequeue operation on the second delay queue to obtain the first service element and a third delay queue in the second delay queue.
The fifth processing unit is configured to control the target thread to execute the target business operation when the first business element is the target business element.
The sixth processing unit is configured to store the third delay queue into the cache.
The seventh processing unit is configured to control the target thread to release the distributed lock when the target business operation has been executed.
According to an embodiment of the present disclosure, the first processing module 650 further includes an eighth processing unit, a ninth processing unit, and a tenth processing unit.
The eighth processing unit is configured to store the second delay queue into the cache when the first service element is not the target service element.
The ninth processing unit is configured to control the target thread to release the distributed lock.
The tenth processing unit is configured to invoke the delay processing policy again to process the second delay queue.
According to an embodiment of the present disclosure, the apparatus 600 further comprises a feedback module.
The feedback module is configured to send, to the user, feedback information indicating that processing of the target business operation has failed, in response to the target thread failing to acquire the distributed lock within a third preset time.
Any number of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, or an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules which, when executed, may perform the corresponding functions.
For example, any number of the acquisition module 610, the comparison module 620, the recording module 630, the unlocking module 640, and the first processing module 650 may be combined in one module/unit/sub-unit, or any one of them may be split into multiple modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the present disclosure, at least one of the acquisition module 610, the comparison module 620, the recording module 630, the unlocking module 640, and the first processing module 650 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, or an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the acquisition module 610, the comparison module 620, the recording module 630, the unlocking module 640, and the first processing module 650 may be at least partially implemented as a computer program module which, when executed, may perform the corresponding functions.
It should be noted that, in the embodiments of the present disclosure, the service processing apparatus portion corresponds to the service processing method portion; for details of the service processing apparatus portion, reference may be made to the description of the service processing method portion, which is not repeated herein.
Fig. 7 schematically illustrates a block diagram of an electronic device adapted to implement a business processing method according to an embodiment of the disclosure. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. The processor 701 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. Note that the program may be stored in one or more memories other than the ROM 702 and the RAM 703. The processor 701 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 700 may further include an input/output (I/O) interface 705, which is also connected to the bus 704. The electronic device 700 may also include one or more of the following components connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as necessary.
According to embodiments of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. The above-described functions defined in the system of the embodiments of the present disclosure are performed when the computer program is executed by the processor 701. The systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 702 and/or RAM 703 and/or one or more memories other than ROM 702 and RAM 703 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program, the computer program comprising program code for performing the methods provided by the embodiments of the present disclosure; when the computer program product is run on an electronic device, the program code causes the electronic device to implement the service processing method provided by the embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 701. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted and distributed over a network medium in the form of a signal, downloaded and installed via the communication section 709, and/or installed from the removable medium 711. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to wireless media, wired media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, the program code of the computer programs provided by the embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Such languages include, but are not limited to, Java, C++, Python, "C", and similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.

Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be combined in various ways, even if such combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be combined in various ways without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (10)

1. A service processing method, comprising:
responding to the successful acquisition of a distributed lock by a target thread, and acquiring a target business operation loaded in the target thread, wherein the target business operation is configured with a priority level;
comparing the priority level of the target business operation with a preset level;
when the priority level is lower than the preset level, recording the target business operation by using a first delay queue corresponding to the distributed lock to obtain a second delay queue;
controlling the target thread to release the distributed lock; and
calling a delay processing policy to process the second delay queue so as to process the target business operation.
2. The method of claim 1, further comprising:
controlling the target thread to execute the target business operation under the condition that the priority level is equal to or higher than the preset level; and
controlling the target thread to release the distributed lock under the condition that the target business operation is executed.
3. The method of claim 1, wherein the recording the target business operation using a first delay queue corresponding to the distributed lock, resulting in a second delay queue, comprises:
acquiring the first delay queue from a cache;
generating a target business element based on the identification of the target business operation and a delay time stamp, wherein the delay time stamp is characterized by a sum value of a current time stamp and a first preset time, and the first preset time is related to the priority level;
inserting the target business element into the first delay queue based on the delay time stamp to obtain the second delay queue; and
storing the second delay queue into the cache.
4. The method of claim 3, wherein the calling a delay processing policy to process the second delay queue so as to process the target business operation comprises:
controlling the target thread to sleep for a second preset time;
after the target thread finishes dormancy, responding to a distributed lock request of the target thread to lock the target thread;
in response to the target thread successfully acquiring the distributed lock, acquiring the second delay queue from the cache;
dequeuing operation is carried out on the second delay queue, and a first service element and a third delay queue in the second delay queue are obtained;
controlling the target thread to execute the target business operation under the condition that the first business element is the target business element;
storing the third delay queue in the cache; and
controlling the target thread to release the distributed lock under the condition that the target business operation is executed.
5. The method of claim 4, further comprising:
storing the second delay queue into the cache when the first service element is not the target service element;
controlling the target thread to release the distributed lock; and
calling the delay processing policy again to process the second delay queue.
6. The method of any one of claims 1-5, further comprising:
and responding to the failure of the target thread to acquire the distributed lock within a third preset time, and sending feedback information of failure of operation processing of the target service to a user.
7. A service processing apparatus comprising:
the acquisition module is used for responding to the successful acquisition of the distributed lock by the target thread and acquiring the target business operation loaded in the target thread, wherein the target business operation is configured with a priority level;
the comparison module is used for comparing the priority level of the target business operation with a preset level;
the recording module is used for recording the target business operation by using a first delay queue corresponding to the distributed lock under the condition that the priority level is lower than the preset level, so as to obtain a second delay queue;
the unlocking module is used for controlling the target thread to release the distributed lock; and
the first processing module is used for calling a delay processing policy to process the second delay queue so as to process the target business operation.
8. An electronic device, comprising:
one or more processors;
a memory for storing one or more instructions,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1 to 6.
9. A computer readable storage medium having stored thereon executable instructions which when executed by a processor cause the processor to implement the method of any of claims 1 to 6.
10. A computer program product comprising computer executable instructions for implementing the method of any one of claims 1 to 6 when executed.
CN202210228549.0A 2022-03-09 2022-03-09 Service processing method, service processing device, electronic equipment and storage medium Pending CN116775231A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210228549.0A CN116775231A (en) 2022-03-09 2022-03-09 Service processing method, service processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116775231A true CN116775231A (en) 2023-09-19

Family

ID=87990172

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117076091A (en) * 2023-10-12 2023-11-17 宁波银行股份有限公司 Multi-engine pool scheduling method and device, electronic equipment and storage medium
CN117076091B (en) * 2023-10-12 2024-01-26 宁波银行股份有限公司 Multi-engine interface scheduling method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109960582B (en) Method, device and system for realizing multi-core parallel on TEE side
US9043872B2 (en) Selective management controller authenticated access control to host mapped resources
WO2019179026A1 (en) Electronic device, method for automatically generating cluster access domain name, and storage medium
US8396961B2 (en) Dynamic control of transaction timeout periods
US8965861B1 (en) Concurrency control in database transactions
US7770170B2 (en) Blocking local sense synchronization barrier
US20040015974A1 (en) Callback event listener mechanism for resource adapter work executions performed by an application server thread
CN110188110B (en) Method and device for constructing distributed lock
CN111190586A (en) Software development framework building and using method, computing device and storage medium
US20080091679A1 (en) Generic sequencing service for business integration
US8635682B2 (en) Propagating security identity information to components of a composite application
CN113835887B (en) Video memory allocation method and device, electronic equipment and readable storage medium
US8898126B1 (en) Method and apparatus for providing concurrent data insertion and updating
CN115686805A (en) GPU resource sharing method and device, and GPU resource sharing scheduling method and device
CN115373822A (en) Task scheduling method, task processing method, device, electronic equipment and medium
CN116775231A (en) Service processing method, service processing device, electronic equipment and storage medium
CN113132400B (en) Business processing method, device, computer system and storage medium
EP3633508A1 (en) Load distribution for integration scenarios
CN112882883B (en) Shutdown test method and device, electronic equipment and computer readable storage medium
CN113791876A (en) System, method and apparatus for processing tasks
CN113364857A (en) Service data processing method and device and server
US8707449B2 (en) Acquiring access to a token controlled system resource
CN115146316A (en) Cross-system data access method, device, equipment and medium
CN111737274B (en) Transaction data processing method, device and server
CN114253671A (en) GPU pooling method of Android container

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination