CN111694681A - Batch service processing method and device, electronic equipment and computer storage medium - Google Patents

Batch service processing method and device, electronic equipment and computer storage medium

Info

Publication number
CN111694681A
CN111694681A (application CN202010540130.XA)
Authority
CN
China
Prior art keywords
processed
data
task queue
cache space
common task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010540130.XA
Other languages
Chinese (zh)
Inventor
董亚东 (Dong Yadong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd
Priority to CN202010540130.XA
Publication of CN111694681A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/546: Message passing systems or structures, e.g. queues
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a batch service processing method and apparatus, an electronic device, and a computer storage medium, applied to each master thread of an information processing system, where the information processing system includes at least one master thread and a plurality of slave threads. The processing method includes: acquiring a service data set corresponding to a batch service, the service data set comprising a plurality of pieces of data to be processed; screening each piece of data to be processed out of the service data set; and writing each piece of data to be processed into a common task queue, so that each slave thread reads and processes the data to be processed from the common task queue one by one. In this scheme, a master thread can direct a plurality of slave threads to process the service data set of a batch service in parallel while managing only one common task queue, so it never needs to decide which of several queues each piece of data to be processed should be written into. The scheme can therefore effectively reduce the system resources consumed in processing batch services.

Description

Batch service processing method and device, electronic equipment and computer storage medium
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method and an apparatus for processing a batch service, an electronic device, and a computer storage medium.
Background
Batch business is a type of business frequently encountered by the information processing systems of banks, and a batch business often involves a large number of users and a large amount of related data, such as an agency payroll business (payroll issued on behalf of an enterprise) for a plurality of employees, a text reconciliation business for transaction records over a period of time, and a batch account opening business for a large number of users.
To increase the processing speed of batch business, a bank's information processing system currently tends to process batch business in parallel using multi-process technology. The existing multi-process technology generally involves one main process and a plurality of sub-processes: the main process screens each piece of data to be processed out of the service data set of the batch business and writes it into the task queue of one of the sub-processes, and each sub-process processes the data in its own task queue one by one.
In this processing flow, the main process has to manage the task queues of all of the sub-processes and decide, for every piece of data to be processed, which task queue it should be written into; the existing multi-process technology therefore consumes considerable system resources.
Disclosure of Invention
In view of the problems in the prior art, the present application provides a method and an apparatus for processing a batch service, an electronic device, and a computer storage medium, so as to provide a batch service processing scheme with less resource consumption.
The first aspect of the present application provides a method for processing a batch service, which is applied to each main thread of an information processing system, where the information processing system includes at least one main thread and a plurality of slave threads, and the method includes:
acquiring a service data set corresponding to batch services; the service data set comprises a plurality of pieces of data to be processed;
and screening each piece of data to be processed from the service data set, and writing each piece of data to be processed into a common task queue, so that each slave thread reads and processes the data to be processed from the common task queue one by one.
Optionally, after each piece of to-be-processed data is obtained by screening from the service data set and each piece of to-be-processed data is written into a common task queue, the method further includes:
receiving feedback information of each slave thread; the feedback information is used for indicating whether the corresponding data to be processed is successfully processed.
Optionally, the common task queue includes a plurality of cache spaces arranged in sequence, each cache space being used for storing one piece of the to-be-processed data, and the process of the slave thread reading and processing the to-be-processed data from the common task queue includes:
obtaining a head pointer of the common task queue;
reading to-be-processed data stored in a first cache space of the common task queue; wherein the first cache space refers to a cache space to which the head pointer currently points;
after the reading succeeds, pointing the head pointer to the cache space following the first cache space in the common task queue, and releasing the head pointer;
and processing the read data to be processed.
Optionally, the writing each piece of the to-be-processed data into a common task queue includes:
obtaining a tail pointer of the common task queue;
if the second cache space of the common task queue is empty, writing a piece of data to be processed into the second cache space; wherein the second cache space refers to the cache space to which the tail pointer of the common task queue currently points;
and if the service data set contains to-be-processed data not yet written into the common task queue, pointing the tail pointer to the cache space following the second cache space, and releasing the tail pointer.
A second aspect of the present application provides a processing apparatus for a batch service, which is applied to each master thread of an information processing system, the information processing system including at least one master thread and a plurality of slave threads, the processing apparatus including:
the acquiring unit is used for acquiring a service data set corresponding to the batch services; the service data set comprises a plurality of pieces of data to be processed;
the screening unit is used for screening each piece of data to be processed from the service data set;
and the writing unit is used for writing each piece of the data to be processed into a common task queue so that each slave thread reads from the common task queue one by one and processes the data to be processed.
Optionally, the processing apparatus further includes:
the receiving unit is used for receiving the feedback information of each slave thread; the feedback information is used for indicating whether the corresponding data to be processed is successfully processed.
Optionally, the common task queue includes a plurality of cache spaces arranged in sequence, each cache space being used for storing one piece of the to-be-processed data, and when reading and processing to-be-processed data from the common task queue, the slave thread is specifically configured to:
obtaining a head pointer of the common task queue;
reading to-be-processed data stored in a first cache space of the common task queue; wherein the first cache space refers to a cache space to which the head pointer currently points;
after the reading succeeds, point the head pointer to the cache space following the first cache space in the common task queue, and release the head pointer;
and processing the read data to be processed.
Optionally, when the writing unit writes each piece of the to-be-processed data into the common task queue, the writing unit is specifically configured to:
obtaining a tail pointer of the common task queue;
if the second cache space of the common task queue is empty, write a piece of data to be processed into the second cache space; wherein the second cache space refers to the cache space to which the tail pointer of the common task queue currently points;
and if the service data set contains to-be-processed data not yet written into the common task queue, point the tail pointer to the cache space following the second cache space, and release the tail pointer.
A third aspect of the present application provides an electronic device comprising a memory and a processor;
wherein the memory is for storing a computer program;
the processor is configured to execute the computer program, and in particular, is configured to implement the method for processing the batch service provided in any one of the first aspects of the present application.
A fourth aspect of the present application provides a computer storage medium, configured to store a computer program, where the computer program is specifically configured to implement the method for processing a batch service provided in any one of the first aspects of the present application when executed.
The application provides a batch service processing method and apparatus, an electronic device, and a computer storage medium, applied to each master thread of an information processing system, where the information processing system includes at least one master thread and a plurality of slave threads. The processing method includes: acquiring a service data set corresponding to a batch service, the service data set comprising a plurality of pieces of data to be processed; screening each piece of data to be processed out of the service data set; and writing each piece of data to be processed into a common task queue, so that each slave thread reads and processes the data to be processed from the common task queue one by one. In this scheme, a master thread can direct a plurality of slave threads to process the service data set of a batch service in parallel while managing only one common task queue, so it never needs to decide which of several queues each piece of data to be processed should be written into. The scheme can therefore effectively reduce the system resources consumed in processing batch services.
Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic thread architecture diagram of an information handling system according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a batch service processing method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of a slave thread reading and processing data to be processed according to an embodiment of the present disclosure;
fig. 4 is a block diagram illustrating a batch service processing apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
The batch service processing method provided in the embodiments of the present application can be applied to an information processing system comprising at least one master thread and a plurality of slave threads. Each master thread screens data to be processed out of the service data set of a batch service and writes the screened data into a common task queue of the information processing system; each slave thread reads pieces of data to be processed from the common task queue one by one and processes what it reads.
The relationship between the master thread and the slave thread in the above-described information processing system can be referred to fig. 1.
Optionally, the information processing system may include only one master thread and a plurality of slave threads. In this case the working mode of the information processing system is referred to as the manager-worker (Master-Worker) mode: the master thread acts as the manager (Master) thread, and the slave threads act as worker (Worker) threads. In Master-Worker mode, every time a Worker thread finishes processing a piece of data to be processed, it must send the Master thread feedback information for that piece, indicating whether the data was processed successfully.
Alternatively, the information handling system may include a plurality of master threads and a plurality of slave threads, in which case the mode of operation of the information handling system is referred to as a producer-consumer mode, wherein the master threads may be considered producer threads and the slave threads may be considered consumer threads. In producer-consumer mode, the consumer thread does not need to provide feedback information to the producer thread after processing the pending data.
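For orientation, the following minimal Java sketch shows the two roles sharing a single queue. It is an illustration only: java.util.concurrent.ArrayBlockingQueue stands in for the patent's pointer-based common task queue (sketched later in the detailed description), and the Task and Feedback records, their field names, and the thread count are assumptions made for the example, not part of the patent. In producer-consumer mode the feedback queue would simply be omitted.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MasterWorkerSketch {

    // One piece of to-be-processed data; fields are illustrative placeholders.
    record Task(String account, long amountCents) {}

    // Feedback a worker sends after processing one task (Master-Worker mode).
    record Feedback(Task task, boolean success) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Task> commonQueue = new ArrayBlockingQueue<>(1024);
        BlockingQueue<Feedback> feedback = new ArrayBlockingQueue<>(1024);

        // Slave (Worker) threads: all read from the single common queue.
        for (int i = 0; i < 4; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Task t = commonQueue.take();        // blocks until data arrives
                        boolean ok = process(t);            // business processing
                        feedback.put(new Feedback(t, ok));  // report result to master
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }

        // Master thread: write screened data into the one common queue;
        // no per-worker queue has to be chosen.
        List<Task> serviceDataSet = List.of(
                new Task("6222A", 1_000_00), new Task("6222B", 2_000_00));
        for (Task t : serviceDataSet) {
            commonQueue.put(t);
        }

        // Master-Worker mode: collect one feedback record per task written.
        for (int i = 0; i < serviceDataSet.size(); i++) {
            System.out.println(feedback.take());
        }
    }

    private static boolean process(Task t) {
        return true; // placeholder for real business logic
    }
}
```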
Referring to fig. 2, a method for processing a batch service provided in an embodiment of the present application may include the following steps:
it should be noted that the following steps can be considered as steps performed by main threads in the information handling system, each of which performs the steps shown in fig. 2 as a producer thread when the information handling system is operating in a producer-consumer mode.
S201, obtaining a service data set corresponding to the batch services.
The service data set comprises a plurality of pieces of data to be processed.
As described above, the batch businesses of a bank may include an agency payroll business for a plurality of employees of an enterprise, a text reconciliation business for transaction records in a certain period, and a batch account opening business for a large number of users, and may further include a short message notification business for a plurality of users, among others.
The service data set of a batch service is the set of input data that must be provided when the batch service is handled. For example, if the batch business is an agency payroll business, the service data set includes the bank account of each employee of the enterprise concerned and the salary amount set by the enterprise; the bank account of one employee to whom salary is to be issued, together with that employee's salary amount, constitutes one piece of data to be processed.
If the batch service is a batch account opening service for a plurality of users, the service data set comprises personal information (including identity numbers, contact ways, home addresses and the like) of each user needing to open an account, and the personal information of one user needing to open an account forms a piece of data to be processed.
S202, screening each piece of data to be processed from the business data set.
It is possible that the service data set contains some data that does not currently need to be processed. Taking the agency payroll service as an example, some employees may not be issued salary this time, yet their bank accounts are still included in the service data set; those bank accounts do not belong to the data to be processed. In this case, step S202 filters out of the service data set the data that does not need processing, avoiding errors in the subsequent processing by the slave threads.
Taking the short message notification service as an example, when handling a batch of short message notification services, the corresponding service data set may include the contact way of each user having opened the short message notification function in the bank, but the short message notification service only needs to send a short message to a part of the users, so the contact ways of the users who need to send the short message at this time need to be screened out from the service data set, and the contact way of each user who needs to send the short message constitutes a piece of data to be processed.
Optionally, the main thread may also designate a corresponding processing mode for each piece of to-be-processed data, and write the processing mode and the to-be-processed data into the common task queue, so as to control the slave thread to process a certain piece of to-be-processed data according to the processing mode designated by the main thread.
For example, in the agency payroll service, a bank may deduct credit card or other loan repayments after salary is issued. Correspondingly, after screening out the bank account of each employee to be paid, the master thread may attach, to the bank account of any user who has opened the repayment service and has a loan due, an identifier for handling the repayment business together with a repayment amount, so that when a slave thread reads that bank account it deducts the corresponding amount from it.
In the short message notification service, the master thread may specify, for the contact information of each user to whom a short message is to be sent, the type of short message to send, thereby controlling which type of short message the slave thread sends to the corresponding user.
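As a concrete illustration of steps S201 and S202 plus the optional processing-mode tag, the sketch below filters an assumed service data set and attaches a mode to each piece of to-be-processed data. All record and field names (EmployeeRecord, Pending, Mode, and so on) are hypothetical stand-ins, not structures defined by the patent.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ScreeningSketch {

    // One raw record of the service data set; the shape is assumed.
    record EmployeeRecord(String bankAccount, long salaryCents,
                          boolean payThisMonth, boolean repaymentEnabled,
                          long loanDueCents) {}

    // A piece of to-be-processed data, tagged with a processing mode.
    enum Mode { PAY_ONLY, PAY_THEN_REPAY }
    record Pending(String bankAccount, long salaryCents, Mode mode, long repayCents) {}

    // S202: keep only employees actually paid this run, and attach the
    // processing mode the master thread wants the slave thread to apply.
    static List<Pending> screen(List<EmployeeRecord> serviceDataSet) {
        return serviceDataSet.stream()
                .filter(EmployeeRecord::payThisMonth)   // drop data not to be processed
                .map(r -> r.repaymentEnabled() && r.loanDueCents() > 0
                        ? new Pending(r.bankAccount(), r.salaryCents(),
                                      Mode.PAY_THEN_REPAY, r.loanDueCents())
                        : new Pending(r.bankAccount(), r.salaryCents(),
                                      Mode.PAY_ONLY, 0))
                .collect(Collectors.toList());
    }
}
```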
S203, writing each piece of data to be processed into the common task queue, so that each slave thread reads and processes the data to be processed from the common task queue one by one.
Optionally, step S202 and step S203 may be executed concurrently; that is, each time the master thread screens out one piece of to-be-processed data, it may immediately write that piece into the common task queue, rather than waiting until every piece in the service data set has been screened before writing.
Optionally, after the master thread writes a piece of to-be-processed data into the common task queue, it may delete that piece from the service data set to avoid writing it into the queue again later. Alternatively, instead of deleting the data, it may set a write tag on that piece in the service data set to indicate that the piece has already been written into the common task queue.
The common task queue comprises a plurality of cache spaces, a tail pointer, and a head pointer. Through the head pointer, a slave thread reads the to-be-processed data in the cache space the head pointer points to; through the tail pointer, a master thread writes to-be-processed data into the cache space the tail pointer points to. After each use, a pointer moves backward to the cache space following the one it originally pointed to.
For example, when the head pointer currently points to cache space A, then after a slave thread reads the to-be-processed data of cache space A through the head pointer, the head pointer will point to the cache space following A. The tail pointer behaves similarly and is not described again.
Optionally, if the tail pointer points to the last cache space of the common task queue, then after writing to-be-processed data into that last cache space, the master thread may append several new cache spaces to the queue and point the tail pointer at the newly added space.
Alternatively, the master thread may check whether the first cache space of the common task queue is empty at that moment; if it is, the tail pointer may be pointed back to the first cache space, so the queue is reused as a ring.
When there are multiple master threads, in order to avoid conflicts caused by several master threads writing to-be-processed data into the common task queue at the same time, the tail pointer may be made accessible to only one master thread at a time. In that case, the specific execution of step S203 is as follows:
a tail pointer of the common task queue is obtained.
Specifically, obtaining the tail pointer of the common task queue means acquiring the access right to the tail pointer released by another master thread. The tail pointer may be preset with a permission flag; while the flag is marked accessible, any master thread may, through the tail pointer, write the to-be-processed data it has screened into the second cache space, where the second cache space refers to the cache space the tail pointer currently points to.
Once any master thread begins accessing the second cache space through the tail pointer, the permission flag is changed to inaccessible. From then on, no master thread other than the one currently using the tail pointer can use it, and therefore none of the others can write to-be-processed data into the common task queue.
Correspondingly, after a master thread finishes writing to-be-processed data into the common task queue through the tail pointer and has pointed the tail pointer at the cache space following the original second cache space, the permission flag is changed back to accessible. All master threads in the information processing system may then compete again for access to the tail pointer, and the one that obtains it performs the next write and tail-pointer move.
After the tail pointer is obtained, if the current second cache space of the common task queue is empty, a piece of to-be-processed data is written into it. If the current second cache space is not empty, the tail pointer is pointed at the next cache space, and the new second cache space is checked; if it is empty the data is written, and otherwise this step repeats until an empty second cache space is found.
As described above, after any master thread completes a write of to-be-processed data, if the service data set still contains to-be-processed data not yet written into the common task queue, the master thread points the tail pointer at the cache space following the second cache space and releases the tail pointer, that is, it releases the access right by changing the tail pointer's permission flag back to accessible.
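The tail-pointer protocol above maps naturally onto a lock around a tail index. The following is a minimal, write-side-only Java sketch of that protocol, under stated assumptions: a ReentrantLock stands in for the permission flag, a null slot means an empty cache space, and memory visibility between the write end and the read end is glossed over; a production queue would need explicit guarantees there. The read side is sketched after the FIG. 3 discussion below.

```java
import java.util.concurrent.locks.ReentrantLock;

public class CommonTaskQueueWriteSide {

    private Object[] slots = new Object[1024]; // the cache spaces
    private int tail = 0;                      // index of the "second cache space"
    private final ReentrantLock tailLock = new ReentrantLock(); // permission flag stand-in

    // Write one piece of to-be-processed data through the tail pointer.
    public void enqueue(Object data) {
        tailLock.lock();                       // obtain the tail pointer
        try {
            // Advance until the cache space the tail pointer points to is empty.
            while (slots[tail] != null) {
                advanceTail();
            }
            slots[tail] = data;                // write into the second cache space
            advanceTail();                     // point the tail at the next cache space
                                               // (the text advances only when more
                                               // data remains; unconditional is simpler)
        } finally {
            tailLock.unlock();                 // release the tail pointer
        }
    }

    private void advanceTail() {
        tail++;
        if (tail == slots.length) {
            // Grow the queue as the text describes; the ring-buffer variant would
            // instead wrap to slot 0 after checking that the first space is empty.
            Object[] bigger = new Object[slots.length * 2];
            System.arraycopy(slots, 0, bigger, 0, slots.length);
            slots = bigger;
        }
    }
}
```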
Optionally, if the information processing system adopts the Master-Worker mode, the Master thread may receive, in real time, the feedback information each slave thread provides after processing a piece of to-be-processed data, thereby learning whether the corresponding data was processed successfully, and may record the data and the corresponding feedback information in a database.
From the steps above it can be seen that, with the batch service processing method provided in the embodiments of the present application, on the one hand, multiple slave threads of the information processing system can be invoked to process multiple pieces of to-be-processed data of the service data set in parallel; on the other hand, whether in manager-worker mode or producer-consumer mode, each master thread manages only one common task queue and never has to manage several task queues at once.
Specifically, when a main thread has to manage multiple task queues, it must perform operations such as moving the tail pointer of every queue and adding new cache spaces to every queue, and each time it screens out a piece of to-be-processed data it must also decide which task queue to write it into. In the present scheme, each master thread performs the tail-pointer moves and cache-space additions for a single common task queue only, and never needs to decide which queue the screened data should go to.
In conclusion, the scheme effectively reduces the system resources the master threads of the information processing system consume at runtime, and thereby reduces the system resources consumed by the information processing system as a whole.
The steps described in the above embodiments present the batch service processing method of the present application with a master thread of the information processing system as the executing subject. The following describes, with reference to FIG. 3, the process by which a slave thread reads and processes to-be-processed data in the common task queue. As shown in FIG. 3, the process may include the following steps:
s301, a head pointer of the common task queue is obtained.
Similar to the master thread's acquisition of the tail pointer described above, step S301 refers to the slave thread obtaining the access right to the head pointer. By controlling access to the head pointer, it is ensured that the common task queue is read by only one slave thread at a time, avoiding the conflicts that would arise if multiple slave threads read to-be-processed data simultaneously.
S302, to-be-processed data stored in a first buffer space of the common task queue are read.
Wherein the first cache space refers to the cache space currently pointed to by the head pointer.
S303, after the reading succeeds, the head pointer is pointed at the cache space following the first cache space in the common task queue, and the head pointer is released.
Releasing the head pointer means changing its permission flag from inaccessible back to accessible.
And S304, processing the read data to be processed.
The specific processing in step S304 varies with the batch service. For example, if the batch service is sending notification messages to multiple users, the to-be-processed data in step S304 is the contact information of one user to whom a notification message must be sent, and the processing consists of connecting to the short message platform and handing it the contact information that was read, thereby triggering the platform to send the notification message to that user.
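Steps S301 to S304 are the mirror image of the write side. The sketch below is a read-side counterpart to the enqueue sketch above, again with a ReentrantLock standing in for the head pointer's permission flag; it assumes the fixed-size, wrap-around variant of the queue described earlier. Note that processing deliberately happens after the head pointer is released, which is exactly what lets other slave threads read further data while this one is still processing.

```java
import java.util.concurrent.locks.ReentrantLock;

public class CommonTaskQueueReadSide {

    private final Object[] slots;              // shared cache spaces (fixed-size ring)
    private int head = 0;                      // index of the "first cache space"
    private final ReentrantLock headLock = new ReentrantLock(); // permission flag stand-in

    public CommonTaskQueueReadSide(Object[] sharedSlots) {
        this.slots = sharedSlots;
    }

    // S301-S304 as one slave-thread step; returns false if the queue was empty.
    public boolean readAndProcess() {
        Object data;
        headLock.lock();                       // S301: obtain the head pointer
        try {
            data = slots[head];                // S302: read the first cache space
            if (data == null) {
                return false;                  // nothing to read yet
            }
            slots[head] = null;                // mark the cache space empty again
            head = (head + 1) % slots.length;  // S303: point head at the next space...
        } finally {
            headLock.unlock();                 // ...and release the head pointer
        }
        process(data);                         // S304: process outside the lock so
        return true;                           // other slaves can read meanwhile
    }

    private void process(Object data) {
        // placeholder: e.g. hand a contact number to the SMS platform
    }
}
```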
In the information processing system of the embodiments of the present application, a corresponding semaphore is preset for each thread, and whether a given thread currently operates as a master thread or as a slave thread is determined by the value of its semaphore.
Optionally, each piece of to-be-processed data in the common task queue may be deleted from the common task queue by the corresponding slave thread after being processed.
Further, a switching duration may be set. After a thread has operated continuously as a master thread for one switching duration, the value of its semaphore may be changed so that it switches to operating as a slave thread, i.e., it begins executing the steps of the embodiment corresponding to FIG. 3. Likewise, after a thread has operated continuously as a slave thread for one switching duration, its semaphore value may be changed so that it switches to operating as a master thread, i.e., executing the steps of the embodiment corresponding to FIG. 2. In this way, when the threads of the information processing system must work continuously for a long time, that is, when the service data set of the batch service contains a large amount of to-be-processed data, the load on each thread can be balanced to a degree, preventing any single thread from breaking down under excessive load.
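The patent leaves the semaphore mechanism abstract; one simple way to realize the described switching is a per-thread role flag checked between work items, as in the sketch below. The AtomicBoolean, the timing check, and the masterStep/slaveStep method names are assumptions made for illustration, not the patent's mechanism.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class RoleSwitchingSketch implements Runnable {

    private final AtomicBoolean isMaster;    // stands in for the per-thread semaphore
    private final long switchMillis;         // the configured switching duration

    public RoleSwitchingSketch(boolean startAsMaster, long switchMillis) {
        this.isMaster = new AtomicBoolean(startAsMaster);
        this.switchMillis = switchMillis;
    }

    @Override
    public void run() {
        long roleStart = System.currentTimeMillis();
        while (!Thread.currentThread().isInterrupted()) {
            if (isMaster.get()) {
                masterStep();                // one iteration of the FIG. 2 behaviour
            } else {
                slaveStep();                 // one iteration of the FIG. 3 behaviour
            }
            // After running in one role for a full switching duration,
            // flip the flag so the thread continues in the other role.
            if (System.currentTimeMillis() - roleStart >= switchMillis) {
                isMaster.set(!isMaster.get());
                roleStart = System.currentTimeMillis();
            }
        }
    }

    private void masterStep() { /* screen one record and enqueue it */ }
    private void slaveStep()  { /* dequeue one record and process it */ }
}
```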
The following describes a method for processing a batch service provided in an embodiment of the present application with reference to a specific example.
Take an agency payroll service as an example. Suppose an enterprise A has opened an agency payroll service for all of its employees at a bank. When payroll is to be issued, a master thread of the information processing system obtains the bank accounts of all employees of enterprise A as the service data set of this payroll batch service. Each master thread then executes the method of the embodiment corresponding to FIG. 2: it screens out of the service data set the bank account of every employee to be paid together with the salary amount set by enterprise A, and writes each bank account with its corresponding salary amount into the common task queue as one piece of to-be-processed data.
Meanwhile, the information processing system may start a plurality of slave threads. Once started, each slave thread executes the method of the embodiment corresponding to FIG. 3, reading from the common task queue, one by one, the bank account and corresponding salary amount of each employee to be paid.
Each time a slave thread reads out one bank account and the corresponding salary amount, it performs step S303, moving the head pointer and releasing its access right so that other slave threads can go on reading other bank accounts. The slave thread then sends the bank account and salary amount it read to the transfer platform, triggering the platform to withdraw the corresponding amount from enterprise A's corporate account and transfer it to the employee's bank account, completing the agency payroll for that employee.
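To make the example concrete, the fragment below sketches what one slave thread might do with a single piece of read payroll data. TransferPlatform and payOut are hypothetical names introduced for illustration; the patent does not define the transfer platform's interface.

```java
public class PayrollWorkerStep {

    // Hypothetical interface to the bank's transfer platform.
    interface TransferPlatform {
        boolean payOut(String corporateAccount, String employeeAccount, long cents);
    }

    // One piece of to-be-processed payroll data, as read from the queue.
    record PayrollTask(String employeeAccount, long salaryCents) {}

    // What a slave thread does with one read task (after releasing the head
    // pointer in S303): trigger the platform to move the salary from the
    // enterprise account to the employee's bank account.
    static boolean handle(TransferPlatform platform, String corporateAccount,
                          PayrollTask task) {
        return platform.payOut(corporateAccount,
                               task.employeeAccount(), task.salaryCents());
    }
}
```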
Referring to FIG. 4, which may be regarded as a structural block diagram of a master thread of the information processing system, the batch service processing apparatus provided in this embodiment may include the following units:
an obtaining unit 401, configured to obtain a service data set corresponding to a batch service.
The service data set comprises a plurality of pieces of data to be processed.
The screening unit 402 is configured to screen each piece of data to be processed from the service data set.
A writing unit 403, configured to write each piece of data to be processed into the common task queue, so that each slave thread reads and processes the data to be processed from the common task queue one by one.
Optionally, the processing apparatus further includes a receiving unit 404, configured to receive feedback information of each slave thread. The feedback information is used for indicating whether the corresponding data to be processed is successfully processed.
Optionally, the common task queue includes a plurality of cache spaces arranged in sequence, each cache space being used for storing one piece of data to be processed, and when reading and processing to-be-processed data from the common task queue, the slave thread is specifically configured to:
obtaining a head pointer of a common task queue;
reading to-be-processed data stored in a first cache space of a common task queue; wherein, the first cache space refers to the cache space currently pointed to by the head pointer;
after the reading succeeds, point the head pointer at the cache space following the first cache space in the common task queue, and release the head pointer;
and processing the read data to be processed.
Optionally, when the writing unit 403 writes each piece of to-be-processed data into the common task queue, the writing unit is specifically configured to:
acquiring a tail pointer of a common task queue;
if the second cache space of the common task queue is empty, write a piece of data to be processed into the second cache space; wherein the second cache space refers to the cache space to which the tail pointer of the common task queue currently points;
and if the service data set contains to-be-processed data not yet written into the common task queue, point the tail pointer at the cache space following the second cache space, and release the tail pointer.
The specific working principle of the device for processing the batch services provided in the embodiments of the present application may refer to a method for processing the batch services provided in any embodiment of the present application, and details are not described here.
The application provides a batch service processing apparatus applied to each master thread of an information processing system, where the information processing system includes at least one master thread and a plurality of slave threads. The obtaining unit 401 obtains a service data set corresponding to a batch service, the service data set comprising a plurality of pieces of data to be processed; the screening unit 402 screens each piece of to-be-processed data out of the service data set; and the writing unit 403 writes each piece into the common task queue, so that each slave thread reads and processes the to-be-processed data from the common task queue one by one. Since each master thread manages only this single common task queue and never has to decide among several queues, the scheme can effectively reduce the system resources consumed in processing batch services.
An electronic device is further provided in the embodiments of the present application, please refer to fig. 5, and the electronic device includes a memory 501 and a processor 502.
The memory 501 is used for storing a computer program, and the processor 502 is used for executing the computer program stored in the memory 501, and is specifically used for executing the processing method of the batch service provided in any embodiment of the present application.
The embodiments of the present application further provide a computer storage medium, which is used to store a computer program, and when the stored computer program is executed, the computer storage medium is specifically used to implement the method for processing batch services provided in any embodiment of the present application.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus comprising that element.
It should be noted that the terms "first", "second", and the like in the present invention are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
The above description of the disclosed embodiments enables those skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for processing a batch of traffic, applied to each master thread of an information handling system, the information handling system comprising at least one master thread and a plurality of slave threads, the method comprising:
acquiring a service data set corresponding to batch services; the service data set comprises a plurality of pieces of data to be processed;
and screening each piece of data to be processed from the service data set, and writing each piece of data to be processed into a common task queue, so that each slave thread reads and processes the data to be processed from the common task queue one by one.
2. The processing method according to claim 1, wherein after the step of obtaining each piece of the to-be-processed data from the service data set by screening and writing each piece of the to-be-processed data into a common task queue, the processing method further comprises:
receiving feedback information of each slave thread; the feedback information is used for indicating whether the corresponding data to be processed is successfully processed.
3. The processing method according to claim 1, wherein the common task queue includes a plurality of cache spaces arranged in sequence, each cache space is used for storing one piece of the data to be processed, and the process of the slave thread reading and processing the data to be processed from the common task queue includes:
obtaining a head pointer of the common task queue;
reading to-be-processed data stored in a first cache space of the common task queue; wherein the first cache space refers to a cache space to which the head pointer currently points;
after the reading succeeds, pointing the head pointer to the cache space following the first cache space in the common task queue, and releasing the head pointer;
and processing the read data to be processed.
4. The processing method according to claim 3, wherein said writing each of said pieces of data to be processed into a common task queue comprises:
obtaining a tail pointer of the common task queue;
if the second cache space of the common task queue is empty, writing a piece of data to be processed into the second cache space; wherein the second cache space refers to the cache space to which the tail pointer of the common task queue currently points;
and if the service data set contains to-be-processed data not yet written into the common task queue, pointing the tail pointer to the cache space following the second cache space, and releasing the tail pointer.
5. A processing apparatus of a batch service applied to each main thread of an information processing system including at least one main thread and a plurality of slave threads, the processing apparatus comprising:
the acquiring unit is used for acquiring a service data set corresponding to the batch services; the service data set comprises a plurality of pieces of data to be processed;
the screening unit is used for screening each piece of data to be processed from the service data set;
and the writing unit is used for writing each piece of the data to be processed into a common task queue so that each slave thread reads from the common task queue one by one and processes the data to be processed.
6. The processing apparatus according to claim 5, characterized in that the processing apparatus further comprises:
the receiving unit is used for receiving the feedback information of each slave thread; the feedback information is used for indicating whether the corresponding data to be processed is successfully processed.
7. The processing apparatus according to claim 5, wherein the common task queue includes a plurality of cache spaces arranged in sequence, each cache space is configured to store one piece of the to-be-processed data, and when the slave thread reads and processes the to-be-processed data from the common task queue, the slave thread is specifically configured to:
obtaining a head pointer of the common task queue;
reading to-be-processed data stored in a first cache space of the common task queue; wherein the first cache space refers to a cache space to which the head pointer currently points;
after the reading succeeds, point the head pointer to the cache space following the first cache space in the common task queue, and release the head pointer;
and processing the read data to be processed.
8. The processing apparatus according to claim 7, wherein the writing unit, when writing each piece of the to-be-processed data into the common task queue, is specifically configured to:
obtaining a tail pointer of the common task queue;
if the second cache space of the common task queue is empty, write a piece of data to be processed into the second cache space; wherein the second cache space refers to the cache space to which the tail pointer of the common task queue currently points;
and if the service data set contains to-be-processed data not yet written into the common task queue, point the tail pointer at the cache space following the second cache space, and release the tail pointer.
9. An electronic device comprising a memory and a processor;
wherein the memory is for storing a computer program;
the processor is configured to execute the computer program, and in particular to implement the method for processing a batch service according to any one of claims 1 to 4.
10. A computer storage medium for storing a computer program, which, when executed, is particularly adapted to implement the method of processing a batch of services according to any one of claims 1 to 4.
CN202010540130.XA, filed 2020-06-12 (priority 2020-06-12): Batch service processing method and device, electronic equipment and computer storage medium. Status: Pending. Publication: CN111694681A.

Priority Applications (1)

Application Number: CN202010540130.XA; Priority Date: 2020-06-12; Filing Date: 2020-06-12; Title: Batch service processing method and device, electronic equipment and computer storage medium

Applications Claiming Priority (1)

Application Number: CN202010540130.XA; Priority Date: 2020-06-12; Filing Date: 2020-06-12; Title: Batch service processing method and device, electronic equipment and computer storage medium

Publications (1)

Publication Number: CN111694681A; Publication Date: 2020-09-22

Family

ID=72480989

Family Applications (1)

Application Number: CN202010540130.XA; Title: Batch service processing method and device, electronic equipment and computer storage medium; Status: Pending

Country Status (1)

Country: CN; Publication: CN111694681A


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130014118A1 (en) * 2011-07-06 2013-01-10 Stephen Jones Simultaneous submission to a multi-producer queue by multiple threads
US20180349181A1 (en) * 2017-06-04 2018-12-06 Apple Inc. Execution priority management for inter-process communication
CN107515795A (en) * 2017-09-08 2017-12-26 北京京东尚科信息技术有限公司 Multi-task parallel data processing method, device, medium and equipment based on queue
CN110765167A (en) * 2019-10-23 2020-02-07 泰康保险集团股份有限公司 Policy data processing method, device and equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112187767A (en) * 2020-09-23 2021-01-05 上海万向区块链股份公司 Multi-party contract consensus system, method and medium based on block chain
CN113377501A (en) * 2021-06-08 2021-09-10 中国农业银行股份有限公司 Data processing method, apparatus, device, medium, and program product
CN117453422A (en) * 2023-12-22 2024-01-26 南京研利科技有限公司 Data processing method, device, electronic equipment and computer readable storage medium
CN117453422B (en) * 2023-12-22 2024-03-01 南京研利科技有限公司 Data processing method, device, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN111694681A (en) Batch service processing method and device, electronic equipment and computer storage medium
CN100430945C (en) Device and method for writing data into disc by dynamic switching
US20170109203A1 (en) Task scheduling
US8763012B2 (en) Scalable, parallel processing of messages while enforcing custom sequencing criteria
EP2600246A1 (en) Batch processing of business objects
CN102667713B (en) With the response type user interface of background application logic
US7823157B2 (en) Dynamic queue for use in threaded computing environment
US20110252426A1 (en) Processing batch transactions
US7599968B2 (en) Technique for supplying a data warehouse whilst ensuring a consistent data view
US9529651B2 (en) Apparatus and method for executing agent
CN108959118B (en) Data writing method and device
CN108292162A (en) Software definition fifo buffer for multi-thread access
CN102541661A (en) Wait on address synchronization interface
US20160034895A1 (en) Personalized budgets for financial services
US9904470B2 (en) Tracking ownership of memory in a data processing system through use of a memory monitor
US20050044173A1 (en) System and method for implementing business processes in a portal
CN113535087A (en) Data processing method, server and storage system in data migration process
JP3752193B2 (en) How to allocate access device usage between the host operating system and the guest operating system
US6389482B1 (en) Dynamic transitioning from a local pipe to a cross-system pipe
WO2022267676A1 (en) Data processing method and apparatus for shared memory, and device and medium
US8458704B2 (en) System and method for an improved merge utility
US8977814B1 (en) Information lifecycle management for binding content
US10867288B1 (en) Blockchain payment notification system
JP2016173746A (en) Information processing device, control method thereof and program
CN117850352A (en) Multitasking method and device

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination