CN113778700A - Message processing method, system, medium and computer system - Google Patents


Info

Publication number
CN113778700A
Authority
CN
China
Prior art keywords
event
task
types
event objects
objects
Prior art date
Legal status
Pending
Application number
CN202011167554.2A
Other languages
Chinese (zh)
Inventor
张海燕 (Zhang Haiyan)
杨小刚 (Yang Xiaogang)
鲍阳 (Bao Yang)
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202011167554.2A
Publication of CN113778700A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2358Change logging, detection, and notification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5021Priority
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/547Messaging middleware
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present disclosure provides a message processing method, including: pulling m task messages from a message queue, where the m task messages are generated based on m task data items pulled from a database, each task data item includes a task identifier and a task type, the m task data items correspond to n task types, m and n are positive integers, and m is greater than or equal to n; encapsulating the m task messages based on the n task types to produce m event objects, where the m event objects correspond to n event types and the n event types correspond one-to-one to the n task types; and storing the m event objects into n circular buffer queues in a first server based on the n event types so as to consume the m event objects, and returning the consumption results of the m event objects to the database to update the database, where one circular buffer queue is used to store event objects of one event type. In addition, the present disclosure also provides a message processing system, a computer system, and a computer-readable medium.

Description

Message processing method, system, medium and computer system
Technical Field
The present disclosure relates to the field of message processing, and more particularly, to a message processing method and system, a computer system, and a computer-readable medium.
Background
In a producer-consumer architecture, message middleware is a supporting software system that provides synchronous or asynchronous, reliable message sharing and delivery between producers and consumers based on message queuing and message delivery techniques. The producer scans the database for pending tasks of different task types by means of Spring timed tasks, a distributed scheduling platform, Java threads, or the like, produces messages from those pending tasks, and sends the messages to the message middleware; the consumer pulls subscribed messages from the message middleware and consumes each message by activating the task processing logic corresponding to its task type.
However, when the volume of task data to be processed is large, messages produced by the producer are not consumed by consumers in time, and a large backlog builds up on the message middleware platform, placing great stress on it.
Disclosure of Invention
In view of the above, the present disclosure provides a message processing method and system, a computer system, and a computer readable medium.
One aspect of the present disclosure provides a message processing method, including: pulling m task messages from a message queue, where the m task messages are generated based on m task data items pulled from a database, each task data item includes a task identifier and a task type, the m task data items correspond to n task types, m and n are positive integers, and m is greater than or equal to n; encapsulating the m task messages based on the n task types to produce m event objects, where the m event objects correspond to n event types and the n event types correspond one-to-one to the n task types; and storing the m event objects into n circular buffer queues in a first server based on the n event types so as to consume the m event objects, and returning the consumption results of the m event objects to the database to update the database, where one circular buffer queue is used to store event objects of one event type.
According to an embodiment of the present disclosure, the method further includes: sending an acknowledgment of successful message reception to the message queue.
According to an embodiment of the present disclosure, the method further includes: publishing the m event objects; in response to the m event objects being monitored, dispatching them to n event monitoring processors according to their event types, where one event monitoring processor is used to monitor and consume event objects of one event type; and consuming the m event objects through the n event monitoring processors.
According to an embodiment of the present disclosure, the method further includes: determining, based on the n task types, the queue length corresponding to each of the n circular buffer queues; and constructing the n circular buffer queues based on the queue length corresponding to each circular buffer queue.
According to an embodiment of the present disclosure, the queue length corresponding to each circular buffer queue is 2^r, where r is a positive integer.
According to an embodiment of the present disclosure, storing the m event objects into the n circular buffer queues in the first server includes: in response to s circular buffer queues among the n being available, determining the t event objects corresponding to those s event types, where s and t are positive integers, m is greater than or equal to t, and n is greater than or equal to s; and storing the t event objects into the s circular buffer queues in the first server based on the s event types, until all m event objects have been stored into the n circular buffer queues in the first server.
According to an embodiment of the present disclosure, the method further includes: in the process of consuming the m event objects through the n event monitoring processors, marking, for each of the m event objects, the task state according to the consumption state of that event object.
According to an embodiment of the present disclosure, the consumption state includes at least one of: a waiting state, indicating that the event object has not been consumed; a locked state, indicating that the event object is being consumed; a completed state, indicating that the event object has been successfully consumed; and an invalid state, indicating that the event object has failed to be consumed multiple times.
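The four consumption states above amount to a small state machine; the sketch below is an illustrative assumption (the enum name and transition rules are not specified by the disclosure — in particular, whether a locked event may return to waiting on failure is assumed here).

```java
// Illustrative consumption-state machine for event objects.
// Names and transitions are assumptions for illustration only.
enum ConsumptionState {
    WAITING,    // event object has not been consumed yet
    LOCKED,     // event object is currently being consumed
    COMPLETED,  // event object was successfully consumed
    INVALID;    // event object failed to be consumed multiple times

    /** Returns true if moving from this state to {@code next} is allowed. */
    boolean canTransitionTo(ConsumptionState next) {
        switch (this) {
            case WAITING: return next == LOCKED;
            case LOCKED:  return next == COMPLETED || next == INVALID
                               || next == WAITING; // assumed retry path
            default:      return false; // COMPLETED and INVALID are terminal
        }
    }
}
```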
According to an embodiment of the present disclosure, the method further includes: in the process of consuming the m event objects through the n event monitoring processors, reading, for p performance indicators, the performance data corresponding to each performance indicator of the first server, where p is a positive integer; determining, based on the performance data corresponding to each performance indicator, whether the first server is in an overload state; and, if the first server is determined to be in an overload state, storing the newly produced m event objects into n circular buffer queues in a second server so as to consume them and return their consumption results to the database, where one circular buffer queue is used to store event objects of one event type.
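A minimal sketch of the overload decision described above, assuming each of the p performance indicators is compared against a configured threshold; the indicator names and the "any indicator over its limit" rule are assumptions for illustration, not the patent's actual policy.

```java
import java.util.Map;

// Hypothetical overload detector: thresholds and readings are keyed by
// indicator name (e.g. "cpu", "heap"); both maps are illustrative.
final class OverloadDetector {
    private final Map<String, Double> thresholds;

    OverloadDetector(Map<String, Double> thresholds) {
        this.thresholds = thresholds;
    }

    /** The server counts as overloaded if any known indicator exceeds its threshold. */
    boolean isOverloaded(Map<String, Double> readings) {
        for (Map.Entry<String, Double> e : readings.entrySet()) {
            Double limit = thresholds.get(e.getKey());
            if (limit != null && e.getValue() > limit) {
                return true; // route new event objects to the second server
            }
        }
        return false;
    }
}
```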
Another aspect of the present disclosure provides a message processing system including: a pull module for pulling m task messages from a message queue, where the m task messages are generated based on m task data items pulled from a database, each task data item includes a task identifier and a task type, the m task data items correspond to n task types, m and n are positive integers, and m is greater than or equal to n; a production module for encapsulating the m task messages based on the n task types to produce m event objects, where the m event objects correspond to n event types and the n event types correspond one-to-one to the n task types; and a first storage module configured to store the m event objects into n circular buffer queues in a first server based on the n event types so as to consume the m event objects, and to return the consumption results of the m event objects to the database to update the database, where one circular buffer queue is used to store event objects of one event type.
According to an embodiment of the present disclosure, the above system further includes: and the first sending module is used for sending the confirmation information of successful message receiving to the message queue.
According to an embodiment of the present disclosure, the above system further includes: a publishing module for publishing the m event objects; a second sending module configured to, in response to the m event objects being monitored, dispatch them to n event monitoring processors according to their event types, where one event monitoring processor is configured to monitor and consume event objects of one event type; and a consumption module for consuming the m event objects through the n event monitoring processors.
According to an embodiment of the present disclosure, the above system further includes: a first determining module, configured to determine, based on the n types of tasks, a queue length corresponding to each circular buffer queue in the n circular buffer queues. And the building module is used for building the n circular buffer queues based on the queue length corresponding to each circular buffer queue.
According to an embodiment of the present disclosure, the queue length corresponding to each circular buffer queue is 2^r, where r is a positive integer.
According to an embodiment of the present disclosure, the storage module includes: a determining submodule configured to, in response to s circular buffer queues among the n circular buffer queues in the first server being available, determine the t event objects corresponding to those s event types, where s and t are positive integers, m is greater than or equal to t, and n is greater than or equal to s; and a storage submodule configured to store the t event objects into the s circular buffer queues in the first server based on the s event types, until all m event objects have been stored into the n circular buffer queues in the first server.
According to an embodiment of the present disclosure, the above system further includes: and a marking module, configured to mark, for each event object in the m event objects, a task state according to a consumption state of the event object in a process of consuming the m event objects through the n event monitoring processors.
According to an embodiment of the present disclosure, the consumption state includes at least one of: a waiting state, indicating that the event object has not been consumed; a locked state, indicating that the event object is being consumed; a completed state, indicating that the event object has been successfully consumed; and an invalid state, indicating that the event object has failed to be consumed multiple times.
According to an embodiment of the present disclosure, the above system further includes: a reading module, configured to, in a process of consuming the m event objects through the n event monitoring processors, read, for p performance indicators, performance data corresponding to each performance indicator of the first server, where p is a positive integer. A second determining module, configured to determine whether the first server is in an overload state based on the performance data corresponding to each performance indicator. A second storage module, configured to store the m newly produced event objects to n circular buffer queues in a second server to consume the m newly produced event objects and return consumption results of the m newly produced event objects to the database, where one circular buffer queue is used to store event objects of one event type, when it is determined that the first server is in an overload state.
Another aspect of the present disclosure provides a computer system comprising: one or more processors; a storage device to store one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the above.
Another aspect of the disclosure provides a computer-readable medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement the method of any of the above.
According to the embodiments of the present disclosure, when the volume of task data to be processed is large, a plurality of event objects of different event types are constructed based on a plurality of task messages of different task types pulled from the message queue, and the event objects are placed into the circular buffer queues corresponding to their respective event types. This at least partially alleviates, or even avoids, the technical problem of the task messages backing up on the message queue platform, thereby effectively relieving the pressure that task messages place on the message middleware platform.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically shows an exemplary system architecture to which the message processing method and the system thereof of the embodiments of the present disclosure can be applied;
FIG. 2 schematically shows a flow chart of a message processing method according to an embodiment of the present disclosure;
FIG. 3 schematically shows a flow chart of a message processing method according to another embodiment of the present disclosure;
FIG. 4 schematically shows a flow diagram of a message processing method according to another embodiment of the present disclosure;
FIG. 5 schematically shows a flow chart of a message processing method according to another embodiment of the present disclosure;
FIG. 6 schematically shows a block diagram of a message processing system according to an embodiment of the present disclosure; and
FIG. 7 schematically illustrates a block diagram of a computer system suitable for implementing a message processing method and system thereof, in accordance with an embodiment of the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
It should be noted that the figures are not drawn to scale and that elements of similar structure or function are generally represented by like reference numerals throughout the figures for illustrative purposes.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibility of "A", "B", or "A and B".
In large-scale message processing, a multi-task, multi-thread architecture is generally designed to achieve a high-performance (low-latency, low-resource-consumption) system. These task threads contend for processor time and run independently of one another, ultimately completing the same or different processing tasks concurrently. In a system adopting a message middleware mechanism, objects activate each other's events by passing messages and thereby complete the corresponding operations: the sender sends a message to the message middleware, which stores it in one of a number of queues and forwards it to the receiver at an appropriate time. To avoid the user-experience problems caused by synchronous requests, synchronous services are generally redesigned as asynchronous services, and the processing performance and throughput of the system are improved by horizontal scaling. Even so, when the volume of task data is large, messages produced by the producer are not consumed by consumers in time, a large backlog builds up on the message middleware platform, and great pressure is placed on it.
Based on this, the present disclosure provides a message processing method, including: first pulling m task messages from a message queue, where the m task messages are generated based on m task data items pulled from a database, each task data item includes a task identifier and a task type, the m task data items correspond to n task types, m and n are positive integers, and m is greater than or equal to n; then encapsulating the m task messages based on the n task types to produce m event objects, where the m event objects correspond to n event types and the n event types correspond one-to-one to the n task types; and finally storing the m event objects into n circular buffer queues in the first server based on the n event types so as to consume the m event objects, and returning the consumption results of the m event objects to the database, where one circular buffer queue is used to store event objects of one event type.
According to the embodiments of the present disclosure, when the volume of task data to be processed is large, a plurality of event objects of different event types are constructed from the task messages of different task types pulled from the message queue, using a high-performance asynchronous processing framework based on the "producer-consumer" model, and the event objects are placed into the circular buffer queues corresponding to their respective event types. This at least partially alleviates, or even avoids, the backlog of task messages on the message queue platform, thereby effectively relieving the pressure that task messages place on the message middleware platform.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which the message processing method and system thereof of the embodiments of the present disclosure may be applied. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include a database 101, a message queue 102, a server 103, and a network 104. Network 104 is used to provide a medium for communication links between database 101 and message queue 102, message queue 102 and server 103, and server 103 and database 101. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The database (DB) 101 is used to implement task storage, and may be a relational database composed of a plurality of data tables; each table has the same structure, and there may be association relationships between different tables. A Task table is added to the database, tasks of different types are distinguished in the Task table, and the processing completion status of each task is identified by its task state.
The message queue (MQ) 102, also referred to as a message middleware platform, can perform platform-independent data communication using efficient and reliable messaging mechanisms and integrate distributed systems on top of that communication. By providing a model of message passing and message queuing, it extends inter-process communication to a distributed environment. Different business messages may be classified by one or more topics (Topic) of MQ asynchronous messages. If all task messages of different types are sent to the MQ asynchronous messages of a single Topic, only one subscribed Topic needs to be maintained; but when the task data volume is large and the consumers' consumption speed is low, the messages back up on the MQ platform and place great pressure on it. If, instead, the different types of task messages are sent to MQ asynchronous messages of multiple different Topics, multiple subscribed Topics need to be maintained; although this avoids the pressure on a single Topic, every increase or decrease in task types requires producers and consumers to apply for adding or deleting Topics, which increases maintenance cost.
The server 103 acts as an MQ message processing system providing a "producer-consumer" message processing model, which may be implemented, for example, with the concurrency packages provided by Java itself. Preferably, the Java-based concurrent programming framework Disruptor can be deployed on the server 103; the Disruptor greatly simplifies the development of concurrent programs and outperforms the concurrency packages provided by Java itself. It should be noted that the Disruptor, as an efficient "producer-consumer" model, adopts a lock-free algorithm and can thus overcome the problem of high processing latency. In addition, it does not require frequent CPU resource scheduling, its overall message processing performance and throughput improve linearly with the number of threads, and it avoids "false sharing".
It should be noted that the message processing method provided by the embodiment of the present disclosure may be generally executed by the server 103. Accordingly, the message processing system provided by the embodiment of the present disclosure may be generally disposed in the server 103. The message processing method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 103 and is capable of communicating with the message queue 102 and/or the server 103. Accordingly, the message processing system provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 103 and capable of communicating with the message queue 102 and/or the server 103.
It should be understood that the number of databases, networks, and servers in fig. 1 are merely illustrative. There may be any number of databases, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flow chart of a message processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method may include operations S210 to S230.
In operation S210, m task messages are pulled from the message queue, where the m task messages are generated based on m task data pulled from the database, each task data includes a task identifier and a task type, the m task data corresponds to n task types, m and n are positive integers, and m is greater than or equal to n.
In the present disclosure, the message queue may be a message middleware platform, for example a self-developed MQ message middleware platform, i.e., a JMQ platform. Pending tasks are scanned in batches by means of Spring timed tasks, a distributed scheduling platform, Java threads, or the like; m task data items are pulled from the Task table of the relational database, and m task messages are generated based on them, where each task data item includes a task identifier (taskId) and a task type (taskType). The taskId and taskType of each task data item can be serialized and assembled into a message body, and the MQ message is sent to the message queue through the JMQ platform. Preferably, if sending an MQ message through the JMQ platform fails, an alarm prompt is raised and the message is re-sent at the next scheduled scan, so that the message is not lost and sending is retried until it succeeds. Optionally, the present disclosure supports pulling task data from sharded databases and tables. Tasks of multiple different types send MQ messages to the same Topic. Different task types can be defined according to the actual business of the service system; for example, they may include, but are not limited to, issuing a ticket, sending a short message, redeeming traffic for a coupon, issuing additional coupons, and successful check-in.
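The assembly of a message body from a task record's taskId and taskType might look like the following sketch; the JSON-style layout is an assumption, since the text does not fix a serialization format, and the class name is illustrative.

```java
// Illustrative task message: serializes the two fields the disclosure
// names (taskId, taskType) into a message body for the MQ platform.
final class TaskMessage {
    final long taskId;
    final String taskType;

    TaskMessage(long taskId, String taskType) {
        this.taskId = taskId;
        this.taskType = taskType;
    }

    /** Serializes the task identifier and task type into a message body. */
    String toMessageBody() {
        return "{\"taskId\":" + taskId + ",\"taskType\":\"" + taskType + "\"}";
    }
}
```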
In operation S220, m task messages are encapsulated based on the n task types to produce m event objects, where the m event objects correspond to the n event types, and the n event types correspond to the n task types one to one.
In the present disclosure, an encapsulation operation may be performed on the pulled task messages to produce event objects.
As an alternative embodiment, based on the different task types, task data of different task types may be encapsulated into event objects of different event types through a high-performance asynchronous processing framework, the Disruptor. As an efficient producer-consumer model, the Disruptor, when used on the producer side, first requires an event object to be declared and the event type of the task data to be passed in; in the present disclosure, the event type is designed to be identified by the taskType field in the task table. In Disruptor semantics, the data exchanged between producer and consumer is called an Event.
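In Disruptor-style designs the event is a mutable, pre-allocated carrier object whose fields are filled in on publication. A minimal sketch of such an event class, using the taskType field from the task table as the event type as the disclosure describes (the class and method names are illustrative, not taken from the disclosure):

```java
public class TaskEvent {
    private long taskId;
    private String taskType; // taskType from the task table doubles as the event type

    // Fill the pre-allocated event slot with the pulled task data.
    public void set(long taskId, String taskType) {
        this.taskId = taskId;
        this.taskType = taskType;
    }

    public long getTaskId() { return taskId; }

    public String getEventType() { return taskType; }
}
```

Pre-allocating and reusing event objects like this is what lets a ring buffer avoid per-message garbage collection pressure.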
In operation S230, the m event objects are stored to the n circular buffer queues in the first server based on the n event types to consume the m event objects, and the consumption results of the m event objects are returned to the database to update the database.
In the present disclosure, a first server is communicatively connected to the message middleware and to the database respectively, the message middleware is communicatively connected to at least one computing node, n circular buffer queues are provided on the first server, and each circular buffer queue is used for storing event objects of one event type.
As an alternative embodiment, the circular buffer queue may be the RingBuffer of the Disruptor. The RingBuffer is a ring-array data structure used for caching data. A plurality of RingBuffers may be partitioned within the Disruptor, each storing cached data of a different event type. In the data storage process, because the cache is partitioned by event type, the speed of data caching can be improved and data of the same type can easily be looked up later; at the same time, the situation in which a single RingBuffer runs out of cache slots is avoided, reducing the waiting time for caching an event type. Event objects of different types are stored in different circular buffer queues according to their event types, so that event objects of different types do not compete for storage resources.
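The one-buffer-per-event-type partitioning can be sketched as follows; a bounded `ArrayBlockingQueue` from the Java standard library stands in for the Disruptor's RingBuffer here, and the per-type capacities are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;

public class TypedBuffers {
    // One bounded buffer per event type, so types never contend for slots.
    private final Map<String, ArrayBlockingQueue<Long>> buffers = new HashMap<>();

    // Register an event type with its own capacity; capacities may differ per type.
    public void register(String eventType, int capacity) {
        buffers.put(eventType, new ArrayBlockingQueue<>(capacity));
    }

    // Store an event object (represented here by its taskId) into its type's buffer;
    // returns false if that type's buffer is currently full.
    public boolean store(String eventType, long taskId) {
        return buffers.get(eventType).offer(taskId);
    }

    public int size(String eventType) {
        return buffers.get(eventType).size();
    }
}
```

Because each type has its own buffer, a burst of one task type can fill only its own queue and cannot starve the slots of the other types.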
It should be noted that the n circular buffer queues exist independently: they do not interfere with each other and do not contend for storage resources.
According to the embodiment of the present disclosure, when the volume of task data to be processed is large, a plurality of event objects of different event types are constructed based on a plurality of task messages of different task types pulled from the message queue, and the event objects are placed into the circular buffer queues corresponding to their respective event types. This can at least partially alleviate, or even avoid, the technical problem of the task messages forming a backlog on the message queue platform, thereby achieving the technical effect of effectively relieving the pressure that task messages place on the message middleware platform.
As an alternative embodiment, the method may further include, in addition to the foregoing operations S210 to S230: and sending confirmation information of successful message reception to the message queue.
In the embodiment of the present disclosure, after the m task messages are successfully received, an acknowledgement message, i.e., an ACK (acknowledgement) message, may be returned to the message queue to inform it that the data has been received, so that the m task messages are not pulled again.
As an alternative embodiment, after the producer produces the m event objects, the method may further include, in addition to the foregoing operations S210 to S230: publishing the m event objects; in response to the m event objects being monitored, sending the m event objects to n event listening processors based on their different event types; and consuming the m event objects through the n event listening processors. One event listening processor listens for and consumes event objects of one event type.
In the embodiment of the present disclosure, when an event object is placed into the RingBuffer queue, the Disruptor publishes the information together with the event object; event object data of the different event types is monitored through the Disruptor, and when there is no event object data, the listener enters a sleep state rather than monitoring in real time, thereby improving the performance and service life of the system.
In the embodiment of the present disclosure, after the m event objects published by the Disruptor are received, they are placed into the RingBuffer, the in-memory storage structure of the Disruptor, and each is treated as an individual Event, i.e., an object to be processed, which is monitored and processed by the event listening processors registered with the Disruptor. A listening processor can be implemented through the event handling interface EventHandler defined by the Disruptor, which is the actual implementation with which a consumer handles events. The producer, by contrast, is simply user code that calls the Disruptor to publish events; the Disruptor does not define a specific producer interface or type. There may be one or more event listening processors acting as consumers, and message event processing is triggered by threads. Precisely because the superior RingBuffer data structure is combined with the Disruptor's core design element of indexing by sequence number, consumer threads can efficiently consume the pending data without expensive synchronization locks. For event objects of the same event type, balanced processing among the data can be achieved through a Disruptor component, the sequence barrier (SequenceBarrier): multiple consumers monitor the consumable events of the RingBuffer through the SequenceBarrier and determine, through its waiting strategy, whether data can be fetched for consumption.
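The EventHandler-style consumer described above can be sketched as follows. The real interface is `com.lmax.disruptor.EventHandler`; to keep the sketch self-contained, a simplified stand-in interface with the same shape is declared here, and `SmsHandler` is a hypothetical handler for one event type:

```java
public class Handlers {
    // Simplified stand-in for the Disruptor's EventHandler interface: the
    // consumer is called once per event, together with that event's sequence number.
    interface EventHandler<T> {
        void onEvent(T event, long sequence);
    }

    // One handler per event type; each consumes only its own type's events.
    static final class SmsHandler implements EventHandler<String> {
        int handled = 0;

        public void onEvent(String event, long sequence) {
            handled++; // real business logic (e.g. actually sending the SMS) would go here
        }
    }

    // Drive a handler over a batch of events in sequence order, the way a
    // consumer thread walks the RingBuffer behind its SequenceBarrier.
    static int drive(EventHandler<String> h, String[] events) {
        for (long seq = 0; seq < events.length; seq++) {
            h.onEvent(events[(int) seq], seq);
        }
        return events.length;
    }
}
```

The monotonically increasing sequence number passed to `onEvent` is what replaces a lock: a consumer never touches a slot whose sequence its barrier has not yet released.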
According to the embodiment of the present disclosure, event objects are processed asynchronously after being cached, realizing high-throughput distributed task processing, improving the consumption capability of consumers, and avoiding a large backlog of messages on the message platform. Moreover, event objects of different types are processed by different event listening processors according to their event types, so that event objects of different types do not compete for processing resources.
Fig. 3 schematically shows a flow chart of a message processing method according to another embodiment of the present disclosure. As shown in fig. 3, with the Task timing task, in operation S311, pending tasks whose database state is the unprocessed state are periodically scanned. In operation S312, the MQ message is sent to the JMQ platform. In operation S313, the JMQ platform returns an ACK result. In operation S321, the MQ message processing system pulls the MQ message from the JMQ platform. In operation S322, the MQ message processing system returns an ACK result to the JMQ platform. In operation S331, the MQ message processing system places the pulled MQ message into the queue cache of the Disruptor. In operation S332, the MQ message processing system asynchronously processes the task.
Fig. 4 schematically shows a flow chart of a message processing method according to another embodiment of the present disclosure. As shown in fig. 4, there are a plurality of relational databases, such as relational database DB1, relational database DB2, ..., relational database DBn. Tasks to be processed are scanned in batches by means of Spring timed tasks, a distributed scheduling platform, Java threads, or the like; operation S410 is executed to extract data from the plurality of relational databases and thereby obtain a plurality of pieces of task data to be processed, corresponding to a plurality of different task types, for example, task type 1, task type 2, and task type 3. The primary key (taskId) of each task to be processed and its task type (taskType) are serialized and assembled into a message body, and the MQ message is sent to the message queue through the JMQ platform; if sending fails, an alarm prompt is raised and the message is sent again at the next scheduled scan, so that no message is lost before sending succeeds. Operation S420 is executed to send the plurality of messages corresponding to the different task types to the JMQ platform; to reduce the maintenance cost of Topics, preferably one Topic is maintained on the JMQ platform. Operation S430 is executed to pull the messages from the JMQ platform to achieve data consumption: the pulled messages are stored in a plurality of Disruptor RingBuffer caches, each RingBuffer caching tasks of one task type, and the tasks are asynchronously processed by a plurality of event listening processors (EventHandlers) of the Disruptor.
Operation S440 is executed to return the data consumption result of each task to the relational database from which the task data was pulled, so as to update that relational database.
Fig. 5 schematically shows a flow chart of a message processing method according to another embodiment of the present disclosure. As shown in fig. 5, the message processing method applied to the MQ message processing system may include operations S511 to S517.
In operation S511, the MQ message is pulled. In operation S512, the event type is constructed and an event object is generated. The message Event is then executed asynchronously in the subsequent steps. In operation S513, the producer puts the Event object into the RingBuffer queue; after the message Event is put into the queue, the Disruptor publishes the message and the Event object, and the EventHandler listens for the Event and sends it to the consumer. In operation S514, the consumer receives the event object and modifies the task state to locked; the task state may be modified to the locked state using, for example, an optimistic lock of the database. In operation S515, it is checked whether the state modification succeeded. If it succeeded, operation S516 is performed and the task is routed to a thread according to its task type, that is, to the independent thread of each task type: a task-type-1 processing EventHandler corresponding to task type 1, a task-type-2 processing EventHandler corresponding to task type 2, ..., and a task-type-n processing EventHandler corresponding to task type n, which each process the real service logic; after the task is considered successfully processed through the thread's callback mechanism, the task state is modified to the completed state. If the modification fails, the failure count of the task is incremented by 1 and the state is modified back to pending, awaiting the next reprocessing; if the failure count reaches the upper limit, the task state is modified to an invalid record. In operation S517, the update of the task data state is completed, and the entire task processing procedure ends.
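The state transitions of operations S514 to S517 form a small state machine, sketched below. The state names and the retry limit of 3 are illustrative; the disclosure only specifies that an upper limit exists:

```java
public class TaskStateMachine {
    enum State { PENDING, LOCKED, DONE, INVALID }

    static final int MAX_FAILURES = 3; // illustrative upper limit on retries

    State state = State.PENDING;
    int failures = 0;

    // Optimistic lock: only a PENDING task may be locked. A second consumer
    // attempting the same transition fails, which prevents duplicate consumption.
    boolean tryLock() {
        if (state != State.PENDING) return false;
        state = State.LOCKED;
        return true;
    }

    // Called from the thread's callback once the business logic succeeded.
    void complete() { state = State.DONE; }

    // On failure, count it and put the task back to PENDING for the next scan,
    // or mark it INVALID once the failure limit is reached.
    void fail() {
        failures++;
        state = (failures >= MAX_FAILURES) ? State.INVALID : State.PENDING;
    }
}
```

In the actual system the lock would be taken by a conditional `UPDATE ... WHERE state = 'PENDING'` against the database rather than an in-memory flag; the transition rules, however, are the same.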
As an alternative embodiment, because the cache lengths required by different task types differ, the method may further include, in addition to the foregoing operations S210 to S230: determining, based on the n task types, the queue length corresponding to each of the n circular buffer queues; and constructing the n circular buffer queues based on the queue length corresponding to each circular buffer queue. As an alternative embodiment, the queue length corresponding to each circular buffer queue is 2^r, where r is a positive integer.
The ring buffer data structure is characterized as follows: data is written into the circular buffer at a position individually referred to as a slot, provided that the slot is free. In a specific implementation, the number of slots can be defined freely; as a preferred implementation, the number of slots in the RingBuffer is 2^r. The buffer may hold data sets of varying sizes, from tens of thousands to tens of millions of entries. Each piece of data in the RingBuffer has a sequence number used for indexing; in a specific implementation, the sequence number may be set to coincide with the position number of the slot. The RingBuffer maintains the sequence number of the most recently placed element, which increases monotonically (the array index of the data within the RingBuffer is obtained by taking the remainder). It will be appreciated that if the slots in the RingBuffer are used up, a slot whose data has already been taken out can be reused as a free slot in the next round, and the sequence number of that slot then depends on the current sequence number to be processed.
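The reason for the 2^r slot count is that the remainder operation mapping a sequence number to a slot then reduces to a cheap bit mask, as this sketch shows:

```java
public class RingIndex {
    // With a power-of-two number of slots, the slot index for a sequence number
    // is sequence mod slots, computed as a bit mask instead of a division.
    public static int slotFor(long sequence, int slots) {
        if (Integer.bitCount(slots) != 1) {
            throw new IllegalArgumentException("slot count must be a power of two");
        }
        return (int) (sequence & (slots - 1));
    }
}
```

As the sequence number grows without bound, the mask wraps it back onto the fixed array, which is exactly the reuse-of-emptied-slots behaviour described above.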
According to the embodiment of the present disclosure, queues of different lengths can be set according to the different task types, so that task data of different types are cached with different capacities and the cache space is used effectively.
As an alternative embodiment, the aforementioned operation S230 (storing the m event objects to the n circular buffer queues in the first server) may include: determining, in response to s available circular buffer queues existing among the n circular buffer queues in the first server, t event objects corresponding to the s event types, where s and t are positive integers, m is greater than or equal to t, and n is greater than or equal to s; and storing, based on the s event types, the t event objects to the s circular buffer queues in the first server, until the m event objects are stored to the n circular buffer queues in the first server.
Through the embodiment of the present disclosure, the SequenceBarrier in the Disruptor can be used between the producers putting data into the RingBuffer queue and the consumers monitoring and consuming data from it, so that balance among the data can be achieved and the possibility of a message backlog is effectively reduced.
As an alternative embodiment, in the process of consuming m event objects by n event listening processors, the method may further include: for each of the m event objects, the task state is marked according to the consumption state of the event object.
As an alternative embodiment, the consumption status comprises at least one of: a wait for processing state, wherein the wait for processing state characterizes the event object as not being consumed. A locked state, wherein the locked state characterizes the event object as being consumed. A completed state, wherein the completed state characterizes that the event object has been successfully consumed. An invalid state, wherein the invalid state characterizes that the event object was not successfully consumed multiple times.
Through the embodiment of the disclosure, the identification and judgment of the task state can avoid repeated consumption and untimely consumption of the event object.
As an alternative embodiment, in the process of consuming m event objects by n event listening processors, the method may further include: and reading performance data corresponding to each performance index of the first server aiming at p performance indexes, wherein p is a positive integer. Based on the performance data corresponding to each performance indicator, it is determined whether the first server is in an overloaded state. And under the condition that the first server is determined to be in an overload state, storing the m newly produced event objects to n circular buffer queues in the second server so as to consume the m newly produced event objects, and returning consumption results of the m newly produced event objects to the database, wherein one circular buffer queue is used for storing the event objects of one event type.
According to embodiments of the present disclosure, the performance indexes may include, but are not limited to, operation indexes and disk IO indexes. The operation indexes may include, but are not limited to, CPU indexes (total number of CPU cores, total CPU count, user-mode ratio, kernel-mode ratio, IO-wait ratio, hard-interrupt ratio, soft-interrupt ratio, and virtual machine steal ratio).
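The overload decision of the previous paragraphs can be sketched as a simple threshold check over the read performance data; the specific thresholds below are illustrative assumptions, not values from the disclosure:

```java
public class OverloadCheck {
    // Illustrative thresholds; real values would come from operations experience.
    static final double CPU_LIMIT = 0.85;
    static final double IO_WAIT_LIMIT = 0.30;

    // The first server is considered overloaded if any monitored indicator
    // exceeds its threshold; newly produced event objects would then be
    // routed to the circular buffer queues of a second server instead.
    public static boolean overloaded(double cpuRatio, double ioWaitRatio) {
        return cpuRatio > CPU_LIMIT || ioWaitRatio > IO_WAIT_LIMIT;
    }
}
```

In practice such a check would be evaluated periodically against the p performance indexes rather than two fixed ratios, but the routing decision it drives is the same.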
In a specific implementation, if the processing capacity of the Disruptor on the first server cannot meet the message processing demand, demand on the MQ message processing system exceeds supply and messages back up; at this time, the problem can be solved by horizontal scale-out, and using the Disruptor improves processing performance and throughput. Conversely, when the processing capacity of the Disruptors on the first server and the second server exceeds the message processing demand, the supply of the MQ message processing system can be reduced: the problem can be solved by horizontal scale-in, and using the Disruptor reduces maintenance cost. Depending on the data volume, the task-consumer application system can thus easily scale out and scale in horizontally.
FIG. 6 schematically shows a block diagram of a message processing system according to an embodiment of the disclosure.
As shown in FIG. 6, the system 600 may include a pull module 610, a production module 620, and a first storage module 630.
The pulling module 610 is configured to pull m task messages from the message queue, where the m task messages are generated based on m pieces of task data pulled from the database, each piece of task data includes a task identifier and a task type, the m pieces of task data correspond to n task types, m and n are positive integers, and m is greater than or equal to n. Optionally, the pulling module 610 may be configured to perform, for example, operation S210 described in fig. 2, which is not repeated here.
A producing module 620, configured to encapsulate m task messages based on n task types to produce m event objects, where the m event objects correspond to the n event types, and the n event types correspond to the n task types one to one. Optionally, the production module 620 may be configured to perform operation S220 described in fig. 2, for example, and is not described herein again.
The first storage module 630 is configured to store the m event objects to n circular buffer queues in the first server based on the n event types, so as to consume the m event objects, and return consumption results of the m event objects to the database to update the database, where one circular buffer queue is used to store event objects of one event type. Optionally, the first storage module 630 may be configured to perform operation S230 described in fig. 2, for example, and is not described herein again.
As an alternative embodiment, the system may further comprise: the first sending module is used for sending the confirmation information of successful message receiving to the message queue.
As an alternative embodiment, the system may further comprise: and the issuing module is used for issuing the m event objects. And the second sending module is used for responding to the monitored m event objects and sending the m event objects to the n event monitoring processors based on the difference of the event types, wherein one event monitoring processor is used for monitoring and consuming the event objects of one event type. And the consumption module is used for consuming the m event objects through the n event monitoring processors.
As an alternative embodiment, the system may further comprise: and the first determining module is used for determining the queue length corresponding to each circular buffer queue in the n circular buffer queues based on the n task types. And the building module is used for building n circular buffer queues based on the queue length corresponding to each circular buffer queue.
As an alternative embodiment, the queue length corresponding to each circular buffer queue is 2r, and r is a positive integer.
As an alternative embodiment, the first storage module comprises: a determining submodule configured to determine, in response to s available circular buffer queues existing among the n circular buffer queues in the first server, t event objects corresponding to the s event types, where s and t are positive integers, m is greater than or equal to t, and n is greater than or equal to s; and a storage submodule configured to store, based on the s event types, the t event objects to the s circular buffer queues in the first server until the m event objects are stored to the n circular buffer queues in the first server.
As an alternative embodiment, the system further comprises: and the marking module is used for marking the task state according to the consumption state of the event object aiming at each event object in the m event objects in the process of consuming the m event objects through the n event monitoring processors.
As an alternative embodiment, the consumption status comprises at least one of: a wait for processing state, wherein the wait for processing state characterizes the event object as not being consumed. A locked state, wherein the locked state characterizes the event object as being consumed. A completed state, wherein the completed state characterizes that the event object has been successfully consumed. An invalid state, wherein the invalid state characterizes that the event object was not successfully consumed multiple times.
As an alternative embodiment, the system may further comprise: the reading module is used for reading performance data corresponding to each performance index of the first server aiming at p performance indexes in the process of consuming m event objects through the n event monitoring processors, wherein p is a positive integer. A second determination module to determine whether the first server is in an overloaded state based on the performance data corresponding to each performance indicator. And the second storage module is used for storing the m newly produced event objects to n circular buffer queues in the second server to consume the m newly produced event objects under the condition that the first server is determined to be in an overload state, and returning consumption results of the m newly produced event objects to the database, wherein one circular buffer queue is used for storing event objects of one event type.
It should be noted that the implementation, solved technical problems, implemented functions, and achieved technical effects of each module in the apparatus part embodiment are respectively the same as or similar to the implementation, solved technical problems, implemented functions, and achieved technical effects of each corresponding step in the method part embodiment, and are not described herein again.
Any number of the modules and sub-modules according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules and sub-modules according to the embodiments of the present disclosure may be split into a plurality of modules for implementation. Any one or more of the modules and sub-modules according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system in a package, or an application-specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or by any one of the three implementation manners of software, hardware, and firmware, or by a suitable combination of any of them. Alternatively, one or more of the modules and sub-modules according to embodiments of the disclosure may be implemented at least partly as computer program modules which, when executed, may perform the corresponding functions.
For example, any number of the pulling module, the producing module, the first storage module, the publishing module, the second sending module, the consuming module, the first determining module, the constructing module, the determining submodule, the storage submodule, the marking module, the reading module, the second determining module, and the second storage module may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the pulling module, the producing module, the first storage module, the publishing module, the second sending module, the consuming module, the first determining module, the constructing module, the determining submodule, the storage submodule, the marking module, the reading module, the second determining module, and the second storage module may be at least partially implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system in a package, or an application-specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or by any one of the three manners of software, hardware, and firmware, or by a suitable combination of any of them.
Alternatively, at least one of the pulling module, the producing module, the first storing module, the publishing module, the second sending module, the consuming module, the first determining module, the constructing module, the determining submodule, the storing submodule, the marking module, the reading module, the second determining module and the second storing module may be at least partially implemented as a computer program module which, when executed, may perform a corresponding function.
FIG. 7 schematically illustrates a block diagram of a computer system suitable for implementing a message processing method and system thereof, in accordance with an embodiment of the present disclosure. The computer system illustrated in FIG. 7 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 7, a computer system 700 according to an embodiment of the present disclosure includes a processor 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure as described above.
In the RAM 703, various programs and data necessary for the operation of the system 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. The processor 701 performs various operations of the message processing method described above by executing programs in the ROM 702 and/or the RAM 703. Note that the programs may also be stored in one or more memories other than the ROM 702 and RAM 703. The processor 701 may also perform various operations of the message processing methods described above by executing programs stored in one or more memories.
According to an embodiment of the present disclosure, the system 700 may also include an input/output (I/O) interface 705, which is also connected to the bus 704. The system 700 may also include one or more of the following components connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as needed.
According to an embodiment of the present disclosure, the method described above with reference to the flow chart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by the processor 701, performs the above-described functions defined in the system of the embodiment of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing. 
According to embodiments of the present disclosure, a computer-readable medium may include the ROM 702 and/or the RAM 703 and/or one or more memories other than the ROM 702 and the RAM 703 described above.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present disclosure also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to perform the message processing method described above.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (12)

1. A method of message processing, comprising:
pulling m task messages from a message queue, wherein the m task messages are generated based on m pieces of task data pulled from a database, each piece of task data comprises a task identifier and a task type, the m pieces of task data correspond to n task types, m and n are positive integers, and m is greater than or equal to n;
based on the n task types, encapsulating the m task messages to produce m event objects, wherein the m event objects correspond to n event types, and the n event types correspond to the n task types one-to-one;
based on the n event types, storing the m event objects to n circular buffer queues in a first server to consume the m event objects, and returning consumption results of the m event objects to the database to update the database, wherein one circular buffer queue is used for storing event objects of one event type.
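The core flow of claim 1 — wrap each pulled task message into a typed event object, then route it to the ring buffer for its event type — can be sketched, purely illustratively, as follows. The message and event shapes, the `RING_SIZE` constant, and the function names are assumptions for the sketch, not part of the patent:

```python
from collections import deque

RING_SIZE = 8  # one ring buffer per event type; power-of-two length (see claim 5)

def wrap_as_events(task_messages):
    """Encapsulate each task message into an event object keyed by its task type."""
    return [{"event_type": msg["task_type"], "payload": msg} for msg in task_messages]

def store_to_ring_buffers(events, rings):
    """Route each event object to the ring buffer for its event type."""
    for ev in events:
        rings.setdefault(ev["event_type"], deque(maxlen=RING_SIZE)).append(ev)

# m = 3 task messages covering n = 2 task types (m >= n, as the claim requires)
messages = [
    {"task_id": 1, "task_type": "export"},
    {"task_id": 2, "task_type": "import"},
    {"task_id": 3, "task_type": "export"},
]
rings = {}
store_to_ring_buffers(wrap_as_events(messages), rings)
print({t: len(q) for t, q in rings.items()})  # {'export': 2, 'import': 1}
```

One ring buffer per event type keeps events of different task types from blocking one another, which is the point of the per-type queues in the claim.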
2. The method of claim 1, wherein the method further comprises:
sending, to the message queue, acknowledgement information indicating that the messages have been successfully received.
3. The method of claim 1, wherein the method further comprises:
publishing the m event objects;
in response to listening to the m event objects, sending the m event objects to n event listening processors according to their respective event types, wherein one event listening processor is used for listening to and consuming event objects of one event type;
consuming the m event objects through the n event listening processors.
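Claim 3's per-type dispatch — one listening processor per event type, each consuming only its own events — might look like the following sketch. The `EventListeningProcessor` class, the event dictionaries, and the `publish` helper are illustrative assumptions:

```python
class EventListeningProcessor:
    """One processor per event type; it consumes only events of that type."""

    def __init__(self, event_type):
        self.event_type = event_type
        self.consumed = []

    def on_event(self, event):
        # A processor only ever sees events matching its registered type.
        assert event["event_type"] == self.event_type
        self.consumed.append(event)

def publish(events, processors):
    """Dispatch each published event to the processor registered for its type."""
    for ev in events:
        processors[ev["event_type"]].on_event(ev)

processors = {t: EventListeningProcessor(t) for t in ("export", "import")}
events = [{"event_type": "export", "task_id": 1},
          {"event_type": "import", "task_id": 2}]
publish(events, processors)
print(len(processors["export"].consumed))  # 1
```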
4. The method of claim 1, wherein the method further comprises:
determining a queue length corresponding to each circular buffer queue in the n circular buffer queues based on the n task types;
and constructing the n circular buffer queues based on the queue length corresponding to each circular buffer queue.
5. The method of claim 4, wherein the queue length corresponding to each circular buffer queue is 2^r, r being a positive integer.
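A power-of-two queue length (claim 5) lets the slot for a sequence number be computed with a bitwise AND instead of a modulo, the trick used by ring-buffer implementations such as the LMAX Disruptor. A small check of the identity, with illustrative values of r:

```python
r = 3
size = 2 ** r    # 8 slots
mask = size - 1  # 0b111

# AND and modulo agree for every sequence number only when size is a power of two.
for seq in (0, 7, 8, 13):
    assert seq & mask == seq % size
print(13 & mask)  # 5
```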
6. The method of claim 1, wherein said storing said m event objects to n circular buffer queues in a first server comprises:
in response to there being s available buffer queues among the n circular buffer queues in the first server, determining t event objects corresponding to the s event types, wherein s and t are positive integers, m is greater than or equal to t, and n is greater than or equal to s;
based on the s event types, storing the t event objects to the s available buffer queues in the first server, until the m event objects have been stored to the n circular buffer queues in the first server.
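One pass of claim 6 — store the t event objects whose s types currently have a free ring-buffer slot and leave the rest pending — could be sketched as below. The queue sizes, event shapes, and the `store_available` helper are assumptions for illustration:

```python
from collections import deque

RING_SIZE = 2
rings = {"export": deque(maxlen=RING_SIZE), "import": deque(maxlen=RING_SIZE)}
rings["import"].extend([{"event_type": "import"}] * RING_SIZE)  # "import" ring is full

def store_available(pending, rings):
    """Store events whose ring buffer has a free slot; return those that must wait."""
    remaining = []
    for ev in pending:
        q = rings[ev["event_type"]]
        if len(q) < q.maxlen:
            q.append(ev)
        else:
            remaining.append(ev)
    return remaining

pending = [{"event_type": "export"}, {"event_type": "import"}]
pending = store_available(pending, rings)
print(len(pending))  # 1 -- the "import" event waits until a consumer frees a slot
```

Repeating such a pass as consumers drain the queues eventually stores all m event objects, matching the "until" clause of the claim.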
7. The method of claim 3, wherein the method further comprises:
in the process of consuming the m event objects through the n event listening processors, marking, for each event object in the m event objects, a task state according to the consumption state of the event object.
8. The method of claim 7, wherein the consumption status comprises at least one of:
a wait-to-process state, wherein the wait-to-process state characterizes an event object as not consumed;
a locked state, wherein the locked state characterizes an event object being consumed;
a completed state, wherein the completed state characterizes that the event object has been successfully consumed;
an invalid state, wherein the invalid state characterizes that an event object has failed to be consumed after multiple attempts.
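The four consumption states of claim 8 form a small state machine. The enum below and the transition rule (including the `MAX_ATTEMPTS` retry limit) are one hypothetical encoding, not a rule stated in the patent:

```python
from enum import Enum

class ConsumptionState(Enum):
    PENDING = "wait-to-process"  # not yet consumed
    LOCKED = "locked"            # currently being consumed
    COMPLETED = "completed"      # consumed successfully
    INVALID = "invalid"          # failed after repeated consumption attempts

MAX_ATTEMPTS = 3  # hypothetical retry limit before an event is marked invalid

def next_state(succeeded, attempts):
    """Mark a task after one consumption attempt under the assumed retry rule."""
    if succeeded:
        return ConsumptionState.COMPLETED
    if attempts >= MAX_ATTEMPTS:
        return ConsumptionState.INVALID
    return ConsumptionState.PENDING  # back to the queue for another attempt

print(next_state(False, 3).value)  # invalid
```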
9. The method of claim 3, wherein the method further comprises:
in the process of consuming the m event objects through the n event listening processors, reading, for each of p performance indicators of the first server, performance data corresponding to the performance indicator, wherein p is a positive integer;
determining whether the first server is in an overloaded state based on the performance data corresponding to each performance indicator;
and in a case where the first server is determined to be in an overloaded state, storing m newly produced event objects to n circular buffer queues in a second server so as to consume the m newly produced event objects, and returning consumption results of the m newly produced event objects to the database, wherein one circular buffer queue is used for storing event objects of one event type.
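The overload check and failover of claim 9 — read p performance indicators, decide whether the first server is overloaded, and route newly produced events to the second server if so — might be sketched as follows. The indicator names, thresholds, and the any-indicator-exceeds rule are assumptions; the claim does not fix a particular decision rule:

```python
THRESHOLDS = {"cpu": 0.85, "heap": 0.90}  # p = 2 hypothetical indicators

def is_overloaded(metrics):
    """Treat the server as overloaded if any indicator exceeds its threshold."""
    return any(metrics[name] > limit for name, limit in THRESHOLDS.items())

def route(event, first_server, second_server, metrics):
    """Send a newly produced event to the second server's queues when overloaded."""
    target = second_server if is_overloaded(metrics) else first_server
    target.append(event)
    return target

first, second = [], []
route({"event_type": "export"}, first, second, {"cpu": 0.95, "heap": 0.40})
print(len(second))  # 1 -- the event failed over to the second server
```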
10. A message processing system, comprising:
a pulling module, configured to pull m task messages from a message queue, wherein the m task messages are generated based on m pieces of task data pulled from a database, each piece of task data comprises a task identifier and a task type, the m pieces of task data correspond to n task types, m and n are positive integers, and m is greater than or equal to n;
a production module, configured to encapsulate the m task messages based on the n task types to produce m event objects, where the m event objects correspond to the n event types, and the n event types correspond to the n task types one to one;
a first storage module, configured to store the m event objects to n circular buffer queues in a first server based on the n event types, so as to consume the m event objects, and return consumption results of the m event objects to the database to update the database, where one circular buffer queue is used to store event objects of one event type.
11. A computer system, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-9.
12. A computer readable medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 9.
CN202011167554.2A 2020-10-27 2020-10-27 Message processing method, system, medium and computer system Pending CN113778700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011167554.2A CN113778700A (en) 2020-10-27 2020-10-27 Message processing method, system, medium and computer system


Publications (1)

Publication Number Publication Date
CN113778700A true CN113778700A (en) 2021-12-10

Family

ID=78835148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011167554.2A Pending CN113778700A (en) 2020-10-27 2020-10-27 Message processing method, system, medium and computer system

Country Status (1)

Country Link
CN (1) CN113778700A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115840654A (en) * 2023-01-30 2023-03-24 北京万里红科技有限公司 Message processing method, system, computing device and readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination