CN111966511A - Message queue data read-write processing method and device

Publication number: CN111966511A
Authority: CN (China)
Prior art keywords: thread, message, read, message queue, write
Legal status: Granted
Application number: CN202010806595.5A
Other languages: Chinese (zh)
Other versions: CN111966511B
Inventors: 陈受凯, 王伟权, 许佳煜, 林鹏
Current Assignee: Industrial and Commercial Bank of China Ltd (ICBC)
Original Assignee: Industrial and Commercial Bank of China Ltd (ICBC)
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority date / Filing date: 2020-08-12
Priority to CN202010806595.5A
Publication of CN111966511A: 2020-11-20
Application granted; publication of CN111966511B: 2024-02-13
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/54 Indexing scheme relating to G06F9/54
    • G06F 2209/548 Queue


Abstract

The invention provides a method and a device for processing data reads and writes on a message queue. When a thread needs to read or write data on the message queue, a message slot sequence number is allocated to the thread by an atomic increment method, taking into account the message slots currently operated on by all threads that are reading or writing messages, so that the thread reads or writes data at the slot identified by that sequence number.

Description

Message queue data read-write processing method and device
Technical Field
The invention relates to the technical field of message queues, in particular to a message queue data read-write processing method and device.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
The in-process message queue is a very important and widely used data structure. It is mainly used to coordinate the execution rates of multiple threads, to pass messages between them, and to convert synchronous processing into asynchronous processing. In Java, for example, the JDK already ships reliable and stable message queues such as ArrayBlockingQueue and LinkedBlockingQueue. However, these JDK queues are implemented with heavyweight locks that synchronize write-write, write-read, read-read, and read-write access among multiple threads; heavy use of such locks does not necessarily mean low concurrency, but it does affect performance to some degree. In addition, because the JDK implementations are general-purpose, they fall short of enterprise-level requirements for low latency, monitoring capability, and dynamic capacity expansion. Finally, the life cycle of data accumulated in the queue must be considered: in some service scenarios, data that is not consumed within a certain period becomes invalid, and this is also a problem a message queue has to address.
Disclosure of Invention
The embodiments of the invention provide a method and a device for reading and writing message queue data, aiming to solve the problem that the performance and enterprise-level characteristics of traditional message queues cannot satisfy enterprise-level application scenarios.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements any one of the methods in the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program for executing any one of the methods in the first aspect is stored in the computer-readable storage medium.
In summary, according to the method and apparatus for processing message queue data reads and writes provided by the present invention, when a thread needs to read or write data on the message queue, a message slot sequence number is allocated to the thread by an atomic increment method, taking into account the message slots currently operated on by all threads that are reading or writing messages, so that the thread reads or writes data at the slot identified by that sequence number. The method and apparatus therefore generate no lock waits and can guarantee low message-delivery latency, thereby meeting the requirement for high performance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort. In the drawings:
FIG. 1 is a diagram illustrating an example of a high performance message queue according to an embodiment of the present invention;
FIG. 2 is a diagram of a message queue structure provided in an embodiment of the present invention;
FIG. 3 is an exemplary diagram of ensuring valid reads and avoiding write overwriting in an embodiment of the present invention;
FIG. 4 is an exemplary diagram of determining data availability using a bitmap in an embodiment of the present invention;
FIG. 5 is a schematic flow chart of a message queue data read-write processing method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a message queue data read-write processing apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a computer device suitable for implementing the message queue data read-write processing method of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and the described embodiments are only a part of the embodiments of the present invention, but not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Although the present invention presents the method steps or apparatus structures shown in the following embodiments or figures, the method or apparatus may include more or fewer steps or module units obtained through conventional or non-inventive labor. For steps or structures that have no logically necessary cause-and-effect relationship, the execution order of the steps or the module structure of the apparatus is not limited to the order or structure shown in the embodiments or drawings of the present invention. When applied in an actual device or end product, the described methods or module structures may be executed sequentially or in parallel in accordance with the embodiments or the figures.
FIG. 5 shows a data read-write processing method for a message queue according to an embodiment of the present invention, where the message queue includes a plurality of message slots. The method specifically includes:
S1: when a thread needs to read or write data on the message queue, a message slot sequence number is allocated to the thread by an atomic increment method, taking into account the message slots currently operated on by all threads that are reading or writing messages, so that the thread reads or writes data at the slot identified by that sequence number.
It can be understood that the present invention generates no lock waits and can guarantee low message-delivery latency, thereby meeting the requirement for high performance.
Specifically, as shown in FIG. 1, the core functions of the message queue data read-write processing method provided by the present invention include a lock-free, low-latency, high-performance core message queue 1, queue monitoring 2, dynamic capacity expansion support 3, and data lifecycle management 4.
The lock-free, low-latency, high-performance core message queue 1 means that, on top of the basic functions of a message queue, the places that would require locking in a traditional implementation are optimized with a lock-free design, which guarantees low message-delivery latency and thus meets the requirement for high performance. As shown in FIG. 2, the message queue is represented as a ring; the queue size is 8, and each cell is called a message slot, hereinafter simply a slot. The queue supports several operation modes: single-write single-read, single-write multi-read, multi-write single-read, and multi-write multi-read, where single and multi refer to a single thread versus multiple threads. In the implementation, single-threaded writing differs greatly from multi-threaded writing: single-threaded writing needs no locking at all and only requires one monotonically increasing write sequence number pointing into the queue. Likewise, single-threaded reading only needs to maintain one read sequence number, without any locking. For multi-threaded writing, as shown in FIG. 2, there are two write threads and two read threads: thread P1 has obtained slot 4 and thread P2 has obtained slot 5; if another thread P3 arrives at this moment, it must be guaranteed that P3 obtains slot 6. Here an atomic increment method is adopted, using the CAS (compare-and-swap) algorithm to guarantee strictly increasing sequence numbers. This also differs greatly from a traditional message queue, which cannot perform such parallel writes: for example, while thread P1 is producing data into slot 4, P2 has to wait for P1 to finish writing, and this lock wait has a certain performance impact on message delivery. Reading likewise has single-read and multi-read modes: single-read means only one thread reads data, multi-read means multiple threads read data, and a sequence number is also used to indicate the largest slot consumed so far. As shown in FIG. 2, two threads are reading messages at the same time, C1 reading slot 0 and C2 reading slot 1; if a thread C3 arrives later, it will consume slot 2. Thus, for multi-read, atomic increment is also used to guarantee strict increment. For single-write or single-read, an ordinary increment is used to generate the positions pointed to for reading and writing.
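As an illustration of the slot allocation just described, the following is a minimal Java sketch of claiming write sequences with a CAS loop; the class and member names (SlotSequencer, claimWriteSequence, slotIndex) are hypothetical and not taken from the patent.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical illustration of multi-writer slot claiming; names are not taken from the patent.
public class SlotSequencer {
    private final int capacity;                       // ring size, e.g. 8 as in FIG. 2
    private final AtomicLong writeSequence = new AtomicLong(0);

    public SlotSequencer(int capacity) {
        this.capacity = capacity;
    }

    /** Claim the next write sequence with a CAS loop so concurrent writers obtain strictly increasing values. */
    public long claimWriteSequence() {
        while (true) {
            long current = writeSequence.get();
            if (writeSequence.compareAndSet(current, current + 1)) {
                return current;                       // this writer now owns sequence 'current'
            }
            // CAS failed: another writer claimed this sequence first; retry without blocking
        }
    }

    /** Map a monotonically increasing sequence onto a ring-buffer slot index. */
    public int slotIndex(long sequence) {
        return (int) (sequence % capacity);           // with a power-of-two capacity this can be a bit mask
    }
}
```

Under this sketch, if threads P1 and P2 have already claimed sequences 4 and 5, a third thread P3 calling claimWriteSequence() obtains sequence 6, matching the FIG. 2 scenario.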
To further improve performance, the invention also addresses the problem known as false sharing, and further comprises:
when threads on multiple CPU cores would otherwise operate on the same cache line, adopting tail padding so that each of the two hot objects fills a cache line of its own.
In this embodiment, false sharing means that if threads on multiple CPU cores operate on different variables located in the same cache line, the cache line is invalidated frequently; data that could have been served from the cache then has to be reloaded from main memory, which seriously affects performance. The message queue has two objects that are modified and read frequently: the sequence number object storing the write position and the sequence number object storing the read position. In the design, tail padding is adopted so that each of these two objects fills a cache line of its own, thereby avoiding false sharing.
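A minimal Java sketch of such tail padding is shown below, assuming a 64-byte cache line; the class name and the number of padding fields are illustrative only. Production implementations often pad through inheritance layers or the JVM's @Contended annotation instead, since an aggressive JIT may otherwise eliminate unused fields.

```java
// Hypothetical tail-padded sequence holder; names and layout are illustrative only.
class PaddedSequence {
    // Hot counter that many threads read and update.
    volatile long value = 0L;

    // Tail padding: seven unused longs (56 bytes) so that 'value' does not end up
    // sharing a 64-byte cache line with another frequently modified sequence object.
    long p1, p2, p3, p4, p5, p6, p7;
}
```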
Further, to ensure that valid data can be read and that writes do not cover data that has not yet been read, the embodiment of the present invention further includes: recording the validity of each message slot in a large bitmap.
In this embodiment, recording the validity of each message slot in the large bitmap specifically includes:
when a thread reads a message slot, judging whether that message slot is available by reading the corresponding bit of the bitmap;
if the slot is not available, placing the thread in a wait-to-read state.
Specifically, as shown in the left diagram of FIG. 3, thread C1 is about to read slot 4 while thread P1 has not yet written its data back, yet the write sequence number has already reached 6; without any control, C1 would go ahead and read slot 4 and slot 5. This problem is solved efficiently by using a large bitmap to record the validity of each slot. As shown in FIG. 4, when C1 reads slot 4 it first determines whether the slot is available by reading the corresponding bit: a 1 indicates the slot is available; here the bit for slot 4 is 0, indicating the data is not yet available, so the read thread must wait. The design provides three ways of waiting: the traditional wait/notify mechanism between threads, spinning, and thread sleep. On the right-hand side of FIG. 3, thread P2 intends to acquire and write slot 0 while thread C1 is still consuming slot 0; if P2 were allowed to write slot 0 at this moment, unpredictable problems would result, so this must be avoided. In the design, besides the common read sequence number shared by all read threads, each read thread also holds its own sequence number representing the position it is currently reading. When acquiring a slot, a write thread must check that its write sequence number has not run a full queue length ahead of the minimum of the sequence numbers held by all the read threads; otherwise the write would cover unread data. If the queue is full in this sense, the write thread takes a stepped sleep: the first sleep is 1 nanosecond, the second 2 nanoseconds, increasing exponentially, with the longest single sleep not exceeding 1 second.
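The following Java sketch illustrates the bitmap-based availability check and the reader-side waiting described above; AvailabilityBitmap and its methods are hypothetical names, and the spin-then-park loop stands in for the three waiting strategies (wait/notify, spinning, sleeping) mentioned in the text.

```java
import java.util.concurrent.atomic.AtomicLongArray;
import java.util.concurrent.locks.LockSupport;

// Hypothetical bitmap sketch; one bit per slot, 1 = data available, 0 = not yet written (or already consumed).
class AvailabilityBitmap {
    private final AtomicLongArray words;

    AvailabilityBitmap(int slotCount) {
        words = new AtomicLongArray((slotCount + 63) / 64);   // 64 slots per long word
    }

    /** Called by a writer once the slot's data is fully written. */
    void markAvailable(int slot) {
        setBit(slot, true);
    }

    /** Called by a reader after it has consumed the slot. */
    void markConsumed(int slot) {
        setBit(slot, false);
    }

    boolean isAvailable(int slot) {
        return (words.get(slot / 64) & (1L << (slot % 64))) != 0;
    }

    /** Reader-side wait: spin briefly, then park; stands in for the wait/notify, spin, and sleep strategies. */
    void awaitAvailable(int slot) {
        int spins = 0;
        while (!isAvailable(slot)) {
            if (spins++ < 1000) {
                Thread.onSpinWait();
            } else {
                LockSupport.parkNanos(1_000L);
            }
        }
    }

    private void setBit(int slot, boolean value) {
        int word = slot / 64;
        long mask = 1L << (slot % 64);
        long old;
        long updated;
        do {
            old = words.get(word);
            updated = value ? (old | mask) : (old & ~mask);
        } while (!words.compareAndSet(word, old, updated));
    }
}
```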
It will be appreciated that, in a preferred embodiment, causing the thread to be in a wait-to-read state comprises:
allocating a common read sequence number for all read threads and an individual sequence number for each read thread, each sequence number corresponding to the position of the message slot being read;
and judging whether the sequence number of the current write thread is smaller than the minimum of the sequence numbers of all the read threads; if not, the write thread adopts a stepped sleep measure.
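A sketch of the write-side check and stepped sleep might look as follows in Java; OverwriteGuard and its members are hypothetical, and the exact wrap-around condition (the writer must not advance a full queue length past the slowest reader) is an interpretation of the description rather than a quotation of it.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.LockSupport;

// Hypothetical write-side overwrite guard; class and field names are illustrative.
class OverwriteGuard {
    private final int capacity;
    private final AtomicLong[] readerSequences;       // one per read thread: the slot position it is reading

    OverwriteGuard(int capacity, AtomicLong[] readerSequences) {
        this.capacity = capacity;
        this.readerSequences = readerSequences;
    }

    private long minReaderSequence() {
        long min = Long.MAX_VALUE;
        for (AtomicLong sequence : readerSequences) {
            min = Math.min(min, sequence.get());
        }
        return min;
    }

    /** Block the writer with stepped sleeps until writing 'writeSequence' can no longer overwrite unread data. */
    void awaitSpace(long writeSequence) {
        long sleepNanos = 1L;                                          // first sleep 1 ns, as in the description
        while (writeSequence - minReaderSequence() >= capacity) {      // assumed full-queue (wrap-around) condition
            LockSupport.parkNanos(sleepNanos);
            sleepNanos = Math.min(sleepNanos * 2, 1_000_000_000L);     // double each time, capped at 1 second
        }
    }
}
```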
In addition, abnormal situations such as the amount of data accumulated in the queue exceeding a threshold, a write thread acquiring a slot but never returning it to the queue, or a read thread failing to consume data need to be monitored. The design therefore adopts the observer pattern: a user subscribes to these abnormal events in advance and is notified whenever the queue becomes abnormal. The requirement to monitor the running state of the queue is also fully considered, and runtime state data is recorded, such as the currently available capacity, the amount of accumulated data, the count and duration of write waits, and the count and duration of reads that found no data. Operations and maintenance personnel can consult this data to ensure that the queue runs stably and reliably over the long term.
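As a rough illustration of the observer-pattern monitoring, the following Java sketch lets a user subscribe to abnormal queue events in advance; the listener interface, event names, and threshold logic are all assumptions made for illustration.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical observer-style monitoring; the listener interface and event names are illustrative.
interface QueueEventListener {
    void onEvent(String event, long value);              // e.g. "BACKLOG_THRESHOLD_EXCEEDED"
}

class QueueMonitor {
    private final List<QueueEventListener> listeners = new CopyOnWriteArrayList<>();
    private final long backlogThreshold;

    QueueMonitor(long backlogThreshold) {
        this.backlogThreshold = backlogThreshold;
    }

    /** Users subscribe to abnormal events in advance. */
    void subscribe(QueueEventListener listener) {
        listeners.add(listener);
    }

    /** Called by the queue while running; notifies subscribers when the backlog crosses the threshold. */
    void recordBacklog(long backlog) {
        if (backlog > backlogThreshold) {
            for (QueueEventListener listener : listeners) {
                listener.onEvent("BACKLOG_THRESHOLD_EXCEEDED", backlog);
            }
        }
    }
}
```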
Further, when data frequently accumulates in the queue, a conventional message queue requires a system restart if its capacity needs to be expanded, which is unacceptable for a bank's core transaction system. It is therefore necessary to provide automatic capacity expansion that is transparent to the application: when the queue capacity is insufficient, or when the idle capacity and its idle duration exceed specified thresholds, dynamic expansion or shrinking is triggered. Used together with a distributed configuration center, expansion or shrinking can also be triggered manually. Expansion and shrinking are always performed in multiples of 2. When an expansion operation is triggered, a new queue twice the size of the old one is created inside the message queue, half of the read threads are moved over to read the new queue, and half of the data in the old queue is migrated to the new queue; this migration holds a lock, which is released only when the move is finished. After the migration, the old queue is destroyed once its remaining data has been fully consumed, and during this period newly written data is guaranteed to go into the new queue rather than the old one. Shrinking the message queue is simpler: the internal implementation stores data in an array, so copying the data is very efficient. There is one precondition: the amount of currently unread data must be less than half the queue capacity, otherwise the shrinking fails. During shrinking, a shrinking state is first set so that the queue stops serving external requests for the duration; a new queue is then created with only half the original capacity, the old data is copied into the new queue, and finally the previous read and write threads are switched over to the new queue.
It is to be understood that the present invention further includes: when the queue capacity is insufficient, or when the idle capacity and its idle duration exceed a set threshold, triggering dynamic capacity expansion or shrinking.
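The sketch below gives a highly simplified Java illustration of expansion and shrinking by factors of 2; the class is hypothetical and deliberately omits the migration of read threads, sequence numbers, and in-flight slots that the description says the real implementation performs.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical, highly simplified resize sketch; the real design also migrates read threads,
// sequence numbers, and in-flight slots, which is omitted here.
class ResizableRing {
    private volatile Object[] slots;
    private final ReentrantLock resizeLock = new ReentrantLock();

    ResizableRing(int initialCapacity) {
        slots = new Object[initialCapacity];
    }

    /** Expansion works in multiples of 2: create a queue twice as large and copy the old data over. */
    void expand() {
        resizeLock.lock();                               // the move holds a lock until migration finishes
        try {
            Object[] bigger = new Object[slots.length * 2];
            System.arraycopy(slots, 0, bigger, 0, slots.length);
            slots = bigger;                              // newly written data lands in the new, larger queue
        } finally {
            resizeLock.unlock();
        }
    }

    /** Shrinking halves the capacity and is only allowed when unread data fits in half of the queue. */
    boolean shrink(int unreadCount) {
        if (unreadCount >= slots.length / 2) {
            return false;                                // precondition from the description: otherwise shrinking fails
        }
        resizeLock.lock();                               // stands in for the shrinking state that pauses external work
        try {
            Object[] smaller = new Object[slots.length / 2];
            System.arraycopy(slots, 0, smaller, 0, smaller.length);
            slots = smaller;
            return true;
        } finally {
            resizeLock.unlock();
        }
    }
}
```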
Finally, for some application scenarios, if a piece of data has remained unconsumed for longer than a specified period, designated processing is performed at consume time, such as discarding the data, sending an alert, or invoking a pre-registered method. This allows the life cycle of the data in the queue to be managed, for example invalidating data that has gone unconsumed for too long.
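A small Java sketch of such consume-time lifecycle handling follows; the envelope and policy classes are hypothetical, and the time-to-live check simply routes expired messages to a pre-registered handler, as the description suggests.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Consumer;

// Hypothetical consume-time lifecycle handling; class and method names are illustrative.
class ExpiringEnvelope<T> {
    final T payload;
    final Instant enqueuedAt = Instant.now();             // stamped when the message is written to the queue

    ExpiringEnvelope(T payload) {
        this.payload = payload;
    }
}

class LifecyclePolicy<T> {
    private final Duration timeToLive;
    private final Consumer<T> onExpired;                  // e.g. discard, send an alert, or call a pre-registered method

    LifecyclePolicy(Duration timeToLive, Consumer<T> onExpired) {
        this.timeToLive = timeToLive;
        this.onExpired = onExpired;
    }

    /** Applied at consume time: expired data goes to the registered handler instead of the reader. */
    boolean consumeIfFresh(ExpiringEnvelope<T> envelope, Consumer<T> reader) {
        if (Duration.between(envelope.enqueuedAt, Instant.now()).compareTo(timeToLive) > 0) {
            onExpired.accept(envelope.payload);
            return false;
        }
        reader.accept(envelope.payload);
        return true;
    }
}
```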
It will be appreciated that the message queue of the above embodiments is high-performance, lock-free, low-latency, dynamically scalable, and monitorable, with a manageable data lifecycle, and thus satisfies the characteristics required by enterprise-class applications. The method greatly improves the message delivery performance of the queue and reduces the thread context switching caused by locks, thereby improving transaction throughput. The added monitoring capability means the running state of the queue can be observed and abnormal alarms raised, so problems can be found and located more quickly and efficiently. Dynamic expansion gives enterprise-level applications higher availability, adapts automatically to different workload environments, is transparent to the application, eases operation and maintenance, and improves application stability and reliability. Finally, data lifecycle management allows custom processing of expired data and improves the controllability of the application.
Based on the same inventive concept, an embodiment of the present invention further provides a device for reading and writing message queue data, as shown in FIG. 6, where the message queue includes a plurality of message slots. The device includes:
a sequence number allocation module 10 which, when a thread needs to read or write data on the message queue, allocates a message slot sequence number to the thread by an atomic increment method, taking into account the message slots currently operated on by all threads that are reading or writing messages, so that the thread reads or writes data at the slot identified by that sequence number.
Based on the same inventive concept, in some embodiments, the device further comprises:
a cache module which, when threads on multiple CPU cores operate on the same cache line, adopts tail padding so that each of the two objects fills a cache line of its own.
Based on the same inventive concept, in some embodiments, the device further comprises:
a validity determination module which records the validity of each message slot in a large bitmap.
Based on the same inventive concept, in some embodiments, the validity determination module includes:
a judging unit configured to judge, when a thread reads a message slot, whether that message slot is available by reading the corresponding bit of the bitmap;
and a wait-to-read state setting unit configured to place the thread in a wait-to-read state if the slot is not available.
Based on the same inventive concept, in some embodiments, the validity determination module further comprises:
a sequence number distribution unit configured to allocate a common read sequence number for all read threads and an individual sequence number for each read thread, each sequence number corresponding to the position of the message slot being read;
and a sleep measure unit configured to judge whether the sequence number of the current write thread is smaller than the minimum of the sequence numbers of all the read threads and, if not, to make the write thread adopt a stepped sleep measure.
Based on the same inventive concept, in some embodiments, the device further comprises:
a capacity expansion and shrinking module which triggers dynamic expansion or shrinking when the queue capacity is insufficient or when the idle capacity and its idle duration exceed a set threshold.
It can be understood that the message queue data read-write processing device provided by the invention is high-performance, lock-free, low-latency, dynamically scalable, and monitorable, with a manageable data lifecycle, so the message queue satisfies the characteristics required by enterprise-level applications. The device greatly improves the message delivery performance of the queue and reduces the thread context switching caused by locks, thereby improving transaction throughput. The added monitoring capability means the running state of the queue can be observed and abnormal alarms raised, so problems can be found and located more quickly and efficiently. Dynamic expansion gives enterprise-level applications higher availability, adapts automatically to different workload environments, is transparent to the application, eases operation and maintenance, and improves application stability and reliability. Finally, data lifecycle management allows custom processing of expired data and improves the controllability of the application.
In another embodiment, the message queue data read-write processing apparatus may be configured separately from the central processor 9100; for example, the apparatus may be configured as a chip connected to the central processor 9100, with the message queue data read-write processing function realized under the control of the central processor.
As shown in FIG. 7, the electronic device 9600 may further include a communication module 9110, an input unit 9120, an audio processor 9130, a display 9160, and a power supply 9170. It is noted that the electronic device 9600 does not necessarily include all of the components shown in FIG. 7, and it may also include components not shown in FIG. 7, for which reference may be made to the prior art.
As shown in FIG. 7, the central processor 9100, sometimes referred to as a controller or operation control, may include a microprocessor or other processor and/or logic device; the central processor 9100 receives input and controls the operation of the various components of the electronic device 9600.
The memory 9140 may be, for example, one or more of a buffer, flash memory, a hard drive, removable media, volatile memory, non-volatile memory, or another suitable device. It may store the above-mentioned information as well as a program for processing it, and the central processor 9100 may execute that program to realize information storage, processing, and the like.
The input unit 9120 provides input to the central processor 9100; it is, for example, a key or a touch input device. The power supply 9170 provides power to the electronic device 9600. The display 9160 displays objects such as images and text; it may be, for example, an LCD display, but is not limited thereto.
The memory 9140 may be solid-state memory, e.g., read-only memory (ROM), random-access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when power is off, can be selectively erased, and can be provided with more data, an example of which is sometimes called an EPROM. The memory 9140 may also be some other type of device. The memory 9140 includes a buffer memory 9141 (sometimes referred to simply as a buffer) and may include an application/function storage portion 9142 for storing application programs and function programs, or for carrying out the operation flow of the electronic device 9600 via the central processor 9100.
The memory 9140 may also include a data storage portion 9143 for storing data such as contacts, digital data, pictures, sounds, and/or any other data used by the electronic device. A driver storage portion 9144 of the memory 9140 may include various drivers of the electronic device for communication functions and/or for performing other functions of the electronic device (e.g., a messaging application, a contact book application, etc.).
The communication module 9110 is a transmitter/receiver 9110 that transmits and receives signals via an antenna 9111. The communication module (transmitter/receiver) 9110 is coupled to the central processor 9100 to provide input signals and receive output signals, which may be the same as in the case of a conventional mobile communication terminal.
Based on different communication technologies, a plurality of communication modules 9110, such as a cellular network module, a bluetooth module, and/or a wireless local area network module, may be provided in the same electronic device. The communication module (transmitter/receiver) 9110 is also coupled to a speaker 9131 and a microphone 9132 via an audio processor 9130 to provide audio output via the speaker 9131 and receive audio input from the microphone 9132, thereby implementing ordinary telecommunications functions. The audio processor 9130 may include any suitable buffers, decoders, amplifiers and so forth. In addition, the audio processor 9130 is also coupled to the central processor 9100, thereby enabling recording locally through the microphone 9132 and enabling locally stored sounds to be played through the speaker 9131.
An embodiment of the present invention further provides a computer-readable storage medium capable of implementing all the steps of the message queue data read-write processing method of the above embodiments in which a server is the execution subject. The computer-readable storage medium stores a computer program which, when executed by a processor, implements all the steps of the message queue data read-write processing method of the above embodiments.
From the above description, it can be seen that the computer-readable storage medium provided by the embodiments of the present invention realizes a message queue that is high-performance, lock-free, low-latency, dynamically scalable, and monitorable, with a manageable data lifecycle, and thus satisfies the characteristics required by enterprise-class applications. The method greatly improves the message delivery performance of the queue and reduces the thread context switching caused by locks, thereby improving transaction throughput. The added monitoring capability means the running state of the queue can be observed and abnormal alarms raised, so problems can be found and located more quickly and efficiently. Dynamic expansion gives enterprise-level applications higher availability, adapts automatically to different workload environments, is transparent to the application, eases operation and maintenance, and improves application stability and reliability. Finally, data lifecycle management allows custom processing of expired data and improves the controllability of the application.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principles and embodiments of the present invention have been explained herein using specific examples, and the above description of the embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (14)

1. A data read-write processing method for a message queue, wherein the message queue comprises a plurality of message slots, is characterized by comprising the following steps:
when a thread needs to read or write data on the message queue, a message slot sequence number is allocated to the thread on the basis of an atomic increment method by combining message slots operated by all threads corresponding to read or write messages at present, so that the thread reads or writes data correspondingly according to the message slot sequence number.
2. The message queue data read-write processing method according to claim 1, further comprising:
when threads of a plurality of cores of the CPU operate on the same cache line, adopting a tail padding mode to ensure that each of the two objects fills one cache line.
3. The message queue data read-write processing method according to claim 1, further comprising:
the validity of each message slot is recorded by a large bitmap.
4. The message queue data read-write processing method according to claim 3, wherein the recording the validity of each message slot through a large bitmap further comprises:
when a thread reads a message slot, whether the read message slot is available is judged by reading a corresponding bitmap;
if not, the thread is in a wait-to-read state.
5. The message queue data read-write processing method according to claim 4, wherein the causing the thread to be in a wait-to-read state includes:
distributing a common read sequence number for all read threads and distributing an individual sequence number for each read thread, each sequence number corresponding to the position of the message slot being read;
and judging whether the sequence number of the current write thread is smaller than the minimum of the sequence numbers of all the read threads; if not, the write thread adopts a stepped sleep measure.
6. The message queue data read-write processing method according to claim 1, further comprising:
and when the capacity of the queue is insufficient or the capacity idle time and the idle number exceed a set threshold value, triggering dynamic capacity expansion or capacity reduction.
7. A data read-write processing apparatus for a message queue, the message queue including a plurality of message slots, comprising:
and the sequence number distribution module is used for distributing a message slot sequence number to the thread on the basis of an atomic increment method by combining message slots operated by all threads corresponding to read or write messages when the thread needs to read or write data on the message queue, so that the thread correspondingly reads or writes the data according to the message slot sequence number.
8. The message queue data read-write processing device according to claim 7, further comprising:
and the cache module, which adopts a tail padding mode to ensure that each of the two objects fills one cache line when threads of a plurality of cores of the CPU operate on the same cache line.
9. The message queue data read-write processing device according to claim 7, further comprising:
and the validity determining module records the validity of each message slot through a large bitmap.
10. The message queue data read-write processing device according to claim 9, wherein the validity determining module includes:
the judging unit is used for judging whether the read message slot is available or not by reading the corresponding bitmap when the thread reads the message slot;
and the wait-to-read state setting unit, which is used for placing the thread in a wait-to-read state if the message slot is not available.
11. The message queue data read-write processing device according to claim 10, wherein the validity determining module further comprises:
the sequence number distribution unit, which is used for distributing a common read sequence number for all read threads and an individual sequence number for each read thread, each sequence number corresponding to the position of the message slot being read;
and the sleep measure unit, which is used for judging whether the sequence number of the current write thread is smaller than the minimum of the sequence numbers of all the read threads and, if not, for making the write thread adopt a stepped sleep measure.
12. The message queue data read-write processing device according to claim 7, further comprising:
and the capacity expansion and reduction module triggers dynamic capacity expansion or reduction when the capacity of the queue is insufficient or the capacity idle time and the idle number exceed a set threshold value.
13. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 6 when executing the computer program.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the method of any one of claims 1 to 6.

Priority Applications (1)

CN202010806595.5A, priority date 2020-08-12, filing date 2020-08-12: Message queue data read-write processing method and device

Publications (2)

CN111966511A, published 2020-11-20
CN111966511B, published 2024-02-13

Family

ID=73364832


Also Published As

Publication number: CN111966511B; Publication date: 2024-02-13


Legal Events

Code    Title
PB01    Publication
SE01    Entry into force of request for substantive examination
GR01    Patent grant