CN111966511B - Message queue data read-write processing method and device - Google Patents

Message queue data read-write processing method and device

Info

Publication number
CN111966511B
CN111966511B (application CN202010806595.5A)
Authority
CN
China
Prior art keywords
message
thread
read
message queue
reading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010806595.5A
Other languages
Chinese (zh)
Other versions
CN111966511A (en)
Inventor
陈受凯
王伟权
许佳煜
林鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202010806595.5A
Publication of CN111966511A
Application granted
Publication of CN111966511B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Abstract

When a thread needs to read data from or write data to the message queue, a message slot sequence number is allocated to the thread by an atomic increment method, taking into account the message slots currently being operated on by all threads that read or write messages, so that the thread reads or writes data according to the allocated message slot sequence number.

Description

Message queue data read-write processing method and device
Technical Field
The present invention relates to the field of message queues, and in particular, to a method and an apparatus for processing data read-write of a message queue.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
The in-process message queue is a very important data structure with a wide range of applications; it is mainly used to coordinate the execution rates of multiple threads, to pass messages between them, and to convert synchronous processing into asynchronous processing. Taking the Java language as an example, the JDK already provides stable and reliable message queues such as ArrayBlockingQueue and LinkedBlockingQueue. However, the JDK's message queues are implemented with heavyweight locks: synchronization between write-and-write, write-and-read, read-and-read, and read-and-write threads is all guaranteed by heavyweight locks, and this heavy use of locks under high concurrency inevitably affects performance to some degree. In addition, because the JDK implementation aims at general-purpose use, it falls short of the higher demands of enterprise-level applications for low latency, monitorability, support for dynamic expansion, and the like. The life cycle of data accumulated in the queue must also be considered: in some business scenarios, data becomes invalid if it is not consumed within a certain period of time, which is another problem a message queue needs to address.
Disclosure of Invention
The embodiments of the invention provide a message queue data read-write processing method and device, which aim to solve the problem that traditional message queues cannot satisfy enterprise-level application scenarios in terms of performance and enterprise-level features.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing any of the methods of the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program for performing any one of the methods of the first aspect.
In summary, in the message queue data read-write processing method and device provided by the embodiments of the invention, when a thread needs to read data from or write data to the message queue, a message slot sequence number is allocated to the thread by an atomic increment method, taking into account the message slots currently being operated on by all threads that read or write messages, so that the thread reads or writes data according to the allocated message slot sequence number.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the invention, and a person skilled in the art may obtain other drawings from them without inventive effort. In the drawings:
FIG. 1 is a diagram of a high performance message queue provided in an embodiment of the present invention;
FIG. 2 is a diagram of a message queue structure provided in an embodiment of the present invention;
FIG. 3 is a diagram of an example of ensuring valid reads and preventing writes from overwriting unread data, provided in an embodiment of the present invention;
FIG. 4 is a diagram illustrating an example of marking data availability with a bitmap, provided in an embodiment of the present invention;
FIG. 5 is a schematic flow chart of a method for reading and writing message queue data according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a message queue data read-write processing apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a computer device adapted to implement the message queue data read-write processing method of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments thereof.
Although the invention provides method operations or apparatus structures as shown in the following embodiments or drawings, the method or apparatus may include more or fewer operation steps or module units obtained through routine, non-inventive effort. For steps or modules between which there is no necessary logical causal relationship, the execution order of the steps or the arrangement of the modules is not limited to the order or structure shown in the embodiments or drawings of the present invention. When the described method or module structure is applied to an actual device or end product, it may be executed sequentially or in parallel according to the embodiments or the structures shown in the drawings.
FIG. 5 shows a method for processing data read-write of a message queue, where the message queue includes a plurality of message slots, and the method specifically includes:
s1: when a thread needs to read or write data on the message queue, combining message slots operated by all threads currently corresponding to the read or write message, and distributing a message slot sequence number to the thread based on an atomic increment method so that the thread correspondingly reads or writes data according to the message slot sequence number.
It will be appreciated that the invention does not introduce lock waiting and can therefore ensure low latency in message transfer, thereby meeting high performance requirements.
Specifically, as shown in FIG. 1, the core functions of the message queue data read-write processing method provided by the invention include a lock-free, low-latency, high-performance core message queue 1, a queue monitor 2, dynamic capacity expansion support 3, and data life cycle management 4.
The lock-free, low-latency, high-performance core message queue 1 implements the basic functions of a message queue while optimizing the places that would need locking in a traditional implementation; by adopting a lock-free design, low latency of message transfer is ensured and the requirement for high performance is met. As shown in FIG. 2, the message queue is represented as a ring with a capacity of 8; each cell is called a message slot, hereinafter referred to as a slot. The queue supports four operation modes, namely single-write single-read, single-write multi-read, multi-write single-read, and multi-write multi-read, where "single" and "multi" refer to single-threaded versus multi-threaded access. Single-threaded and multi-threaded writing differ greatly in implementation: single-threaded writing can be completely lock-free and only needs one monotonically increasing sequence number pointing into the queue, and single-threaded reading likewise only needs to maintain one read sequence number, with no locking. For multi-threaded writing, as shown in FIG. 2, there are two write threads and two read threads: thread P1 has obtained slot 4 and thread P2 has obtained slot 5, so if another write thread P3 arrives it must be guaranteed that P3 obtains slot 6. Here an atomic increment method is used, relying on a CAS algorithm to guarantee strictly increasing sequence numbers. This differs greatly from a traditional message queue, which cannot write in parallel: for example, while thread P1 is producing data into slot 4, P2 has to wait for P1 to finish writing, which creates lock waiting and has a certain performance impact on message transfer. Reading likewise has single-read and multi-read cases: single-read means only one thread reads data, while multi-read means multiple threads read data, and a sequence number is again used to represent the largest slot currently consumed. As shown in FIG. 2, two threads are reading messages at the same time, C1 reading slot 0 and C2 reading slot 1; if another thread C3 arrives, slot 2 will be consumed next. For multi-read, atomic incrementing is also used to ensure strict increment. For single write or single read, an ordinary increment method is used to generate the positions pointed to by the read and write sequence numbers.
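By way of illustration, the multi-writer slot allocation described above can be sketched in Java roughly as follows. This is a minimal sketch assuming a ring capacity that is a power of two; class and method names such as SlotAllocator and claimWriteSequence are illustrative and not taken from the patent.

import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of lock-free slot allocation for multiple writer threads.
// Each writer claims the next slot with an atomic increment, so the claimed
// sequence numbers are strictly increasing and never collide (e.g. P1 -> 4,
// P2 -> 5, a newly arriving P3 -> 6).
public class SlotAllocator {
    private final int capacity;                        // must be a power of two
    private final AtomicLong writeSequence = new AtomicLong(-1);

    public SlotAllocator(int capacity) {
        if (Integer.bitCount(capacity) != 1) {
            throw new IllegalArgumentException("capacity must be a power of two");
        }
        this.capacity = capacity;
    }

    // Atomically claims the next write sequence number.
    public long claimWriteSequence() {
        return writeSequence.incrementAndGet();
    }

    // Maps a sequence number onto a slot index of the ring (capacity 8 -> slots 0..7).
    public int slotIndex(long sequence) {
        return (int) (sequence & (capacity - 1));
    }
}

For the single-write or single-read modes, an ordinary long field would suffice, since only one thread advances the corresponding position.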
In order to further improve performance, the invention also addresses a problem known as false sharing, and the invention further comprises:
when threads on multiple CPU cores operate on the same cache line, adopting a tail padding mode so that each of the two sequence number objects exactly fills one cache line.
In this embodiment, false sharing refers to the situation where threads on multiple CPU cores operate on different variables located in the same cache line, causing frequent cache line invalidation: data that could otherwise be read from the cache has to be reloaded from memory, which severely affects performance. In the message queue there are two objects that are frequently modified and read: the sequence number object storing the write position and the sequence number object storing the read position. In the design, tail padding is used so that each of these two objects exactly fills one cache line, thereby avoiding false sharing.
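A common way to express this tail padding in Java is sketched below. The field layout is an assumption for illustration (64-byte cache lines), not the patent's concrete implementation; annotations such as @Contended in newer JDKs achieve a similar effect.

// Sketch of a padded sequence counter: the trailing long fields pad the object
// out to roughly one 64-byte cache line, so a write-position sequence and a
// read-position sequence never end up sharing a cache line (avoiding false
// sharing between writer and reader cores).
public class PaddedSequence {
    private volatile long value;
    // Tail padding: 7 longs = 56 bytes; together with 'value' and the object
    // header this pushes any neighbouring hot field onto a different cache line.
    @SuppressWarnings("unused")
    private long p1, p2, p3, p4, p5, p6, p7;

    public long get() { return value; }

    public void set(long newValue) { value = newValue; }
}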
Further, to ensure that valid data can be read and that writes do not overwrite data that has not yet been read, embodiments of the present invention further include: the validity of each message slot is recorded by a large bitmap.
In this embodiment, recording the validity of each message slot by a large bitmap specifically includes:
when a thread reads a message slot, judging whether the read message slot is available or not by reading a corresponding bitmap;
and if the message slot is not available, putting the thread in a state of waiting to read.
Specifically, as shown in the left diagram of FIG. 3, thread C1 is about to read slot 4, but thread P1 has not yet finished writing its data, while the write sequence number has already advanced to 6; without any control, C1 could read slots 4 and 5 prematurely. This problem is effectively solved by using a large bitmap to record the validity of each slot. As shown in FIG. 4, when C1 reads slot 4 it first judges whether the slot is available by reading the corresponding bit of the bitmap: a value of 1 indicates that the slot is available, and when the bit for slot 4 is 0 the data is not yet available and the read thread has to wait. The design provides three ways of waiting: conventional inter-thread wait/notify, spinning, and thread sleep. In the right diagram of FIG. 3, thread P2 intends to acquire slot 0 and write to it while thread C1 is still consuming slot 0; an unpredictable problem would arise if P2 wrote to slot 0 at this point, so this must be prevented. In the design, besides the common read sequence number shared by all read threads, each read thread also holds its own sequence number indicating the position it is currently reading, and when obtaining a slot a write thread must check that its write sequence number is smaller than the minimum of the sequence numbers held by all read threads; otherwise the write would overwrite unread data. If the queue is full, the write thread takes stepwise sleep measures, for example sleeping 1 nanosecond the first time and 2 nanoseconds the second time, increasing exponentially, with the longest sleep not exceeding 1 second.
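The reader-side availability check can be sketched roughly as follows, assuming one bit per slot; the class name AvailabilityBitmap and the particular mix of spinning and brief sleeping are illustrative choices among the three waiting strategies mentioned above.

import java.util.concurrent.atomic.AtomicLongArray;
import java.util.concurrent.locks.LockSupport;

// Sketch: one bit per slot, 1 = data written and readable, 0 = not yet available.
public class AvailabilityBitmap {
    private final AtomicLongArray words;

    public AvailabilityBitmap(int slots) {
        words = new AtomicLongArray((slots + 63) / 64);
    }

    // Writer marks a slot readable after it has finished writing the data.
    public void markAvailable(int slot) {
        int word = slot >>> 6;
        long mask = 1L << (slot & 63);
        long old;
        do {
            old = words.get(word);
        } while (!words.compareAndSet(word, old, old | mask));
    }

    public boolean isAvailable(int slot) {
        return (words.get(slot >>> 6) & (1L << (slot & 63))) != 0;
    }

    // Reader waits until the slot is available: spin first, then short sleeps.
    public void awaitAvailable(int slot) {
        int spins = 0;
        while (!isAvailable(slot)) {
            if (++spins < 1000) {
                Thread.onSpinWait();
            } else {
                LockSupport.parkNanos(1_000);
            }
        }
    }
}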
It will be appreciated that, in a preferred embodiment, placing the thread in a waiting-to-read state comprises:
allocating a common read sequence number to all read threads and an individual sequence number to each read thread, each sequence number corresponding to the message slot position being read;
judging whether the sequence number of the current write thread is smaller than the minimum of the sequence numbers of all read threads, and if not, having the write thread take stepwise sleep measures.
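Interpreting that rule in ring-buffer terms (the writer may not run a full lap ahead of the slowest reader), the writer-side guard and the stepwise sleep could look roughly like the sketch below; the names WriterGate and awaitSpace are illustrative assumptions.

import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.LockSupport;

// Sketch of the overwrite guard for write threads in a ring of 'capacity' slots.
public class WriterGate {
    private final int capacity;
    private final List<AtomicLong> readerSequences;   // one sequence per read thread

    public WriterGate(int capacity, List<AtomicLong> readerSequences) {
        this.capacity = capacity;
        this.readerSequences = readerSequences;
    }

    private long minReaderSequence() {
        long min = Long.MAX_VALUE;
        for (AtomicLong sequence : readerSequences) {
            min = Math.min(min, sequence.get());
        }
        return min;
    }

    // Blocks until writing 'writeSequence' can no longer overwrite an unread slot,
    // sleeping 1 ns, 2 ns, 4 ns, ... with the sleep capped at 1 second.
    public void awaitSpace(long writeSequence) {
        long sleepNanos = 1L;
        while (writeSequence - minReaderSequence() >= capacity) {
            LockSupport.parkNanos(sleepNanos);
            sleepNanos = Math.min(sleepNanos * 2, 1_000_000_000L);
        }
    }
}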
In addition, abnormal conditions need to be monitored, for example when the amount of data accumulated in the queue exceeds a certain threshold, when a write thread acquires a slot but never puts it back into the queue, or when a read thread fails to consume data. In the design an observer pattern is therefore adopted: a user subscribes to these abnormal events in advance and is notified if an abnormality occurs in the queue. The requirement for monitoring the running state of the queue is also fully considered, and running-state data such as the current available capacity, the size of the accumulated data, the number and duration of write waits, and the number and duration of reads that found no data are recorded at runtime. Operation and maintenance personnel can consult these data to ensure that the queue runs stably and reliably over long periods.
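The observer-style monitoring could be wired up roughly as in the sketch below; the event names and the listener interface are assumptions used only to illustrate subscribing to queue anomalies in advance.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of observer-based queue monitoring.
public class QueueMonitor {
    // Illustrative event kinds matching the anomalies described in the text.
    public enum QueueEvent { BACKLOG_OVER_THRESHOLD, SLOT_NOT_RELEASED, CONSUME_FAILED }

    public interface QueueEventListener {
        void onEvent(QueueEvent event, String detail);
    }

    private final List<QueueEventListener> listeners = new CopyOnWriteArrayList<>();

    // A user subscribes to abnormal events in advance.
    public void subscribe(QueueEventListener listener) {
        listeners.add(listener);
    }

    // Called by the queue when an abnormal condition is detected.
    public void publish(QueueEvent event, String detail) {
        for (QueueEventListener listener : listeners) {
            listener.onEvent(event, detail);
        }
    }
}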
Further, when data accumulation occurs frequently in the queue, a conventional message queue requires a system restart to expand its capacity, which is unacceptable for a bank's core monetary system. It is therefore necessary to provide an automatic, application-transparent capacity expansion function that triggers dynamic expansion or contraction when the queue capacity is insufficient, or when the idle capacity time and idle count exceed specified thresholds. If used together with a distributed configuration center, expansion or contraction can also be triggered manually. Expansion and contraction are performed by a factor of 2. If an expansion operation is triggered, a new queue twice the size of the previous one is created inside the message queue, half of the read threads are moved over to read the new queue, and half of the data in the old queue is migrated to the new queue; a lock is held during this process and released when the move is finished. After the move, newly written data is guaranteed to go into the new queue rather than the old one, and the old queue is destroyed once its data has been fully consumed. Contraction of the message queue is simpler; because the internal implementation stores data in an array, data copying is very efficient. Contraction has the precondition that the current amount of unread data must be less than half the queue capacity, otherwise the contraction fails. During contraction, a contracting state is first set and the queue stops serving external requests; a new queue with only half the original capacity is then opened, the old data is copied into it, and the existing read and write threads are switched over to the new queue.
It will be appreciated that the invention further comprises: triggering dynamic capacity expansion or contraction when the capacity of the queue is insufficient or when the capacity idle time and idle count exceed a set threshold.
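The trigger side of this behaviour (not the data migration itself) can be sketched as follows; the threshold fields and method names are illustrative assumptions.

// Sketch of the expand/shrink trigger; resizing always moves by a factor of two.
public class ResizePolicy {
    private final long idleTimeThresholdMillis;
    private final int idleCountThreshold;

    public ResizePolicy(long idleTimeThresholdMillis, int idleCountThreshold) {
        this.idleTimeThresholdMillis = idleTimeThresholdMillis;
        this.idleCountThreshold = idleCountThreshold;
    }

    // Expand to 2x capacity when the queue can no longer accept new data.
    public boolean shouldExpand(int capacity, int unreadCount) {
        return unreadCount >= capacity;
    }

    // Shrink to capacity/2 only if the unread data would still fit (less than half)
    // and the queue has been sufficiently idle for long enough.
    public boolean shouldShrink(int capacity, int unreadCount,
                                long idleMillis, int idleCount) {
        return unreadCount < capacity / 2
                && idleMillis > idleTimeThresholdMillis
                && idleCount > idleCountThreshold;
    }
}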
Finally, for some application scenarios, if data has not been consumed within a certain period of time, specified processing is performed when it is eventually consumed, such as discarding the data, sending a warning, or invoking a pre-registered method. This makes it possible to manage the life cycle of the data in the queue, for example invalidating data that has not been consumed for a long time.
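Such life cycle handling can be sketched as a check at consume time, where each message carries its enqueue timestamp and an expired message is handed to a pre-registered handler; the ExpiringSlot class and its consume signature are assumptions for illustration.

import java.util.function.Consumer;

// Sketch of expiry handling performed when a message is consumed.
public class ExpiringSlot<T> {
    private final T payload;
    private final long enqueueTimeMillis;

    public ExpiringSlot(T payload) {
        this.payload = payload;
        this.enqueueTimeMillis = System.currentTimeMillis();
    }

    // Returns the payload if still fresh; otherwise invokes the pre-registered
    // expiry handler (e.g. discard the data or send a warning) and returns null.
    public T consume(long ttlMillis, Consumer<T> onExpired) {
        if (System.currentTimeMillis() - enqueueTimeMillis > ttlMillis) {
            onExpired.accept(payload);
            return null;
        }
        return payload;
    }
}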
It will be appreciated that the message queue of the above embodiments is high-performance, lock-free, low-latency, dynamically expandable, and monitorable, with a manageable data life cycle, and thus satisfies the characteristics required by enterprise-level applications. The performance of the message queue in message transfer is greatly improved, and the thread context switching caused by locks is reduced, thereby improving transaction throughput. The monitoring feature ensures that the running state of the message queue can be observed and abnormal states can be alarmed, so that problems can be located and discovered more quickly and efficiently. Dynamic expansion gives enterprise-level applications higher availability, automatically adapts to different workload environments while remaining transparent to the application, eases operation and maintenance, and improves the stability and reliability of the application. Finally, managing the life cycle of the data and applying user-defined processing to expired data improves the controllability of the application.
Based on the same inventive concept, an embodiment of the present invention further provides a message queue data read-write processing device, as shown in fig. 6, where the message queue includes a plurality of message slots, including:
the sequence number allocation module 10 allocates a message slot sequence number to a thread based on an atomic increment method in combination with message slots operated by all threads currently corresponding to a read or write message when the thread needs to read or write data on the message queue, so that the thread correspondingly reads or writes data according to the message slot sequence number.
Based on the same inventive concept, in certain embodiments, further comprising:
and the cache module is used for, when threads on multiple CPU cores operate on the same cache line, adopting a tail padding mode so that each of the two sequence number objects exactly fills one cache line.
Based on the same inventive concept, in certain embodiments, further comprising:
and the validity determining module records the validity of each message slot through a large bitmap.
Based on the same inventive concept, in certain embodiments, the validity determination module comprises:
the judging unit is used for judging whether the read message slot is available or not by reading the corresponding bitmap when the thread reads the message slot;
and the waiting-read-state setting unit is used for putting the thread in a waiting-to-read state if the message slot is unavailable.
Based on the same inventive concept, in certain embodiments, the validity determination module further comprises:
the serial number distribution unit distributes a common reading serial number for all the reading threads and distributes a self serial number for each reading thread; each serial number corresponds to the read message slot position;
and the sleep measure taking unit is used for judging whether the serial number of the current writing thread is smaller than the minimum value of the serial numbers of all the reading threads, and if not, the writing thread takes step sleep measures.
Based on the same inventive concept, in certain embodiments, further comprising:
and the capacity expansion and contraction module triggers dynamic capacity expansion or contraction when the capacity of the queue is insufficient or when the capacity idle time and idle count exceed a set threshold.
It can be appreciated that the message queue data read-write processing device provided by the invention is high-performance, lock-free, low-latency, dynamically expandable, and monitorable, with a manageable data life cycle, so that the message queue satisfies the characteristics required by enterprise-level applications. The performance of the message queue in message transfer is greatly improved, and the thread context switching caused by locks is reduced, thereby improving transaction throughput. The monitoring feature ensures that the running state of the message queue can be observed and abnormal states can be alarmed, so that problems can be located and discovered more quickly and efficiently. Dynamic expansion gives enterprise-level applications higher availability, automatically adapts to different workload environments while remaining transparent to the application, eases operation and maintenance, and improves the stability and reliability of the application. Finally, managing the life cycle of the data and applying user-defined processing to expired data improves the controllability of the application.
In another embodiment, the message queue data read-write processing apparatus may be configured separately from the central processor 9100; for example, it may be configured as a chip connected to the central processor 9100, with the message queue data read-write processing function implemented under the control of the central processor.
As shown in fig. 7, the electronic device 9600 may further include: a communication module 9110, an input unit 9120, an audio processor 9130, a display 9160, and a power supply 9170. It is noted that the electronic device 9600 need not include all of the components shown in fig. 7; in addition, the electronic device 9600 may further include components not shown in fig. 7, and reference may be made to the related art.
As shown in fig. 7, the central processor 9100, sometimes referred to as a controller or operational control, may include a microprocessor or other processor device and/or logic device, which central processor 9100 receives inputs and controls the operation of the various components of the electronic device 9600.
The memory 9140 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or other suitable device. It may store information such as information about failures, as well as the programs for processing that information, and the central processor 9100 can execute the programs stored in the memory 9140 to realize information storage, processing, and the like.
The input unit 9120 provides input to the central processor 9100. The input unit 9120 is, for example, a key or a touch input device. The power supply 9170 is used to provide power to the electronic device 9600. The display 9160 is used for displaying display objects such as images and characters. The display may be, for example, but not limited to, an LCD display.
The memory 9140 may be a solid state memory such as a read only memory (ROM), a random access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when powered down, can be selectively erased, and can be provided with additional data, an example of which is sometimes referred to as an EPROM or the like. The memory 9140 may also be some other type of device. The memory 9140 includes a buffer memory 9141 (sometimes referred to as a buffer). The memory 9140 may include an application/function storage portion 9142, the application/function storage portion 9142 storing application programs and function programs or a flow for executing operations of the electronic device 9600 by the central processor 9100.
The memory 9140 may also include a data store 9143, the data store 9143 for storing data, such as contacts, digital data, pictures, sounds, and/or any other data used by an electronic device. The driver storage portion 9144 of the memory 9140 may include various drivers of the electronic device for communication functions and/or for performing other functions of the electronic device (e.g., messaging applications, address book applications, etc.).
The communication module 9110 is a transmitter/receiver 9110 that transmits and receives signals via an antenna 9111. A communication module (transmitter/receiver) 9110 is coupled to the central processor 9100 to provide input signals and receive output signals, as in the case of conventional mobile communication terminals.
Based on different communication technologies, a plurality of communication modules 9110, such as a cellular network module, a bluetooth module, and/or a wireless local area network module, etc., may be provided in the same electronic device. The communication module (transmitter/receiver) 9110 is also coupled to a speaker 9131 and a microphone 9132 via an audio processor 9130 to provide audio output via the speaker 9131 and to receive audio input from the microphone 9132 to implement usual telecommunications functions. The audio processor 9130 can include any suitable buffers, decoders, amplifiers and so forth. In addition, the audio processor 9130 is also coupled to the central processor 9100 so that sound can be recorded locally through the microphone 9132 and sound stored locally can be played through the speaker 9131.
An embodiment of the present invention also provides a computer readable storage medium capable of implementing all the steps of the message queue data read-write processing method whose execution subject is the server in the above embodiments; the computer readable storage medium stores a computer program which, when executed by a processor, implements all the steps of the message queue data read-write processing method in the above embodiments.
From the foregoing, it will be appreciated that the computer readable storage medium provided by embodiments of the invention yields a message queue that is high-performance, lock-free, low-latency, dynamically expandable, and monitorable, with a manageable data life cycle, satisfying the characteristics required by enterprise-level applications. The performance of the message queue in message transfer is greatly improved, and the thread context switching caused by locks is reduced, thereby improving transaction throughput. The monitoring feature ensures that the running state of the message queue can be observed and abnormal states can be alarmed, so that problems can be located and discovered more quickly and efficiently. Dynamic expansion gives enterprise-level applications higher availability, automatically adapts to different workload environments while remaining transparent to the application, eases operation and maintenance, and improves the stability and reliability of the application. Finally, managing the life cycle of the data and applying user-defined processing to expired data improves the controllability of the application.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principles and embodiments of the present invention have been described in detail with reference to specific embodiments thereof, the description of the above embodiments being only for aiding in the understanding of the method of the present invention and its core ideas; meanwhile, as those skilled in the art will have variations in specific embodiments and application scope in light of the ideas of the present invention, the present description should not be construed as limiting the present invention.

Claims (12)

1. A method for processing data read from and write to a message queue, the message queue comprising a plurality of message slots, comprising:
when a thread needs to read or write data on the message queue, combining message slots operated by all threads currently corresponding to the read or write message, and distributing a message slot sequence number to the thread based on an atomic increment method so that the thread correspondingly reads or writes data according to the message slot sequence number;
the putting the thread in a waiting read state comprises:
a common reading sequence number is allocated to all the reading threads, and a self sequence number is allocated to each reading thread; each serial number corresponds to the read message slot position;
judging whether the serial number of the current writing thread is smaller than the minimum value of the serial numbers of all the reading threads, and if not, taking step sleep measures by the writing thread.
2. The message queue data read-write processing method according to claim 1, further comprising:
and when the threads of a plurality of cores of the CPU operate the same cache line, adopting a tail filling mode to ensure that two objects are filled in one cache line.
3. The message queue data read-write processing method according to claim 1, further comprising:
the validity of each message slot is recorded by a large bitmap.
4. The message queue data read-write processing method according to claim 3, wherein the recording of the validity of each message slot by a large bitmap further comprises:
when a thread reads a message slot, judging whether the read message slot is available or not by reading a corresponding bitmap;
and if the thread is not available, putting the thread in a waiting reading state.
5. The message queue data read-write processing method according to claim 1, further comprising:
and triggering dynamic capacity expansion or contraction after the capacity of the queue is insufficient or the capacity idle time and idle quantity exceed a set threshold.
6. A message queue data read-write processing apparatus, the message queue including a plurality of message slots, comprising:
the sequence number distribution module is used for distributing a message slot sequence number to a thread based on an atomic increment method by combining message slots operated by all threads currently corresponding to the read or write message when the thread needs to read or write data on the message queue, so that the thread correspondingly reads or writes data according to the message slot sequence number;
a validity determination module comprising:
the serial number distribution unit distributes a common reading serial number for all the reading threads and distributes a self serial number for each reading thread; each serial number corresponds to the read message slot position;
and the sleep measure taking unit is used for judging whether the serial number of the current writing thread is smaller than the minimum value of the serial numbers of all the reading threads, and if not, the writing thread takes step sleep measures.
7. The message queue data read-write processing apparatus of claim 6, further comprising:
and the cache module is used for ensuring that two objects are filled in one cache line by adopting a tail filling mode when the threads of a plurality of cores of the CPU operate the same cache line.
8. The message queue data read-write processing apparatus of claim 6, further comprising:
and the validity determining module records the validity of each message slot through a large bitmap.
9. The message queue data read-write processing apparatus of claim 8, wherein the validity determination module comprises:
the judging unit is used for judging whether the read message slot is available or not by reading the corresponding bitmap when the thread reads the message slot;
and the waiting read state setting unit is used for enabling the thread to be in a waiting read state if the thread is unavailable.
10. The message queue data read-write processing apparatus of claim 6, further comprising:
and the capacity expansion and contraction module triggers dynamic capacity expansion or contraction after the capacity of the queue is insufficient or the capacity idle time and idle quantity exceed a set threshold value.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 5 when executing the computer program.
12. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method of any of claims 1 to 5.
CN202010806595.5A 2020-08-12 2020-08-12 Message queue data read-write processing method and device Active CN111966511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010806595.5A CN111966511B (en) 2020-08-12 2020-08-12 Message queue data read-write processing method and device

Publications (2)

Publication Number Publication Date
CN111966511A CN111966511A (en) 2020-11-20
CN111966511B true CN111966511B (en) 2024-02-13

Family

ID=73364832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010806595.5A Active CN111966511B (en) 2020-08-12 2020-08-12 Message queue data read-write processing method and device

Country Status (1)

Country Link
CN (1) CN111966511B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113986555B (en) * 2021-11-10 2023-04-07 深圳前海微众银行股份有限公司 Cache optimization method, device, equipment and readable storage medium
CN115866039A (en) * 2022-11-29 2023-03-28 北京达佳互联信息技术有限公司 Message processing method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978321A (en) * 2014-04-02 2015-10-14 阿里巴巴集团控股有限公司 Method and device for constructing data queue, method for inserting object into data queue and method for consuming object from data queue
CN110362348A (en) * 2018-04-09 2019-10-22 武汉斗鱼网络科技有限公司 A kind of method, apparatus and electronic equipment of queue access data
CN111416858A (en) * 2020-03-16 2020-07-14 广州市百果园信息技术有限公司 Media resource processing platform, method, device and server

Also Published As

Publication number Publication date
CN111966511A (en) 2020-11-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant