CN117762618A - Data message storage method, device, equipment and storage medium - Google Patents

Data message storage method, device, equipment and storage medium

Info

Publication number
CN117762618A
CN117762618A (application CN202311703459.3A)
Authority
CN
China
Prior art keywords
memory block
storage
thread
target number
main memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311703459.3A
Other languages
Chinese (zh)
Inventor
刘志强 (Liu Zhiqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Technologies Co Ltd
Original Assignee
New H3C Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Technologies Co Ltd filed Critical New H3C Technologies Co Ltd
Priority to CN202311703459.3A priority Critical patent/CN117762618A/en
Publication of CN117762618A publication Critical patent/CN117762618A/en
Pending legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure provides a data message storage method, apparatus, device, and storage medium. The method includes: invoking a target number of threads to obtain data messages in parallel from received information, where the target number of threads corresponds one-to-one with the target number of main memory blocks contained in a main memory block set; for any thread among the target number of threads, after a data message is obtained, detecting whether at least one memory block occupied by the thread meets a storage requirement; and if the at least one memory block occupied by the thread does not meet the storage requirement, invoking the thread to obtain an unoccupied extended memory block from an extended memory block set to store the data message. The combined storage space of the main memory block set and the extended memory block set is twice a preset byte number, the preset byte number being the total length of all data messages to be obtained. Embodiments of the disclosure can balance memory occupancy and storage efficiency in multithreaded scenarios.

Description

Data message storage method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a data message storage method, apparatus, device, and storage medium.
Background
With the development of technology, multithreaded forwarding of data messages has become mainstream. A typical scheme works as follows: a computer invokes multiple threads to obtain data messages, each thread writes the data messages it obtains into memory, and once the data messages stored in memory reach a preset amount, the computer reads them out.
In a scenario where multiple threads write to memory, write efficiency (i.e., storage efficiency) is critical, and memory occupancy is one of the important indicators of a computer's processing performance. However, current multithreading-based data storage techniques can keep only one of the two — storage efficiency or memory occupancy — at a good level.
Therefore, there is a need in the art for a technique that balances memory occupancy and storage efficiency.
Disclosure of Invention
The present disclosure provides a data message storage method, apparatus, device, and storage medium. A storage area is configured at twice the total length of the data messages to be obtained, which helps reduce memory occupancy. The storage area is partitioned into a main memory block set and an extended memory block set, and the main memory blocks in the main memory block set correspond one-to-one with a target number of threads, so that each thread can store data in parallel and storage efficiency is preserved.
An embodiment of a first aspect of the present disclosure provides a data message storage method, the method including:
invoking a target number of threads to obtain data messages in parallel from received information, where the target number is greater than or equal to 2 and the target number of threads corresponds one-to-one with the target number of main memory blocks contained in a main memory block set;
for any thread among the target number of threads, after a data message is obtained, detecting whether at least one memory block occupied by the thread meets a storage requirement, where the at least one memory block includes the main memory block corresponding to the thread;
if the at least one memory block occupied by the thread does not meet the storage requirement, invoking the thread to obtain an unoccupied extended memory block from an extended memory block set to store the data message;
where the combined storage space of the main memory block set and the extended memory block set is twice a preset byte number, the preset byte number being the total length of all data messages to be obtained.
In an embodiment of the present disclosure, the total storage space of the main memory block set and the total storage space of the extended memory block set are each the preset byte number;
the storage space of any main memory block is the product of the preset byte number and the reciprocal of the target number; and
the extended memory block set includes the target number of extended memory blocks.
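The sizing relationships above can be sketched as follows; the function name and the returned structure are illustrative, not taken from the disclosure:

```python
def block_sizes(preset_bytes: int, target_count: int) -> dict:
    """Sizes implied by the scheme: the main set and the extended set each
    total the preset byte number X, so the whole area is 2X, and each of
    the n main memory blocks holds X * (1/n) bytes."""
    return {
        "total_area": 2 * preset_bytes,            # main set + extended set
        "main_set": preset_bytes,
        "extended_set": preset_bytes,
        "main_block": preset_bytes // target_count,  # X * 1/n per main block
    }

sizes = block_sizes(preset_bytes=4096, target_count=4)
print(sizes["total_area"], sizes["main_block"])  # 8192 1024
```

For instance, with a preset byte number of 4096 and four threads, the total area is 8192 bytes and each main memory block holds 1024 bytes.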
In an embodiment of the present disclosure, detecting whether at least one memory block occupied by the thread meets the storage requirement includes:
if the storage rule for the data messages is to store the preset byte number of data messages, obtaining the remaining storage space of each memory block in the at least one memory block; and
if the remaining storage space of each memory block is smaller than the number of bytes of the data message, determining that the at least one memory block does not meet the storage requirement.
In an embodiment of the present disclosure, detecting whether at least one memory block occupied by the thread meets the storage requirement includes:
if the storage rule for the data messages is to store the preset byte number of data messages in a preset order, obtaining the remaining storage space of the memory block currently used by the thread, where the currently used memory block belongs to the at least one memory block; and
if the remaining storage space is smaller than the number of bytes of the data message, determining that the at least one memory block does not meet the storage requirement.
In an embodiment of the present disclosure, the method further includes:
if the data message is a first data message acquired by the thread, determining the main memory block corresponding to the thread according to a starting address corresponding to the thread;
and storing the data message into the main memory block.
In an embodiment of the present disclosure, invoking the thread to obtain an unoccupied extended memory block from the extended memory block set to store the data message includes:
determining at least one unoccupied extended memory block in the extended memory block set; and
selecting one extended memory block from the at least one unoccupied extended memory block to store the data message.
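A minimal sketch of claiming an unoccupied extended memory block is shown below. The lock-protected free list is an assumption: the disclosure only states that any thread may take any extended block that is not occupied, without specifying a synchronization mechanism.

```python
import threading

class ExtendedBlockPool:
    """Hypothetical pool of extended memory blocks: a thread claims an
    unoccupied block; a lock guards the free list so that two threads
    never claim the same block."""

    def __init__(self, block_ids):
        self._free = list(block_ids)
        self._lock = threading.Lock()

    def acquire(self):
        """Return an unoccupied extended block id, or None if all are taken."""
        with self._lock:
            return self._free.pop(0) if self._free else None

pool = ExtendedBlockPool(["X1-2", "X2-2", "X3-2"])
print(pool.acquire())  # X1-2
```

Once every block in the pool has been handed out, `acquire` returns `None`, which in this sketch would signal that the 2X storage area is exhausted.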
In an embodiment of the present disclosure, before invoking the target number of threads to obtain the data messages in parallel from the received information, the method further includes:
obtaining a storage area whose storage space is twice the preset byte number;
partitioning the storage area to obtain the main memory block set and the extended memory block set;
establishing a correspondence between the starting address of each main memory block in the main memory block set and the thread corresponding to that main memory block; and
transmitting the correspondence and the starting address of each extended memory block in the extended memory block set to each of the target number of threads.
In an embodiment of the present disclosure, partitioning the storage area to obtain the main memory block set and the extended memory block set includes:
dividing the storage area into a first storage area and a second storage area, where the storage space of each of the first and second storage areas is the preset byte number;
evenly partitioning the first storage area into the target number of memory blocks, where the target number of memory blocks contained in the first storage area constitute the main memory block set; and
partitioning the second storage area into the target number of memory blocks, where the target number of memory blocks contained in the second storage area constitute the extended memory block set.
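The partitioning steps above can be sketched as follows. The contiguous layout (main set first, extended set second) and the `(start_offset, size)` representation are assumptions for illustration:

```python
def partition_area(preset_bytes: int, n: int):
    """Split a 2X-byte area: the first X bytes become n equal main memory
    blocks, the second X bytes become n extended memory blocks. Each block
    is returned as a (start_offset, size) pair."""
    size = preset_bytes // n
    main = [(i * size, size) for i in range(n)]
    extended = [(preset_bytes + i * size, size) for i in range(n)]
    return main, extended

main_set, ext_set = partition_area(preset_bytes=4096, n=4)
print(main_set[0], ext_set[0])  # (0, 1024) (4096, 1024)
```

The first main block starts at offset 0, and the first extended block starts exactly at the preset byte number, where the second storage area begins.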
An embodiment of a second aspect of the present disclosure provides a data message storage apparatus, the apparatus including:
an obtaining module configured to invoke a target number of threads to obtain data messages in parallel from received information, where the target number is greater than or equal to 2 and the target number of threads corresponds one-to-one with the target number of main memory blocks contained in a main memory block set; and
a detection module configured to, for any thread among the target number of threads, after a data message is obtained, detect whether at least one memory block occupied by the thread meets a storage requirement, where the at least one memory block includes the main memory block corresponding to the thread;
the obtaining module being further configured to, if the at least one memory block occupied by the thread does not meet the storage requirement, invoke the thread to obtain an unoccupied extended memory block from an extended memory block set to store the data message;
where the combined storage space of the main memory block set and the extended memory block set is twice a preset byte number, the preset byte number being the total length of all data messages to be obtained.
Embodiments of the third aspect of the present disclosure provide an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor running the computer program to implement the method of the first aspect.
Embodiments of the fourth aspect of the present disclosure provide a computer-readable storage medium having stored thereon a computer program for execution by a processor to perform the method of the first aspect described above.
The technical solutions provided in the embodiments of the present disclosure have at least the following technical effects or advantages:
The total length of all data messages to be obtained is a preset byte number, and in the embodiments of the present disclosure a storage area of twice the preset byte number may be deployed before the data messages are obtained. In other words, the memory occupied by the target number of threads (where the target number is greater than or equal to 2) is capped at twice the total length of the data messages, which helps reduce memory occupancy. Further, the storage area is partitioned into a main memory block set and an extended memory block set, and the main memory blocks correspond one-to-one with the target number of threads, so that the threads can obtain data messages in parallel from the received information; once any main memory block is full, the corresponding thread can continue storing into an extended memory block. Because the target number of threads can store data messages into their memory blocks in parallel, storage efficiency is preserved. The technical solution of the embodiments of the present disclosure therefore balances memory occupancy and storage efficiency in multithreaded scenarios.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the disclosure. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic diagram of a network architecture according to an embodiment of the present disclosure;
fig. 2 is a flow chart illustrating a data message storage method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a memory block according to an embodiment of the disclosure;
FIG. 4A is a schematic diagram of a scenario of data message storage according to an embodiment of the present disclosure;
FIG. 4B is a schematic diagram of another scenario of data message storage according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a data message storage device according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of an electronic device according to an embodiment of the disclosure;
fig. 7 shows a schematic diagram of a storage medium according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is noted that unless otherwise indicated, technical or scientific terms used in this disclosure should be given the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure pertains.
Technical scenarios related to the embodiments of the present disclosure are described below.
Embodiments of the present disclosure relate to Internet technology, and in particular to multithreaded program processing on a computer. A thread is the smallest unit of scheduling in an operating system; multiple threads can execute different tasks in parallel within one process — downloading files, obtaining data messages, or handling requests, for example — which improves computer performance and task execution efficiency.
Taking multithreaded capture of data messages as an example: to improve write efficiency, the threads first write captured data messages into memory, and once enough data messages have been captured, all of them are read out of memory. In this multithreaded write scenario, both memory occupancy and storage efficiency must be taken into account.
Against this background, one conventional data message storage method allocates a single memory area to multiple threads, which all write data messages into it. However, to avoid contention, deadlock, and similar problems among the threads, only the thread that has acquired the write permission can perform the write operation, while all other threads must wait. This implementation reduces memory usage but sacrifices storage efficiency. Another conventional method allocates a dedicated memory area to each thread, so that each thread writes its data messages into its own area. Because any single thread may end up obtaining all the data messages, the storage space of each thread's memory area must equal the total number of bytes of the data messages to be obtained. This implementation improves storage efficiency but raises memory occupancy and wastes memory.
In view of this, an embodiment of the present disclosure provides a data message storage method in which a storage area of twice the preset byte number is deployed in advance and partitioned into a main memory block set and an extended memory block set, the preset byte number being the total length of all data messages to be obtained. The memory occupied is thus capped at twice the total length of the data messages, which helps reduce memory occupancy. In addition, the main memory blocks correspond one-to-one with the target number (greater than or equal to 2) of threads, so each thread can store data messages into its own main memory block in parallel, and once any main memory block is full the thread can continue storing into an extended memory block, leaving storage efficiency unaffected.
The embodiments of the present disclosure may be applied to a network architecture. As shown in fig. 1, a network structure to which an embodiment of the present disclosure applies may include a Control Plane (Control Plane) 10 and a Data Plane (Data Plane) 20, and the Control Plane 10 may perform signaling interaction with the Data Plane 20 through a predefined interface.
The control plane 10 may be used for controlling and managing the network, including dividing memory blocks, mapping memory blocks to threads, setting storage rules, routing, and path computation. In the embodiments disclosed below, for example, the control plane obtains the storage area, partitions it into the main memory block set and the extended memory block set, establishes the correspondence between main memory blocks and threads, formulates the partitioning policy, and formulates the storage policy. The control plane 10 may consist of one or more controllers, which exchange information and issue configuration instructions over a protocol to control the data message acquisition and storage actions of the embodiments of the present disclosure.
The data plane 20 may be used for processing and forwarding data messages transmitted in the network, including receiving data messages, looking up routes, capturing data messages according to the policy set by the control plane 10, determining the main memory block corresponding to each thread from the correspondence obtained from the control plane 10, encapsulating, decapsulating, and forwarding data messages, storing data messages according to the storage policy set by the control plane 10, and obtaining extended memory blocks. In the embodiments disclosed below, for example, the data plane obtains data messages from the received information, performs each thread's detection logic on its occupied memory blocks, and obtains extended memory blocks for each thread. The data plane 20 may be implemented by a network device (e.g., a switch, router, or firewall).
It should be appreciated that the control plane 10 and the data plane 20 are functional planes of a computer network architecture, divided by logical function. Accordingly, the embodiments executed by the control plane 10 and those executed by the data plane 20 are both computer-implemented embodiments.
The data message storage method proposed according to the embodiment of the present disclosure is described below with reference to examples.
Fig. 2 illustrates a data message storage method according to an embodiment of the present disclosure, where the data message storage method illustrated in fig. 2 may be applied to a computer, and the computer may deploy the network architecture illustrated in fig. 1. The data message storage method according to the embodiment of the disclosure may include the following steps:
In step S11, a target number of threads are invoked to obtain data messages from the received information in parallel.
Wherein the target number is greater than or equal to 2. It can be seen that the technical scenario of the embodiments of the present disclosure is a multi-threaded scenario.
In some embodiments, the information may be communication information to be forwarded, including but not limited to control information (e.g., control signaling, routing information), access information (e.g., request information to access a server), data information (e.g., text, sound, image, video), etc.
In some embodiments, the data packet may be a data packet in the communication information that satisfies a predefined condition, for example, a data packet including a predefined field; such as a data message containing fields of a predefined format.
It should be noted that, before step S11, the computer may preset a main memory block set containing the target number of main memory blocks, and may establish a one-to-one correspondence between the target number of threads and the target number of main memory blocks.
By way of example, the correspondence between each main memory block and each thread may be established through the starting address of each main memory block. The starting address of a memory block is the address of its first byte. A memory block may be a contiguous sequence of bytes whose size is defined at partitioning time; each byte has a unique address, and the starting address marks where the block begins. Memory addresses are typically written in hexadecimal, for example 0x00000000 or 0x00010000, where 0x is the hexadecimal prefix and the following digits are the address value.
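Such a correspondence can be sketched as a simple table from thread name to main-block starting address. The 0x00010000 spacing and the thread names are examples only, not taken from the disclosure:

```python
# Illustrative correspondence: thread name -> main-block starting address.
# Assumes four threads and main blocks of 0x00010000 (65536) bytes each.
start_addresses = {f"T{i + 1}": i * 0x00010000 for i in range(4)}
print(hex(start_addresses["T1"]), hex(start_addresses["T2"]))  # 0x0 0x10000
```

Given its starting address, a thread can locate its own main memory block without consulting any shared state, which is what allows the threads to write in parallel.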
In this way, the computer can invoke at least two threads to receive communication information in parallel and obtain data messages from it, so the extraction of data messages is accelerated by the concurrency of multiple threads, and the computer's processing efficiency improves.
In some embodiments, the total storage space of the main memory block set may be a preset byte number, and the storage space of any main memory block may be the product of the preset byte number and the reciprocal of the target number. The preset byte number refers to the total length of the data messages to be obtained.
For example, as shown in fig. 3, suppose the total length of the data messages to be obtained is X bytes and the computer invokes n threads: thread T1, thread T2, thread T3, ..., thread Tn, where X and n are positive integers and X may be expressed in kilobytes (K), megabytes (M), gigabytes (G), etc. The computer may evenly divide a storage area of X bytes into n main memory blocks, obtaining main memory block X1-1, main memory block X2-1, main memory block X3-1, ..., main memory block Xn-1 in fig. 3; thread T1 corresponds to main memory block X1-1, thread T2 to main memory block X2-1, thread T3 to main memory block X3-1, and so on. The storage space of each main memory block is then X×(1/n), i.e., X/n bytes.
It should be understood that steps such as obtaining the storage area, partitioning it to obtain the main memory block set, and establishing the correspondence between main memory blocks and threads may be performed by the control plane of the computer.
With this technical solution, memory blocks are allocated to the threads according to the total length of all data messages to be obtained, so that each thread can perform storage operations in parallel without the memory occupied by the threads growing excessively.
In step S12, for any thread among the target number of threads, after a data message is obtained, it is detected whether at least one memory block occupied by the thread meets a storage requirement.
As described above, the target number of threads operate in parallel and each thread's processing is similar; for ease of description, the technical solution of the embodiments of the disclosure is illustrated below using the processing of one thread as an example.
It should be noted that, before step S11, when setting up the main memory block set, the computer may also set up an extended memory block set so as to provide extended storage space to the corresponding thread once any main memory block is full. The combined storage space of the main memory block set and the extended memory block set is twice the preset byte number, e.g., 2X bytes. Since the total storage space of the main memory block set is the preset byte number (e.g., X bytes), the total storage space of the extended memory block set is also the preset byte number (e.g., X bytes).
With this technical solution, the memory occupied by multithreaded storage is capped at twice the total length of the data messages, keeping memory occupancy relatively small and balancing memory occupancy against storage efficiency.
In some embodiments, referring again to fig. 3, the extended memory block set may include the target number of extended memory blocks — e.g., extended memory blocks X1-2, X2-2, X3-2, ..., Xn-2 in fig. 3 — whose storage spaces may be equal or unequal. If they are equal, the storage space of any extended memory block may be the product of the preset byte number and the reciprocal of the target number, i.e., X×(1/n) bytes. The extended memory blocks have no fixed correspondence with the target number of threads: any thread may occupy any extended memory block, provided it is not already occupied.
For any thread among the target number of threads, when the thread obtains its first data message, it may determine its corresponding main memory block from its starting address and store the first data message there. That is, any thread first stores data messages into its own main memory block, and only obtains an extended memory block from the extended memory block set when the main memory block can no longer meet the storage requirement. This is conducive to the target number of threads operating in parallel.
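The first-message path can be sketched as follows. The per-thread bookkeeping (a dict from occupied block id to remaining bytes) and the fixed block size are illustrative assumptions; later messages would go through the storage-requirement check described below rather than this shortcut:

```python
MAIN_BLOCK_SIZE = 1024  # assumed X/n bytes per main block

def store_packet(thread_blocks, main_block, packet_len):
    """Sketch of the first-message path: if the thread occupies no blocks
    yet, the data message always goes into the thread's own main block.
    `thread_blocks` maps an occupied block id to its remaining space."""
    if not thread_blocks:  # first data message obtained by this thread
        thread_blocks[main_block] = MAIN_BLOCK_SIZE - packet_len
        return main_block
    return None  # subsequent messages: run the storage-requirement check

occupied = {}
print(store_packet(occupied, "X1-1", 300))  # X1-1
print(occupied["X1-1"])                     # 724
```

Because the first write never touches shared state, each thread's initial store proceeds without any coordination with the other threads.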
Further, the at least one memory block occupied by the thread includes the main memory block corresponding to the thread. In some embodiments, if the at least one memory block is a single memory block, it is the thread's main memory block. In other embodiments, if there are at least two memory blocks, the at least one memory block includes the thread's main memory block and at least one extended memory block occupied by the thread (as shown in figs. 4A and 4B).
It should be noted that a storage rule is deployed in advance for the data messages, where a storage rule is the policy and convention by which data messages are stored and accessed in the computer system. Storage rules may be determined by the usage scenario of the data messages; in practice, the control plane 10 in fig. 1 may formulate the storage rule for the usage scenario of the obtained data messages. For example, if the scenario is to obtain a preset byte number of data messages of a certain type with no constraint on access order, the storage rule may simply be to store the preset byte number of data messages. If, instead, data messages of a certain type are accessed in byte order, the obtained data messages should be stored in that order while the preset byte number of data messages is being obtained, and the storage rule may be to store the preset byte number of data messages in a preset order. The storage requirement described in the embodiments of the present disclosure differs for different storage rules, as do the conditions under which the requirement is met.
The storage requirements described in the embodiments of the present disclosure are described below in conjunction with different storage rules.
If the storage rule is to store the preset byte number of data messages with no constraint on byte order, the storage requirement may be that any memory block among the at least one memory block has enough remaining storage space. After obtaining a data message, the thread may obtain the remaining storage space of each memory block in the at least one memory block and compare each against the length of the data message. If the remaining storage space of every memory block is smaller than the number of bytes of the data message — that is, none of the memory blocks occupied by the thread can hold it — the at least one memory block is determined not to meet the storage requirement; otherwise, it is determined to meet the storage requirement.
For example, suppose the length of the data message is 1000 bytes and the memory blocks occupied by the thread are main memory block X1-1 and extended memory block X1-2, with 600 bytes remaining in main memory block X1-1 and 800 bytes remaining in extended memory block X1-2. The memory blocks occupied by the thread then do not meet the storage requirement for this data message. If, however, the remaining storage space of main memory block X1-1 were 1200 bytes, the memory blocks occupied by the thread would meet the storage requirement.
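The unordered-rule check reduces to asking whether any occupied block has enough space left. A minimal sketch, with the function name as an assumption:

```python
def meets_requirement_unordered(remaining_spaces, message_len):
    """Storage requirement when byte order is unconstrained: at least one
    occupied memory block must have remaining space >= the message length."""
    return any(space >= message_len for space in remaining_spaces)

# The example from the text: 600 and 800 bytes remaining cannot hold a
# 1000-byte data message, while 1200 bytes remaining can.
print(meets_requirement_unordered([600, 800], 1000))   # False
print(meets_requirement_unordered([1200, 800], 1000))  # True
```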
If the storage rule for the data messages is to store the preset number of bytes of data messages in a preset order, that is, the byte order of the data messages is constrained in this scenario, the data messages may be stored in the order in which they are acquired. In this case, the thread may obtain the remaining storage space of the memory block it is currently using and detect whether that remaining storage space is greater than or equal to the number of bytes of the data message. If the remaining storage space is smaller than the number of bytes of the data message, it may be determined that the at least one memory block does not meet the storage requirement; otherwise, it may be determined that the at least one memory block meets the storage requirement.
The currently used memory block belongs to the at least one memory block and is the memory block most recently occupied by the thread. For example, if the thread occupies only its main memory block X2-1, the main memory block X2-1 is the memory block currently used by the thread. For another example, if the thread occupies the main memory block X2-1, the extended memory block X2-2, and the extended memory block X3-2, and the extended memory block X3-2 is the most recently occupied, then the extended memory block X3-2 is the memory block currently used by the thread.
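The ordered check differs from the unordered one in that it inspects only the currently used (most recently occupied) block rather than all occupied blocks. A minimal illustrative sketch, with hypothetical names:

```python
# Illustrative sketch of the "ordered" storage-requirement check: when data
# messages must be stored in acquisition order, only the most recently
# occupied ("currently used") block is examined; earlier blocks are never
# revisited, even if they still have space.

class MemoryBlock:
    def __init__(self, capacity, used=0):
        self.capacity = capacity
        self.used = used

    def remaining(self):
        return self.capacity - self.used

def meets_ordered_requirement(occupied_blocks, msg_len):
    """occupied_blocks is ordered oldest-first; the last entry is the
    currently used block. The requirement is met only if that block has
    enough remaining space for the message."""
    current = occupied_blocks[-1]       # latest block occupied by the thread
    return current.remaining() >= msg_len
```

Note that an older block with ample free space does not satisfy the requirement under this rule, which is what forces the thread to claim a fresh extended block and preserves the byte order.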
It should be appreciated that the above implementation takes the processing procedure of one thread as an example but applies to all of the target number of threads, which may execute it in parallel. It should be noted that, in one scenario where the target number of threads operate in parallel, some threads may acquire more data messages and occupy more memory blocks, while other threads may acquire no data messages and occupy fewer memory blocks, or none at all. In another scenario, each thread may acquire a portion of the data messages and occupy memory blocks as needed. This is not limited herein.
With this implementation, after the main memory block of any thread is fully written, the corresponding thread can continue storing into an extended memory block, and the target number of threads can execute this process in parallel, so that storage efficiency is guaranteed.
In step S13, if at least one memory block occupied by the thread does not meet the storage requirement, the thread is invoked to acquire an unoccupied extended memory block from the extended memory block set to store the data packet.
For any thread, according to the description of the foregoing embodiment, if at least one memory block occupied by the thread does not meet the storage requirement, it is indicated that the data packet cannot be stored in at least one memory block occupied by the thread, and the thread may acquire an unoccupied extended memory block from an extended memory block set to store the data packet.
It should be noted that, the thread may determine each extended memory block through the starting address of each extended memory block. The attribute of the start address of the extended memory block is the same as the attribute of the start address of the main memory block, and will not be described here again.
For example, the thread may determine at least one unoccupied extended memory block in the set of extended memory blocks, and further select one extended memory block from the at least one unoccupied extended memory block to store the target data packet.
After any extended memory block is occupied, the thread occupying the corresponding extended memory block can add an occupied identifier for the extended memory block to mark the use state of the extended memory block, thereby being beneficial to memory management and resource allocation of the extended memory block set.
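The claim-and-mark step described above can be sketched as follows. This is a hypothetical Python sketch; the patent does not specify a synchronization mechanism, so a lock is assumed here to keep the scan-and-claim step atomic across threads, and all names are illustrative.

```python
# Illustrative sketch of claiming an unoccupied extended memory block and
# adding an "occupied" identifier. The lock is an assumption: it prevents two
# threads from claiming the same extended block concurrently.
import threading

class ExtendedBlockSet:
    def __init__(self, num_blocks, block_size):
        self.blocks = [{"size": block_size, "used": 0, "occupied_by": None}
                       for _ in range(num_blocks)]
        self._lock = threading.Lock()

    def claim(self, thread_id):
        """Return an unoccupied extended block marked for thread_id,
        or None if every extended block is already occupied."""
        with self._lock:
            for block in self.blocks:
                if block["occupied_by"] is None:
                    block["occupied_by"] = thread_id   # occupied identifier
                    return block
        return None
```

Marking each claimed block with the claiming thread's identifier is what makes the later memory management and resource accounting of the extended memory block set straightforward.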
It should be appreciated that in other embodiments, if at least one memory block occupied by the thread meets the storage requirement, it is indicated that the at least one memory block includes a memory block capable of storing the data packet, and the thread may store the target data packet to a corresponding memory block.
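Putting the pieces together, one thread's store step might look like the following illustrative sketch under the unordered storage rule; the data structures and names are assumptions introduced here, not the patent's implementation:

```python
# Illustrative end-to-end sketch of one thread's store step: if an occupied
# block still fits the message, store it there; otherwise claim an unoccupied
# extended block from the shared set (step S13).

def new_block(capacity):
    return {"cap": capacity, "used": 0, "occupied": False}

def store_message(occupied, extended_pool, msg_len):
    """Store a msg_len-byte message for one thread.
       occupied: blocks already held by this thread (oldest first).
       extended_pool: the shared extended memory block set.
       Returns the block stored into, or None if no block fits."""
    # Check whether any occupied block meets the storage requirement.
    for block in occupied:
        if block["cap"] - block["used"] >= msg_len:
            block["used"] += msg_len
            return block
    # Otherwise, claim an unoccupied extended block and store there.
    for block in extended_pool:
        if not block["occupied"] and block["cap"] >= msg_len:
            block["occupied"] = True
            block["used"] = msg_len
            occupied.append(block)      # the thread now holds this block too
            return block
    return None
```

In a real multithreaded implementation the extended-pool scan would need synchronization, as in the lock-based sketch above claim marking; it is omitted here for brevity.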
It can be seen that, with the technical solution of the embodiments of the present disclosure, the total length of all data messages to be acquired is a preset number of bytes, and before acquiring the data messages, a storage area twice that preset number of bytes may be deployed in advance. That is, the embodiments of the present disclosure may cap the memory occupied by the target number of threads (where the target number is greater than or equal to 2) at twice the total length of the data messages, which helps reduce memory occupancy. Further, in the embodiments of the present disclosure, the storage area is partitioned into a main memory block set and an extended memory block set, and the main memory blocks in the main memory block set may correspond one-to-one to the target number of threads, so that the threads may acquire data messages from the received information in parallel; after any main memory block is fully written, the corresponding thread may continue storing into an extended memory block. In other words, the processes by which the target number of threads store data messages into memory blocks may be executed in parallel, which helps guarantee storage efficiency. Therefore, the technical solution of the embodiments of the present disclosure balances memory occupancy and storage efficiency in a multithreaded scenario.
The implementations illustrated in fig. 2, relating to acquiring and storing data messages, may be performed by the data plane 20 of fig. 1. Before the data plane 20 performs the above embodiments, the control plane 10 may partition the main memory block set and the extended memory block set, configure the storage rule, and so on.
Before the data plane invokes the target number of threads to acquire the target data messages from the received information in parallel, the control plane of the computer may obtain a storage area whose storage space is twice the preset byte number (for example, a storage area with a storage space of 2X bytes), and partition the storage area to obtain the main memory block set and the extended memory block set. Further, the control plane may establish a correspondence between the start address of each main memory block in the main memory block set and its corresponding thread, and transmit the correspondence, together with the start address of each extended memory block in the extended memory block set, to the target number of threads.
In some embodiments, the storage area with a storage space of 2X bytes may be one or more storage areas, which is not limited by the present disclosure. Further, in an alternative example, the control plane may partition the storage area into a first storage area and a second storage area, where the storage space of each is the preset byte number (for example, X bytes). The first storage area may then be evenly partitioned into the target number of memory blocks, and the target number of memory blocks contained in the first storage area constitute the main memory block set; the second storage area may be partitioned into the target number of memory blocks, and the target number of memory blocks contained in the second storage area constitute the extended memory block set. For example, the control plane may evenly partition the second storage area to obtain the extended memory block set. In this way, the storage space of each main memory block and each extended memory block is the product of the preset byte number and the reciprocal of the target number (for example, X * 1/n bytes), which facilitates management and resource allocation for each memory block.
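The partitioning arithmetic can be illustrated as follows, using byte offsets in place of real start addresses; the function and names are hypothetical, not from the patent:

```python
# Illustrative sketch of the control-plane partitioning: a 2*X-byte area is
# split into a first and second X-byte region, each evenly divided into n
# blocks of X/n bytes, and each thread is mapped to its main block's start
# address. Byte offsets stand in for real memory addresses.

def partition(total_bytes, n_threads):
    x = total_bytes // 2                  # preset byte number X
    block = x // n_threads                # each block is X * 1/n bytes
    main_starts = [i * block for i in range(n_threads)]        # first area
    ext_starts = [x + i * block for i in range(n_threads)]     # second area
    # correspondence: thread name -> start address of its main memory block
    thread_to_main = {f"T{i+1}": main_starts[i] for i in range(n_threads)}
    return thread_to_main, ext_starts

mapping, ext = partition(total_bytes=8000, n_threads=4)
print(mapping["T1"], mapping["T2"], ext[0])   # 0 1000 4000
```

With X = 4000 bytes and n = 4 threads, each of the eight blocks is 1000 bytes: the main blocks start at offsets 0 to 3000 and the extended blocks at offsets 4000 to 7000, matching the X * 1/n sizing described above.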
It should be noted that, with the data packet storage method according to the embodiment of the present disclosure, the storage effects that may be obtained may include any of the following cases:
Case one: each of the target number of threads exactly fills its corresponding main memory block, and no extended memory blocks need to be occupied. For example, in connection with the scenario illustrated in FIG. 3, thread T1 may fill main memory block X1-1, thread T2 may fill main memory block X2-1, thread T3 may fill main memory block X3-1, and so on, with thread Tn filling main memory block Xn-1.
It should be understood that case one is the ideal effect of the present technical solution. In an actual implementation, the lengths of the data messages are relatively random, and the data messages acquired by each thread are unlikely to exactly fit the storage space of its main memory block; therefore, in general, each thread may occupy part of an extended memory block, as shown in case two in fig. 4A.
Case two: as shown in FIG. 4A, the n threads occupy most of the space of their corresponding main memory blocks and part of the corresponding extended memory blocks: thread T1 occupies most of the main memory block X1-1 and part of the extended memory block X1-2, thread T2 occupies most of the main memory block X2-1 and part of the extended memory block X2-2, and so on, with thread Tn occupying most of the main memory block Xn-1 and part of the extended memory block Xn-2.
Case one and case two are scenarios in which all n threads acquire data messages. In other embodiments, some of the n threads may acquire more, or even all, of the data messages while other threads acquire few or none; the threads acquiring more data messages may then occupy their corresponding main memory block plus multiple extended memory blocks, while the threads acquiring fewer data messages may not even fill their corresponding main memory block.
For example, as shown in fig. 4B, in case three, thread T1 of the n threads acquires all the data messages, and threads T2 through Tn acquire none. Thread T1 may then, following the above example, successively occupy extended memory blocks to store the data messages, occupying, for example, the main memory block X1-1 and the extended memory blocks X1-2 through Xn-2, while threads T2 through Tn occupy no memory blocks.
It should be understood that the above cases one to three and fig. 4A and 4B are only schematic illustrations, and do not limit the technical solutions of the embodiments of the present disclosure. In practical implementation, the storage effect corresponding to each thread and the occupation condition of each memory block may also include other situations, and the embodiments of the present disclosure are not expanded one by one here.
As can be seen, the total length of the data messages to be acquired is a preset number of bytes, and in the embodiments of the present disclosure, before the data messages are acquired, a storage area twice the preset number of bytes may be deployed in advance. That is, the embodiments of the present disclosure may cap the memory occupied by the target number of threads (where the target number is greater than or equal to 2) at twice the total length of the data messages, which helps reduce memory occupancy. Further, in the embodiments of the present disclosure, the storage area is partitioned into a main memory block set and an extended memory block set, and the main memory blocks in the main memory block set may correspond one-to-one to the target number of threads, so that the threads may acquire data messages from the received information in parallel; after any main memory block is fully written, the corresponding thread may continue storing into an extended memory block, that is, the processes by which the target number of threads store data messages into memory blocks may be executed in parallel, which helps guarantee storage efficiency. Therefore, the technical solution of the embodiments of the present disclosure balances memory occupancy and storage efficiency in a multithreaded scenario.
Corresponding to the implementation manner of the data message storage method, the embodiment of the disclosure further provides a data message storage device, which can be applied to the network architecture described in fig. 1, and is used for executing the data message storage method described in any one of the embodiments illustrated in fig. 2 to 4B. As shown in fig. 5, the data message storage device includes:
An obtaining module 501, configured to invoke a target number of threads to obtain, in parallel, a data packet from the received information, where the target number is greater than or equal to 2, and the target number of threads corresponds to the target number of main memory blocks contained in the main memory block set one-to-one; the detecting module 502 is configured to detect, for any thread of the target number of threads, after obtaining a data packet, whether at least one memory block occupied by the thread meets a storage requirement, where the at least one memory block includes a main memory block corresponding to the thread; the obtaining module 501 is further configured to invoke the thread to obtain an unoccupied extended memory block from the extended memory block set to store the data packet if at least one memory block occupied by the thread does not meet the storage requirement; and the sum of the storage spaces of the main memory block set and the extended memory block set is twice the preset byte number, and the preset byte number is the total length of all data messages to be acquired.
Optionally, the sum of storage spaces corresponding to the main memory block set and the sum of storage spaces corresponding to the extended memory block set are both the preset byte number; the storage space of any main memory block is the product of the preset byte number and the inverse of the target number; the set of extended memory blocks includes the target number of extended memory blocks.
Optionally, the detection module 502 is further configured to obtain the remaining storage space of each of the at least one memory block if the storage rule for the data messages is to store the preset number of bytes of data messages; and determine that the at least one memory block does not meet the storage requirement if the remaining storage space of each memory block is smaller than the number of bytes of the data message.
Optionally, the detection module 502 is further configured to obtain the remaining storage space of the memory block currently used by the thread if the storage rule for the data messages is to store the preset number of bytes of data messages in a preset order, where the currently used memory block belongs to the at least one memory block; and determine that the at least one memory block does not meet the storage requirement if the remaining storage space is smaller than the number of bytes of the data message.
Optionally, the data message storage device further includes: the storage module is used for determining the main memory block corresponding to the thread according to the starting address corresponding to the thread if the data message is the first data message acquired by the thread; and storing the data message into the main memory block.
Optionally, the obtaining module 501 is further configured to determine at least one unoccupied extended memory block in the set of extended memory blocks; and selecting one extended memory block from the at least one unoccupied extended memory block to store the target data message.
Optionally, the data message storage device further includes a partitioning module, an establishing module, and a transmission module. The obtaining module 501 is further configured to obtain a storage area whose storage space is twice the preset byte number;
the partition module is used for partitioning the storage area to obtain the main memory block set and the extended memory block set;
the establishing module is used for establishing the corresponding relation between the starting address of any main memory block in the main memory block set and the corresponding thread of the main memory block;
and the transmission module is used for respectively transmitting the corresponding relation and the starting address of each extended memory block in the extended memory block set to the threads with the target number.
The partitioning module is further configured to partition the storage area into a first storage area and a second storage area, where the storage space of each is the preset byte number; evenly partition the first storage area into the target number of memory blocks, where the target number of memory blocks contained in the first storage area constitute the main memory block set; and partition the second storage area into the target number of memory blocks, where the target number of memory blocks contained in the second storage area constitute the extended memory block set.
The data message storage device provided by the above embodiments of the present disclosure arises from the same inventive concept as the data message storage method provided by the embodiments of the present disclosure, and therefore has the same beneficial effects as that method.
The embodiment of the disclosure also provides an electronic device, which can be used as a computer for deploying the network architecture shown in fig. 1 to execute the data message storage method. Referring to fig. 6, a schematic diagram of an electronic device according to some embodiments of the present disclosure is shown. As shown in fig. 6, the electronic device 6 includes: a processor 600, a memory 601, a bus 602 and a communication interface 603, the processor 600, the communication interface 603 and the memory 601 being connected by the bus 602; the memory 601 stores a computer program that can be executed on the processor 600, where the processor 600 executes the data message storage method provided in any of the foregoing embodiments illustrated in fig. 2 and 3 of the present disclosure when executing the computer program.
The memory 601 may include a high-speed random access memory (Random Access Memory, RAM), and may further include a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory. The communication connection between this system network element and at least one other network element is implemented via at least one communication interface 603 (which may be wired or wireless); the Internet, a wide area network, a local area network, a metropolitan area network, or the like may be used.
Bus 602 may be an ISA bus, a PCI bus, an EISA bus, or the like. The buses may be classified as address buses, data buses, control buses, etc. The memory 601 is configured to store a program, the processor 600 executes the program after receiving an execution instruction, and the data packet storage method disclosed in any of the foregoing embodiments illustrated in fig. 2 and 3 may be applied to the processor 600 or implemented by the processor 600.
The processor 600 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the methods described above may be completed by integrated logic circuitry in hardware or by instructions in the form of software in the processor 600. The processor 600 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present disclosure. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present disclosure may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory 601, and the processor 600 reads the information in the memory 601 and completes the steps of the above method in combination with its hardware.
The electronic device provided by the embodiments of the present disclosure arises from the same inventive concept as the data message storage method provided by the embodiments of the present disclosure, and has the same beneficial effects as that method.
The present disclosure further provides a computer readable storage medium corresponding to the data message storage method provided in the foregoing embodiments, referring to fig. 7, the computer readable storage medium is shown as an optical disc 30, on which a computer program (i.e. a program product) is stored, where the computer program, when executed by a processor, performs the data message storage method provided in any of the foregoing embodiments.
It should be noted that examples of the computer readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical or magnetic storage medium, which will not be described in detail herein.
The computer readable storage medium provided by the above embodiments of the present disclosure arises from the same inventive concept as the data message storage method provided by the embodiments of the present disclosure, and therefore has the same beneficial effects as the method executed by the application program stored thereon.
It should be noted that:
in the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details. In some instances, well-known structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this disclosure.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the disclosure and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
While the present disclosure has been described with reference to preferred embodiments, various changes and substitutions may be made by those skilled in the art without departing from its scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A method for storing data messages, the method comprising:
invoking a target number of threads to obtain data messages from the received information in parallel, wherein the target number is greater than or equal to 2, and the target number of threads corresponds to the target number of main memory blocks contained in the main memory block set one by one;
For any thread in the target number of threads, after a data message is acquired, detecting whether at least one memory block occupied by the thread meets a storage requirement, wherein the at least one memory block comprises a main memory block corresponding to the thread;
if at least one memory block occupied by the thread does not meet the storage requirement, calling the thread to acquire an unoccupied extended memory block from an extended memory block set to store the data message;
and the sum of the storage spaces of the main memory block set and the extended memory block set is twice the preset byte number, and the preset byte number is the total length of all data messages to be acquired.
2. The method of claim 1, wherein:
the sum of the storage spaces corresponding to the main memory block sets and the sum of the storage spaces corresponding to the extended memory block sets are the preset byte number;
the storage space of any main memory block is the product of the preset byte number and the inverse of the target number;
the set of extended memory blocks includes the target number of extended memory blocks.
3. The method according to claim 1 or 2, wherein said detecting whether at least one memory block occupied by the thread meets a storage requirement comprises:
If the storage rule for the data message is that the data message with the preset byte number is stored, acquiring the residual storage space of each memory block in the at least one memory block;
and if the remaining storage space of each memory block is smaller than the number of bytes of the data message, determining that the at least one memory block does not meet the storage requirement.
4. The method according to claim 1 or 2, wherein said detecting whether at least one memory block occupied by the thread meets a storage requirement comprises:
if the storage rule for the data message is that the data message with the preset byte number is stored according to the preset sequence, acquiring the residual storage space of the memory block currently used by the thread, wherein the currently used memory block belongs to the at least one memory block;
and if the remaining storage space is smaller than the number of bytes of the data message, determining that the at least one memory block does not meet the storage requirement.
5. The method as recited in claim 1, further comprising:
if the data message is a first data message acquired by the thread, determining the main memory block corresponding to the thread according to a starting address corresponding to the thread;
And storing the data message into the main memory block.
6. The method of claim 1, wherein the invoking the thread to obtain an unoccupied extended memory block from an extended memory block set to store the data message comprises:
determining at least one unoccupied extended memory block in the extended memory block set;
and selecting one extended memory block from the at least one unoccupied extended memory block to store the target data message.
7. The method of claim 1, wherein before invoking the target number of threads to obtain the target data message from the received information in parallel, further comprising:
acquiring a storage area whose storage space is twice the preset byte number;
partitioning the storage area to obtain the main memory block set and the extended memory block set;
establishing a corresponding relation between a starting address of any main memory block in the main memory block set and a thread corresponding to the main memory block;
and transmitting the corresponding relation and the starting address of each extended memory block in the extended memory block set to the target number of threads respectively.
8. The method of claim 7, wherein partitioning the storage area to obtain the set of main memory blocks and the set of extended memory blocks comprises:
Dividing the storage area into a first storage area and a second storage area, wherein the storage space of the first storage area and the storage space of the second storage area are both the preset byte number;
carrying out average partitioning on the first storage area to obtain the target number of memory blocks, wherein the target number of memory blocks contained in the first storage area are the main memory block set;
partitioning the second storage area to obtain the target number of memory blocks, wherein the target number of memory blocks contained in the second storage area are the extended memory block set.
9. A data message storage device, the device comprising:
the acquisition module is used for calling a target number of threads to acquire data messages from the received information in parallel, the target number is greater than or equal to 2, and the target number of threads corresponds to the target number of main memory blocks contained in the main memory block set one by one;
the detection module is used for detecting whether at least one memory block occupied by the threads meets the storage requirement or not according to any thread in the target number of threads after the data message is acquired, wherein the at least one memory block comprises a main memory block corresponding to the threads;
The obtaining module is further configured to invoke the thread to obtain an unoccupied extended memory block from the extended memory block set to store the data packet if at least one memory block occupied by the thread does not meet the storage requirement;
and the sum of the storage spaces of the main memory block set and the extended memory block set is twice the preset byte number, and the preset byte number is the total length of all data messages to be acquired.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor runs the computer program to implement the method of any one of claims 1-8.
11. A computer readable storage medium having stored thereon a computer program, wherein the program is executed by a processor to implement the method of any of claims 1-8.
CN202311703459.3A 2023-12-12 2023-12-12 Data message storage method, device, equipment and storage medium Pending CN117762618A (en)

Publications (1)

Publication Number Publication Date
CN117762618A (en) 2024-03-26

Family

ID=90317505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311703459.3A Pending CN117762618A (en) 2023-12-12 2023-12-12 Data message storage method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117762618A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination