CN112148488A - Message processing method and system based on multiple circular buffers - Google Patents
- Publication number
- CN112148488A CN112148488A CN202011004152.0A CN202011004152A CN112148488A CN 112148488 A CN112148488 A CN 112148488A CN 202011004152 A CN202011004152 A CN 202011004152A CN 112148488 A CN112148488 A CN 112148488A
- Authority
- CN
- China
- Prior art keywords
- data
- memory
- current
- readoffset
- queue
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5022—Mechanisms to release resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/543—User-generated data transfer, e.g. clipboards, dynamic data exchange [DDE], object linking and embedding [OLE]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/548—Queue
Abstract
The invention discloses a message processing method based on multiple circular buffers, comprising the following steps: S11, preallocating a 1K block of memory as the initial data cache of a circular queue; S12, setting a write cursor writeOffset and a read cursor readOffset, where the producer writes data and the consumer reads data; S13, letting T be the remaining writable space and S the length of the current circular queue, determining whether the data the producer needs to write exceeds T and, if so, expanding the preallocated memory by doubling to obtain a circular queue large enough to hold the data; S14, copying all the data the producer needs to write into the resulting circular queue. The invention avoids frequent allocation and release of memory, effectively reduces memory fragmentation, and speeds up memory use.
Description
Technical Field
The invention relates to the field of computer technology, and in particular to a message processing method and system based on multiple circular buffers.
Background
Network message processing is unavoidable in online game development: one thread receives messages and another processes them, with message contents placed in a first-in first-out (FIFO) queue; under multithreading, a lock controls access to the critical section. A problem arises because the data size is variable. If the queue's allocated space is too small, memory is frequently allocated and released, causing memory fragmentation and hurting runtime efficiency; if too much queue space is allocated and the amount of data actually used is small, space is wasted.
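The prior-art setup described above can be sketched as follows. This is a minimal illustration of a lock-guarded FIFO shared between a receiving thread and a processing thread; the function and variable names are illustrative assumptions, not from the patent.

```python
import threading
from collections import deque

queue = deque()          # FIFO message queue (the critical section)
lock = threading.Lock()  # guards the queue across the two threads

def receive(msg):
    # Receiver thread: append each incoming message to the tail.
    with lock:
        queue.append(msg)

def process():
    # Processor thread: pop the oldest message, or None if empty.
    with lock:
        return queue.popleft() if queue else None
```

The weakness the patent targets is not the lock itself but the per-message allocation behind a naive queue when message sizes vary.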
Disclosure of Invention
The aim of the invention is to provide a message processing method and system based on multiple circular buffers that address the defects of the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
A message processing method based on multiple circular buffers comprises the following steps:
S1, preallocating a 1K block of memory as the initial data cache of a circular queue;
S2, setting a write cursor writeOffset and a read cursor readOffset; the producer is responsible for writing data and the consumer for reading data;
S3, letting T be the remaining writable space and S the length of the current circular queue, and determining whether the data the producer needs to write exceeds T; if so, expanding the preallocated memory by doubling to obtain a circular queue large enough to hold the data;
S4, copying all the data the producer needs to write into the resulting circular queue.
Further, in step S2 the producer is responsible for writing data, where writing data means copying data into memory starting at the write cursor writeOffset;
the consumer is responsible for reading data, where reading data means copying memory starting at the read cursor readOffset.
Further, with the remaining writable space set to T and the length of the current circular queue set to S in step S3, the relationship between the write cursor writeOffset and the read cursor readOffset includes:
if readOffset equals writeOffset, there is no data in the buffer, and the remaining writable space is T = S;
if readOffset is less than writeOffset, the memory remaining for written data is T = S - writeOffset + readOffset;
if readOffset is greater than writeOffset, the memory remaining for written data is T = readOffset - writeOffset.
Further, in step S3, expanding the preallocated memory by doubling means allocating a data cache twice the size of the previous one.
Further, after step S4 the method further comprises:
S5, determining whether the remaining writable space T has exceeded 1/2 of the length S of the current circular queue for a preset time; if so, copying the current data to the next smaller circular queue and releasing the memory of the current circular queue.
Further, after step S5 the method further comprises:
S6, determining whether writeOffset equals readOffset; if so, copying the data to the 1K circular queue and releasing the memory of the current circular queue.
Correspondingly, a message processing system based on multiple circular buffers is also provided, comprising:
a preset module for preallocating a 1K block of memory as the initial data cache of a circular queue;
a setting module for setting the write cursor writeOffset and the read cursor readOffset, where the producer is responsible for writing data and the consumer for reading data;
a first judging module for setting the remaining writable space to T and the length of the current circular queue to S, and determining whether the data the producer needs to write exceeds T;
a copying module for copying all the data the producer needs to write into the resulting circular queue.
Further, the producer in the setting module is responsible for writing data, where writing data means copying data into memory starting at the write cursor writeOffset;
the consumer is responsible for reading data, where reading data means copying memory starting at the read cursor readOffset.
Further, the system further comprises:
a second judging module for determining whether the remaining writable space T has exceeded 1/2 of the length S of the current circular queue within a preset time.
Further, the system further comprises:
a third judging module for determining whether writeOffset equals readOffset.
Compared with the prior art, the invention has the following beneficial effects:
1. In the intended application scenario (messages are generally small data), frequent memory allocation and release is avoided, memory fragmentation is effectively reduced, and memory use is accelerated;
2. the cursor-based approach is efficient, retaining the O(1) access of an array;
3. memory is reused cyclically, increasing memory utilization without expanding the ring;
4. compared with growing the memory by a fixed length each time, doubling is more regular, follows a simple rule, and is more general;
5. remaining-memory detection is performed on each data insertion and the time is recorded; if the free space stays above 1/2 of the current ring length for longer than a threshold, the data is migrated and the larger memory ring is released, reducing total memory occupation by policy.
Drawings
Fig. 1 is a flowchart of a message processing method based on multiple circular buffers according to Embodiment One;
FIG. 2 is a diagram of a queue receiving and processing messages using multiple circular buffers according to Embodiment One;
FIG. 3 is a diagram illustrating written data according to Embodiment One;
Fig. 4 is a structural diagram of a message processing system based on multiple circular buffers according to Embodiment Two.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
The aim of the invention is to provide a message processing method and system based on multiple circular buffers that address the defects of the prior art.
Embodiment One
This embodiment provides a message processing method based on multiple circular buffers, as shown in FIGS. 1-2, comprising:
S11, preallocating a 1K block of memory as the initial data cache of a circular queue;
S12, setting a write cursor writeOffset and a read cursor readOffset; the producer is responsible for writing data and the consumer for reading data;
S13, letting T be the remaining writable space and S the length of the current circular queue, and determining whether the data the producer needs to write exceeds T; if so, expanding the preallocated memory by doubling to obtain a circular queue large enough to hold the data;
S14, copying all the data the producer needs to write into the resulting circular queue.
In step S11, a 1K block of memory is preallocated as the initial data cache of the circular queue.
Initially, 1K (1024 bytes) of memory is requested as the initial data cache of the circular queue; if the amount of data the producer writes never exceeds 1K, the initially requested cache is all the memory that is ever needed.
In this embodiment, memory is pre-allocated first, so that in application scenarios where messages are generally small, frequent memory allocation and release is avoided, memory fragmentation is effectively reduced, and memory use is accelerated.
In step S12, the write cursor writeOffset and the read cursor readOffset are set; the producer is responsible for writing data and the consumer for reading data.
A write cursor writeOffset and a read cursor readOffset are defined. The producer is responsible for writing data, where writing data means copying data into memory starting at the write cursor writeOffset; the consumer is responsible for reading data, where reading data means copying memory starting at the read cursor readOffset.
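The two-cursor behavior can be sketched as a minimal Python ring buffer: the producer copies bytes in starting at writeOffset, the consumer copies bytes out starting at readOffset, and both cursors wrap around logically. The class and method names are illustrative assumptions; expansion and shrinking are omitted here.

```python
class RingBuffer:
    def __init__(self, size=1024):  # 1K initial data cache, as in step S11
        self.buf = bytearray(size)
        self.write_offset = 0  # producer cursor (writeOffset)
        self.read_offset = 0   # consumer cursor (readOffset)

    def write(self, data):
        # Copy data into memory starting at the write cursor, wrapping around.
        s = len(self.buf)
        for i, b in enumerate(data):
            self.buf[(self.write_offset + i) % s] = b
        self.write_offset = (self.write_offset + len(data)) % s

    def read(self, n):
        # Copy memory out starting at the read cursor, wrapping around.
        s = len(self.buf)
        out = bytes(self.buf[(self.read_offset + i) % s] for i in range(n))
        self.read_offset = (self.read_offset + n) % s
        return out
```

A write that crosses the physical end of the buffer simply continues at index 0, which is what makes the buffer logically circular.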
In step S13, the remaining writable space is set to T and the length of the current circular queue to S, and it is determined whether the data the producer needs to write exceeds T; if so, the preallocated memory is expanded by doubling to obtain a circular queue large enough to hold the data.
In step S14, all the data the producer needs to write is copied into the resulting circular queue.
The buffer is not physically circular, but it is used circularly in a logical sense; the advantage of this wrap-around is that the memory can be reused. The actual memory layout is shown in FIG. 3: the light grey portion represents data that the producer has written and the consumer has not yet consumed.
First, the remaining writable space is set to T, that is, the amount of memory currently available for writing is tracked; with the length of the circular queue at its current size set to S, the relationship between the write cursor writeOffset and the read cursor readOffset includes:
if readOffset equals writeOffset, there is no data in the buffer, and the remaining writable space is T = S;
if readOffset is less than writeOffset, the memory remaining for written data is T = S - writeOffset + readOffset;
if readOffset is greater than writeOffset, the memory remaining for written data is T = readOffset - writeOffset.
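The three cursor cases above can be written as a single function returning the remaining writable space. This is a sketch; the parameter names follow the patent's T, S, writeOffset, and readOffset.

```python
def remaining_space(s, write_offset, read_offset):
    # s is the length S of the current circular queue.
    if read_offset == write_offset:
        return s                               # buffer empty: T = S
    if read_offset < write_offset:
        return s - write_offset + read_offset  # T = S - writeOffset + readOffset
    return read_offset - write_offset          # T = readOffset - writeOffset
```

Note that under this convention equal cursors always mean "empty", so a completely full ring is never reached; a write is only accepted after the expansion check guarantees it fits.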
Then, as the producer continues to write data, it is judged whether the data the producer needs to write is larger than the remaining writable space T. If so, the memory must be expanded, and the expansion uses doubling: if one doubling cannot accommodate the data the producer needs to write, the memory continues to be doubled until a size is found that can hold it, that is, a ring of the ring queue that can accommodate the producer's data. The data to be written, together with the existing data, is then copied and migrated to that most suitable ring. Doubling means requesting a cache twice the previous memory size.
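The repeated-doubling rule can be sketched as a small capacity calculation: double until the new ring holds both the existing data and the pending write. The function name and the `used`/`incoming` parameters are illustrative assumptions; the actual migration copy is a separate step.

```python
def expand_capacity(current_size, used, incoming):
    # used: bytes already in the ring (S - T); incoming: bytes the producer
    # needs to write. Double the ring size until the pending write fits.
    new_size = current_size
    while new_size - used < incoming:
        new_size *= 2
    return new_size
```

After the new size is chosen, the existing data and the new data would be copied together into the larger ring, with both cursors reset relative to the new buffer.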
In this embodiment, step S14 is followed by:
S15, determining whether the remaining writable space T has exceeded 1/2 of the length S of the current circular queue for a preset time; if so, copying the current data to the next smaller circular queue and releasing the memory of the current circular queue.
Since data is written and read simultaneously, at some moment the data may reside in an 8K ring queue while the consumer's consumption capacity exceeds the producer's, so that only tens of bytes of memory may be in use at any time. Permanently occupying the 8K ring is then unnecessary, and a policy can be applied: if the remaining writable space T stays above 1/2 of the current ring length for a sustained period, a ring-shrinking operation is performed, that is, the current data is copied to the next smaller ring queue and the memory of the current queue is released, saving space. Ring expansion and ring shrinking together form a migration policy driven by actual memory usage.
In this embodiment, step S15 is followed by:
S16, determining whether writeOffset equals readOffset; if so, copying the data to the 1K circular queue and releasing the memory of the current circular queue.
If the write cursor writeOffset equals the read cursor readOffset, the amount of pending data does not exceed 1K; the initially requested data cache is all the memory needed, and all current data is migrated to the lowest-level 1K ring.
Compared with the prior art, the embodiment has the following beneficial effects:
1. In the intended application scenario (messages are generally small data), frequent memory allocation and release is avoided, memory fragmentation is effectively reduced, and memory use is accelerated;
2. the cursor-based approach is efficient, retaining the O(1) access of an array;
3. memory is reused cyclically, increasing memory utilization without expanding the ring;
4. compared with growing the memory by a fixed length each time, doubling is more regular, follows a simple rule, and is more general;
5. remaining-memory detection is performed on each data insertion and the time is recorded; if the free space stays above 1/2 of the current ring length for longer than a threshold, the data is migrated and the larger memory ring is released, reducing total memory occupation by policy.
Embodiment Two
This embodiment provides a message processing system based on multiple circular buffers, as shown in FIG. 4, comprising:
a preset module 11 for preallocating a 1K block of memory as the initial data cache of the circular queue;
a setting module 12 for setting a write cursor writeOffset and a read cursor readOffset, where the producer is responsible for writing data and the consumer for reading data;
a first judging module 13 for setting the remaining writable space to T and the length of the current circular queue to S, and determining whether the data the producer needs to write exceeds T;
a copying module 14 for copying all the data the producer needs to write into the resulting circular queue.
Further, the producer in the setting module 11 is responsible for writing data, where writing data means copying data into memory starting at the write cursor writeOffset;
the consumer is responsible for reading data, where reading data means copying memory starting at the read cursor readOffset.
Further, the system further comprises:
a second judging module for determining whether the remaining writable space T has exceeded 1/2 of the length S of the current circular queue within a preset time.
Further, the system further comprises:
a third judging module for determining whether writeOffset equals readOffset.
It should be noted that the message processing system based on multiple circular buffers provided in this embodiment corresponds to the method of Embodiment One and is not described again here.
Compared with the prior art, the embodiment has the following beneficial effects:
1. In the intended application scenario (messages are generally small data), frequent memory allocation and release is avoided, memory fragmentation is effectively reduced, and memory use is accelerated;
2. the cursor-based approach is efficient, retaining the O(1) access of an array;
3. memory is reused cyclically, increasing memory utilization without expanding the ring;
4. compared with growing the memory by a fixed length each time, doubling is more regular, follows a simple rule, and is more general;
5. remaining-memory detection is performed on each data insertion and the time is recorded; if the free space stays above 1/2 of the current ring length for longer than a threshold, the data is migrated and the larger memory ring is released, reducing total memory occupation by policy.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.
Claims (10)
1. A message processing method based on multiple circular buffers, characterized by comprising the following steps:
S1, preallocating a 1K block of memory as the initial data cache of a circular queue;
S2, setting a write cursor writeOffset and a read cursor readOffset; the producer is responsible for writing data and the consumer for reading data;
S3, letting T be the remaining writable space and S the length of the current circular queue, and determining whether the data the producer needs to write exceeds T; if so, expanding the preallocated memory by doubling to obtain a circular queue large enough to hold the data;
S4, copying all the data the producer needs to write into the resulting circular queue.
2. The message processing method based on multiple circular buffers according to claim 1, wherein in step S2 the producer is responsible for writing data, where writing data means copying data into memory starting at the write cursor writeOffset;
the consumer is responsible for reading data, where reading data means copying memory starting at the read cursor readOffset.
3. The message processing method based on multiple circular buffers according to claim 1, wherein in step S3 the remaining writable space is set to T and the length of the current circular queue to S, and the relationship between the write cursor writeOffset and the read cursor readOffset comprises:
if readOffset equals writeOffset, there is no data in the buffer, and the remaining writable space is T = S;
if readOffset is less than writeOffset, the memory remaining for written data is T = S - writeOffset + readOffset;
if readOffset is greater than writeOffset, the memory remaining for written data is T = readOffset - writeOffset.
4. The message processing method according to claim 3, wherein expanding the preallocated memory by doubling in step S3 means allocating a data cache twice the size of the previous one.
5. The message processing method according to claim 1, further comprising, after step S4:
S5, determining whether the remaining writable space T has exceeded 1/2 of the length S of the current circular queue for a preset time; if so, copying the current data to the next smaller circular queue and releasing the memory of the current circular queue.
6. The message processing method according to claim 5, further comprising, after step S5:
S6, determining whether writeOffset equals readOffset; if so, copying the data to the 1K circular queue and releasing the memory of the current circular queue.
7. A message processing system based on multiple circular buffers, characterized by comprising:
a preset module for preallocating a 1K block of memory as the initial data cache of a circular queue;
a setting module for setting the write cursor writeOffset and the read cursor readOffset, where the producer is responsible for writing data and the consumer for reading data;
a first judging module for setting the remaining writable space to T and the length of the current circular queue to S, and determining whether the data the producer needs to write exceeds T;
a copying module for copying all the data the producer needs to write into the resulting circular queue.
8. The message processing system according to claim 7, wherein the producer in the setting module is responsible for writing data, where writing data means copying data into memory starting at the write cursor writeOffset;
the consumer is responsible for reading data, where reading data means copying memory starting at the read cursor readOffset.
9. The message processing system based on multiple circular buffers according to claim 7, further comprising:
a second judging module for determining whether the remaining writable space T has exceeded 1/2 of the length S of the current circular queue within a preset time.
10. The message processing system based on multiple circular buffers according to claim 7, further comprising:
a third judging module for determining whether writeOffset equals readOffset.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011004152.0A CN112148488A (en) | 2020-09-22 | 2020-09-22 | Message processing method and system based on multi-cycle cache |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011004152.0A CN112148488A (en) | 2020-09-22 | 2020-09-22 | Message processing method and system based on multi-cycle cache |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112148488A true CN112148488A (en) | 2020-12-29 |
Family
ID=73897727
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011004152.0A Pending CN112148488A (en) | 2020-09-22 | 2020-09-22 | Message processing method and system based on multi-cycle cache |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112148488A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112506683A (en) * | 2021-01-29 | 2021-03-16 | 腾讯科技(深圳)有限公司 | Data processing method, related device, equipment and storage medium |
US20220374270A1 (en) * | 2021-05-20 | 2022-11-24 | Red Hat, Inc. | Assisting progressive chunking for a data queue by using a consumer thread of a processing device |
CN117579386A (en) * | 2024-01-16 | 2024-02-20 | 麒麟软件有限公司 | Network traffic safety control method, device and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104461933A (en) * | 2014-11-07 | 2015-03-25 | 珠海全志科技股份有限公司 | Memory management method and device thereof |
CN110134439A (en) * | 2019-03-30 | 2019-08-16 | 北京百卓网络技术有限公司 | The method of method for constructing data structure and write-in data, reading data without lockization |
CN110474851A (en) * | 2019-08-01 | 2019-11-19 | 北京世纪东方通讯设备有限公司 | A kind of access method and device recycling storage organization |
WO2019227724A1 (en) * | 2018-05-28 | 2019-12-05 | 深圳市道通智能航空技术有限公司 | Data read/write method and device, and circular queue |
- 2020-09-22: CN application CN202011004152.0A filed; published as CN112148488A, status Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104461933A (en) * | 2014-11-07 | 2015-03-25 | 珠海全志科技股份有限公司 | Memory management method and device thereof |
WO2019227724A1 (en) * | 2018-05-28 | 2019-12-05 | 深圳市道通智能航空技术有限公司 | Data read/write method and device, and circular queue |
CN110134439A (en) * | 2019-03-30 | 2019-08-16 | 北京百卓网络技术有限公司 | The method of method for constructing data structure and write-in data, reading data without lockization |
CN110474851A (en) * | 2019-08-01 | 2019-11-19 | 北京世纪东方通讯设备有限公司 | A kind of access method and device recycling storage organization |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112506683A (en) * | 2021-01-29 | 2021-03-16 | 腾讯科技(深圳)有限公司 | Data processing method, related device, equipment and storage medium |
US20220374270A1 (en) * | 2021-05-20 | 2022-11-24 | Red Hat, Inc. | Assisting progressive chunking for a data queue by using a consumer thread of a processing device |
CN117579386A (en) * | 2024-01-16 | 2024-02-20 | 麒麟软件有限公司 | Network traffic safety control method, device and storage medium |
CN117579386B (en) * | 2024-01-16 | 2024-04-12 | 麒麟软件有限公司 | Network traffic safety control method, device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112148488A (en) | Message processing method and system based on multi-cycle cache | |
US7821864B2 (en) | Power management of memory via wake/sleep cycles | |
US8386713B2 (en) | Memory apparatus, memory control method, and program | |
US6571326B2 (en) | Space allocation for data in a nonvolatile memory | |
US7412565B2 (en) | Memory optimization for a computer system having a hibernation mode | |
US20170199814A1 (en) | Non-volatile random access system memory with dram program caching | |
JP2006518492A (en) | PERMANENT MEMORY MANAGEMENT METHOD AND PERMANENT MEMORY MANAGEMENT DEVICE | |
CN102968380B (en) | The management method of memory partitioning and device in memory file system | |
WO2021238260A1 (en) | Pre-read data caching method and apparatus, device, and storage medium | |
CN100580669C (en) | Method for realizing cache memory relates to file allocation table on Flash storage medium | |
US10073851B2 (en) | Fast new file creation cache | |
US8478956B2 (en) | Computing system and method controlling memory of computing system | |
CN105630697B (en) | A kind of storage device using MRAM storage small documents | |
KR20000039727A (en) | Method for approaching flash memory | |
CN113806295B (en) | File migration method, system, equipment and computer readable storage medium | |
JP2008009702A (en) | Arithmetic processing system | |
KR102076248B1 (en) | Selective Delay Garbage Collection Method And Memory System Using The Same | |
US8068373B1 (en) | Power management of memory via wake/sleep cycles | |
JPH04305741A (en) | Data base input/output control system | |
CN112650693A (en) | Static memory management method and device | |
CN115509763B (en) | Fingerprint calculation method and device | |
KR102053406B1 (en) | Data storage device and operating method thereof | |
TWI802689B (en) | Data processing system, data processing method, and program | |
CN107621926B (en) | Stack area data access method and device, readable storage medium and computer equipment | |
JP6157158B2 (en) | Information processing apparatus, control method thereof, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
CB03 | Change of inventor or designer information |
Inventor after: Zhou Tianya Inventor after: Xu Xiaohang Inventor before: Zhou Tianya |