CN110096453B - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN110096453B
CN110096453B (application CN201910290393.7A)
Authority
CN
China
Prior art keywords
memory space
virtual memory
size
offset
shared memory
Prior art date
Legal status
Active
Application number
CN201910290393.7A
Other languages
Chinese (zh)
Other versions
CN110096453A (en)
Inventor
Wang Yang (王洋)
Current Assignee
Beijing H3C Technologies Co Ltd
Original Assignee
Beijing H3C Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing H3C Technologies Co Ltd filed Critical Beijing H3C Technologies Co Ltd
Priority to CN201910290393.7A priority Critical patent/CN110096453B/en
Publication of CN110096453A publication Critical patent/CN110096453A/en
Application granted granted Critical
Publication of CN110096453B publication Critical patent/CN110096453B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G06F12/0253Garbage collection, i.e. reclamation of unreferenced memory
    • G06F12/0261Garbage collection, i.e. reclamation of unreferenced memory using reference counting

Abstract

The invention provides a data processing method and apparatus, the method comprising: receiving an entry reading request, where the entry reading request carries a first Handle and a first Offset; determining a first virtual memory space of a corresponding first process according to the first Handle; when the size of the first virtual memory space is smaller than that of a first shared memory corresponding to the first virtual memory space, performing read-copy-update (RCU) based expansion on the first virtual memory space according to the size of the first shared memory to obtain a second virtual memory space; and querying a corresponding first target entry from the second virtual memory space according to the first Offset, and returning the address information of the user data of the first target entry to the entry reading requester. By applying the embodiments of the invention, growth of the database can be supported and data reading performance improved.

Description

Data processing method and device
Technical Field
The present invention relates to the field of network communication technologies, and in particular, to a data processing method and apparatus.
Background
In network devices such as switches and routers, fast access to fixed-length data indexed by an integer is in high demand. For example, the data associated with an interface is looked up by its interface index, which is an integer, and the data for each interface has a fixed size.
At present, in order to speed up access to integer-indexed fixed-length data, a database based on shared memory, such as the Lightning Memory-Mapped Database (LMDB), is usually used to store it. LMDB is a database based on memory mapping (MMAP). MMAP maps the content of a database file into a process's virtual memory space; multiple processes can map the same database file simultaneously, and LMDB realizes data sharing among different processes on this basis.
However, practice shows that LMDB does not support expanding the database, and the tree structure adopted by general-purpose databases requires multiple addressing steps to locate the corresponding data, so access performance is not high enough.
Disclosure of Invention
The invention provides a data processing method and a data processing apparatus, to solve the problem in the prior art of poor access performance when a shared-memory-based database is used to store fixed-length data indexed by integers.
According to a first aspect of the present invention, there is provided a data processing method applied to a data storage system based on a shared memory, where an array structure is adopted in the shared memory to store fixed-length data indexed by an integer, the method including:
receiving an entry reading request, wherein the entry reading request carries a first Handle and a first Offset;
determining a first virtual memory space of a corresponding first process according to the first Handle;
when the size of the first virtual memory space is smaller than that of a first shared memory corresponding to the first virtual memory space, performing read-copy update (RCU) -based expansion on the first virtual memory space according to the size of the first shared memory to obtain a second virtual memory space;
and inquiring a corresponding first target item from the second virtual memory space according to the first Offset, and returning the address information of the user data of the first target item to an item reading requester.
According to a second aspect of the present invention, there is provided a data processing apparatus, applied to a data storage system based on a shared memory, where the shared memory stores fixed-length data indexed by integers using an array structure, the apparatus including:
a receiving unit, configured to receive an entry reading request, where the entry reading request carries a first Handle and a first Offset;
a determining unit, configured to determine, according to the first Handle, a first virtual memory space of a corresponding first process;
a judging unit, configured to determine whether the size of the first virtual memory space is consistent with the size of a first shared memory corresponding to the first virtual memory space;
an extension unit, configured to, when the size of the first virtual memory space is smaller than the size of a first shared memory corresponding to the first virtual memory space, perform extension based on a read copy update RCU on the first virtual memory space according to the size of the first shared memory to obtain a second virtual memory space;
and the data processing unit is used for inquiring a corresponding first target item from the second virtual memory space according to the first Offset and returning the address information of the user data of the first target item to an item reading requester.
By applying the technical solution disclosed by the invention, fixed-length data indexed by integers is stored in a form that combines shared memory with RCU, so that growth of the database is supported; in addition, by adopting an array structure for data storage, data reading performance is improved compared with tree-structured data storage.
Drawings
Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present invention;
fig. 2A is a schematic diagram of a data structure of a shared memory according to an embodiment of the present invention;
fig. 2B is a schematic diagram of a data structure of a virtual memory space according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an item reading method according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating an item allocation method according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating an item release method according to an embodiment of the present invention;
fig. 6 is a flowchart illustrating a method for expanding a virtual memory space according to an embodiment of the present invention;
FIG. 7 is a block diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions in the embodiments of the present invention better understood and make the above objects, features and advantages of the embodiments of the present invention more comprehensible, the technical solutions in the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of a data processing method according to an embodiment of the present invention is shown, where the data processing method may be applied to a data storage system based on a shared memory, where an array structure is used in the shared memory to store fixed-length data indexed by integers, as shown in fig. 1, the data processing method may include the following steps:
it should be noted that, in the embodiment of the present invention, since the fixed-length data indexed by the integer is stored in the shared memory by using the array structure, the fixed-length data indexed by the integer is also stored in the virtual memory space mapped in the process by using the array structure.
Step 101, receiving an entry reading request, wherein the entry reading request carries a first Handle and a first Offset.
In the embodiment of the present invention, when an entry read (GetItem) is required, a virtual memory space in which the entry (Item) to be read is located and Offset (i.e. index of the Item) in the virtual memory space need to be specified.
A Handle identifies the virtual memory space to which a shared memory is mapped within a process; that is, for the same shared memory, the Handles corresponding to the virtual memory spaces mapped in different processes are different, and for different shared memories, the Handles corresponding to the virtual memory spaces mapped in the same or different processes are also different.
And 102, determining a first virtual memory space of the corresponding first process according to the first Handle.
In the embodiment of the present invention, when an entry reading request is received, a Handle (referred to as a first Handle herein) carried in the entry reading request is obtained, and a virtual memory space (referred to as a first virtual memory space herein) of a corresponding process (referred to as a first process herein) is determined according to the first Handle.
Step 103, when the size of the first virtual memory space is smaller than the size of the first shared memory corresponding to the first virtual memory space, performing RCU-based expansion on the first virtual memory space according to the size of the first shared memory to obtain a second virtual memory space.
In the embodiment of the present invention, for any virtual memory space mapped in any process, the size of the virtual memory space and the size of the shared memory corresponding to the virtual memory space may be recorded.
Referring to fig. 2A, the data of the shared memory includes two major types:
1. a Meta (metadata) portion placed at the head, including the actual size (size) of the current shared memory, where the actual size of the shared memory is typically identified by the number of included items, each Item being the same size;
2. each Item contains a header for recording the state of the Item: allocated or unallocated; and User Data, i.e., actual Data of the User.
Referring to fig. 2B, for any virtual memory space in any process, the process may record the size (identified by the number of items) of the virtual memory space mapped in the process, in addition to the data of the shared memory described above; for the same shared memory, the size of the virtual memory space mapped in different processes may not be consistent with the actual size of the shared memory.
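The two layouts described above (the shared-memory data in fig. 2A and the per-process record in fig. 2B) can be sketched as C structures. This is an illustrative sketch only: the field names, the 64-byte user-data size, and the `local_info` bookkeeping type are assumptions, not taken from the patent.

```c
#include <stddef.h>
#include <stdint.h>

enum item_state { ITEM_UNALLOCATED = 0, ITEM_ALLOCATED = 1 };

#define USER_DATA_SIZE 64  /* fixed length of each entry's user data (assumed) */

/* One Item: a header recording its state, followed by fixed-length user data. */
struct item {
    uint32_t state;                      /* allocated or unallocated */
    uint8_t  user_data[USER_DATA_SIZE];  /* actual data of the user */
};

/* Meta portion placed at the head of the shared memory. */
struct meta {
    uint32_t size;  /* actual size of the shared memory, in number of items */
};

/* The shared memory as a whole: Meta followed by an array of items. */
struct shared_mem {
    struct meta meta;
    struct item items[];  /* array structure storing the fixed-length data */
};

/* Per-process record: the size of the mapping this process holds, which may
   lag behind meta.size after another process grows the shared memory. */
struct local_info {
    struct shared_mem *base;   /* start of this process's virtual memory space */
    uint32_t           mapped_items;  /* size of this mapping, in items */
};

/* Bytes needed for a shared memory holding n_items items. */
static size_t shared_mem_bytes(uint32_t n_items) {
    return sizeof(struct meta) + (size_t)n_items * sizeof(struct item);
}
```

Because every Item has the same size, locating the Item for a given Offset is a single array-index computation, which is the source of the read-performance advantage over tree structures.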
In the embodiment of the present invention, when the first virtual memory space of the first process corresponding to the first Handle is determined, it may be determined whether the size of the first virtual memory space is consistent with the size of the shared memory (referred to as a first shared memory herein) corresponding to the first virtual memory space.
If the size of the first virtual memory space is smaller than the size of the first shared memory, the first virtual memory space may be expanded based on the RCU according to the size of the first shared memory, so as to obtain an expanded virtual memory space (referred to as a second virtual memory space herein).
And step 104, inquiring a corresponding first target item from the second virtual memory space according to the first Offset, and returning the address information of the user data of the first target item to the item reading requester.
In the embodiment of the present invention, when the second virtual memory space is obtained by performing RCU-based expansion on the first virtual memory space, a corresponding Item (referred to as a first target Item herein) may be queried from the second virtual memory space according to the first Offset, and address information of user data in the first target Item may be returned to the Item reading requester.
It should be noted that, in the embodiment of the present invention, if the size of the first virtual memory space is equal to the size of the first shared memory, the corresponding Item may be directly queried from the first virtual memory space according to the first Offset, and address information of the user data in the Item may be returned to the entry read requester.
In addition, considering that there may be invalid data in the virtual memory space, when the corresponding Item is determined according to the first Offset, the state (allocated or unallocated) of the Item may also be acquired, and if the state of the Item is allocated, the address information of the user data in the Item is returned to the Item reading requester; if the state of the Item is unallocated, the address information of the user data in the Item may not be returned to the Item reading requester, for example, Null (empty) is returned to the Item reading requester, and the specific implementation thereof is not described herein.
It can be seen that, in the method flow shown in fig. 1, the fixed-length data indexed by the integer is stored in the storage form of combining the shared memory and the RCU, so as to realize the support for the increase of the database; in addition, by adopting the array structure for data storage, compared with tree structure data storage, the data reading performance is improved.
Optionally, in an embodiment of the present invention, the data processing method may further include:
receiving a shared memory mapping request, wherein the shared memory mapping request carries an identifier of a second process and an identifier of a second shared memory;
and mapping a third virtual memory space in the second process according to the size of the second shared memory, and returning a second Handle corresponding to the third virtual memory space to the shared memory mapping requester.
In this embodiment, when the shared memory mapping needs to be performed in the process, an identifier (such as a process number) of the process that needs to perform the shared memory mapping and an identifier (such as a file name of a corresponding database file) of the shared memory that needs to perform the mapping may be specified.
Accordingly, when a shared memory mapping request is received, an identifier of a process (referred to as a second process herein) and an identifier of a shared memory (referred to as a second shared memory herein) carried by the shared memory mapping request may be obtained, and the second shared memory is mapped in the second process according to the obtained identifier of the second shared memory and the obtained identifier of the second process.
Optionally, in an example, the mapping, in the second process, the third virtual memory space according to the size of the second shared memory may include:
if the second shared memory does not exist, creating a second shared memory according to the preset size, and mapping a third virtual memory space in a second process according to the preset size;
and if the second shared memory exists, mapping a third virtual memory space in the second process according to the size of the second shared memory.
In this example, when the shared memory mapping request is received and the identifier of the second process and the identifier of the second shared memory carried in the shared memory mapping request are obtained, it may be determined whether the second shared memory exists first, that is, whether the second shared memory is created.
If the second shared memory does not exist, the second shared memory may be created according to a preset size (e.g., 256 items), and the second shared memory may be initialized.
The specific implementation of initializing the shared memory may refer to related descriptions in the prior art, and details of the embodiment of the present invention are not described herein.
In this example, after the second shared memory initialization is completed, a virtual memory space of a preset size (referred to herein as a third virtual memory space) may be mapped in the second process.
In this example, if it is determined that the second shared memory exists, the third virtual memory space may be mapped in the second process according to the size of the second shared memory.
In this embodiment, after the third virtual memory space is mapped in the second process, a Handle (referred to as a second Handle herein) corresponding to the third virtual memory space may be returned to the shared memory mapping requester.
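The mapping flow above (create at a preset size if absent, otherwise map at the current size) can be sketched with a file-backed `mmap`, one plausible realization of an MMAP-based shared memory. The file path, the `map_handle` type, and the 64-byte item payload are assumptions for illustration.

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define PRESET_ITEMS 256u  /* assumed default size, matching the 256-item example */

struct meta { uint32_t size; };                          /* head of the shared memory */
struct item { uint32_t state; uint8_t user_data[64]; };  /* 64-byte payload is assumed */

struct map_handle {          /* returned to the shared memory mapping requester */
    void    *base;           /* start of this process's virtual memory space */
    uint32_t mapped_items;   /* local info: size of this mapping, in items */
    int      fd;
};

static size_t bytes_for(uint32_t n) {
    return sizeof(struct meta) + (size_t)n * sizeof(struct item);
}

/* Map the shared memory identified by `path` into the calling process.
   If it does not exist yet, create it at the preset size and initialize it;
   otherwise map it at its current size. Returns 0 on success, -1 on failure. */
static int map_shared(const char *path, struct map_handle *h) {
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) return -1;
    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return -1; }
    int fresh = (st.st_size == 0);   /* shared memory not created yet? */
    uint32_t items = fresh
        ? PRESET_ITEMS
        : (uint32_t)(((size_t)st.st_size - sizeof(struct meta)) / sizeof(struct item));
    if (fresh && ftruncate(fd, (off_t)bytes_for(items)) < 0) { close(fd); return -1; }
    void *base = mmap(NULL, bytes_for(items), PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { close(fd); return -1; }
    if (fresh)
        /* Newly created: record the actual size in Meta. ftruncate zero-fills
           the file, so every Item starts in the unallocated state. */
        ((struct meta *)base)->size = items;
    h->base = base;
    h->mapped_items = items;
    h->fd = fd;
    return 0;
}
```

A second process (or a second call in the same process) mapping the same path would observe the existing size from the file rather than re-initializing it.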
Optionally, in an embodiment of the present invention, the data processing method may further include:
receiving an item allocation request, wherein the item allocation request carries a third Handle and a second Offset;
determining a fourth virtual memory space of a corresponding third process according to the third Handle;
when the size of the fourth virtual memory space is smaller than that of a third shared memory corresponding to the fourth virtual memory space, performing expansion based on the RCU on the fourth virtual memory space according to the size of the third shared memory to obtain a fifth virtual memory space;
and querying a corresponding second target item from the fifth virtual memory space according to the second Offset, and returning the address information of the user data of the second target item to the item allocation requester.
In this embodiment, when an entry allocation (allocation Item) is required, a Handle of the virtual memory space where the entry allocation is performed and Offset of the allocated Item in the virtual memory space may be specified.
Accordingly, in this embodiment, when an entry allocation request is received, a Handle (referred to as a third Handle herein) and an Offset (referred to as a second Offset herein) carried by the entry allocation request may be obtained, and a virtual memory space (referred to as a fourth virtual memory space herein) of a corresponding process (referred to as a third process herein) is determined according to the third Handle.
When the fourth virtual memory space is determined, it may be determined whether the size of the fourth virtual memory space is consistent with the size of the shared memory (referred to as a third shared memory herein) corresponding to the fourth virtual memory space.
If the size of the fourth virtual memory space is smaller than the size of the third shared memory, the fourth virtual memory space may be expanded based on the RCU according to the size of the third shared memory to obtain an expanded virtual memory space (referred to as a fifth virtual memory space herein), a corresponding Item (referred to as a second target Item herein) is queried from the fifth virtual memory space according to the second Offset, and address information of the user data of the second target Item is returned to the Item allocation requester.
Optionally, in an example, the querying, according to the second Offset, the corresponding second target Item from the fifth virtual memory space may include:
judging whether the second Offset exceeds the size of the third shared memory;
if so, performing RCU-based expansion on the fifth virtual memory space to obtain a sixth virtual memory space; wherein the size of the sixth virtual memory space is greater than the second Offset;
and querying a corresponding second target Item from the sixth virtual memory space according to the second Offset.
In this example, when the fifth virtual memory space is obtained by RCU-based expansion of the fourth virtual memory space, it may also be determined whether the second Offset exceeds the size of the third shared memory (i.e., the size of the fifth virtual memory space).
If the second Offset exceeds the size of the third shared memory, the RCU-based expansion may be further performed on the fifth virtual memory space to obtain an expanded virtual memory space (referred to as a sixth virtual memory space herein) whose size is greater than or equal to the second Offset, and a corresponding second target Item is queried from the sixth virtual memory space according to the second Offset, and address information of the user data of the second target Item is returned to the Item allocation requester.
Further, in this example, when performing entry allocation the Offset specified by the user may be a wildcard value (generally expressed as all f, i.e., all bits set); in this case, a free entry may be selected by traversing the fifth virtual memory space, and the address information of that entry's user data returned to the entry allocation requester.
It should be noted that, in the embodiment of the present invention, an Offset given as the wildcard value (e.g., all f) may be resolved to the minimum available Offset.
Accordingly, in this example, after determining whether the second Offset exceeds the size of the third shared memory, the method may further include:
if the second Offset does not exceed the size of the third shared memory, judging whether the second Offset is an arbitrary value;
if the second Offset is an arbitrary value, traversing the fifth virtual memory space, selecting a free Item, and determining that Item as the second target Item;
and if the second Offset is not an arbitrary value, querying a corresponding second target Item from a fifth virtual memory space according to the second Offset.
In this example, if the second Offset does not exceed the size of the third shared memory, it can be determined whether the second Offset is an arbitrary value.
If the second Offset is not any value, the corresponding second target Item may be queried from the fifth virtual memory space according to the second Offset, and address information of the user data in the Item may be returned to the entry allocation requester.
If the second Offset is any value, the fifth virtual memory space may be traversed, a free Item may be selected, and the Item may be determined as the second target Item.
Optionally, in an example, the traversing the fifth virtual memory space and selecting a free Item may include:
traversing the fifth virtual memory space to determine whether a free Item exists;
if yes, selecting an Item from the free items;
and if not, performing expansion based on the RCU on the fifth virtual memory space to obtain a seventh virtual memory space, and selecting an idle Item from the seventh virtual memory space.
In this example, when there is no free Item in the fifth virtual memory space, the fifth virtual memory space may be expanded based on the RCU to obtain a seventh virtual memory space, and one free Item may be selected from the seventh virtual memory space.
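The allocation decision described in this embodiment — explicit Offset versus the all-f wildcard, with expansion when the request falls outside or the space is full — can be sketched as an in-memory helper. Names, the `NEED_EXPAND` sentinel, and the 64-byte payload are assumptions; the real implementation operates on the shared mapping and performs the RCU-based expansion where the sketch returns the sentinel.

```c
#include <stdint.h>

#define OFFSET_ANY  0xFFFFFFFFu  /* "all f": no explicit Offset requested */
#define NEED_EXPAND (-2)         /* caller must expand via RCU and retry */

enum { UNALLOCATED = 0, ALLOCATED = 1 };

struct item { uint32_t state; uint8_t user_data[64]; };

/* Decide which Item an allocation request gets. Returns the item index,
   NEED_EXPAND when the space must first grow, or -1 when the explicitly
   requested Item is already allocated. */
static int32_t alloc_item(struct item *items, uint32_t n_items, uint32_t offset) {
    if (offset == OFFSET_ANY) {
        /* Wildcard: traverse the space and pick the first free Item. */
        for (uint32_t i = 0; i < n_items; i++) {
            if (items[i].state == UNALLOCATED) {
                items[i].state = ALLOCATED;
                return (int32_t)i;
            }
        }
        return NEED_EXPAND;  /* no free Item: expand, then select from the new space */
    }
    if (offset >= n_items)
        return NEED_EXPAND;  /* Offset beyond current size: expand until size > Offset */
    if (items[offset].state == ALLOCATED)
        return -1;           /* requested Item already taken */
    items[offset].state = ALLOCATED;
    return (int32_t)offset;
}
```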
Optionally, in an embodiment of the present invention, the data processing method may further include:
receiving an item release request, wherein the item release request carries a fourth Handle and a third Offset;
determining an eighth virtual memory space of a corresponding fourth process according to the fourth Handle;
when the size of the eighth virtual memory space is smaller than the size of a fourth shared memory corresponding to the eighth virtual memory space, performing RCU-based expansion on the eighth virtual memory space according to the size of the fourth shared memory to obtain a ninth virtual memory space;
and when the third Offset is smaller than the size of the ninth virtual memory space, querying a corresponding third target entry from the ninth virtual memory space according to the third Offset, and marking the third target entry as unallocated.
In this embodiment, when an Item release (Destroy Item) is required, a Handle of a virtual memory space to which the Item for the Item release belongs and Offset of the Item in the virtual memory space need to be specified.
Accordingly, in this embodiment, when an entry release request is received, a Handle (referred to as a fourth Handle herein) and an Offset (referred to as a third Offset herein) carried by the entry release request may be acquired.
In this embodiment, a virtual memory space (referred to as an eighth virtual memory space) of a corresponding process (referred to as a fourth process herein) may be determined according to the fourth Handle, and a size of the eighth virtual memory space may be compared with a size of a shared memory (referred to as a fourth shared memory herein) corresponding to the eighth virtual memory space.
When the size of the eighth virtual memory space is smaller than the size of the fourth shared memory, the eighth virtual memory space may be expanded based on the RCU according to the size of the fourth shared memory, so as to obtain an expanded virtual memory space (referred to as a ninth virtual memory space herein).
In this embodiment, after the eighth virtual memory space is expanded based on the RCU to obtain the ninth virtual memory space, it may be determined whether the third Offset is smaller than the size of the ninth virtual memory space.
If so, querying a corresponding Item (referred to as a third target Item herein) from the ninth virtual memory space according to the third Offset, and marking the third target Item as unallocated.
It should be noted that, in this embodiment, if the third Offset is not smaller than the size of the ninth virtual memory space, a result may be returned directly to the entry release requester without touching any entry.
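The release step itself is small and can be sketched as follows, assuming the item layout sketched earlier; the function name and return convention are illustrative only.

```c
#include <stdint.h>

enum { UNALLOCATED = 0, ALLOCATED = 1 };

struct item { uint32_t state; uint8_t user_data[64]; };  /* payload size assumed */

/* Sketch of entry release (Destroy Item): mark the Item at `offset` as
   unallocated, so that subsequent reads treat any data left in it as invalid.
   Returns -1 when the Offset is out of range, in which case the caller
   simply returns a result to the release requester. */
static int destroy_item(struct item *items, uint32_t n_items, uint32_t offset) {
    if (offset >= n_items)
        return -1;
    items[offset].state = UNALLOCATED;
    return 0;
}
```

Note that the user data itself need not be cleared: the unallocated state in the Item header is what makes a later GetItem return NULL for this slot.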
Optionally, in an embodiment of the present invention, the performing RCU-based expansion on the virtual memory space includes:
mapping the target shared memory in the target process according to the expanded size to obtain an expanded virtual memory space; the target process is a process to which the virtual memory space belongs, and the target shared memory is a shared memory corresponding to the virtual memory space;
and modifying the Handle corresponding to the virtual memory space into the virtual memory space after the virtual memory space is extended.
In this embodiment, in order to implement the expansion of the virtual memory space, the expansion of the virtual memory space may be performed in an RCU manner.
In this embodiment, when the virtual memory space needs to be expanded, the shared memory (referred to as a target shared memory herein) corresponding to the virtual memory space may be mapped in a process (referred to as a target process herein) to which the virtual memory space belongs according to the expanded size, so as to obtain an expanded virtual memory space (such as the second virtual memory space, the fifth virtual memory space, or the sixth virtual memory space described above).
In this embodiment, after the extended virtual memory space is obtained, the Handle corresponding to the virtual memory space (the virtual memory space before extension) may be modified to point to the extended virtual memory space, and then, subsequent access to the virtual memory space may be directed to the extended virtual memory space, while access to the virtual memory space before extension is not affected.
It should be noted that, in this embodiment, the virtual memory space before expansion may be reclaimed through the garbage collection (GC) mechanism of the RCU after all accesses to it have finished; the specific implementation thereof is not described herein again.
In addition, in the embodiment of the present invention, when the expansion of the virtual memory space is not triggered by an inconsistency between the size of the virtual memory space and the size of its corresponding shared memory — for example, during entry allocation, when the expansion is triggered by the Offset carried in the entry allocation request being larger than the size of the virtual memory space — both the shared memory and the virtual memory space need to be expanded, so that their sizes remain consistent.
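The RCU-style expansion described above can be sketched against the file-backed mapping layout assumed earlier: map the target shared memory again at the expanded size, switch the Handle to the new mapping, and leave the old mapping to deferred reclamation. All names are assumptions, and the immediate `munmap` stands in for the RCU grace-period/GC mechanism, which a real implementation would use instead.

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

struct meta { uint32_t size; };
struct item { uint32_t state; uint8_t user_data[64]; };  /* payload size assumed */

struct handle { void *base; uint32_t mapped_items; int fd; };

static size_t bytes_for(uint32_t n) {
    return sizeof(struct meta) + (size_t)n * sizeof(struct item);
}

/* RCU-style expansion: map the target shared memory in the target process at
   the expanded size, then modify the Handle to point at the new mapping.
   Readers that entered through the old mapping keep a valid view; the old
   mapping would be reclaimed by the RCU GC mechanism once they finish
   (here it is unmapped immediately for brevity). */
static int expand_mapping(struct handle *h, uint32_t new_items) {
    /* Grow the shared memory itself so its size stays consistent. */
    if (ftruncate(h->fd, (off_t)bytes_for(new_items)) < 0) return -1;
    void *nb = mmap(NULL, bytes_for(new_items), PROT_READ | PROT_WRITE,
                    MAP_SHARED, h->fd, 0);
    if (nb == MAP_FAILED) return -1;
    ((struct meta *)nb)->size = new_items;  /* record the new size in Meta */
    void  *old     = h->base;
    size_t old_len = bytes_for(h->mapped_items);
    h->base         = nb;        /* switch: subsequent accesses see the new space */
    h->mapped_items = new_items;
    munmap(old, old_len);        /* stand-in for deferred RCU reclamation */
    return 0;
}
```

Because the switch is a single pointer update on the Handle, readers never observe a half-expanded space: they see either the old mapping or the new one, which is the essential RCU property the patent relies on.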
In order to make those skilled in the art better understand the technical solutions provided by the embodiments of the present invention, the following respectively describes the entry reading, entry allocation, entry releasing, and virtual memory space expansion in the embodiments of the present invention with reference to the drawings.
Referring to fig. 3, a flow chart of an entry reading method according to an embodiment of the present invention is shown, and as shown in fig. 3, the entry reading method may include the following steps:
step 301, receiving an entry reading request, where the entry reading request carries a first Handle and a first Offset.
Step 302, determining a first virtual memory space of a corresponding first process according to the first Handle.
Step 303, determine whether the size of the first virtual memory space is consistent with the size of the first shared memory. If yes, go to step 305; otherwise, go to step 304.
In this embodiment, the size of the virtual memory space may be determined according to Local Info (Local information) recorded in the process, and the size of the shared memory may be determined according to Meta field recorded in the header of the virtual memory space.
Step 304, performing RCU-based expansion on the first virtual memory space according to the size of the first shared memory to obtain a second virtual memory space, and go to step 305.
It should be noted that, in this embodiment, if the expansion of the first virtual memory space fails, the read failure is directly returned to the entry read requester, for example, NULL is returned.
Step 305, query the corresponding Item according to the first Offset.
In this embodiment, when the size of the first virtual memory space is consistent with the size of the first shared memory, the corresponding Item in the first virtual memory space may be queried according to the first Offset.
When the size of the first virtual memory space is not consistent with the size of the first shared memory, the corresponding Item can be queried in the second virtual memory space according to the first Offset.
Step 306, determine whether the state of the Item is the allocated state. If yes, go to step 307; otherwise, return NULL to the entry read requester.
Step 307, the address information of the user data in the Item is returned to the Item reading requester.
In this embodiment, when the Item corresponding to the first Offset is determined, the state of the Item may be further read; if the state of the Item is the allocated state, the address information of the user data in the Item is returned to the entry read requester; if the state of the Item is the unallocated state, the data (if any) in the Item is determined to be invalid data, and NULL is returned to the entry read requester.
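The Fig. 3 read flow above can be condensed into a short sketch. This is an illustrative model only: the `[state, data]` item table, the state codes, and the `expand()` callback are stand-ins for the patent's virtual memory space, Item state field, and RCU-based remap.

```python
# Minimal sketch of the Fig. 3 entry-read flow (steps 303-307); the item table,
# state codes, and expand() callback are illustrative stand-ins, not a real API.
ALLOCATED, FREE = 1, 0

def read_entry(local_size, shm_size, items, offset, expand):
    """Return the user data at Offset, or None (NULL) on any failure."""
    if local_size != shm_size:            # step 303: sizes inconsistent
        local_size = expand(shm_size)     # step 304: RCU-based remap to shm size
        if local_size is None:
            return None                   # expansion failed -> read failure
    if offset >= len(items):
        return None
    state, user_data = items[offset]      # step 305: look up the Item
    return user_data if state == ALLOCATED else None   # steps 306-307

items = [(ALLOCATED, "route-entry-0"), (FREE, None)]
print(read_entry(2, 2, items, 0, lambda size: size))   # route-entry-0
print(read_entry(2, 2, items, 1, lambda size: size))   # None: Item not allocated
```

Note that a failed expansion and an unallocated Item both collapse to the same NULL result, exactly as the text describes.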
Referring to fig. 4, a flowchart of an entry allocation method according to an embodiment of the present invention is shown, and as shown in fig. 4, the entry allocation method may include the following steps:
step 401, an entry allocation request is received, where the entry allocation request carries a third Handle and a second Offset.
The second Offset may be referred to as Offset Expected, and may be all F's (i.e., every bit set to 1) or another value; when the second Offset is all F's, it indicates that any value is acceptable (i.e., the Offset of the Item to be allocated is not explicitly specified).
And step 402, determining a fourth virtual memory space of the corresponding third process according to the third Handle.
Step 403, determine whether the size of the fourth virtual memory space is consistent with the size of the third shared memory. If yes, go to step 405; otherwise, go to step 404.
Step 404, performing RCU expansion on the fourth virtual memory space according to the size of the third shared memory to obtain a fifth virtual memory space, and go to step 405.
Step 405, determine whether the second Offset exceeds the size of the third shared memory. If yes, go to step 406; otherwise, go to step 407.
Step 406, performing RCU-based expansion on the virtual memory space to obtain a sixth virtual memory space, and going to step 407; the size of the sixth virtual memory space is greater than or equal to the second Offset.
In this embodiment, when the second Offset is larger than the third shared memory size, the fourth virtual memory space (or the fifth virtual memory space) may be expanded based on the RCU to obtain a sixth virtual memory space with a size larger than or equal to the second Offset.
In this embodiment, assuming that the size of the virtual memory space is an integer multiple of 256 items, when the virtual memory is expanded, it is necessary to ensure that the size of the expanded virtual memory space is an integer multiple of 256 items.
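The rounding rule just described can be expressed as a one-line helper. The function name and the ceiling-division idiom are hypothetical; only the "multiple of 256 Items, at least Offset Expected" rule comes from the text.

```python
ITEMS_PER_UNIT = 256   # the description assumes sizes in whole multiples of 256 Items

def expanded_size(offset_expected):
    """Smallest multiple of 256 Items that is >= Offset Expected (per the text);
    hypothetical helper, not taken from the patent itself."""
    return -(-offset_expected // ITEMS_PER_UNIT) * ITEMS_PER_UNIT

print(expanded_size(1))     # 256
print(expanded_size(256))   # 256
print(expanded_size(300))   # 512
```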
Step 407, determine whether the second Offset is an arbitrary value. If yes, go to step 409; otherwise, go to step 408.
Step 408, query the corresponding Item according to the second Offset, and return the address information of the user data of the Item to the entry allocation requester.
In this embodiment, if the size of the fourth virtual memory space is consistent with the size of the third shared memory, the corresponding Item may be queried in the fourth virtual memory space according to the second Offset.
If the size of the fourth virtual memory space is smaller than the size of the third shared memory and the second Offset does not exceed the size of the third shared memory, the corresponding Item may be queried in the fifth virtual memory space according to the second Offset.
If the size of the fourth virtual memory space is smaller than the size of the third shared memory and the second Offset exceeds the size of the third shared memory, the corresponding Item may be queried in the sixth virtual memory space according to the second Offset.
Step 409, traverse the virtual memory space and determine whether a free Item exists. If not, go to step 410; otherwise, go to step 411.
In this embodiment, if the size of the fourth virtual memory space is consistent with the size of the third shared memory and the second Offset is an arbitrary value, it may be determined whether there is a free Item by traversing the fourth virtual memory space.
If the size of the fourth virtual memory space is smaller than the size of the third shared memory and the second Offset is an arbitrary value, the fifth virtual memory space may be traversed to determine whether there is a free Item.
Step 410, perform RCU-based expansion on the virtual memory space to obtain a seventh virtual memory space, select a free Item from the seventh virtual memory space, and return the address information of the user data of the Item to the entry allocation requester.
In this embodiment, if the size of the fourth virtual memory space is consistent with the size of the third shared memory, the fourth virtual memory space is expanded based on the RCU to obtain a seventh virtual memory space.
And if the size of the fourth virtual memory space is smaller than that of the third shared memory, performing RCU-based expansion on the fifth virtual memory space to obtain a seventh virtual memory space.
When the virtual memory space is expanded in step 410, the size of the expanded virtual memory space may be the size of the virtual memory space before the expansion plus 256 Items.
Step 411, selecting an idle Item, and returning the address information of the user data of the Item to the Item allocation requester.
It should be noted that, in this embodiment, in addition to the address information of the user data of the Item determined to be allocated, the Offset of the Item determined to be allocated also needs to be returned to the entry allocation requester; for example, the Offset of the allocated Item can be returned to the entry allocation requester through the Offset Real field.
When the second Offset is an arbitrary value, the value of Offset Real is the Offset of the selected idle Item (usually the idle Item with the smallest Offset). When the second Offset is not an arbitrary value, the value of Offset Real is the second Offset.
It should be noted that, in this embodiment, when the expansion of the virtual memory space is triggered by the fact that the second Offset exceeds the size of the third shared memory or no free Item exists in the virtual memory space, the virtual memory space needs to be expanded, so as to ensure that the size of the virtual memory space is consistent with the size of the corresponding shared memory.
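The Fig. 4 allocation flow can be sketched as below. This is a toy model: a grow-only Python list of `[state, data]` pairs stands in for the virtual memory space, appending FREE Items stands in for the RCU-based expansion, and `ANY_OFFSET` stands in for the all-F Offset Expected sentinel; none of these names come from the patent.

```python
# Sketch of the Fig. 4 allocation flow (steps 405-411) over a grow-only list
# standing in for the virtual memory space; growth models the RCU expansion.
ALLOCATED, FREE = 1, 0
ITEMS_PER_UNIT = 256
ANY_OFFSET = None   # stands in for the all-F "Offset Expected" sentinel

def allocate_entry(table, offset_expected):
    """Return Offset Real: the Offset of the Item actually allocated."""
    if offset_expected is ANY_OFFSET:                 # steps 407 -> 409
        for i, item in enumerate(table):
            if item[0] == FREE:                       # step 411: reuse a free Item
                item[0] = ALLOCATED
                return i
        base = len(table)                             # step 410: grow by 256 Items
        table.extend([FREE, None] for _ in range(ITEMS_PER_UNIT))
        table[base][0] = ALLOCATED
        return base
    while offset_expected >= len(table):              # steps 405-406: cover the Offset
        table.extend([FREE, None] for _ in range(ITEMS_PER_UNIT))
    table[offset_expected][0] = ALLOCATED             # step 408
    return offset_expected

table = [[ALLOCATED, "a"], [FREE, None]]
print(allocate_entry(table, ANY_OFFSET))   # 1: first free Item
print(allocate_entry(table, 300))          # 300: table grown to cover Offset 300
print(len(table))                          # 514 (2 + 2 * 256)
```

In both branches the table ends up at least as large as the returned Offset, mirroring the note that allocation-triggered expansion keeps the virtual memory space and shared memory sizes consistent.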
Referring to fig. 5, a flowchart of an entry releasing method according to an embodiment of the present invention is shown, and as shown in fig. 5, the entry releasing method may include the following steps:
step 501, receiving an entry release request, where the entry release request carries a fourth Handle and a third Offset.
Step 502, determining an eighth virtual memory space of the corresponding fourth process according to the fourth Handle.
Step 503, determine whether the size of the eighth virtual memory space is consistent with the size of the fourth shared memory. If yes, go to step 505; otherwise, go to step 504.
Step 504, performing RCU-based expansion on the eighth virtual memory space according to the size of the fourth shared memory to obtain a ninth virtual memory space, and go to step 505.
Step 505, determine whether the third Offset exceeds the size of the virtual memory space. If yes, return an already-released indication to the entry release requester; otherwise, go to step 506.
In this embodiment, if the size of the eighth virtual memory space is consistent with the size of the fourth shared memory, it is determined whether the third Offset exceeds the size of the eighth virtual memory space.
If the size of the eighth virtual memory space is not consistent with the size of the fourth shared memory, it is determined whether the third Offset exceeds the size of the ninth virtual memory space.
Step 506, query the corresponding Item according to the third Offset, and acquire the state of the Item. If the state is the allocated state, go to step 507; otherwise, return an already-released indication to the entry release requester.
Step 507, setting the state of the Item to be an unallocated state, and returning a release success to the Item release requester.
In this embodiment, if the size of the eighth virtual memory space is consistent with the size of the fourth shared memory, the corresponding Item is queried in the eighth virtual memory space according to the third Offset.
And if the size of the eighth virtual memory space is not consistent with the size of the fourth shared memory, querying the corresponding Item in the ninth virtual memory space according to the third Offset.
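The Fig. 5 release flow reduces to a few checks over the same kind of Item table. As before, the `[state, data]` list and the string return values are illustrative stand-ins, not the patent's actual interface.

```python
# Sketch of the Fig. 5 release flow (steps 505-507) over a [state, data] table;
# the "released"/"already-released" return values are illustrative stand-ins.
ALLOCATED, FREE = 1, 0

def release_entry(table, offset):
    if offset >= len(table):             # step 505: Offset beyond the space
        return "already-released"
    if table[offset][0] != ALLOCATED:    # step 506: Item not in allocated state
        return "already-released"
    table[offset][0] = FREE              # step 507: mark unallocated
    return "released"

table = [[ALLOCATED, "a"], [FREE, None]]
print(release_entry(table, 0))   # released
print(release_entry(table, 0))   # already-released (double release)
print(release_entry(table, 9))   # already-released (Offset out of range)
```

Marking the state rather than erasing the data is what lets a later read (step 306 of Fig. 3) treat any leftover bytes in the Item as invalid.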
Referring to fig. 6, a flowchart of a virtual memory space expansion method according to an embodiment of the present invention is shown, as shown in fig. 6, the virtual memory space expansion method may include the following steps:
Step 601, when it is determined that the virtual memory space needs to be expanded, determine whether the shared memory also needs to be expanded. If yes, go to step 602; otherwise, go to step 603.
In this embodiment, the virtual memory space expansion may include active expansion and passive expansion, wherein:
the active expansion may include expansion of the virtual memory space triggered, during entry allocation, by the Offset carried in the entry allocation request being larger than the size of the virtual memory space, or by the Offset carried in the entry allocation request being an arbitrary value while no free Item exists in the current virtual memory space;
passive expansion may include virtual memory space expansion triggered by a virtual memory space size not coinciding with a corresponding shared memory size.
In this embodiment, for active expansion, in addition to the RCU-based expansion of the virtual memory space, the shared memory corresponding to the virtual memory space also needs to be expanded; for passive expansion, the shared memory corresponding to the virtual memory space does not need to be expanded.
Step 602, expand the virtual memory space and the corresponding shared memory, and go to step 604.
In this embodiment, when the virtual memory space and the shared memory need to be expanded, the sizes of the expanded virtual memory space and the shared memory may be determined according to actual requirements, so as to ensure that the sizes of the expanded virtual memory space and the shared memory are consistent.
For example, for an expansion triggered by the Offset carried in the entry allocation request being larger than the size of the virtual memory space, it is necessary to ensure that the size of the expanded virtual memory space is greater than or equal to that Offset and is an integral multiple of 256 Items;
for an expansion triggered by the Offset carried in the entry allocation request being an arbitrary value while no free Item exists in the current virtual memory space, the size of the expanded virtual memory space may be the size of the virtual memory space before the expansion plus 256 Items.
Step 603, expanding the virtual memory space.
In this embodiment, for passive expansion, the virtual memory space may be expanded according to the size of the shared memory corresponding to the virtual memory space (recorded in the Meta field), so as to ensure that the size of the expanded virtual memory space is consistent with the size of the corresponding shared memory space.
It should be noted that, in the embodiment of the present invention, the expansion of the virtual memory space may be performed based on an RCU, and specific implementation of the expansion may be described in the foregoing method embodiment, which is not described herein again.
And step 604, recycling the virtual memory space before expansion based on the GC mechanism of the RCU.
In this embodiment, for the virtual memory space before the expansion, after the access to the virtual memory space before the expansion is finished, the virtual memory space before the expansion may be recovered based on a GC mechanism of the RCU, and specific implementation thereof is not described herein again.
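The publish-then-reclaim pattern of steps 602-604 can be modeled in a few lines. This is a toy in-process model: the `RcuSpace` class, its reader counter, and the retired list are stand-ins for the real mechanism, which remaps shared memory and relies on an RCU grace period rather than an explicit counter.

```python
# Toy model of steps 602-604: publish an expanded copy of the space and defer
# reclaiming the old one until in-flight readers finish (a stand-in for the
# RCU grace period / GC; a real implementation remaps shared memory instead).
import threading

class RcuSpace:
    def __init__(self, size):
        self._buf = bytearray(size)
        self._readers = 0
        self._retired = []            # pre-expansion spaces awaiting reclamation
        self._lock = threading.Lock()

    def read_enter(self):
        with self._lock:
            self._readers += 1
            return self._buf          # a reader keeps using the snapshot it got

    def read_exit(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:    # grace period over: step 604, reclaim
                self._retired.clear()

    def expand(self, new_size):
        with self._lock:
            new_buf = bytearray(new_size)
            new_buf[:len(self._buf)] = self._buf   # carry existing Items over
            if self._readers:
                self._retired.append(self._buf)    # defer freeing the old space
            self._buf = new_buf                    # publish the expanded space

space = RcuSpace(4)
snapshot = space.read_enter()   # a reader is mid-access
space.expand(8)                 # expansion while the reader is active
print(len(space._buf))          # 8
print(len(space._retired))      # 1: old space kept alive for the reader
space.read_exit()
print(len(space._retired))      # 0: reclaimed after the reader finished
```

The key property mirrored here is that expansion never invalidates a mapping a reader is still using; the old space is only recycled once all accesses to it have ended.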
As can be seen from the above description, in the technical solution provided in the embodiment of the present invention, fixed-length data indexed by integers is stored in a storage form combining shared memory and RCU, thereby supporting growth of the database; in addition, by adopting an array structure for data storage, data reading performance is improved compared with tree-structured data storage.
Referring to fig. 7, a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention is provided, where the data processing apparatus may be applied to the foregoing method embodiment, and as shown in fig. 7, the data processing apparatus may include:
a receiving unit 710, configured to receive an entry reading request, where the entry reading request carries a first Handle and a first Offset;
a determining unit 720, configured to determine, according to the first Handle, a first virtual memory space of a corresponding first process;
a determining unit 730, configured to determine whether a size of the first virtual memory space is consistent with a size of a first shared memory corresponding to the first virtual memory space;
an extension unit 740, configured to, when the size of the first virtual memory space is smaller than the size of a first shared memory corresponding to the first virtual memory space, perform extension based on a read copy update RCU on the first virtual memory space according to the size of the first shared memory to obtain a second virtual memory space;
a data processing unit 750, configured to query a corresponding first target entry from the second virtual memory space according to the first Offset, and return address information of user data of the first target entry to an entry reading requester.
In an optional embodiment, the receiving unit 710 is further configured to receive a shared memory mapping request, where the shared memory mapping request carries an identifier of a second process and an identifier of a second shared memory;
the data processing unit 750 is further configured to map a third virtual memory space in the second process according to the size of the second shared memory, and return a second Handle corresponding to the third virtual memory space to the shared memory mapping requester.
In an optional embodiment, the data processing unit 750 is specifically configured to, if the second shared memory does not exist, create a second shared memory according to a preset size, and map the third virtual memory space in the second process according to the preset size; and if the second shared memory exists, mapping the third virtual memory space in the second process according to the size of the second shared memory.
In an optional embodiment, the receiving unit 710 is further configured to receive an entry allocation request, where the entry allocation request carries a third Handle and a second Offset;
the determining unit 720 is further configured to determine a fourth virtual memory space of a corresponding third process according to the third Handle;
the determining unit 730 is further configured to determine whether the size of the fourth virtual memory space is consistent with the size of a third shared memory corresponding to the fourth virtual memory space;
the expansion unit 740 is further configured to perform RCU-based expansion on the fourth virtual memory space according to the size of the third shared memory to obtain a fifth virtual memory space;
the data processing unit 750 is further configured to query a corresponding second target entry from the fifth virtual memory space according to the second Offset, and return address information of user data of the second target entry to an entry allocation requester.
In an optional embodiment, the determining unit 730 is further configured to determine whether the second Offset exceeds the size of the third shared memory;
the expansion unit 740 is further configured to perform RCU-based expansion on the fifth virtual memory space to obtain a sixth virtual memory space if the second Offset exceeds the size of the third shared memory; wherein the size of the sixth virtual memory space is greater than or equal to the second Offset;
the data processing unit 750 is further configured to query a corresponding second target entry from the sixth virtual memory space according to the second Offset.
In an optional embodiment, the determining unit 730 is further configured to determine whether the second Offset is an arbitrary value if the second Offset does not exceed the size of the third shared memory;
the data processing unit 750 is further configured to traverse the fifth virtual memory space, select an idle entry, and determine the entry as the second target entry if the second Offset is an arbitrary value;
the data processing unit 750 is further configured to query a corresponding second target entry from the fifth virtual memory space according to the second Offset if the second Offset is not an arbitrary value.
In an optional embodiment, the data processing unit 750 is specifically configured to traverse the fifth virtual memory space to determine whether there is a free entry; if yes, selecting one item from the free items;
the expansion unit 740 is further configured to, if there is no free entry, perform RCU-based expansion on the fifth virtual memory space to obtain a seventh virtual memory space;
the data processing unit 750 is further configured to select a free entry from the seventh virtual memory space.
In an optional embodiment, the receiving unit 710 is further configured to receive an entry release request, where the entry release request carries a fourth Handle and a third Offset;
the determining unit 720 is further configured to determine, according to the fourth Handle, an eighth virtual memory space of the corresponding fourth process;
the determining unit 730 is further configured to determine whether the size of the eighth virtual memory space is consistent with the size of the fourth shared memory corresponding to the eighth virtual memory space;
the extension unit 740 is further configured to, when the size of the eighth virtual memory space is smaller than the size of a fourth shared memory corresponding to the eighth virtual memory space, perform RCU-based extension on the eighth virtual memory space according to the size of the fourth shared memory to obtain a ninth virtual memory space;
the data processing unit 750 is further configured to, when the third Offset is smaller than the size of the ninth virtual memory space, query a corresponding third target entry from the ninth virtual memory space according to the third Offset, and mark the third target entry as unallocated.
In an optional embodiment, the extension unit 740 is specifically configured to map the target shared memory in the target process according to the extended size to obtain an extended virtual memory space; the target process is a process to which the virtual memory space belongs, and the target shared memory is a shared memory corresponding to the virtual memory space;
the data processing unit 750 is further configured to modify the Handle corresponding to the virtual memory space to point to the extended virtual memory space.
Fig. 8 is a schematic diagram of a hardware structure of a data processing apparatus according to an embodiment of the present invention. The data processing apparatus may include a processor 801, a machine-readable storage medium 802 storing machine-executable instructions. The processor 801 and the machine-readable storage medium 802 may communicate via a system bus 803. Also, the processor 801 may perform the data processing methods described above by reading and executing machine-executable instructions in the machine-readable storage medium 802 corresponding to the data processing logic.
The machine-readable storage medium 802 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be volatile memory, non-volatile memory, or a similar storage medium. In particular, the machine-readable storage medium 802 may be RAM (Random Access Memory), flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., a compact disc, a DVD, etc.), a similar storage medium, or a combination thereof.
Embodiments of the present invention also provide a machine-readable storage medium, such as machine-readable storage medium 802 in fig. 8, comprising machine-executable instructions that are executable by processor 801 in a data processing apparatus to implement the data processing method described above.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
It can be seen from the above embodiments that the fixed-length data indexed by integers is stored in a storage form combining a shared memory and an RCU, thereby realizing support for growth of a database; in addition, by adopting the array structure for data storage, compared with tree structure data storage, the data reading performance is improved.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (18)

1. A data processing method is characterized in that the method is applied to a data storage system based on a shared memory, wherein an array structure is adopted in the shared memory to store fixed-length data taking an integer as an index, and the method comprises the following steps:
receiving an entry reading request, wherein the entry reading request carries a first Handle and a first Offset;
determining a first virtual memory space of a corresponding first process according to the first Handle;
when the size of the first virtual memory space is smaller than that of a first shared memory corresponding to the first virtual memory space, performing read-copy update (RCU) -based expansion on the first virtual memory space according to the size of the first shared memory to obtain a second virtual memory space;
and inquiring a corresponding first target item from the second virtual memory space according to the first Offset, and returning the address information of the user data of the first target item to an item reading requester.
2. The method of claim 1, further comprising:
receiving a shared memory mapping request, wherein the shared memory mapping request carries an identifier of a second process and an identifier of a second shared memory;
and mapping a third virtual memory space in the second process according to the size of the second shared memory, and returning a second Handle corresponding to the third virtual memory space to a shared memory mapping requester.
3. The method of claim 2, wherein mapping a third virtual memory space in the second process according to the size of the second shared memory comprises:
if the second shared memory does not exist, creating a second shared memory according to a preset size, and mapping the third virtual memory space in the second process according to the preset size;
and if the second shared memory exists, mapping the third virtual memory space in the second process according to the size of the second shared memory.
4. The method of claim 1, further comprising:
receiving an item allocation request, wherein the item allocation request carries a third Handle and a second Offset;
determining a fourth virtual memory space of a corresponding third process according to the third Handle;
when the size of the fourth virtual memory space is smaller than the size of a third shared memory corresponding to the fourth virtual memory space, performing RCU-based expansion on the fourth virtual memory space according to the size of the third shared memory to obtain a fifth virtual memory space;
and querying a corresponding second target item from the fifth virtual memory space according to the second Offset, and returning the address information of the user data of the second target item to an item allocation requester.
5. The method as claimed in claim 4, wherein said querying a corresponding second target entry from the fifth virtual memory space according to the second Offset comprises:
determining whether the second Offset exceeds the size of the third shared memory;
if so, performing RCU-based expansion on the fifth virtual memory space to obtain a sixth virtual memory space; wherein the size of the sixth virtual memory space is greater than or equal to the second Offset;
and querying a corresponding second target item from the sixth virtual memory space according to the second Offset.
6. The method as claimed in claim 5, wherein said determining whether the second Offset exceeds the size of the third shared memory further comprises:
if not, judging whether the second Offset is an arbitrary value;
if the second Offset is an arbitrary value, traversing the fifth virtual memory space, selecting an idle item, and determining the item as the second target item;
and if the second Offset is not an arbitrary value, querying a corresponding second target item from the fifth virtual memory space according to the second Offset.
7. The method of claim 6, wherein traversing the fifth virtual memory space to select a free entry comprises:
traversing the fifth virtual memory space to determine whether there is a free entry;
if yes, selecting one item from the free items;
and if the virtual memory space does not exist, expanding the fifth virtual memory space based on the RCU to obtain a seventh virtual memory space, and selecting an idle entry from the seventh virtual memory space.
8. The method of claim 1, further comprising:
receiving an item release request, wherein the item release request carries a fourth Handle and a third Offset;
determining an eighth virtual memory space of a corresponding fourth process according to the fourth Handle;
when the size of the eighth virtual memory space is smaller than the size of a fourth shared memory corresponding to the eighth virtual memory space, performing RCU-based expansion on the eighth virtual memory space according to the size of the fourth shared memory to obtain a ninth virtual memory space;
and when the third Offset is smaller than the size of the ninth virtual memory space, querying a corresponding third target entry from the ninth virtual memory space according to the third Offset, and marking the third target entry as unallocated.
9. The method of any one of claims 1 to 8, wherein performing an RCU-based extension of the virtual memory space comprises:
mapping the target shared memory in the target process according to the expanded size to obtain an expanded virtual memory space; the target process is a process to which the virtual memory space before expansion belongs, and the target shared memory is a shared memory corresponding to the virtual memory space before expansion;
and modifying the Handle corresponding to the virtual memory space before the expansion into the virtual memory space after the expansion.
10. A data processing device is applied to a data storage system based on a shared memory, wherein the fixed-length data using an integer as an index is stored in the shared memory by adopting an array structure, and the device comprises:
the device comprises a receiving unit, a processing unit and a processing unit, wherein the receiving unit is used for receiving an entry reading request, and the entry reading request carries a first Handle and a first Offset;
a determining unit, configured to determine, according to the first Handle, a first virtual memory space of a corresponding first process;
a determining unit, configured to determine whether a size of the first virtual memory space is consistent with a size of a first shared memory corresponding to the first virtual memory space;
an extension unit, configured to, when the size of the first virtual memory space is smaller than the size of a first shared memory corresponding to the first virtual memory space, perform extension based on a read copy update RCU on the first virtual memory space according to the size of the first shared memory to obtain a second virtual memory space;
and the data processing unit is used for inquiring a corresponding first target item from the second virtual memory space according to the first Offset and returning the address information of the user data of the first target item to an item reading requester.
11. The apparatus of claim 10,
the receiving unit is further configured to receive a shared memory mapping request, where the shared memory mapping request carries an identifier of a second process and an identifier of a second shared memory;
and the data processing unit is further configured to map a third virtual memory space in the second process according to the size of the second shared memory, and return a second Handle corresponding to the third virtual memory space to the shared memory mapping requester.
12. The apparatus of claim 11,
the data processing unit is specifically configured to create a second shared memory according to a preset size if the second shared memory does not exist, and map the third virtual memory space in the second process according to the preset size; and if the second shared memory exists, mapping the third virtual memory space in the second process according to the size of the second shared memory.
13. The apparatus of claim 10,
the receiving unit is further configured to receive an entry allocation request, where the entry allocation request carries a third Handle and a second Offset;
the determining unit is further configured to determine a fourth virtual memory space of a corresponding third process according to the third Handle;
the determining unit is further configured to determine whether the size of the fourth virtual memory space is consistent with the size of a third shared memory corresponding to the fourth virtual memory space;
the expansion unit is further configured to perform RCU-based expansion on the fourth virtual memory space according to the size of the third shared memory to obtain a fifth virtual memory space;
the data processing unit is further configured to query a corresponding second target entry from the fifth virtual memory space according to the second Offset, and return address information of user data of the second target entry to an entry allocation requester.
14. The apparatus of claim 13,
the determining unit is further configured to determine whether the second Offset exceeds the size of the third shared memory;
the expansion unit is further configured to perform RCU-based expansion on the fifth virtual memory space to obtain a sixth virtual memory space if the second Offset exceeds the size of the third shared memory; wherein the size of the sixth virtual memory space is greater than or equal to the second Offset;
the data processing unit is further configured to query a corresponding second target entry from the sixth virtual memory space according to the second Offset.
15. The apparatus of claim 14,
the determining unit is further configured to determine whether the second Offset is an arbitrary value if the second Offset does not exceed the size of the third shared memory;
the data processing unit is further configured to traverse the fifth virtual memory space, select an idle entry, and determine the entry as the second target entry if the second Offset is an arbitrary value;
the data processing unit is further configured to query a corresponding second target entry from the fifth virtual memory space according to the second Offset if the second Offset is not an arbitrary value.
16. The apparatus of claim 15,
the data processing unit is specifically configured to traverse the fifth virtual memory space to determine whether there is a free entry; if yes, select one entry from the free entries;
the expansion unit is further configured to perform RCU-based expansion on the fifth virtual memory space if there is no free entry, so as to obtain a seventh virtual memory space;
the data processing unit is further configured to select an idle entry from the seventh virtual memory space.
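Claims 13–16 together form an allocation decision tree: sync the view with the segment size, expand if the requested Offset is out of range, treat an "arbitrary value" Offset as "find any free entry", and expand again when no free entry exists. An illustrative Python sketch of that tree (the `ANY_OFFSET` sentinel and the doubling growth policy are assumptions; a free entry is modeled as `None`):

```python
ANY_OFFSET = -1  # stands in for the claim's "arbitrary value" Offset

def allocate_entry(segment, offset):
    """Return the index of the allocated entry in `segment`."""
    if offset == ANY_OFFSET:
        # Wildcard Offset: traverse the view looking for a free entry.
        for i, e in enumerate(segment):
            if e is None:
                segment[i] = "allocated"
                return i
        # No free entry: expand (RCU-style), then allocate from the
        # newly added region. The doubling policy is an assumption.
        i = len(segment)
        segment.extend([None] * len(segment))
        segment[i] = "allocated"
        return i
    # Explicit Offset beyond the segment: expand to cover it first.
    if offset >= len(segment):
        segment.extend([None] * (offset + 1 - len(segment)))
    segment[offset] = "allocated"
    return offset

seg = [None, "allocated", None]
print(allocate_entry(seg, ANY_OFFSET))  # picks 0, the first free entry
print(allocate_entry(seg, 5))           # out of range: segment grows first
```

Either branch returns the entry whose user-data address would be handed back to the allocation requester.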
17. The apparatus of claim 10,
the receiving unit is further configured to receive an entry release request, where the entry release request carries a fourth Handle and a third Offset;
the determining unit is further configured to determine, according to the fourth Handle, an eighth virtual memory space of a corresponding fourth process;
the determining unit is further configured to determine whether the size of the eighth virtual memory space is consistent with the size of a fourth shared memory corresponding to the eighth virtual memory space;
the expansion unit is further configured to, when the size of the eighth virtual memory space is smaller than the size of a fourth shared memory corresponding to the eighth virtual memory space, perform RCU-based expansion on the eighth virtual memory space according to the size of the fourth shared memory to obtain a ninth virtual memory space;
and the data processing unit is further configured to, when the third Offset is smaller than the size of the ninth virtual memory space, query a corresponding third target entry from the ninth virtual memory space according to the third Offset, and mark the third target entry as unallocated.
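The release path of claim 17 mirrors the read path: resolve the Handle, sync a stale view with the segment's current size, then, only if the Offset falls inside the (possibly expanded) view, mark the target entry as unallocated. A minimal sketch (the view dictionary and names are illustrative):

```python
def release_entry(view, offset):
    """view: {"segment": list, "mapped_size": int}; None = unallocated."""
    segment = view["segment"]
    # Refresh a stale view first (the claim's RCU-based expansion), so the
    # Offset bounds check below uses the up-to-date size.
    if view["mapped_size"] < len(segment):
        view["mapped_size"] = len(segment)
    # Release only when the Offset is smaller than the expanded view's size.
    if offset < view["mapped_size"]:
        segment[offset] = None  # mark the target entry as unallocated
        return True
    return False

view = {"segment": ["a", "b", "c"], "mapped_size": 2}  # one entry stale
print(release_entry(view, 2))  # view refreshed, entry 2 freed
```

Marking the entry rather than compacting the segment keeps other processes' Offsets stable.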
18. The apparatus of any one of claims 10-17,
the expansion unit is specifically configured to map the target shared memory in the target process according to the expanded size to obtain an expanded virtual memory space; the target process is a process to which the virtual memory space before expansion belongs, and the target shared memory is a shared memory corresponding to the virtual memory space before expansion;
and the data processing unit is further configured to modify the Handle so that it corresponds to the expanded virtual memory space instead of the pre-expansion virtual memory space.
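Claim 18's expansion is essentially an RCU-style pointer swap: the target shared memory is mapped again at the larger size alongside the existing mapping, and the Handle is then repointed at the new mapping in a single publish step, so in-flight readers still holding the old mapping are never invalidated mid-read. An illustrative sketch (the handle table and names are assumptions, not the patent's structures):

```python
handle_table = {}  # Handle -> currently published virtual memory view

def expand(handle, new_size):
    old_view = handle_table[handle]
    # Map the shared memory again at the larger size; the old mapping
    # remains valid for readers that already dereferenced the Handle.
    new_view = old_view + [None] * (new_size - len(old_view))
    # Single publish step: the Handle now resolves to the new view.
    handle_table[handle] = new_view
    return new_view

handle_table[7] = ["x", "y"]
reader_view = handle_table[7]   # an in-flight reader grabbed the old view
expand(7, 4)
print(len(handle_table[7]))     # new readers see the expanded view
print(reader_view)              # the old view is untouched
```

This is the property the claims rely on: expansion changes what the Handle resolves to, never the memory a concurrent reader is already traversing.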
CN201910290393.7A 2019-04-11 2019-04-11 Data processing method and device Active CN110096453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910290393.7A CN110096453B (en) 2019-04-11 2019-04-11 Data processing method and device


Publications (2)

Publication Number Publication Date
CN110096453A CN110096453A (en) 2019-08-06
CN110096453B true CN110096453B (en) 2020-01-03

Family

ID=67444716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910290393.7A Active CN110096453B (en) 2019-04-11 2019-04-11 Data processing method and device

Country Status (1)

Country Link
CN (1) CN110096453B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101030135A (en) * 2006-02-28 2007-09-05 华为技术有限公司 Method and device for storing C++ object in shared memory
US7620766B1 (en) * 2001-05-22 2009-11-17 Vmware, Inc. Transparent sharing of memory pages using content comparison
CN104392171A (en) * 2014-11-27 2015-03-04 南京大学 Automatic memory evidence analyzing method based on data association
CN106096407A (en) * 2016-05-31 2016-11-09 华中科技大学 The defence method that a kind of code reuse is attacked
CN106201349A (en) * 2015-12-31 2016-12-07 华为技术有限公司 A kind of method and apparatus processing read/write requests in physical host
CN109032533A (en) * 2018-08-29 2018-12-18 新华三技术有限公司 A kind of date storage method, device and equipment
CN109298935A (en) * 2018-09-06 2019-02-01 华泰证券股份有限公司 A kind of method and application of the multi-process single-write and multiple-read without lock shared drive
CN109343979A (en) * 2018-09-28 2019-02-15 珠海沙盒网络科技有限公司 A kind of configuring management method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473181B (en) * 2007-01-26 2017-06-13 英特尔公司 Hierarchical immutable content-addressable memory processor
CN105975407B (en) * 2016-03-22 2020-10-09 华为技术有限公司 Memory address mapping method and device
CN109325023B (en) * 2018-07-20 2021-02-26 新华三技术有限公司 Data processing method and device



Similar Documents

Publication Publication Date Title
JP6767115B2 (en) Safety device for volume operation
CN108733761B (en) Data processing method, device and system
CN106294190B (en) Storage space management method and device
KR101620773B1 (en) Data migration for composite non-volatile storage device
JP2017182803A (en) Memory deduplication method and deduplication DRAM memory module
JP2015515047A5 (en)
JP2017188096A (en) Deduplication memory module and memory deduplication method therefor
CN105100146A (en) Data storage method, device and system
US20100228914A1 (en) Data caching system and method for implementing large capacity cache
CN110109873B (en) File management method for message queue
CN109144413A (en) A kind of metadata management method and device
JP2017188094A (en) Memory module memory deduplication method, and memory module therefor
US20230306010A1 (en) Optimizing Storage System Performance Using Data Characteristics
CN110555001A (en) data processing method, device, terminal and medium
CN115269450A (en) Memory cooperative management system and method
CN108304144B (en) Data writing-in and reading method and system, and data reading-writing system
US11687489B2 (en) Method and system for identifying garbage data, electronic device, and storage medium
CN113553306A (en) Data processing method and data storage management system
CN110096453B (en) Data processing method and device
CN107102900B (en) Management method of shared memory space
EP3477463A1 (en) Method and system for using wear-leveling using a multi-gap progress field
CN110795031A (en) Data deduplication method, device and system based on full flash storage
US9063656B2 (en) System and methods for digest-based storage
US11875152B2 (en) Methods and systems for optimizing file system usage
CN107168645B (en) Storage control method and system of distributed system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 101, 1st floor, No. 1 Building, No. 8 Courtyard, Yongjiabei Road, Haidian District, Beijing 100094

Applicant after: Beijing Huasan Communication Technology Co., Ltd.

Address before: Room 119, 1st floor, Building 2, Pioneer Road, Haidian District, Beijing 100085

Applicant before: Beijing Huasan Communication Technology Co., Ltd.

GR01 Patent grant