CN108121813A - Data management method, device, system, storage medium and electronic equipment - Google Patents
Data management method, device, system, storage medium and electronic equipment
- Publication number
- CN108121813A CN201711449172.7A CN201711449172A
- Authority
- CN
- China
- Prior art keywords
- free memory
- free
- dynamic block
- space
- dynamic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/1009—Address translation using page tables, e.g. page table structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/109—Address translation for multiple virtual address spaces, e.g. segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/22—Indexing; Data structures therefor; Storage structures
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
This disclosure relates to a data management method, device, storage medium and electronic equipment. The method may include: a storage engine of a database maps, in advance, the database file of the database to a mapping space area of the database process virtual address space; the mapping space area consists of dynamic blocks; free memory segments of some or all of the dynamic blocks exist as linked-list elements in two or more linked lists; different linked lists store free memory segments of different memory-size ranges, and the linked lists are indexed by an array. In response to receiving an allocation request for free memory, the storage engine searches, according to the index formed by the array, the two or more linked lists for a linked-list element whose free memory segment is large enough for the allocation request, allocates the free memory segment corresponding to the found linked-list element to the allocation request, and deletes the linked-list element from the linked list it belongs to. It can be seen that the data management method provided by the disclosure can process a large amount of small data quickly and non-volatilely.
Description
Technical field
This disclosure relates to the field of computers, and in particular, to a data management method, device, storage medium and electronic equipment.
Background art
In the prior art, in order that a database does not lose data, the file system of a hardware memory such as a mechanical hard disk or a solid state disk is generally used to store the database file of the database. Since the file-system hierarchy of such hardware memory is complex, the database only manages the organizational form of the data, for example, it establishes the relations between data tables and the relations between rows and columns within a table, while the storage mechanism of the data in the file system of such hardware memory, such as the allocation of storage space, is completed entirely inside the operating system. Limited by the complex hierarchy of the file system of such hardware memory, the storage mechanism that matches this data management mode in the operating system is only good at handling large blocks of continuous data.
However, in some systems, such as automotive electronic and electrical systems, there is a large amount of random small data. Most of these data are byte-level small data of only a few bytes, and the access to these data is discontinuous and unordered. Since the storage mechanism of the prior-art data management mode is only good at handling large blocks of continuous data, it becomes significantly slower for byte-level, random data, so that a large amount of random small data cannot be processed quickly.
Summary of the invention
The purpose of the disclosure is to provide a data management method, device, storage medium and electronic equipment, so as to achieve the purpose of quickly processing a large amount of random small data.
In a first aspect of the embodiments of the present disclosure, a data management method is provided. The method is applied to a storage engine of a database and includes: mapping, in advance, the database file of the database to a mapping space area of the database process virtual address space, where the database file is located in a file system of a non-volatile random access memory, the mapping space area consists of several dynamic blocks, each dynamic block includes one or more pages, and the page is the minimum allocatable unit; free memory segments of some or all of the dynamic blocks exist as linked-list elements in two or more linked lists, one free memory segment corresponding to one linked-list element; different linked lists store free memory segments whose memory sizes fall in different range intervals, and the different linked lists are indexed by several array elements of an array; in response to receiving an allocation request for free memory, searching, according to the index formed by the array, the two or more linked lists for a linked-list element whose free memory segment is large enough for the allocation request; and allocating the free memory segment corresponding to the found linked-list element to the allocation request and deleting the linked-list element from the linked list it belongs to.
Optionally, the method further includes: in response to the existence of a free memory segment to be released, inserting, according to the index formed by the array, the free memory segment to be released as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment to be released.
Optionally, the method further includes: if no linked-list element whose free memory segment is large enough for the allocation request is found in the two or more linked lists, searching, among the dynamic blocks in the mapping space area that have not yet joined any linked list, for a dynamic block containing enough pages for the allocation request, allocating free space as required by the allocation request, and allocating a correspondingly sized number of pages of the found dynamic block to the allocation request; and if the found dynamic block still has remaining free pages after the allocation, taking the remaining free pages as the free memory segment to be released and entering the step of inserting, according to the index formed by the array, the free memory segment to be released as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment to be released.
Optionally, all dynamic blocks in the mapping space area are linked in sequence. Inserting, according to the index formed by the array, the free memory segment to be released as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment to be released includes: judging whether the size of the free memory segment to be released is an integral multiple of the page; if it is not an integral multiple, splitting the dynamic block where the free memory segment to be released is located into a first dynamic block whose free memory segment occupies part of one page and a second dynamic block whose free memory segment occupies an integral multiple of pages, where the second dynamic block is located after the first dynamic block and is connected with the next dynamic block of the dynamic block that was split; inserting, according to the index formed by the array, the free memory segment of the first dynamic block as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of the first dynamic block; if it is an integral multiple, regarding the free memory segment to be released as the second dynamic block; judging whether the free memory section of the second dynamic block and the subsequent dynamic block connected to it form a continuous free space; if no continuous free space is formed, inserting, according to the index formed by the array, the free memory segment of the second dynamic block as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of the second dynamic block; if a continuous free space is formed, merging the second dynamic block with the subsequent dynamic block that forms the continuous free space into a third dynamic block; judging whether the third dynamic block can be split into multiple dynamic blocks that match the memory ranges of different linked lists; if it can be split, splitting the third dynamic block into multiple dynamic blocks that match the memory ranges of different linked lists, and inserting, according to the index formed by the array, the free memory segment of each dynamic block split out as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of that dynamic block; and if it cannot be split, inserting, according to the index formed by the array, the free memory segment of the third dynamic block as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of the third dynamic block.
Optionally, one array element of the array points to one linked list, the N-th array element of the array points to the N-th linked list, and each linked-list element of the N-th linked list is used to store a free memory segment whose memory size is greater than or equal to 2^N and less than 2^(N+1).
Optionally, in the database process virtual address space, a head space reserved before the mapping space area is used to store data-storage metadata describing the storage structure and the allocation status of the mapping space area, and the head space has a fixed size; a tail space of the database process virtual address space is used to store a log, the log describes data context information related to the operations of the storage engine, the log is updated correspondingly as the storage engine operates, and the tail space grows correspondingly as the log grows.
Optionally, the method further includes: when the system where the database is located reboots, recovering the space allocation according to the data-storage metadata stored in the head space of the database process virtual address space, and rebuilding data according to the data context information contained in the log stored in the tail space.
In a second aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the steps of the method of any embodiment of the above first aspect are implemented.
In a third aspect of the embodiments of the present disclosure, an electronic device is provided, including: the computer-readable storage medium of the above second aspect; and one or more processors for executing the program in the computer-readable storage medium.
In a fourth aspect of the embodiments of the present disclosure, a data management system is provided. The data management system is intended to be embedded in a car and includes: the electronic device of the third aspect of the disclosure and a non-volatile random access memory, where a database file is saved in the file system of the non-volatile random access memory, the database file is the database file of the database running on the electronic device of the third aspect of the disclosure, and the database is used to manage data coming from the electrical system of the car.
In a fifth aspect of the embodiments of the present disclosure, a data management device is provided. The device is configured in a storage engine of a database and includes: a preset module, configured to map, in advance, the database file of the database to a mapping space area of the database process virtual address space, where the database file is located in a file system of a non-volatile random access memory, the mapping space area consists of several dynamic blocks, each dynamic block includes one or more pages, the page is the minimum allocatable unit, free memory segments of some or all of the dynamic blocks exist as linked-list elements in two or more linked lists, one free memory segment corresponds to one linked-list element, different linked lists store free memory segments whose memory sizes fall in different range intervals, and the different linked lists are indexed by several array elements of an array; an allocation-space searching module, configured to, in response to receiving an allocation request for free memory, search, according to the index formed by the array, the two or more linked lists for a linked-list element whose free memory segment is large enough for the allocation request; and an allocation execution module, configured to allocate the free memory segment corresponding to the found linked-list element to the allocation request and delete the linked-list element from the linked list it belongs to.
Optionally, the device further includes: a recycling module, configured to, in response to the existence of a free memory segment to be released, insert, according to the index formed by the array, the free memory segment to be released as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment to be released.
Optionally, the device further includes: a new-block allocation module, configured to, if no linked-list element whose free memory segment is large enough for the allocation request is found in the two or more linked lists, search, among the dynamic blocks in the mapping space area that have not yet joined any linked list, for a dynamic block containing enough pages for the allocation request, allocate free space as required by the allocation request, and allocate a correspondingly sized number of pages of the found dynamic block to the allocation request; and a remaining-page recycling module, configured to, if the found dynamic block still has remaining free pages after the allocation, take the remaining free pages as the free memory segment to be released and trigger the recycling module to enter the step of inserting, according to the index formed by the array, the free memory segment to be released as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment to be released.
Optionally, all dynamic blocks in the mapping space area are linked in sequence, and the recycling module includes: a multiple-judging submodule, configured to judge whether the size of the free memory segment to be released is an integral multiple of the page; a block-splitting submodule, configured to, if the multiple-judging submodule judges that it is not an integral multiple, split the dynamic block where the free memory segment to be released is located into a first dynamic block whose free memory segment occupies part of one page and a second dynamic block whose free memory segment occupies an integral multiple of pages, where the second dynamic block is located after the first dynamic block and is connected with the next dynamic block of the dynamic block that was split; a first recycling submodule, configured to insert, according to the index formed by the array, the free memory segment of the first dynamic block as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of the first dynamic block; a second-block processing submodule, configured to, if the multiple-judging submodule judges that it is an integral multiple, regard the free memory segment to be released as the second dynamic block; a continuous-space judging submodule, configured to judge whether the free memory section of the second dynamic block and the subsequent dynamic block connected to it form a continuous free space; a second recycling submodule, configured to, if the continuous-space judging submodule judges that no continuous free space is formed, insert, according to the index formed by the array, the free memory segment of the second dynamic block as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of the second dynamic block; a merging submodule, configured to, if the continuous-space judging submodule judges that a continuous free space is formed, merge the second dynamic block with the subsequent dynamic block that forms the continuous free space into a third dynamic block; a splitting judging submodule, configured to judge whether the third dynamic block can be split into multiple dynamic blocks that match the memory ranges of different linked lists; a third recycling submodule, configured to, if the splitting judging submodule judges that it can be split, split the third dynamic block into multiple dynamic blocks that match the memory ranges of different linked lists and insert, according to the index formed by the array, the free memory segment of each dynamic block split out as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of that dynamic block; and a fourth recycling submodule, configured to, if the splitting judging submodule judges that it cannot be split, insert, according to the index formed by the array, the free memory segment of the third dynamic block as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of the third dynamic block.
Optionally, one array element of the array points to one linked list, the N-th array element of the array points to the N-th linked list, and each linked-list element of the N-th linked list is used to store a free memory segment whose memory size is greater than or equal to 2^N and less than 2^(N+1).
Optionally, in the database process virtual address space, a head space reserved before the mapping space area is used to store data-storage metadata describing the storage structure and the allocation status of the mapping space area, and the head space has a fixed size; a tail space of the database process virtual address space is used to store a log, the log describes data context information related to the operations of the storage engine, the log is updated correspondingly as the storage engine operates, and the tail space grows correspondingly as the log grows. The device further includes: a database recovery module, configured to, when the system where the database is located reboots, recover the space allocation according to the data-storage metadata stored in the head space of the database process virtual address space and rebuild data according to the data context information contained in the log stored in the tail space.
Through the above technical solution, since the database file of the disclosure is located in the file system of a non-volatile random access memory, which can realize quick access to a large amount of small data, and since the storage engine of the database proposed by the disclosure, designed for this file system, maps the database file to the mapping space area of the database process virtual address space, what the storage engine actually accesses when accessing the virtual address space is the file system of the non-volatile random access memory, and this is completed directly without an additional data copy. Moreover, the mapping space area consists of dynamic blocks, the free memory segments of some or all of the dynamic blocks exist as linked-list elements in two or more linked lists, different linked lists store free memory segments of different memory-size ranges, and the different linked lists are indexed by an array. The free space is therefore organized by linked lists of different memory-size ranges and the linked-list elements in them, and indexed by the array, which makes quick lookup by the storage engine convenient. Thus, in response to receiving an allocation request for free memory space, the storage engine can, according to the index formed by the array, quickly find in the linked lists a linked-list element whose free memory segment is large enough for the allocation request and allocate the free memory segment. It can be seen that the data management method provided by the disclosure can process a large amount of small data quickly and non-volatilely.
Other features and advantages of the disclosure will be described in detail in the following detailed description.
Description of the drawings
The accompanying drawings are provided for a further understanding of the disclosure and constitute a part of the specification. Together with the following detailed description, they serve to explain the disclosure, but do not constitute a limitation of the disclosure. In the drawings:
Fig. 1 is a schematic diagram of a vehicle-mounted embedded environment according to an exemplary embodiment of the disclosure.
Fig. 2 is a flowchart of a data management method according to an exemplary embodiment of the disclosure.
Fig. 3 is a schematic diagram of a database process virtual address space according to an exemplary embodiment of the disclosure.
Fig. 4 is a schematic diagram of the array and the linked lists according to an exemplary embodiment of the disclosure.
Fig. 5 is a flowchart of a data management method according to another exemplary embodiment of the disclosure.
Fig. 6 is a flowchart of a data management method according to another exemplary embodiment of the disclosure.
Fig. 7 is a flowchart of a data management method according to yet another exemplary embodiment of the disclosure.
Fig. 8 is a schematic diagram of dynamic block splitting and merging according to an exemplary embodiment of the disclosure.
Fig. 9 is a block diagram of an electronic device according to an exemplary embodiment of the disclosure.
Fig. 10 is a block diagram of a data management system according to an exemplary embodiment of the disclosure.
Fig. 11 is a block diagram of a data management device according to an exemplary embodiment of the disclosure.
Fig. 12 is a block diagram of a data management device according to another exemplary embodiment of the disclosure.
Specific embodiment
Specific embodiments of the disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to describe and explain the disclosure and are not intended to limit the disclosure.
Before introducing the data management method, device, system, storage medium and electronic equipment provided by the disclosure, a possible application scenario of the data management method is briefly introduced first, namely the vehicle-mounted embedded environment shown in Fig. 1. A vehicle application program 110 accesses a database 120. The database 120 may include, but is not limited to, various key-value databases. A data management engine 1201 in the database 120 manages the organizational form of the data, for example, it establishes the relations between data tables and the relations between rows and columns within a table. According to the access issued by the vehicle application program 110, the data management engine 1201 sends a corresponding request to a storage engine 1202 in the database 120, such as an allocation request for allocating free memory, a release request for releasing free memory, and so on. The storage engine 1202, as the underlying structure of the database management system, handles problems such as the storage mechanism of the data. The database file of the database 120 is stored in the file system 140 of a non-volatile random access memory 150. In the disclosure, the storage engine 1202 maps the database file of the database 120 to a mapping space area of the database process virtual address space 130 in advance.
In order to make the technical solution provided by the disclosure easier to understand, the non-volatile random access memory and its file system mentioned in the disclosure are explained below. A non-volatile random access memory, namely non-volatile RAM (NVRAM, Non-Volatile Random Access Memory), refers to a RAM that can still keep data after power-off. The non-volatile random access memory may include a phase-change memory or another memory with the characteristics of non-volatile random access, such as a ferroelectric memory. The file system of the non-volatile random access memory works, for example, as follows: when a user requests to open, close, or read/write an accessed file, the management process of the file system performs the open, close, and read/write operations on the accessed file, queries the mapping table between the file system and the non-volatile random access memory, judges whether a mapping relation exists between the logical address of the accessed file data and the physical address where it is located, and addresses the physical address of the accessed file data according to its logical address. Such a file system can give full play to the advantages of the non-volatile random access memory, improve read/write access speed, and realize quick access of a process to file data.
In the disclosure, the database file is stored in the file system of the non-volatile random access memory, so the data in the database file is essentially stored in the non-volatile random access memory. The storage engine maps the database file of the database to the mapping space area of the database process virtual address space in advance, so that what the storage engine actually accesses when accessing the virtual address space is the non-volatile random access memory through its file system. Therefore, the access of the storage engine designed by the disclosure to the database process virtual address space is essentially quick access to the non-volatile random access memory.
Fig. 2 is a flowchart of a data management method according to an exemplary embodiment. The method is applied to a storage engine of a database. As shown in Fig. 2, the data management method may include:

In step 210, the storage engine maps the database file of the database to the mapping space area of the database process virtual address space in advance.

For example, the storage engine may employ the mmap system call of the Linux operating system to map the database file of the database to the mapping space area of the database process virtual address space. It can be understood that step 210 is performed at the beginning when the database is established; after the database completes the mapping, there is no need to perform the mapping again.
Here, the database file is located in the file system of the non-volatile random access memory. The mapping space area consists of several dynamic blocks. Each dynamic block includes one or more pages. The page is the minimum allocatable unit. Free memory segments of some or all of the dynamic blocks exist as linked-list elements in two or more linked lists, one free memory segment corresponding to one linked-list element; different linked lists store free memory segments whose memory sizes fall in different range intervals; the different linked lists are indexed by several array elements of an array.

For example, in the schematic diagram of the database process virtual address space shown in Fig. 3, the mapping space area 302 is a dynamically allocated data area consisting of a series of consecutively arranged pages, and the area is managed with the page as the minimum management unit; the page may be, for example, 4 KB in size or another size. As shown in Fig. 3, a dynamic block 310 may include an allocated memory segment 3101 and a free memory segment 3102. To facilitate the recording of allocation information, block header metadata may be stored at the head of each dynamic block; the block header metadata may include information such as how much space has been allocated and how much space is idle. Free-space metadata may be stored at the head of a free memory segment, recording the size of the free space and its position in the linked list. It can be seen that, in the mapping space area, the page and the dynamic block together form a two-level index allocation mechanism.
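As a hedged illustration of the block header metadata and free-space metadata just described, one possible C layout is sketched below; the structure and field names and the 4 KB page size are assumptions made for this sketch only.

```c
/* One possible layout for the metadata described above; the structure and
 * field names and the 4 KB page size are assumptions for illustration. */
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                 /* the page: the minimum allocatable unit */

struct block_header {                   /* stored at the head of each dynamic block  */
    uint32_t total_pages;               /* how many pages the dynamic block spans    */
    uint32_t allocated_bytes;           /* how much space has already been allocated */
    uint32_t free_bytes;                /* how much space is still idle              */
};

struct free_segment {                   /* stored at the head of a free memory segment */
    size_t   size;                      /* size of the free space                       */
    struct free_segment *prev;          /* position of this element in its linked list  */
    struct free_segment *next;
};
```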
In a possible embodiment, in the database process virtual address space, a head space reserved before the mapping space area may be used to store data-storage metadata describing the storage structure and the allocation status of the mapping space area, and the head space has a fixed size. For example, a head space 301 of a fixed size is reserved in the database process virtual address space shown in Fig. 3. The head space being of fixed size means that the mapping space area 302 always starts at a constant offset of the process virtual address space. The storage structure includes, for example, how many pages the mapping space area includes and which page belongs to which dynamic block; the allocation status may include which pages have been allocated, which pages are unallocated, and so on. A tail space of the database process virtual address space may be used to store a log; the log describes data context information related to the operations of the storage engine; the log is updated correspondingly as the storage engine operates, and the tail space grows correspondingly as the log grows. For example, a tail space 303 in the database process virtual address space shown in Fig. 3 houses the log. When the system where the database is located reboots, the allocation of the free memory space is recovered according to the data-storage metadata stored in the head space of the database process virtual address space, and data is rebuilt according to the data context information contained in the log stored in the tail space. In this embodiment, since the head space has a fixed size and the tail space preserving the log grows dynamically with the log, data is not lost after a system restart: the storage engine can read the data-storage metadata and the log from the fixed positions at the head and the tail, rebuild the data, and recover the allocation of the free memory space. For a key-value database, for example, all key-value data can be rebuilt by traversing the log.
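For illustration, a hedged sketch of what the fixed head space and the append-only tail log might contain is given below; the sizes, field names, and the log-entry format are all assumptions, since the disclosure only fixes the roles of these two regions.

```c
/* Hedged sketch of the head metadata and tail log; sizes, names and the
 * log-entry format are assumptions -- the disclosure only fixes their roles. */
#include <stdint.h>

#define HEAD_SPACE_SIZE (64u * 1024u)   /* fixed size, so the mapping space area
                                           starts at a constant offset            */

struct head_metadata {                  /* data-storage metadata in the head space   */
    uint64_t page_count;                /* how many pages the mapping space area has */
    uint64_t dynamic_block_count;       /* how the pages are grouped into blocks     */
    /* a per-page allocation map / block table would follow here */
};

struct log_entry {                      /* appended to the growing tail space        */
    uint32_t op;                        /* e.g. insert or delete of a key-value pair */
    uint32_t key_len;
    uint32_t val_len;
    /* key and value bytes follow; on reboot the entries are replayed in
       order to rebuild all key-value data, as described above            */
};
```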
In step 220, in response to receiving an allocation request for free memory, the storage engine searches, according to the index formed by the array, the two or more linked lists for a linked-list element whose free memory segment is large enough for the allocation request.

For example, one array element of the array may point to one linked list, the N-th array element of the array points to the N-th linked list, and each linked-list element of the N-th linked list is used to store a free memory segment whose memory size is greater than or equal to 2^N and less than 2^(N+1). In this embodiment, each array element points to a free-space linked list whose linked-list elements are of a fixed size range, and each linked-list element is located at the free-space position of some dynamic block. The array is an index list of the free memory segments. The linked list numbered N stores free memory segments whose size is greater than or equal to 2^N and less than 2^(N+1). For example, referring to the schematic diagram of the array and the linked lists shown in Fig. 4, for an array of 26 array elements, the largest allocatable contiguous free memory segment of a linked-list element of the 26th linked list is 64 MB (2^26). The linked list pointed to by some array elements may be empty.

For example, for an allocation request requesting 1 byte of free memory, one linked-list element may be taken, according to the index formed by the array, from the linked list pointed to by the 0th array element, and the corresponding free memory segment is allocated to the allocation request. For example, a linked-list element may be taken from the list head each time: the free memory segment corresponding to the head element is determined and allocated to the allocation request, and the head element is deleted after the allocation. Of course, a linked-list element may also be taken from the middle or the tail of the linked list; in that case, the linked-list elements before the taken element are moved forward in order. It can be understood that taking from the list head involves fewer list operations, is simpler, and is faster.
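A minimal sketch of the array index and the head-of-list allocation described above is shown below; the free_segment layout matches the earlier sketch, the 26-entry array follows the Fig. 4 example, and the helper names are assumptions made for illustration.

```c
/* Sketch of the array index and head-of-list allocation (steps 220/230);
 * the free_segment type matches the earlier sketch, the 26-entry array
 * follows the Fig. 4 example, and the helper names are assumptions. */
#include <stddef.h>

struct free_segment {
    size_t size;
    struct free_segment *prev, *next;
};

#define NUM_LISTS 26
static struct free_segment *size_class[NUM_LISTS];   /* array element N -> N-th list */

/* list N holds segments with size >= 2^N and < 2^(N+1) */
static int class_of(size_t size)
{
    int n = 0;
    while (n + 1 < NUM_LISTS && ((size_t)1 << (n + 1)) <= size)
        n++;
    return n;
}

static void *allocate_segment(size_t request)
{
    for (int n = class_of(request); n < NUM_LISTS; n++) {
        struct free_segment *seg = size_class[n];     /* take from the list head     */
        if (seg == NULL || seg->size < request)       /* only the lowest class can
                                                         hold too-small segments     */
            continue;
        size_class[n] = seg->next;                    /* delete the element from its list */
        if (seg->next != NULL)
            seg->next->prev = NULL;
        return seg;                                   /* hand the segment to the requester */
    }
    return NULL;   /* nothing large enough: fall back to a dynamic block not yet in a list */
}
```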
In step 230, the storage engine allocates the free memory segment corresponding to the found linked-list element to the allocation request and deletes the linked-list element from the linked list it belongs to.

It can be understood that, for an allocation request sent by the data management engine, allocating the free memory segment to the allocation request also means allocating the free memory segment to the data management engine.

To facilitate the recording of allocation information, the block header metadata may be updated correspondingly to record the usage of the data block.
Through the above technical solution, since the database file is located in the file system of the non-volatile random access memory, which can realize quick access to a large amount of small data, and since the storage engine designed for this file system maps the database file to the mapping space area of the database process virtual address space, what the storage engine actually accesses when accessing the virtual address space is the file system of the non-volatile random access memory, and this is completed directly without an additional data copy. Moreover, the mapping space area consists of dynamic blocks, the free memory segments of some or all of the dynamic blocks exist as linked-list elements in two or more linked lists, different linked lists store free memory segments of different memory-size ranges, and the different linked lists are indexed by the array. The free space is therefore organized by linked lists of different memory-size ranges and the linked-list elements in them, and indexed by the array, which makes quick lookup by the storage engine convenient. Thus, in response to receiving an allocation request for free memory space, the storage engine can, according to the index formed by the array, quickly find in the linked lists a linked-list element whose free memory segment is large enough for the allocation request and allocate the free memory segment. Therefore, the data management method provided by the disclosure can process a large amount of small data quickly and non-volatilely.
Corresponding to the allocation of free memory, the storage engine designed by the method provided by the disclosure can also recycle released memory. For example, in one possible embodiment, the storage engine of the disclosure may keep the free memory segments released by the data management engine in the mapping space area as free memory segments that have not yet joined any linked list. However, this may cause the linked lists to be emptied quickly and require frequent applications to the mapping space area for new dynamic blocks to join the linked lists.
To solve this problem, the disclosure provides another possible embodiment. Fig. 5 is a flowchart of a data management method according to this embodiment. The method is applied to a storage engine of a database. As shown in Fig. 5, the data management method may include:
In step 510, the storage engine maps the database file of the database to the mapping space area of the database process virtual address space in advance.

For the storage structure of the database process virtual address space, reference may be made to the description of the embodiment shown in Fig. 2; it is not described in detail again here.

In step 520, in response to receiving an allocation request for free memory, the storage engine searches, according to the index formed by the array, the two or more linked lists for a linked-list element whose free memory segment is large enough for the allocation request.

In step 530, the free memory segment corresponding to the found linked-list element is allocated to the allocation request, and the linked-list element is deleted from the linked list it belongs to.

In step 540, in response to the existence of a free memory segment to be released, the free memory segment to be released is inserted, according to the index formed by the array, as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment to be released.
For example, whenever a memory block is released back to the dynamically allocated data area of the database, it can be placed into the corresponding linked list according to its size, at the head of that linked list.

In this embodiment, the released free memory segments are directly added back to the linked lists, so that the number of free memory segments in the linked lists keeps a benign dynamic balance, there is no need to frequently apply to the mapping space area for new dynamic blocks, and the allocation efficiency is improved.
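Continuing the earlier allocation sketch, step 540 could be as simple as the following head insertion; the function name is again illustrative only.

```c
/* Sketch of step 540, reusing size_class[] and class_of() from the
 * allocation sketch above; the function name is illustrative. */
static void release_segment(struct free_segment *seg, size_t size)
{
    int n = class_of(size);          /* list whose size range matches the segment */
    seg->size = size;
    seg->prev = NULL;
    seg->next = size_class[n];       /* placed at the head of that linked list */
    if (seg->next != NULL)
        seg->next->prev = seg;
    size_class[n] = seg;
}
```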
Considering that it may happen that no linked-list element whose free memory segment is large enough for the allocation request can be found in the linked lists, so that a dynamic block has to be applied for again and the newly applied dynamic block may have remaining free memory segments that need to be recycled, the disclosure provides another possible embodiment. Fig. 6 is a flowchart of a data management method according to this embodiment. The method is applied to a storage engine of a database. As shown in Fig. 6, the data management method may include:
In step 610, the storage engine maps the database file of the database to the mapping space area of the database process virtual address space in advance.

In step 620, in response to receiving an allocation request for free memory, the storage engine searches, according to the index formed by the array, the two or more linked lists for a linked-list element whose free memory segment is large enough for the allocation request.

In step 630, the free memory segment corresponding to the found linked-list element is allocated to the allocation request, and the linked-list element is deleted from the linked list it belongs to.

In step 631, if no linked-list element whose free memory segment is large enough for the allocation request is found in the two or more linked lists, a dynamic block containing enough pages for the allocation request is found among the dynamic blocks in the mapping space area that have not yet joined any linked list, free space is allocated as required by the allocation request, and a correspondingly sized number of pages of the found dynamic block is allocated to the allocation request.

In step 632, if the found dynamic block still has remaining free pages after the allocation, the remaining free pages are taken as the free memory segment to be released.

In step 640, in response to the existence of a free memory segment to be released, the free memory segment to be released is inserted, according to the index formed by the array, as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment to be released.
In this embodiment, the free memory segments that remain after a newly applied dynamic block has been allocated are recycled to the linked lists, so that the number of free memory segments in the linked lists keeps a benign dynamic balance, there is no need to frequently apply to the mapping space area for new dynamic blocks, and the allocation efficiency is improved.
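A hedged sketch of this fall-back path is given below; it builds on the earlier sketches, and find_unlisted_block() together with the dyn_block fields are assumed helpers introduced only for illustration.

```c
/* Sketch of steps 631/632: take pages from a dynamic block that has not yet
 * joined any list, then recycle the left-over pages.  Builds on the earlier
 * sketches; find_unlisted_block() and the dyn_block fields are assumptions. */
struct dyn_block {
    char  *base;                     /* first byte of the dynamic block's free area */
    size_t free_pages;               /* pages not yet handed out                    */
};

struct dyn_block *find_unlisted_block(size_t pages);   /* assumed: scans the mapping
                                                          space area for enough pages */

static void *allocate_from_new_block(size_t request)
{
    size_t pages = (request + PAGE_SIZE - 1) / PAGE_SIZE;
    struct dyn_block *blk = find_unlisted_block(pages);
    if (blk == NULL)
        return NULL;

    void *result = blk->base;                          /* pages handed to the request */
    blk->base       += pages * PAGE_SIZE;
    blk->free_pages -= pages;

    if (blk->free_pages > 0)                           /* step 632: remaining free pages */
        release_segment((struct free_segment *)blk->base,
                        blk->free_pages * PAGE_SIZE);  /* become a segment to be released */
    return result;
}
```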
In order that a released free memory segment joins the linked list corresponding to its length more reasonably, improving the storage mechanism and the allocation efficiency, the disclosure provides another possible embodiment in which the recycled data is split into blocks and merged. Specifically, referring to Fig. 7, Fig. 7 is a flowchart of a data management method according to this embodiment. The method is applied to a storage engine of a database. As shown in Fig. 7, the data management method may include:
In step 710, the storage engine maps the database file of the database to the mapping space area of the database process virtual address space in advance.

In step 720, in response to receiving an allocation request for free memory, the storage engine searches, according to the index formed by the array, the two or more linked lists for a linked-list element whose free memory segment is large enough for the allocation request.

In step 730, the free memory segment corresponding to the found linked-list element is allocated to the allocation request, and the linked-list element is deleted from the linked list it belongs to.

In step 721, if no linked-list element whose free memory segment is large enough for the allocation request is found in the two or more linked lists, a dynamic block containing enough pages for the allocation request is found among the dynamic blocks in the mapping space area that have not yet joined any linked list, free space is allocated as required by the allocation request, and a correspondingly sized number of pages of the found dynamic block is allocated to the allocation request.

In step 722, if the found dynamic block still has remaining free pages after the allocation, the remaining free pages are taken as the free memory segment to be released.

In step 740, in response to the existence of a free memory segment to be released, it is judged whether the size of the free memory segment to be released is an integral multiple of the page.

In step 741, if it is not an integral multiple, the dynamic block where the free memory segment to be released is located is split into a first dynamic block whose free memory segment occupies part of one page and a second dynamic block whose free memory segment occupies an integral multiple of pages, where the second dynamic block is located after the first dynamic block and is connected with the next dynamic block of the dynamic block that was split.

In step 742, according to the index formed by the array, the free memory segment of the first dynamic block is inserted as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of the first dynamic block.

In step 743, if it is an integral multiple, the free memory segment to be released is regarded as the second dynamic block.

In step 744, it is judged whether the free memory section of the second dynamic block and the subsequent dynamic block connected to it form a continuous free space.

In step 745, if no continuous free space is formed, according to the index formed by the array, the free memory segment of the second dynamic block is inserted as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of the second dynamic block.

In step 746, if a continuous free space is formed, the second dynamic block and the subsequent dynamic block that forms the continuous free space are merged into a third dynamic block.

In step 747, it is judged whether the third dynamic block can be split into multiple dynamic blocks that match the memory ranges of different linked lists.

In step 748, if it can be split, the third dynamic block is split into multiple dynamic blocks that match the memory ranges of different linked lists, and, according to the index formed by the array, the free memory segment of each dynamic block split out is inserted as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of that dynamic block.

In step 749, if it cannot be split, according to the index formed by the array, the free memory segment of the third dynamic block is inserted as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment of the third dynamic block.
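A simplified sketch of the re-split in steps 747 and 748 is shown below: a merged contiguous free region is cut into power-of-two pieces so that each piece matches exactly one size-class linked list. It reuses the helpers from the earlier sketches, deliberately omits the block-header bookkeeping, and is only one possible way of realizing the splitting described above.

```c
/* Simplified sketch of steps 747/748: cut a merged contiguous free region
 * into pieces that each match exactly one size-class list.  Reuses class_of()
 * and release_segment() from the earlier sketches; block-header bookkeeping
 * is deliberately omitted. */
static void resplit_and_recycle(char *start, size_t total)
{
    while (total >= sizeof(struct free_segment)) {
        size_t piece = (size_t)1 << class_of(total);   /* largest power of two <= total */
        release_segment((struct free_segment *)start, piece);
        start += piece;
        total -= piece;
    }
    /* any leftover smaller than a free_segment header is left untracked here */
}
```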
For example, consider the schematic diagram of dynamic block splitting and merging shown in Fig. 8. When a free memory segment to be released exists, the free memory segment to be released is first arranged: when its size is not an integral multiple of the page, the dynamic block where the free memory segment to be released is located is decomposed. As in Fig. 8, dynamic block a is decomposed into a1 and a2. For a block such as a2, the block header metadata in its following contiguous dynamic block b is checked; if a larger continuous free space can be formed, the situation arises in which a2 and b are combined into c. It is then checked whether c can be decomposed into multiple dynamic blocks that each exactly match the memory-size range interval of some linked list, so c is decomposed into c1 and c2. For example, if dynamic block c1 = 16 KB = 2^14, then c1 enters the list head of the 14th free linked list pointed to by the 14th array element; if dynamic block c2 = 8 KB = 2^13, then c2 enters the list head of the 13th free linked list pointed to by the 13th array element. And since the free space of dynamic block a1 is 400 bytes, the free space of a1 enters the list head of the 8th free linked list pointed to by the 8th array element.
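As a quick, purely illustrative check of these Fig. 8 numbers against the class_of() helper from the allocation sketch above:

```c
/* Quick check of the Fig. 8 size classes using the class_of() helper from
 * the allocation sketch above; purely illustrative. */
#include <assert.h>

int main(void)
{
    assert(class_of(16 * 1024) == 14);   /* c1 = 16 KB -> 14th free list */
    assert(class_of(8 * 1024)  == 13);   /* c2 =  8 KB -> 13th free list */
    assert(class_of(400)       == 8);    /* a1's 400 bytes (256 <= 400 < 512) -> 8th list */
    return 0;
}
```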
As it can be seen that by the embodiment, it is corresponding the free memory segment of release can more reasonably to be added in length
Chained list improves memory mechanism, improves allocative efficiency.
Fig. 9 is a block diagram of an electronic device 900 according to an exemplary embodiment. As shown in Fig. 9, the electronic device 900 may include: a processor 901, a memory 902, a multimedia component 903, an input/output (I/O) interface 904, and a communication component 905.

The processor 901 is used to control the overall operation of the electronic device 900 so as to complete all or part of the steps of the above data management method. The memory 902 is used to store various types of data to support the operation of the electronic device 900; these data may include, for example, instructions of any application program or method operated on the electronic device 900, as well as application-related data such as contact data, sent and received messages, pictures, audio, video, and so on. The memory 902 may be realized by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk. The multimedia component 903 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may be further stored in the memory 902 or sent through the communication component 905. The audio component also includes at least one loudspeaker for outputting audio signals. The I/O interface 904 provides an interface between the processor 901 and other interface modules, and the other interface modules may be a keyboard, a mouse, buttons, and so on. These buttons may be virtual buttons or physical buttons. The communication component 905 is used for wired or wireless communication between the electronic device 900 and other devices. Wireless communication is, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G or 4G, or a combination of one or more of them, so the corresponding communication component 905 may include: a Wi-Fi module, a Bluetooth module, and an NFC module.

In an exemplary embodiment, the electronic device 900 may be realized by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic elements, for performing the above data management method.

In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided, for example the memory 902 including program instructions; the above program instructions can be executed by the processor 901 of the electronic device 900 to complete the above data management method.
Fig. 10 is a block diagram of a data management system 1000 according to an exemplary embodiment. As shown in Fig. 10, the data management system 1000 may include: the electronic device 1010 described in the above embodiment and a non-volatile random access memory 1020.

A database file is saved in the file system of the non-volatile random access memory 1020; the database file is the database file of the database running on the electronic device 1010. The database is used to manage data coming from the electrical system of the car.

Through this embodiment, the byte-level, random, massive small-data access that is characteristic of vehicles can be handled quickly in the limited vehicle-mounted environment, data is not lost when the engine is turned off, the number of data copies is reduced, and the non-volatile memory brings extremely low processing latency.
Fig. 11 is a block diagram 1100 of a data management device according to an exemplary embodiment. The device is configured in a storage engine of a database. As shown in Fig. 11, the data management device may include: a preset module 1110, an allocation-space searching module 1120, and an allocation execution module 1130.

The preset module 1110 may be configured to map, in advance, the database file of the database to a mapping space area of the database process virtual address space, where the database file is located in a file system of a non-volatile random access memory, the mapping space area consists of several dynamic blocks, each dynamic block includes one or more pages, the page is the minimum allocatable unit, free memory segments of some or all of the dynamic blocks exist as linked-list elements in two or more linked lists, one free memory segment corresponds to one linked-list element, different linked lists store free memory segments whose memory sizes fall in different range intervals, and the different linked lists are indexed by several array elements of an array.

For example, in a possible embodiment, one array element of the array points to one linked list, the N-th array element of the array points to the N-th linked list, and each linked-list element of the N-th linked list is used to store a free memory segment whose memory size is greater than or equal to 2^N and less than 2^(N+1).

The allocation-space searching module 1120 may be configured to, in response to receiving an allocation request for free memory, search, according to the index formed by the array, the two or more linked lists for a linked-list element whose free memory segment is large enough for the allocation request.

The allocation execution module 1130 may be configured to allocate the free memory segment corresponding to the found linked-list element to the allocation request and delete the linked-list element from the linked list it belongs to.

It can be seen that, according to the technical solution provided by the disclosure, in response to receiving an allocation request for free memory space, the storage engine can, according to the index formed by the array, quickly find in the linked lists a linked-list element whose free memory segment is large enough for the allocation request and allocate the free memory segment; therefore, the data management method provided by the disclosure can process a large amount of small data quickly and non-volatilely.
It can be understood that, corresponding to the allocation of free memory, the storage engine designed by the method provided by the disclosure can also recycle released memory. Referring to Fig. 12, Fig. 12 is a block diagram 1200 of a data management device according to an exemplary embodiment. As shown in Fig. 12, the device may further include: a recycling module 1140, which may be configured to, in response to the existence of a free memory segment to be released, insert, according to the index formed by the array, the free memory segment to be released as a linked-list element into the linked list whose memory-size range interval corresponds to the size of the free memory segment to be released.
Considering that it is possible that no linked-list element whose free memory segment is large enough for the allocation request can be found in the linked lists, so that a new dynamic block must be applied for, and that the newly applied dynamic block may still contain remaining free memory segments that need to be recycled, the present disclosure provides another possible embodiment. Referring to Figure 12, in this embodiment, the apparatus may further include: a new block allocation module 1150, which may be configured to, if no linked-list element whose free memory segment is large enough for the allocation request is found in the two or more linked lists, find, among the dynamic blocks of the mapping space area that have not yet been added to any linked list and according to the free space required by the allocation request, a dynamic block containing enough pages for the allocation request, and allocate several pages of corresponding size of the found dynamic block to the allocation request; and a remaining page recycling module 1160, which may be configured to, if the found dynamic block still has remaining free pages after the allocation, take the remaining free pages as the free memory segment to be released and trigger the recycling module 1140 to enter the step of inserting, according to the index formed by the array, the free memory segment to be released as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment to be released.
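The fallback path just described might look like the sketch below, which reuses the earlier hypothetical definitions and introduces the further assumed names PAGE_SIZE, dynamic_block_t, find_unlisted_block and allocate_from_new_block; it carves the requested pages out of a dynamic block that is not yet on any list and hands the leftover pages to recycle_segment().

```c
#include <stdint.h>

#define PAGE_SIZE 4096u              /* hypothetical page size */

typedef struct dynamic_block {
    uint8_t *base;                   /* start address inside the mapping area */
    size_t   num_pages;              /* pages contained in this block         */
} dynamic_block_t;

/* Hypothetical helper: locate a dynamic block that has not yet been added
 * to any linked list and contains at least `pages` free pages.  Stubbed
 * here; a real engine would scan the mapping space area. */
static dynamic_block_t *find_unlisted_block(size_t pages)
{
    (void)pages;
    return NULL;
}

/* Fallback when no listed segment is large enough: take the needed pages
 * from a new dynamic block and recycle whatever free pages remain. */
static void *allocate_from_new_block(size_t request)
{
    size_t pages = (request + PAGE_SIZE - 1) / PAGE_SIZE;    /* round up */
    dynamic_block_t *blk = find_unlisted_block(pages);
    if (blk == NULL)
        return NULL;

    void *result = blk->base;                  /* pages given to the request */

    if (blk->num_pages > pages) {              /* remaining free pages exist */
        free_segment_t *rest =
            (free_segment_t *)(blk->base + pages * PAGE_SIZE);
        rest->size = (blk->num_pages - pages) * PAGE_SIZE;
        recycle_segment(rest);                 /* trigger the recycling step */
    }
    return result;
}
```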
In this embodiment, the free memory segments remaining in a newly applied dynamic block after the allocation are recycled into the linked lists, so that the number of free memory segments in the linked lists is maintained in a benign dynamic balance and it is not necessary to frequently apply to the mapping space area for new dynamic blocks, which improves allocation efficiency.
In order that the released free memory segments are added to the linked lists of the appropriate length ranges more reasonably, thereby improving the storage mechanism and the allocation efficiency, the present disclosure provides another possible embodiment in which the recycled data is split and merged (a compressed sketch of this flow follows the list of submodules below). Specifically, referring to Figure 12, in this embodiment, the recycling module 1140 may include:
a multiple judging submodule 1141, which may be configured to judge whether the size of the free memory segment to be released is an integral multiple of a page;
a block decomposition submodule 1142, which may be configured to, if the multiple judging submodule 1141 judges that it is not an integral multiple, split the dynamic block in which the free memory segment to be released is located into a first dynamic block whose free memory segment occupies a partial space of less than one page and a second dynamic block whose free memory segment occupies an integral multiple of pages, wherein the second dynamic block is located after the first dynamic block and is connected to the next dynamic block of the dynamic block that was split;
a first recycling submodule 1143, which may be configured to insert, according to the index formed by the array, the free memory segment of the first dynamic block as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment of the first dynamic block;
a second block processing submodule 1144, which may be configured to, if the multiple judging submodule judges that it is an integral multiple, regard the free memory segment to be released as the second dynamic block;
a continuous space judging submodule 1145, which may be configured to judge whether the free memory segment of the second dynamic block forms a continuous free space with the subsequent dynamic block connected to it;
a second recycling submodule 1146, which may be configured to, if the continuous space judging submodule 1145 judges that no continuous free space is formed, insert, according to the index formed by the array, the free memory segment of the second dynamic block as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment of the second dynamic block;
a merging submodule 1147, which may be configured to, if the continuous space judging submodule 1145 judges that a continuous free space is formed, merge the second dynamic block and the subsequent dynamic block forming the continuous free space into a third dynamic block;
a split judging submodule 1148, which may be configured to judge whether the third dynamic block can be split into multiple dynamic blocks that fit the memory ranges of different linked lists;
a third recycling submodule 1149, which may be configured to, if the split judging submodule 1148 judges that it can be split, split the third dynamic block into multiple dynamic blocks that fit the memory ranges of different linked lists and insert, according to the index formed by the array, the free memory segment of each dynamic block split out as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment of that dynamic block;
a fourth recycling submodule 1150, which may be configured to, if the split judging submodule 1148 judges that it cannot be split, insert, according to the index formed by the array, the free memory segment of the third dynamic block as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment of the third dynamic block.
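The compressed sketch promised above folds the submodule flow into one function, again under the assumed layout and names of the earlier sketches (free_segment_t, PAGE_SIZE, recycle_segment, size_class); adjacent_free_block is a stub, alignment and header-size details are ignored, and the final re-split into single size classes is only noted in a comment rather than implemented.

```c
/* Placeholder: a real engine would consult the block headers in the
 * mapping space area to see whether the following block is also free. */
static free_segment_t *adjacent_free_block(free_segment_t *seg)
{
    (void)seg;
    return NULL;                          /* stub for this sketch */
}

/* Recycling with splitting and merging, loosely mirroring submodules
 * 1141 to 1150. */
static void recycle_with_merge(free_segment_t *seg)
{
    /* 1141/1142: peel off the sub-page remainder as the "first dynamic
     * block"; the page-multiple part after it is the "second dynamic block". */
    if (seg->size % PAGE_SIZE != 0) {
        size_t remainder = seg->size % PAGE_SIZE;
        free_segment_t *second =
            (free_segment_t *)((uint8_t *)seg + remainder);
        second->size = seg->size - remainder;   /* whole pages              */
        seg->size = remainder;                  /* less than one page       */
        recycle_segment(seg);                   /* 1143: recycle first part */
        seg = second;                           /* 1144: continue with it   */
    }

    /* 1145-1147: merge with the following block when the two form a
     * continuous free space (the "third dynamic block"). */
    free_segment_t *next = adjacent_free_block(seg);
    if (next != NULL)
        seg->size += next->size;

    /* 1148-1150: a real engine would re-split the merged block so that each
     * piece fits exactly one size class; this sketch recycles it whole. */
    recycle_segment(seg);
}
```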
In another possible embodiment, in the database process virtual address space, a head space of fixed size is reserved before the mapping space area for storing data storage metadata that describes the storage structure and allocation status of the mapping space area. A tail space of the database process virtual address space is used to store a log; the log describes data context information related to the operations of the storage engine, the log is updated correspondingly with the operations of the storage engine, and the tail space grows correspondingly as the log grows. In this embodiment, the apparatus further includes a database recovery module 1160, which may be configured to, when the system in which the database is located is rebooted, recover the space allocation according to the data storage metadata stored in the head space of the database process virtual address space and perform data reconstruction according to the data context information contained in the log stored in the tail space.
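Purely to illustrate the fixed-head, growing-tail layout and the recovery order described in this embodiment, the sketch below defines assumed structures head_space_t and log_record_t and a skeletal recover() routine; none of the field names or record formats come from the patent, and the actual rebuilding work is left as comments.

```c
/* Fixed-size head space placed before the mapping space area: metadata
 * describing the storage structure and allocation status. */
typedef struct head_space {
    uint64_t magic;                    /* identifies a valid database image */
    uint64_t mapping_area_size;        /* size of the mapping space area    */
    uint64_t num_dynamic_blocks;       /* dynamic blocks in the area        */
    uint64_t free_list_offsets[NUM_CLASSES];   /* persisted list heads      */
} head_space_t;

/* One record of the log stored in the tail space, which grows as the
 * storage engine keeps operating. */
typedef struct log_record {
    uint64_t sequence;                 /* order of the engine operation     */
    uint64_t offset;                   /* where in the mapping area         */
    uint64_t length;                   /* how many bytes were touched       */
} log_record_t;

/* Recovery after a reboot: read the fixed head, restore the space
 * allocation, then replay the tail log to reconstruct the data. */
static void recover(const head_space_t *head,
                    const log_record_t *log, size_t log_records)
{
    for (int n = 0; n < NUM_CLASSES; n++) {
        /* a real engine would rebuild free_list_head[n] from this offset */
        (void)head->free_list_offsets[n];
    }
    for (size_t i = 0; i < log_records; i++) {
        /* a real engine would apply record i to the mapping space area */
        (void)log[i].offset;
    }
}
```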
According to this embodiment, data is not lost after the system is rebooted: the storage engine can read the data storage metadata and the log from fixed positions at the head and the tail, rebuild the data, and recover the allocation of the free memory space.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the present disclosure, a variety of simple variants can be made to the technical solution of the present disclosure, and these simple variants all belong to the protection scope of the present disclosure.
It should be further noted that the specific technical features described in the above specific embodiments can be combined in any suitable manner as long as there is no contradiction. In order to avoid unnecessary repetition, the present disclosure does not separately describe the various possible combinations.
In addition, the various embodiments of the present disclosure can also be combined with each other, and such combinations, as long as they do not depart from the idea of the present disclosure, should likewise be regarded as content disclosed by the present disclosure.
Claims (10)
1. A data management method, characterized in that the method is applied to a storage engine of a database, and the method comprises:
mapping a database file of the database into a mapping space area of a database process virtual address space in advance, wherein the database file is located in a file system of a non-volatile random access memory, the mapping space area is composed of several dynamic blocks, each dynamic block includes one or more pages, a page is the minimum allocatable unit, free memory segments of some or all of the dynamic blocks are present as linked-list elements in two or more linked lists, one free memory segment corresponds to one linked-list element, different linked lists are used to store free memory segments whose memory sizes fall in different ranges, and the different linked lists are indexed by several array elements of an array;
in response to receiving an allocation request for free memory, searching the two or more linked lists, according to the index formed by the array, for a linked-list element whose free memory segment is large enough for the allocation request;
allocating the free memory segment corresponding to the found linked-list element to the allocation request and deleting the linked-list element from the linked list in which it is located.
2. The method according to claim 1, characterized in that the method further comprises:
in response to the existence of a free memory segment to be released, inserting, according to the index formed by the array, the free memory segment to be released as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment to be released.
3. The method according to claim 2, characterized in that the method further comprises:
if no linked-list element whose free memory segment is large enough for the allocation request is found in the two or more linked lists, finding, among the dynamic blocks of the mapping space area that have not yet been added to any linked list and according to the free space required by the allocation request, a dynamic block containing enough pages for the allocation request, and allocating several pages of corresponding size of the found dynamic block to the allocation request;
if the found dynamic block still has remaining free pages after the allocation, taking the remaining free pages as the free memory segment to be released and entering the step of inserting, according to the index formed by the array, the free memory segment to be released as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment to be released.
4. The method according to claim 2 or 3, characterized in that all the dynamic blocks in the mapping space area are connected in sequence;
the inserting, according to the index formed by the array, the free memory segment to be released as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment to be released comprises:
judging whether the size of the free memory segment to be released is an integral multiple of a page;
if it is not an integral multiple, splitting the dynamic block in which the free memory segment to be released is located into a first dynamic block whose free memory segment occupies a partial space of less than one page and a second dynamic block whose free memory segment occupies an integral multiple of pages, wherein the second dynamic block is located after the first dynamic block and is connected to the next dynamic block of the dynamic block that was split;
inserting, according to the index formed by the array, the free memory segment of the first dynamic block as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment of the first dynamic block;
if it is an integral multiple, regarding the free memory segment to be released as the second dynamic block;
judging whether the free memory segment of the second dynamic block forms a continuous free space with the subsequent dynamic block connected to it;
if no continuous free space is formed, inserting, according to the index formed by the array, the free memory segment of the second dynamic block as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment of the second dynamic block;
if a continuous free space is formed, merging the second dynamic block and the subsequent dynamic block forming the continuous free space into a third dynamic block;
judging whether the third dynamic block can be split into multiple dynamic blocks that fit the memory ranges of different linked lists;
if it can be split, splitting the third dynamic block into multiple dynamic blocks that fit the memory ranges of different linked lists, and inserting, according to the index formed by the array, the free memory segment of each dynamic block split out as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment of that dynamic block;
if it cannot be split, inserting, according to the index formed by the array, the free memory segment of the third dynamic block as a linked-list element into the linked list whose memory size range corresponds to the size of the free memory segment of the third dynamic block.
5. The method according to claim 1, characterized in that one array element of the array points to one linked list, the N-th array element of the array points to the N-th linked list, and each linked-list element of the N-th linked list is used to store a free memory segment whose memory size is greater than or equal to 2^N and less than 2^(N+1).
6. The method according to claim 1, characterized in that, in the database process virtual address space, a head space of fixed size is reserved before the mapping space area for storing data storage metadata that describes the storage structure and allocation status of the mapping space area;
a tail space of the database process virtual address space is used to store a log, the log describes data context information related to the operations of the storage engine, the log is updated correspondingly with the operations of the storage engine, and the tail space grows correspondingly as the log grows;
the method further comprises:
when the system in which the database is located is rebooted, recovering the space allocation according to the data storage metadata stored in the head space of the database process virtual address space and performing data reconstruction according to the data context information contained in the log stored in the tail space.
7. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the method according to any one of claims 1 to 6 are implemented.
8. An electronic device, characterized by comprising:
the computer-readable storage medium according to claim 7; and
one or more processors configured to execute the program in the computer-readable storage medium.
9. A data management system, characterized in that the data management system is used to be embedded in an automobile, and the data management system comprises:
the electronic device according to claim 8 and a non-volatile random access memory;
wherein a database file is stored in a file system of the non-volatile random access memory, the database file being the database file of the database running on the electronic device according to claim 8;
the database is used to manage data coming from the electrical system of the automobile.
10. A data management apparatus, characterized in that the apparatus is configured at a storage engine of a database, and the apparatus comprises:
a preset module configured to map a database file of the database into a mapping space area of a database process virtual address space in advance, wherein the database file is located in a file system of a non-volatile random access memory, the mapping space area is composed of several dynamic blocks, each dynamic block includes one or more pages, a page is the minimum allocatable unit, free memory segments of some or all of the dynamic blocks are present as linked-list elements in two or more linked lists, one free memory segment corresponds to one linked-list element, different linked lists are used to store free memory segments whose memory sizes fall in different ranges, and the different linked lists are indexed by several array elements of an array;
an allocation-space searching module configured to, in response to receiving an allocation request for free memory, search the two or more linked lists, according to the index formed by the array, for a linked-list element whose free memory segment is large enough for the allocation request;
an allocation execution module configured to allocate the free memory segment corresponding to the found linked-list element to the allocation request and delete the linked-list element from the linked list in which it is located.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711449172.7A CN108121813B (en) | 2017-12-27 | 2017-12-27 | Data management method, device, system, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108121813A (en) | 2018-06-05
CN108121813B CN108121813B (en) | 2020-09-18 |
Family
ID=62231826
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711449172.7A Active CN108121813B (en) | 2017-12-27 | 2017-12-27 | Data management method, device, system, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108121813B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101676906A (en) * | 2008-09-18 | 2010-03-24 | 中兴通讯股份有限公司 | Method of managing memory database space by using bitmap |
CN102402622A (en) * | 2011-12-27 | 2012-04-04 | 北京人大金仓信息技术股份有限公司 | Memory page managing and scheduling method for embedded memory database |
CN102411632A (en) * | 2011-12-27 | 2012-04-11 | 北京人大金仓信息技术股份有限公司 | Chain table-based memory database page type storage method |
US20150378992A1 (en) * | 2014-06-26 | 2015-12-31 | Altibase Corp. | Method and apparatus for moving data in database management system |
CN107016100A (en) * | 2017-04-10 | 2017-08-04 | 重庆大学 | A kind of metadata management method based on Nonvolatile memory file system |
Non-Patent Citations (1)
Title |
---|
Jiang Zhipeng: "Storage Management of In-Memory Databases", China Master's Theses Full-text Database (Information Science and Technology Series) *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111459884B (en) * | 2019-03-26 | 2023-05-16 | 广州荔支网络技术有限公司 | Data processing method and device, computer equipment and storage medium |
CN111459885B (en) * | 2019-03-26 | 2023-04-28 | 广州荔支网络技术有限公司 | Data processing method and device, computer equipment and storage medium |
CN109977078B (en) * | 2019-03-26 | 2020-06-02 | 广州荔支网络技术有限公司 | Data processing method and device, computer equipment and storage medium |
CN111459885A (en) * | 2019-03-26 | 2020-07-28 | 广州荔支网络技术有限公司 | Data processing method and device, computer equipment and storage medium |
CN109977078A (en) * | 2019-03-26 | 2019-07-05 | 广州荔支网络技术有限公司 | A kind of processing method of data, device, computer equipment and storage medium |
CN111459884A (en) * | 2019-03-26 | 2020-07-28 | 广州荔支网络技术有限公司 | Data processing method and device, computer equipment and storage medium |
CN110764711A (en) * | 2019-10-29 | 2020-02-07 | 北京浪潮数据技术有限公司 | IO data classification deleting method and device and computer readable storage medium |
CN110764711B (en) * | 2019-10-29 | 2022-03-22 | 北京浪潮数据技术有限公司 | IO data classification deleting method and device and computer readable storage medium |
CN112667637B (en) * | 2020-12-31 | 2023-09-19 | 中移(杭州)信息技术有限公司 | Data management method, device and computer readable storage medium |
CN112667637A (en) * | 2020-12-31 | 2021-04-16 | 中移(杭州)信息技术有限公司 | Data management method, device and computer readable storage medium |
CN112784120B (en) * | 2021-01-25 | 2023-02-21 | 浪潮云信息技术股份公司 | KV memory database storage management method based on range fragmentation mode |
CN112784120A (en) * | 2021-01-25 | 2021-05-11 | 浪潮云信息技术股份公司 | KV memory database storage management method based on range fragmentation mode |
CN113050886A (en) * | 2021-02-23 | 2021-06-29 | 山东师范大学 | Nonvolatile memory storage method and system for embedded memory database |
CN113467716B (en) * | 2021-06-11 | 2023-05-23 | 苏州浪潮智能科技有限公司 | Method, device, equipment and readable medium for data storage |
CN113467716A (en) * | 2021-06-11 | 2021-10-01 | 苏州浪潮智能科技有限公司 | Data storage method, device, equipment and readable medium |
CN117539636A (en) * | 2023-12-06 | 2024-02-09 | 摩尔线程智能科技(北京)有限责任公司 | Memory management method and device for bus module, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108121813B (en) | 2020-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108121813A (en) | Data managing method, device, system, storage medium and electronic equipment | |
US6611898B1 (en) | Object-oriented cache management system and method | |
US8751763B1 (en) | Low-overhead deduplication within a block-based data storage | |
US10565125B2 (en) | Virtual block addresses | |
US9501421B1 (en) | Memory sharing and page deduplication using indirect lines | |
CN110018998B (en) | File management method and system, electronic equipment and storage medium | |
CN107066498B (en) | Key value KV storage method and device | |
CN105224478A (en) | A kind of formation of mapping table, renewal and restoration methods and electronic equipment | |
CN101617299A (en) | Data base management method | |
CN106919517B (en) | Flash memory and access method thereof | |
CN115543224B (en) | ZNS SSD-based file system control method, device and equipment | |
CN114090637B (en) | Data access method, device, equipment and storage medium | |
US20190034336A1 (en) | System and method for hardware-independent memory storage | |
WO2023124423A1 (en) | Storage space allocation method and apparatus, and terminal device and storage medium | |
CN110674052A (en) | Memory management method, server and readable storage medium | |
CN110949173A (en) | Charging method and device | |
CN113641629A (en) | File writing and reading method of FLASH memory | |
CN114327290B (en) | Structure, formatting method and access method of disk partition | |
US8024374B2 (en) | Computer object conversion using an intermediate object | |
CN112199042B (en) | Storage space management method, device, chip, equipment and storage medium | |
CN111813783A (en) | Data processing method, data processing device, computer equipment and storage medium | |
CN116954906A (en) | Node capacity expansion and contraction method, system, terminal and medium in graph database cluster | |
CN116340266A (en) | Fine-grained file system and file read-write method | |
CN106874457B (en) | Method for improving metadata cluster performance through virtual directory | |
CN113342819A (en) | Card number generation method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||