CN114153378A - Database memory management system and method - Google Patents

Database memory management system and method

Info

Publication number
CN114153378A
CN114153378A (application number CN202111193539.XA)
Authority
CN
China
Prior art keywords
buffer area
data
buffer
dynamic
log
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111193539.XA
Other languages
Chinese (zh)
Inventor
宋洪彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Chenchuang Technology Development Co ltd
Original Assignee
Guangzhou Chenchuang Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Chenchuang Technology Development Co ltd filed Critical Guangzhou Chenchuang Technology Development Co ltd
Priority to CN202111193539.XA priority Critical patent/CN114153378A/en
Publication of CN114153378A publication Critical patent/CN114153378A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0619Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0674Disk device
    • G06F3/0676Magnetic disk device

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the invention provides a database memory management system and method, comprising: a static buffer for storing data that complies with a first preset rule, the first preset rule including that no update occurs within a first preset time; a dynamic buffer for storing data that does not comply with the first preset rule; a log buffer for generating a log when data in the dynamic buffer is processed according to the first preset rule, and storing the log in the log buffer; and a control module for, when a processing request is obtained, traversing the dynamic buffer according to the request, and, when no corresponding data is found in the dynamic buffer, generating a first buffer corresponding to the request in the dynamic buffer, writing the data corresponding to the request from the static buffer into the first buffer, and executing a checkpoint on the static buffer.

Description

Database memory management system and method
Technical Field
The present invention relates to the field of database technologies, and in particular, to a database memory management system and method.
Background
At present, a database system has its own memory management and scheduling, and user data processing and maintenance are completed in memory. The entire memory space of the system is allocated at startup and divided into several parts according to the data stored and the functions performed, and each part uses an independent page table for demand-paging management and demand-driven page-replacement management. A database system may read data from disk using multiple page sizes.
The memory of a database system comprises two parts: the SGA and the PGA. The SGA is the System Global Area. One database instance corresponds to one SGA, which is allocated when the database instance is started. As a basic component of a database instance, the SGA is a very large memory space and may occupy up to 80% of physical memory. The PGA is the Program Global Area. A PGA is allocated when a server process is started. An Oracle instance may contain many PGAs; for example, starting 10 server processes creates 10 PGAs.
How to handle the accumulation of data in the buffers of a database system is therefore crucial to its operating efficiency.
Disclosure of Invention
The invention provides a database memory management system and method to address the technical problem of how to handle the accumulation of data in the buffers of a database system.
The invention provides a database memory management system, which comprises:
a static buffer for storing data that complies with a first preset rule, the first preset rule including that no update occurs within a first preset time;
a dynamic buffer for storing data that does not comply with the first preset rule;
a log buffer for generating a log when data in the dynamic buffer is processed according to the first preset rule, and storing the log in the log buffer;
and a control module for, when a processing request is obtained, traversing the dynamic buffer according to the request, and, when no corresponding data is found in the dynamic buffer, generating a first buffer corresponding to the request in the dynamic buffer, writing the data corresponding to the request from the static buffer into the first buffer, and executing a checkpoint on the static buffer.
In some embodiments, the control module is further configured to write the data corresponding to the processing request from the static buffer into the first buffer, generate a log of the process of executing the checkpoint on the data corresponding to the processing request in the static buffer, assign a mark to the log, and store it in the log buffer.
In some embodiments, further comprising:
a write module, configured to determine, when a write request is obtained, whether the dynamic buffer has a free buffer; if not, determine whether the static buffer has a free buffer; if the static buffer also has no free buffer, execute a checkpoint on the static buffer based on an LRU (least recently used) caching algorithm and generate a free third buffer in the static buffer; and, based on the LRU caching algorithm, select part of the data in the dynamic buffer, write it into the third buffer, generate a free fourth buffer in the dynamic buffer, and write the data of the write request into the fourth buffer.
In some embodiments, further comprising:
a dictionary buffer for storing data dictionary information;
a parsing module, configured to, when an SQL statement is obtained, convert the SQL statement into numeric codes, pass the codes to a HASH function and obtain a HASH value, and search the dynamic buffer for the same HASH value; if it exists, execute directly using the data corresponding to the HASH value; if not, perform syntax analysis on the SQL statement using the data dictionary information in the dictionary buffer to generate compiled code; and cache the SQL statement, the HASH value and the compiled code in the dynamic buffer and the dictionary buffer.
In some embodiments, the parsing module performs syntax analysis on the SQL statement using the data dictionary information in the dictionary buffer as follows: check the correctness of the SQL syntax against the data dictionary information; if the syntax is correct, analyze the objects involved in the SQL statement and check their names and related structures against the data dictionary information; generate an execution plan according to whether statistics for the corresponding objects exist in the data dictionary information and whether a stored outline is used; and generate compiled code according to the execution plan.
In some embodiments, the objects in the dictionary buffer include tables, indices, and views.
In some embodiments, the parsing module generates the compiled code according to the execution plan as follows: the execution authority of the SQL statement over the corresponding objects is checked according to the execution plan and the data dictionary information, and the compiled code is generated.
In some embodiments, the control module is further configured to store a log generated by the parsing module for the parsing of the SQL statement in the log buffer.
In some embodiments, further comprising:
an update module, configured to, when an update request is obtained, traverse the dynamic buffer, query the dynamic buffer for the original data corresponding to the request, update the data corresponding to the request in the dynamic buffer, write the original data into the static buffer, and store the log generated by this process in the log buffer.
In some embodiments, further comprising:
a deletion module, configured to, when a deletion request is obtained, traverse the dynamic buffer, and, if data corresponding to the deletion request is found in the dynamic buffer, delete the data directly while executing a checkpoint on it.
An embodiment of the application also provides a database memory management method, which includes: acquiring a processing request;
traversing the dynamic buffer according to the processing request;
when no data corresponding to the processing request is found in the dynamic buffer, generating a first buffer corresponding to the request in the dynamic buffer;
according to the first buffer, writing the data corresponding to the processing request from the static buffer into the first buffer, and executing a checkpoint on the data corresponding to the processing request in the static buffer;
wherein the static buffer is used for storing data that complies with a first preset rule, the first preset rule including that no update occurs within a first preset time;
the dynamic buffer is used for storing data that does not comply with the first preset rule;
and the log buffer is used for generating a log when data in the dynamic buffer is processed according to the first preset rule, and storing the log in the log buffer.
According to the technical scheme, the embodiment of the invention has the following advantages:
the embodiment of the invention provides a database memory management system and a method, wherein a static buffer area can store data which are not updated within a first preset time, a dynamic buffer area can store the data which are updated within the first preset time, the data can be automatically classified and cached, and the caching process is automatically recorded in a log buffer area; when a processing request is acquired through a control module, traversing the dynamic buffer area, directly processing the data if the data corresponding to the processing request data is directly acquired, generating a first buffer area corresponding to the request in the dynamic buffer area when the processing request does not inquire the corresponding data in the dynamic buffer area, writing the data corresponding to the processing request in the static buffer area into the first buffer area respectively, and executing a check point on the data corresponding to the processing request in the static buffer area, so that the data in the static buffer area can be read into the dynamic buffer area on one hand, and the data can be written into a disk by executing the check point on the other hand, and a control file and a data file are updated, therefore, the original data is effectively stored, the original data can be prevented from being lost and updated, the data in the memory of the database is prevented from being continuously accumulated, no idle buffer area can be allocated to receive data, and the technical problem of how to process the data accumulation of the buffer area in the database system is effectively solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a system framework diagram of a database memory management system according to an embodiment of the present invention;
fig. 2 is a flowchart of a database memory management method according to an embodiment of the present invention.
Wherein:
100. dynamic buffer; 200. static buffer; 300. log buffer; 400. dictionary buffer; 500. control module; 600. write module; 700. parsing module; 800. update module; 900. deletion module.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which embodiments of the present invention belong. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Before the embodiments of the present invention are described in further detail, the terms and expressions used in the embodiments are explained as follows.
Referring to fig. 1, fig. 1 is a system framework diagram of a database memory management system according to an embodiment of the present invention.
As shown in fig. 1, the present embodiment provides a database memory management system, including:
a static buffer 200 for storing data that complies with a first preset rule, the first preset rule including that no update occurs within a first preset time;
a dynamic buffer 100 for storing data that does not comply with the first preset rule;
a log buffer 300 for generating a log when data in the dynamic buffer 100 is processed according to the first preset rule, and storing the log in the log buffer 300;
a control module 500, configured to, when a processing request is obtained, traverse the dynamic buffer 100 according to the request, and, when no corresponding data is found in the dynamic buffer 100, generate a first buffer corresponding to the request in the dynamic buffer 100, write the data corresponding to the request from the static buffer 200 into the first buffer, and execute a checkpoint on the static buffer 200.
The embodiment of the present invention provides a database memory management system. The static buffer 200 stores data that has not been updated within the first preset time, and the dynamic buffer 100 stores data that has been updated within the first preset time, so the data is classified and cached automatically and the caching process is automatically recorded in the log buffer 300. When the control module 500 obtains a processing request, the dynamic buffer 100 is traversed first; if the data corresponding to the request is found there, it is processed directly. When the request does not find corresponding data in the dynamic buffer 100, a first buffer corresponding to the request is generated in the dynamic buffer 100, the data corresponding to the request is written from the static buffer 200 into the first buffer, and a checkpoint is executed on that data in the static buffer 200. On the one hand, the data in the static buffer 200 can thus be read into the dynamic buffer 100; on the other hand, executing the checkpoint writes the data to disk and updates the control file and data file. The original data is therefore stored effectively and protected from loss while remaining updatable, and the data in the database memory is prevented from accumulating to the point where no free buffer can be allocated to receive data.
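As an illustrative sketch only (not the patent's actual implementation), the control module's request-handling flow described above could be expressed roughly as follows in Python; the names `handle_request`, `dynamic`, `static` and the `checkpoint` callback (a callable that persists the given pages to disk) are hypothetical.

```python
def handle_request(key, dynamic, static, checkpoint):
    """Sketch of the control module's read path: look in the dynamic buffer
    first; on a miss, allocate a 'first buffer' entry there, copy the data
    from the static buffer, and checkpoint the static copy to disk."""
    if key in dynamic:
        return dynamic[key]        # hit in the dynamic buffer: process directly
    if key in static:
        page = static[key]
        dynamic[key] = page        # 'first buffer': new entry in the dynamic buffer
        checkpoint([page])         # persist the static copy to disk
        return page
    return None                    # data is not cached at all
```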
The checkpoint is a database event that writes modified data from the cache to disk and updates the control file and data file.
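A minimal sketch of such a checkpoint, assuming dirty-page tracking and dictionary-like stand-ins for the data file and control file (all names here are hypothetical, not taken from the patent):

```python
import time

class Page:
    """A cached data page; 'dirty' marks modifications not yet written to disk."""
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.dirty = True
        self.last_update = time.time()

def checkpoint(pages, data_file, control_file):
    """Write every modified page to the data file, then record the checkpoint
    in the control file so recovery can start from this point."""
    for page in pages:
        if page.dirty:
            data_file[page.key] = page.value   # persist the modified page
            page.dirty = False
    control_file["last_checkpoint"] = time.time()
```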
The data buffer of the database memory management system (the System Global Area (SGA) control structure of the database management system instance) includes at least a static buffer 200, a dynamic buffer 100, a log buffer 300, and a dictionary buffer 400.
In this embodiment, in order to avoid the accumulation of the database memory, when the data in the static buffer 200 is not processed within a preset time, the data may be automatically stored in a disk, so as to automatically reduce the data occupying the database memory.
The processing request includes writing new data, updating data, deleting data, and the like.
In some embodiments, the control module 500 is further configured to write the data corresponding to the processing request from the static buffer 200 into the first buffer, generate a log of the process of executing the checkpoint on the data corresponding to the processing request in the static buffer 200, assign a mark to the log, and store it in the log buffer 300.
The control module 500 can record all events occurring in the database memory management system, generate corresponding logs, and assign marks such as a timestamp and a sequence number to each log, so that the log for a piece of data can be looked up later.
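One way such marked log entries might be produced (a sketch under assumed names; `append_log` and the entry fields are illustrative, not the patent's):

```python
import itertools
import time

_log_seq = itertools.count(1)   # monotonically increasing log number

def append_log(log_buffer, event, data_key):
    """Record an event with a timestamp and sequence number (the 'marks'
    described above) so the log for a piece of data can be found later."""
    entry = {
        "seq": next(_log_seq),
        "timestamp": time.time(),
        "event": event,          # e.g. "checkpoint", "update", "delete"
        "key": data_key,
    }
    log_buffer.append(entry)
    return entry
```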
In some embodiments, further comprising:
a write module 600, configured to determine, when a write request is obtained, whether the dynamic buffer 100 has a free buffer; if not, determine whether the static buffer 200 has a free buffer; if the static buffer 200 also has no free buffer, execute a checkpoint on the static buffer 200 based on an LRU caching algorithm and generate a free third buffer in the static buffer 200; and, based on the LRU caching algorithm, select part of the data in the dynamic buffer 100, write it into the third buffer, generate a free fourth buffer in the dynamic buffer 100, and write the data of the write request into the fourth buffer.
The write module 600 gives priority to the dynamic buffer 100 when handling a write request, because new data is normally processed through the dynamic buffer 100, which is the most active working area, and newly written data is by definition data processed within the first preset time. It first determines whether the dynamic buffer 100 has a free or clean buffer; if so, the request is processed there directly. Otherwise the static buffer 200 is examined: if the static buffer 200 has a free buffer, the dynamic buffer 100 is selected through the LRU caching algorithm, part of its data is written into the static buffer 200, and the data of the write request is cached in the dynamic buffer 100. If the static buffer 200 has no free buffer (for example because some data keeps accumulating without reaching the first preset rule and therefore never triggers a checkpoint), a checkpoint is executed on the static buffer 200 based on the LRU caching algorithm and a free third buffer is generated in the static buffer 200; this preserves the original data while creating free space. Part of the data in the dynamic buffer 100, selected by the LRU caching algorithm, is then written into the third buffer, and the data of the write request is cached in the dynamic buffer 100. In this way the data of the write request can always be cached in the dynamic buffer 100 while the original data is saved rather than lost.
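A compact sketch of this cascading eviction path, assuming an `OrderedDict` is used as a simple LRU structure and `checkpoint` flushes pages to disk (the function name, the fixed `capacity`, and the buffer layout are assumptions for illustration):

```python
from collections import OrderedDict

def handle_write(key, value, dynamic, static, capacity, checkpoint):
    """Hypothetical write path: prefer a free slot in the dynamic buffer;
    otherwise demote LRU data to the static buffer, checkpointing the
    static buffer first if it is also full."""
    if len(dynamic) >= capacity:                              # no free buffer in the dynamic area
        if len(static) >= capacity:                           # no free buffer in the static area either
            victim_key, victim_page = static.popitem(last=False)   # least recently used entry
            checkpoint([victim_page])                         # flush it to disk -> free 'third buffer'
        demoted_key, demoted_page = dynamic.popitem(last=False)    # LRU entry of the dynamic area
        static[demoted_key] = demoted_page                    # the third buffer receives it
    dynamic[key] = value                                      # the free 'fourth buffer' gets the new data
    dynamic.move_to_end(key)                                  # mark it as most recently used
```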
In some embodiments, further comprising:
a dictionary buffer 400 for holding data dictionary information;
the parsing module 700 is configured to, when an SQL statement is obtained, convert the SQL statement into numeric codes, pass the codes to a HASH function and obtain a HASH value, and search the dynamic buffer for the same HASH value; if it exists, execute directly using the data corresponding to the HASH value; if not, perform syntax analysis on the SQL statement using the data dictionary information in the dictionary buffer 400 to generate compiled code; and cache the SQL statement, the HASH value and the compiled code in the dynamic buffer 100 and the dictionary buffer 400.
In the parsing module 700, the characters of the SQL statement are converted into their equivalent ASCII numeric codes, the codes are passed to a HASH function, and a HASH value is returned; the dynamic buffer 100 is then searched for the same HASH value, and if it exists, the parsed execution-plan version of the SQL statement cached in the dynamic buffer 100 is used for execution, which improves the parsing efficiency of SQL statements.
The parsing module 700 performs syntax analysis on the SQL statement using the data dictionary information in the dictionary buffer 400 as follows: check the correctness of the SQL syntax against the data dictionary information; if the syntax is correct, analyze the objects involved in the SQL statement and check their names and related structures against the data dictionary information; generate an execution plan according to whether statistics for the corresponding objects exist in the data dictionary information and whether a stored outline is used; and generate compiled code according to the execution plan.
If the same HASH value does not exist in the dynamic buffer 100, then when structures such as tables and views are checked against the data dictionary during the SQL parsing stage, the data dictionary must be read from disk into the dictionary buffer 400; before reading, a latch (library cache pin) on the dictionary buffer 400 must therefore be acquired to cache the data dictionary. At this point the SQL statement has been compiled into executable code.
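A rough sketch of the hash-based plan cache described above (using Python's `hashlib` purely for illustration; the hash function, cache layout and `parse_fn` hook are assumptions, not the patent's concrete choices):

```python
import hashlib

def lookup_or_parse(sql, dynamic_cache, parse_fn):
    """Convert the statement's characters to numeric codes, hash them, and
    reuse a cached execution plan if one exists; otherwise parse and cache."""
    codes = sql.encode("utf-8")                       # numeric codes of the statement text
    hash_value = hashlib.md5(codes).hexdigest()       # HASH value of the statement
    if hash_value in dynamic_cache:
        return dynamic_cache[hash_value]              # reuse the compiled code directly
    compiled = parse_fn(sql)                          # syntax check + execution plan + compile
    dynamic_cache[hash_value] = compiled
    return compiled
```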
Wherein the objects in the dictionary buffer 400 include tables, indices, and views.
The parsing module 700 generates the compiled code according to the execution plan as follows: the execution authority of the SQL statement over the corresponding objects is checked according to the execution plan and the data dictionary information, and the compiled code is generated.
In some embodiments, the control module 500 is further configured to store a log generated by the parsing module 700 during the parsing of the SQL statement in the log buffer 300.
The control module 500 records the process executed in the parsing module 700, generates a log, and stores it in the log buffer 300, so that the log buffer 300 keeps a record that can be queried later.
In some embodiments, further comprising:
an update module 800, configured to, when an update request is obtained, traverse the dynamic buffer 100, query the dynamic buffer 100 for the original data corresponding to the request, update the data corresponding to the request in the dynamic buffer 100, write the original data into the static buffer 200, and store the log generated by this process in the log buffer 300.
After receiving an update request, the update module updates the corresponding data in the dynamic buffer and then writes the original data into the static buffer, so that the original data is preserved in the static buffer for subsequent use; at the same time, the log generated by this process is cached in the log buffer.
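A minimal sketch of this update path under the same assumed names as the earlier sketches (reusing the hypothetical `append_log` helper shown above):

```python
def handle_update(key, new_value, dynamic, static, log_buffer):
    """Hypothetical update path: change the copy in the dynamic buffer,
    preserve the original value in the static buffer, and log the event."""
    if key not in dynamic:
        return False                       # requested data is not cached in the dynamic buffer
    original = dynamic[key]
    static[key] = original                 # keep the pre-update value for later use
    dynamic[key] = new_value               # apply the update in place
    append_log(log_buffer, "update", key)  # record the process in the log buffer
    return True
```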
In some embodiments, further comprising:
a deletion module 900, configured to, when a deletion request is obtained, traverse the dynamic buffer 100, and, if data corresponding to the deletion request is found in the dynamic buffer 100, delete it directly while executing a checkpoint on the data.
When the deletion module receives a deletion request, it can delete the data directly while storing it to disk at the time of deletion, so that if the data was deleted by mistake and can no longer be recovered from the buffer, it can still be recovered from the disk, avoiding data loss caused by accidental deletion.
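And a corresponding sketch of the delete path (again with assumed names; `data_file` stands in for the on-disk data file, and `append_log` is the hypothetical helper from the earlier sketch):

```python
def handle_delete(key, dynamic, data_file, log_buffer):
    """Hypothetical delete path: remove the entry from the dynamic buffer,
    but checkpoint it to disk first so an accidental deletion can still be
    recovered from the data file."""
    if key not in dynamic:
        return False
    page = dynamic.pop(key)
    data_file[key] = page                  # checkpoint: persist before discarding
    append_log(log_buffer, "delete", key)  # record the deletion
    return True
```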
As shown in fig. 2, an embodiment of the present application further provides a database memory management method, including:
S1: acquiring a processing request;
S2: traversing the dynamic buffer according to the processing request;
S3: when no data corresponding to the processing request is found in the dynamic buffer, generating a first buffer corresponding to the request in the dynamic buffer;
S4: according to the first buffer, writing the data corresponding to the processing request from the static buffer into the first buffer, and executing a checkpoint on the data corresponding to the processing request in the static buffer;
wherein the static buffer is used for storing data that complies with a first preset rule, the first preset rule including that no update occurs within a first preset time;
the dynamic buffer is used for storing data that does not comply with the first preset rule;
and the log buffer is used for generating a log when data in the dynamic buffer is processed according to the first preset rule, and storing the log in the log buffer.
The static buffer stores data that has not been updated within the first preset time, and the dynamic buffer stores data that has been updated within the first preset time, so the data can be classified and cached automatically, and the caching process is automatically recorded in the log buffer. When a processing request is obtained, the dynamic buffer is traversed first; if the data corresponding to the request is found there, it is processed directly. When the request does not find corresponding data in the dynamic buffer, a first buffer corresponding to the request is generated in the dynamic buffer, the data corresponding to the request is written from the static buffer into the first buffer, and a checkpoint is executed on that data in the static buffer. On the one hand, the data in the static buffer can thus be read into the dynamic buffer; on the other hand, executing the checkpoint writes the data to disk and updates the control file and data file. The original data is therefore stored effectively and protected from loss while remaining updatable, the data in the database memory is prevented from accumulating to the point where no free buffer can be allocated to receive data, and the technical problem of how to handle the accumulation of data in the buffers of a database system is effectively solved.
The checkpoint is a database event that writes modified data from the cache to disk and updates the control file and data file.
The data buffer of the database memory management system (the System Global Area (SGA) control structure of the database management system instance) includes at least a static buffer 200, a dynamic buffer 100, a log buffer 300, and a dictionary buffer 400.
In this embodiment, in order to avoid the accumulation of the database memory, when the data in the static buffer 200 is not processed within a preset time, the data may be automatically stored in a disk, so as to automatically reduce the data occupying the database memory.
The processing request includes writing new data, updating data, deleting data, and the like.
In some embodiments, the data corresponding to the processing request in the static buffer 200 is written into the first buffer, a log is generated for the process of executing the checkpoint on the data corresponding to the processing request in the static buffer 200, and the log is given a mark and stored in the log buffer 300.
All events occurring in the database memory management system are recorded and corresponding logs are generated, and marks such as a timestamp and a sequence number are assigned to each log so that the log for a piece of data can be looked up later.
In some embodiments, further comprising:
when a write request is obtained, determining whether the dynamic buffer 100 has a free buffer; if not, determining whether the static buffer 200 has a free buffer; if the static buffer 200 also has no free buffer, executing a checkpoint on the static buffer 200 based on an LRU (least recently used) caching algorithm and generating a free third buffer in the static buffer 200; and, based on the LRU caching algorithm, selecting part of the data in the dynamic buffer 100, writing it into the third buffer, generating a free fourth buffer in the dynamic buffer 100, and writing the data of the write request into the fourth buffer.
The dynamic buffer 100 is given priority when handling a write request, because new data is normally processed through the dynamic buffer 100, which is the most active working area, and newly written data is by definition data processed within the first preset time. It is first determined whether the dynamic buffer 100 has a free or clean buffer; if so, the request is processed there directly. Otherwise the static buffer 200 is examined: if the static buffer 200 has a free buffer, the dynamic buffer 100 is selected through the LRU caching algorithm, part of its data is written into the static buffer 200, and the data of the write request is cached in the dynamic buffer 100. If the static buffer 200 has no free buffer (for example because some data keeps accumulating without reaching the first preset rule and therefore never triggers a checkpoint), a checkpoint is executed on the static buffer 200 based on the LRU caching algorithm and a free third buffer is generated in the static buffer 200; this preserves the original data while creating free space. Part of the data in the dynamic buffer 100, selected by the LRU caching algorithm, is then written into the third buffer, and the data of the write request is cached in the dynamic buffer 100. In this way the data of the write request can always be cached in the dynamic buffer 100 while the original data is saved rather than lost.
In some embodiments, further comprising:
a dictionary buffer 400 for holding data dictionary information;
when an SQL statement is obtained, converting the SQL statement into numeric codes, passing the codes to a HASH function and obtaining a HASH value, and searching the dynamic buffer for the same HASH value; if it exists, executing directly using the data corresponding to the HASH value; if not, performing syntax analysis on the SQL statement using the data dictionary information in the dictionary buffer 400 to generate compiled code; and caching the SQL statement, the HASH value and the compiled code in the dynamic buffer 100 and the dictionary buffer 400.
The characters of the SQL statement are converted into their equivalent ASCII numeric codes, the codes are passed to a HASH function, and a HASH value is returned; the dynamic buffer 100 is then searched for the same HASH value, and if it exists, the parsed execution-plan version of the SQL statement cached in the dynamic buffer 100 is used for execution, which improves the parsing efficiency of SQL statements.
The syntax analysis of the SQL statement using the data dictionary information in the dictionary buffer 400 is performed as follows: check the correctness of the SQL syntax against the data dictionary information; if the syntax is correct, analyze the objects involved in the SQL statement and check their names and related structures against the data dictionary information; generate an execution plan according to whether statistics for the corresponding objects exist in the data dictionary information and whether a stored outline is used; and generate compiled code according to the execution plan.
If the same HASH value does not exist in the dynamic buffer 100, then when structures such as tables and views are checked against the data dictionary during the SQL parsing stage, the data dictionary must be read from disk into the dictionary buffer 400; before reading, a latch (library cache pin) on the dictionary buffer 400 must therefore be acquired to cache the data dictionary. At this point the SQL statement has been compiled into executable code.
Wherein the objects in the dictionary buffer 400 include tables, indices, and views.
The compiled code is generated according to the execution plan as follows: the execution authority of the SQL statement over the corresponding objects is checked according to the execution plan and the data dictionary information, and the compiled code is generated.
In some embodiments, a log generated during the parsing of the SQL statement by the parsing module 700 is stored in the log buffer 300.
The process is recorded by the control module 500, and a log is generated and stored in the log buffer 300, so that the log buffer 300 keeps a record that can be queried later.
In some embodiments, further comprising:
when an update request is obtained, traversing the dynamic buffer 100, querying the dynamic buffer 100 for the original data corresponding to the request, updating the data corresponding to the request in the dynamic buffer 100, writing the original data into the static buffer 200, and storing the log generated by this process in the log buffer 300.
After an update request is received, the corresponding data in the dynamic buffer is updated and the original data is then written into the static buffer, so that the original data is preserved in the static buffer for subsequent use; at the same time, the log generated by this process is cached in the log buffer.
In some embodiments, further comprising:
when a deletion request is obtained, traversing the dynamic buffer 100, and, if data corresponding to the deletion request is found in the dynamic buffer 100, deleting it directly while executing a checkpoint on the data.
When a deletion request is received, the data can be deleted directly while being stored to disk at the time of deletion, so that if the data was deleted by mistake and can no longer be recovered from the buffer, it can still be recovered from the disk, avoiding data loss caused by accidental deletion.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above described systems, apparatuses and units may refer to the corresponding processes in the foregoing system embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the invention sends a request for acquiring the image of the annotation task to the server of the annotation platform through the client; after the server side of the labeling platform receives the request, the distributed buffer area performs registration service to the coordination module;
after the registration service is completed, the coordination module reads and writes the picture index information with the timestamp in the index library and sends the picture index information to the distributed buffer area; the distributed buffer zone feeds back the concurrent access amount to the coordination module;
the coordination module adjusts and distributes the picture index information to the distributed buffer area according to the concurrent access amount; the client reads the picture index information of the distributed buffer area and downloads the tagging task picture according to the picture index information;
and the client submits the annotation information to the annotation platform server, and the annotation platform server updates the annotation state of the index information of the annotation task picture. The embodiment of the invention solves the problem of high concurrent reading and writing by increasing the memory buffer area under the distributed storage of big data, provides a high-throughput concurrent labeling service method, and solves the technical problem that the data labeling service has high concurrent reading and writing due to the fact that a high concurrent reading and writing lock strategy is not provided during data labeling in the prior art.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A database memory management system, comprising:
a static buffer for storing data that complies with a first preset rule, the first preset rule including that no update occurs within a first preset time;
a dynamic buffer for storing data that does not comply with the first preset rule;
a log buffer for generating a log when data in the dynamic buffer is processed according to the first preset rule, and storing the log in the log buffer;
a control module for, when a processing request is obtained, traversing the dynamic buffer according to the request, and, when no corresponding data is found in the dynamic buffer, generating a first buffer corresponding to the request in the dynamic buffer, writing the data corresponding to the request from the static buffer into the first buffer, and executing a checkpoint on the data corresponding to the processing request in the static buffer.
2. The database memory management system according to claim 1, wherein the control module is further configured to write the data in the static buffer corresponding to the processing request into the first buffer, generate a log of the process of executing the checkpoint on the data in the static buffer corresponding to the processing request, assign a tag to the log, and store the log in the log buffer.
3. The database memory management system according to claim 2, further comprising:
a write-in module, configured to determine whether the dynamic buffer has a free buffer when a write-in request is obtained, determine whether the static buffer has a free buffer if the dynamic buffer does not have a free buffer, execute a checkpoint on the static buffer based on an LRU cache algorithm if the static buffer does not have a free buffer, and generate a free third buffer in the static buffer; and selecting the dynamic buffer area based on an LRU (least recently used) caching algorithm, writing partial data in the dynamic buffer area into the third buffer area, generating a free fourth buffer area in the dynamic buffer area, and writing data in a write request into the fourth buffer area.
4. The database memory management system according to claim 1, further comprising:
a dictionary buffer for storing data dictionary information;
a parsing module for, when an SQL statement is obtained, converting the SQL statement into numeric codes, passing the codes to a HASH function and obtaining a HASH value, and searching the dynamic buffer for the same HASH value; if it exists, executing directly using the data corresponding to the HASH value; if not, performing syntax analysis on the SQL statement using the data dictionary information in the dictionary buffer to generate compiled code; and caching the SQL statement, the HASH value and the compiled code in the dynamic buffer and the dictionary buffer.
5. The database memory management system according to claim 4, wherein the parsing module performs syntax analysis on the SQL statement using the data dictionary information in the dictionary buffer as follows: checking the correctness of the SQL syntax against the data dictionary information; if the syntax is correct, analyzing the objects involved in the SQL statement and checking their names and related structures against the data dictionary information; generating an execution plan according to whether statistics for the corresponding objects exist in the data dictionary information and whether a stored outline is used; and generating compiled code according to the execution plan.
6. The database memory management system according to claim 5, wherein the parsing module generates the compiled code according to the execution plan as follows: checking the execution authority of the SQL statement over the corresponding objects according to the execution plan and the data dictionary information, and generating the compiled code.
7. The database memory management system according to claim 6, wherein the control module is further configured to store a log generated by the parsing module parsing the SQL statement in the log buffer.
8. The database memory management system according to claim 1, further comprising:
an update module for, when an update request is obtained, traversing the dynamic buffer, querying the dynamic buffer for the original data corresponding to the request, updating the data corresponding to the request in the dynamic buffer, writing the original data into the static buffer, and storing the log generated by this process in the log buffer.
9. The database memory management system according to claim 1, further comprising:
a deletion module for, when a deletion request is obtained, traversing the dynamic buffer, and, if data corresponding to the deletion request is found in the dynamic buffer, deleting the data directly while executing a checkpoint on it.
10. A database memory management method is characterized by comprising the following steps:
acquiring a processing request;
traversing the dynamic buffer according to the processing request;
when no data corresponding to the processing request is found in the dynamic buffer, generating a first buffer corresponding to the request in the dynamic buffer;
according to the first buffer, writing the data corresponding to the processing request from the static buffer into the first buffer, and executing a checkpoint on the data corresponding to the processing request in the static buffer;
wherein the static buffer is used for storing data that complies with a first preset rule, the first preset rule including that no update occurs within a first preset time;
the dynamic buffer is used for storing data that does not comply with the first preset rule;
and the log buffer is used for generating a log when data in the dynamic buffer is processed according to the first preset rule, and storing the log in the log buffer.
CN202111193539.XA 2021-10-13 2021-10-13 Database memory management system and method Pending CN114153378A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111193539.XA CN114153378A (en) 2021-10-13 2021-10-13 Database memory management system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111193539.XA CN114153378A (en) 2021-10-13 2021-10-13 Database memory management system and method

Publications (1)

Publication Number Publication Date
CN114153378A true CN114153378A (en) 2022-03-08

Family

ID=80462438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111193539.XA Pending CN114153378A (en) 2021-10-13 2021-10-13 Database memory management system and method

Country Status (1)

Country Link
CN (1) CN114153378A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116009775A (en) * 2022-12-20 2023-04-25 广州辰创科技发展有限公司 Database memory management system and method
CN116009775B (en) * 2022-12-20 2024-04-02 广州辰创科技发展有限公司 Database memory management system and method


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication