A memory database method supporting mass storage
Technical field
The present invention relates to memory database data management technology, and in particular to a memory database method supporting mass storage.
Background technology
In recent years, with the rapid development of information technology, performance has become increasingly important, and memory databases are used in many systems. As the amount of information grows, the memory of the host on which the memory database resides becomes insufficient, which causes problems.
The basic working principle and goal of a memory database is to keep information in memory in order to improve response speed, because memory operations are much faster than disk operations. Some memory database systems load all the information of the physical database into the memory database at startup; thereafter the user only operates on the contents of the memory database, and a background worker process synchronizes it with the physical database, which greatly improves response speed. As shown in Figure 1, however, when the physical database has a large amount of data to be loaded into memory, memory may run out. The solution adopted by some systems is to load part of the data into memory, swapping out previously used memory, while the rest remains in the physical database, as shown in Figure 2. Then, whenever a user accesses data that is still in the physical database, it must be reloaded, which defeats the purpose of the memory database's fast access.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defect of the prior art that a large volume of physical database data cannot all be loaded into the memory database, by providing a memory database method supporting mass storage, which ensures the efficient performance of a memory database with mass storage and speeds up access to the memory database.
The present invention solves the above technical problem through the following technical solution: a memory database method supporting mass storage, characterized in that it comprises the following steps: the memory database loads from a physical database those records that a user will access and that are not yet in the memory database; and when the memory database reaches its memory usage limit, records whose use probability is below a set value are periodically removed from the memory database.
Preferably, the periodic removal of records whose use probability is below the set value is performed by two independently working threads, which scan the memory database's data record by record, starting from the front end and the tail end respectively.
Preferably, after each independently working thread has scanned a segment of records in the memory database, it releases a synchronization lock and switches threads by putting itself to sleep.
Preferably, the memory database synchronizes from the physical database only those records that already exist in the memory database.
Preferably, each access time and the access count are added to each record, and the use probability of a record is calculated from its access times and access count.
The positive effect of the present invention is: with the method of the present invention, over time the information of offline users is removed from memory because its use probability becomes small, so that memory basically contains only online users, yielding faster lookup for online users. Furthermore, because the background cleaning thread sleeps after cleaning each segment of data, it does not occupy too much CPU (microprocessor) time in a short period and gives other applications the opportunity to access the in-memory data, improving response speed.
Description of drawings
Fig. 1 is a schematic diagram of how an existing memory database may run out of memory when loading the physical database content.
Fig. 2 is a schematic diagram of the existing handling by a memory database when memory is insufficient.
Fig. 3 is a schematic diagram of the two threads of the present invention removing records whose use probability is below the set value, working from the two ends of the memory database toward the middle.
Fig. 4 is a schematic diagram of the selective synchronization method between the memory database and the physical database of the present invention.
Embodiment
A preferred embodiment of the present invention is given below in conjunction with the accompanying drawings, to describe the technical solution of the present invention in detail.
In the present invention, the minimum use probability of data records can be set before the memory database system starts. When an application accesses the memory database: if the record is in the memory database, its access count is incremented by 1, the time of use is recorded, and the record is then accessed directly; if the record is not in the memory database, it is first loaded from the physical database into the memory database, information such as the access count and the time of use is recorded for it, and the access operation is then performed.
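As an illustrative sketch only (not the claimed implementation), the access path described above might look like the following; the class name `MemoryDB`, the stand-in `PHYSICAL_DB` dictionary, and the record layout are all assumptions for the example.

```python
import time

# Sketch of the access path: on a hit the access count and last-use time
# are updated; on a miss the record is first loaded from the physical
# database, its tracking fields are initialized, and it is then accessed.

PHYSICAL_DB = {"alice": {"status": "online"}}  # hypothetical physical store

class MemoryDB:
    def __init__(self):
        # key -> {"value", "count", "first_used", "last_used"}
        self.records = {}

    def access(self, key):
        now = time.time()
        if key not in self.records:            # miss: load from physical DB
            value = PHYSICAL_DB[key]
            self.records[key] = {"value": value, "count": 0,
                                 "first_used": now, "last_used": now}
        rec = self.records[key]
        rec["count"] += 1                      # increment access count
        rec["last_used"] = now                 # record time of this use
        return rec["value"]

db = MemoryDB()
db.access("alice")                             # first access loads the record
```

The access count and first/last use times recorded here are exactly the fields the cleaning threads later need to compute a record's use probability.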
The memory database synchronizes from the physical database only those records that already exist in the memory database: when a record in the physical database is modified, the change is refreshed to the memory database only if that record is also present in the memory database; otherwise no refresh is needed, as shown in Figure 4. When the memory database modifies data, for example when a record is added, the record is automatically added to the physical database, i.e. it is automatically synchronized to the physical database.
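A minimal sketch of the selective synchronization of Figure 4, assuming simple dict-backed stores (all names here are hypothetical): a physical-side update refreshes the memory database only when the record is already cached there, while a memory-side insert is always pushed down to the physical database.

```python
# Hypothetical sketch of the selective synchronization described above.
memory_db = {}                        # records currently cached in memory
physical_db = {"u1": "profile-1"}     # authoritative physical store

def on_physical_update(key, value):
    """A physical record changed: refresh memory only if it is cached."""
    physical_db[key] = value
    if key in memory_db:              # Figure 4: refresh cached records only
        memory_db[key] = value        # otherwise no refresh is needed

def on_memory_insert(key, value):
    """A record was added in memory: sync it down to the physical database."""
    memory_db[key] = value
    physical_db[key] = value          # automatic downward synchronization

on_physical_update("u1", "profile-1b")   # not cached -> memory untouched
on_memory_insert("u2", "profile-2")      # cached and pushed to physical
on_physical_update("u2", "profile-2b")   # cached -> refreshed in memory too
```

The point of the `if key in memory_db` check is that uncached records cost nothing to keep in sync: the memory database never refreshes data it is not holding.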
Meanwhile, two independently working threads are dedicated to calculating the use probability of every record. The concrete method is as follows: the two threads scan from the two ends of the data toward the middle and clean periodically, as shown in Figure 3, dividing a record's access count by the interval between its first and last use times. If the calculated use probability is below the set value, all content associated with that record is removed from the memory database. After visiting a certain number of data records, each thread releases the synchronization lock and sleeps, so that other threads have the opportunity to access the memory database afterwards; in this way the memory database achieves a faster response speed.
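The two-thread cleaning described above can be sketched as follows. This is a simplified illustration rather than the patented implementation: the threshold value, sleep interval, batch size, and record layout are assumptions, and for simplicity each thread walks the full key list from its own end instead of stopping where the threads meet. Use probability is computed as the access count divided by the interval between first and last use, and each thread releases the lock and sleeps after every batch.

```python
import threading
import time

lock = threading.Lock()
MIN_PROBABILITY = 1.0   # assumed threshold: evict below 1 access per second

# record: key -> (access_count, first_used, last_used); times in seconds.
# Odd-indexed users were accessed once, even-indexed users 50 times.
now = time.time()
records = {f"user{i}": (1 if i % 2 else 50, now - 10.0, now)
           for i in range(100)}

def use_probability(count, first, last):
    # accesses per second over the record's lifetime in memory
    return count / max(last - first, 1e-9)

def cleaner(keys, batch=8):
    """Scan keys in the given order, evicting low-probability records."""
    for i in range(0, len(keys), batch):
        with lock:                       # hold the lock for one batch only
            for key in keys[i:i + batch]:
                rec = records.get(key)   # may already be evicted by the peer
                if rec and use_probability(*rec) < MIN_PROBABILITY:
                    del records[key]     # remove the record from memory
        time.sleep(0.001)                # release the lock and sleep

keys = list(records)
front = threading.Thread(target=cleaner, args=(keys,))        # front -> end
back = threading.Thread(target=cleaner, args=(keys[::-1],))   # tail -> front
front.start(); back.start()
front.join(); back.join()
# Only the frequently used (even-index) records remain in memory.
```

The `records.get(key)` check makes eviction idempotent, so the two threads can safely overlap on the same keys while contending for the single synchronization lock.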
The benefits of having two threads access the data from the two ends are: first, two threads obtain more time slices than a single thread, which speeds up access; second, scanning from the two ends toward the middle in opposite directions reduces the probability that both threads visit the same data at the same time, so contention for the synchronization lock decreases and the traversal speeds up.
This solution can be applied, for example, in an instant messaging software project. When there are very many users, not all the data in the physical database can be loaded into the memory database; instead, the memory databases on many application servers load data from the same physical database, and each application server loads from the physical database only the profile records of those users whose current use probability on that server is relatively high.
With this solution, after a period of time the information of offline users is removed from memory because its use probability becomes small; memory then basically contains only online users, giving faster lookup for online users. Moreover, because the background cleaning thread sleeps after cleaning each segment of data, it does not occupy too much CPU (microprocessor) time in a short period and gives other applications the opportunity to access the in-memory data, improving response speed.
Although specific embodiments of the present invention have been described above, those skilled in the art will understand that these are merely illustrative, and that numerous changes or modifications can be made to these embodiments without departing from the principle and essence of the present invention. Therefore, the protection scope of the present invention is defined by the appended claims.