CN104504158A - Memory caching method and device for rapidly updating business - Google Patents

Memory caching method and device for rapidly updating business

Info

Publication number
CN104504158A
CN104504158A (application number CN201510026026.8A)
Authority
CN
China
Prior art keywords
cache
data
memory cache
memory
key value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510026026.8A
Other languages
Chinese (zh)
Inventor
刘伟
金洪殿
辛国茂
亓开元
曹连超
卢军佐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN201510026026.8A
Publication of CN104504158A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/18 - File system types
    • G06F16/1847 - File system types specifically adapted to static storage, e.g. adapted to flash memory or SSD
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2455 - Query execution
    • G06F16/24552 - Database cache management

Abstract

Provided are a memory caching method and a memory caching device for a rapidly updating business. The memory caching method includes: searching a memory cache based on the cache lookup key of external data that has been read; if the cache lookup key is found in the memory cache, performing business processing according to the original data stored in the memory cache for that cache lookup key and the external data, updating the result into the location in the memory cache where the original data is stored after the business processing is completed, and setting the modification flag corresponding to that location to a modified state. The memory caching method and device reduce the number of query and update transactions the database has to commit when data is updated at a high rate.

Description

Memory caching method and apparatus for a rapidly updating business
Technical field
The present invention relates to the field of computer information storage technology, and in particular to a memory caching method and apparatus for a rapidly updating business.
Background art
As human society fully enters the information age, data has become a strategic resource as important as water and oil, and databases have become the most important means of organizing and storing data. Nearly all applications use a database, directly or indirectly, so database response speed has become an important bottleneck for application execution speed. To improve database response speed, mainstream relational databases provide a variety of strategies, such as caching execution plans, deferred transaction commit, connection pooling, and materialized views.
At the database level, most databases provide a buffer pool for caching data pages, but this cache is fine-grained, low-level, and insensitive to the actual business; it cannot provide overall caching of upper-level business results. Current business-level caching mainly focuses on caching query result sets, for example the distributed in-memory object caching system Memcached, which caches the result sets of front-end query requests in memory to reduce the number of database reads; such caches have been widely used in dynamic web applications to relieve database load.
However, for business applications that must update the database rapidly, and especially those that must compute the update from existing data, such as various classified statistics over log data, each computation requires a large number of lookups of existing data followed by the commit of update transactions. The database buffer pool cannot optimize this pattern, and no good business-level caching model currently exists; relying on the database alone results in very poor business performance.
Summary of the invention
To solve the above technical problems, the present invention proposes a memory caching method and apparatus for a rapidly updating business, which provides a simple, fast, and efficient processing scheme for rapidly updated business. The method comprises:
S1: reading external data;
S2: searching a memory cache based on the cache lookup key of the external data that was read;
S3: if the cache lookup key is found in the memory cache, performing business processing on the original data stored in the memory cache for that cache lookup key together with the external data, updating the result into the location in the memory cache that stores the original data after processing is complete, and setting the modification flag corresponding to that location to modified.
Preferably:
S4: if the cache lookup key is not found in the memory cache, determining the memory cache location to be replaced and checking whether the modification flag corresponding to the replaced location is unmodified; if it is unmodified, simply clearing the location, otherwise writing the original data stored at that location back to the database; then searching the database according to the cache lookup key of the external data; if the cache lookup key is found, fetching the original data corresponding to that key from the database, performing business processing on the original data together with the external data, updating the result into the memory cache location to be replaced after processing, and setting the corresponding modification flag to modified; if the cache lookup key does not exist in the database, performing business processing based on the external data alone, updating the result into the memory cache location to be replaced, and likewise setting the corresponding modification flag to modified.
Preferably:
S5: according to a configured time period, committing all the data in the memory cache to the database, and resetting the modification flags corresponding to all data storage locations in the memory cache to unmodified.
Preferably:
S0: setting up a configuration file in advance, and, based on the configuration file, extracting part of the data from the database and loading it into the memory cache.
Preferably:
The memory cache comprises: a cache lookup key field, a data field, a modification flag field, and a data update time field.
An apparatus for memory caching for a rapidly updating business comprises:
a data reception module, configured to receive external data;
a data processing module, configured to search a memory cache based on the cache lookup key of the external data that was read; and, if the cache lookup key is found in the memory cache, to perform business processing on the original data stored in the memory cache for that cache lookup key together with the external data, update the result into the location in the memory cache that stores the original data after processing is complete, and set the modification flag corresponding to that location to modified.
Preferably:
The data processing module is further configured to: if the cache lookup key is not found in the memory cache, determine the memory cache location to be replaced and check whether the modification flag corresponding to the replaced location is unmodified; if it is unmodified, simply clear the location, otherwise write the original data stored at that location back to the database; then search the database according to the cache lookup key of the external data; if the cache lookup key is found, fetch the original data corresponding to that key from the database, perform business processing on the original data together with the external data, update the result into the memory cache location to be replaced after processing, and set the corresponding modification flag to modified; if the cache lookup key does not exist in the database, perform business processing based on the external data alone, update the result into the memory cache location to be replaced, and likewise set the corresponding modification flag to modified.
Preferably:
a cache commit module, configured to commit all the data in the memory cache to the database according to a configured time period, and to reset the modification flags corresponding to all data storage locations in the memory cache to unmodified.
Preferably:
a configuration module, configured to set up a configuration file in advance;
a loading module, configured to extract part of the data from the database and load it into the memory cache based on the configuration file.
Preferably:
a log output module, configured to output the logs produced during data loading, data processing, and cache commit according to the configured log output directory and log output level.
The beneficial effect of the invention is that the proposed method and apparatus reduce the frequency with which the database must commit transactions and perform query and update operations.
Brief description of the drawings
Figure 1 shows the architecture of the proposed device for memory caching for a rapidly updating database business.
Figure 2 shows the memory cache structure used by the proposed scheme.
Figure 3 shows memory cache initialization in the proposed scheme.
Figure 4 shows a cache-hit example in the proposed scheme.
Figure 5 shows a cache-miss example in the proposed scheme.
Figure 6 shows an example in the proposed scheme in which the new data has no record in the database.
Figure 7 shows a cache commit example in the proposed scheme.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below with reference to the drawings, so as to fully illustrate how the invention applies technical means to solve the technical problems and the implementation process by which the technical effects are achieved. It should be noted that, provided they do not conflict, the features of the embodiments of the present invention may be combined with one another, and all such combinations fall within the scope of protection of the present invention.
The overall architecture of the memory cache model is shown in Figure 1. The implementation steps are as follows:
1. Writing and parsing the XML configuration file. An example XML configuration file is as follows:
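The example configuration file appears in the original publication only as an image and is not reproduced in this text; the following is a hypothetical illustration assembled from the parameters listed in Table 1 below (all element names and values are assumptions):

```xml
<cache-config>
    <!-- database connection: a JDBC URL or a named connection handled by the program -->
    <db_conn>jdbc:mysql://localhost:3306/logdb</db_conn>
    <!-- table from which data is extracted -->
    <table_name>log_stats</table_name>
    <!-- field(s) used as the cache lookup key; several fields may be combined -->
    <key>user_id,log_type</key>
    <!-- fields of the table that are loaded into the cache -->
    <columns>count,total_size,last_time</columns>
    <!-- loading mode: e.g. first n rows, last n rows, middle n rows -->
    <action>first_10000</action>
    <!-- cache container type: array, list, map, or user-defined -->
    <type>array</type>
    <!-- cache size (length of the array or list) -->
    <size>10000</size>
    <!-- cache lookup algorithm -->
    <find>hash</find>
    <!-- replacement algorithm used on a cache miss -->
    <replace>LRU</replace>
    <!-- cache commit period, e.g. in seconds -->
    <time>60</time>
    <!-- log output directory and level -->
    <dir>/var/log/memcache</dir>
    <level>error</level>
</cache-config>
```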
Each configuration parameter of the configuration file is explained in Table 1 below; the configuration file is parsed by the parsing module according to its file name.
Table 1. Configuration file parameters
Configuration item: explanation
db_conn: database connection; may be a database name or a JDBC connection string; the connection procedure is handled by the program
table_name: name of the table from which data is extracted
key: field(s) of the table used as the cache lookup key; several fields may be combined
columns: fields of the table that are loaded into the cache
action: data loading mode, e.g. load the first n rows, the last n rows, the middle n rows, etc.
type: cache container type, e.g. array, list, map, or user-defined
size: memory cache size; here size refers to the length of the array or list
find: lookup algorithm used by the memory cache
replace: replacement algorithm used when the memory cache misses
time: cache commit period
dir: log output directory
level: log output level
2. Initial data loading and memory cache structure.
The memory cache structure is shown in Figure 2; it consists of four parts: key, values, dirty, and tags.
key is the cache lookup key configured in the configuration file; values holds all the fields loaded from the table other than the key; dirty indicates whether the cache entry has been modified by external data, with 0 meaning unmodified and 1 meaning modified; tags cooperates with the cache replacement algorithm to carry out cache replacement, e.g. when a first-in-first-out (FIFO) algorithm is used, tags can be set to the time at which the entry was last loaded or updated.
The data loading module initializes the memory cache according to the configuration file: it connects to the database, extracts the specified fields from the specified table using the extraction mode (action) given in the configuration, loads the fields designated as the key into the key part of the cache and the remaining fields into the values part, sets the dirty bit of all entries to 0, and fills the tags part according to the configured cache replacement algorithm. See Figure 3 for a memory cache initialization example.
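The patent itself publishes no source code; as a purely illustrative sketch of the four-part structure and the initial load described above, a Java version might look as follows (class names, the JDBC query, the array-plus-hash-index container, and the use of a timestamp for tags are all assumptions):

```java
import java.sql.*;
import java.util.*;

// One memory cache entry: key, values, dirty flag and tags (here: last load/update time).
class CacheEntry {
    String key;                  // cache lookup key built from the configured key field(s)
    Map<String, Object> values;  // all loaded fields other than the key
    int dirty;                   // 0 = unmodified, 1 = modified by external data
    long tags;                   // used by the replacement algorithm, e.g. last load/update time

    CacheEntry(String key, Map<String, Object> values) {
        this.key = key;
        this.values = values;
        this.dirty = 0;                          // freshly loaded entries are clean
        this.tags = System.currentTimeMillis();  // suits FIFO/LRU style replacement
    }
}

class MemoryCache {
    final CacheEntry[] entries;                           // array container of the configured size
    final Map<String, Integer> index = new HashMap<>();   // hash lookup: key -> slot

    MemoryCache(int size) { this.entries = new CacheEntry[size]; }

    // Initial load: extract the configured key and columns from the configured table.
    void load(Connection conn, String table, String keyCol, List<String> columns, int limit)
            throws SQLException {
        String sql = "SELECT " + keyCol + ", " + String.join(", ", columns)
                   + " FROM " + table + " LIMIT " + limit;    // "load the first n rows" action
        try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            int slot = 0;
            while (rs.next() && slot < entries.length) {
                Map<String, Object> vals = new HashMap<>();
                for (String c : columns) vals.put(c, rs.getObject(c));
                entries[slot] = new CacheEntry(rs.getString(keyCol), vals);
                index.put(entries[slot].key, slot);
                slot++;
            }
        }
    }
}
```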
3. Data reception and processing.
The data reception module reads data through the interface provided by the program. The data read is usually put through simple preprocessing, such as removing messy fields, adding fields, handling enclosing characters, and handling null fields, and is then passed to the data processing procedure.
After the data processing module receives the data, it sends the key field to the algorithm module interface, and the algorithm processing module performs key matching in the memory cache. The lookup uses the configured cache lookup algorithm, commonly sequential search, binary search (when the keys are numeric and ordered), hashing, and so on. If the lookup succeeds, the position of the data is returned; otherwise failure is returned.
If the lookup succeeds, i.e. a cache hit, the data processing module performs business processing on the cached data and the new data, updates the result into the values part of the memory cache after processing, and sets dirty to 1. See the cache-hit example in Figure 4; this example uses the time as the tags field and the least recently used (LRU) algorithm as the cache replacement algorithm, and the same applies below.
On a cache miss, the algorithm processing module is called and uses the configured cache replacement algorithm to determine the cache location to be replaced. If the dirty bit of the replaced entry is 0, that section of the cache is simply cleared; if it is not 0, the cached data being replaced must first be written back to the database. The database is then searched using the key of the new data. If a record with the same key exists in the database, that record is fetched and business processing is performed on it together with the new data; after processing, the result is updated into the values part of the memory cache and dirty is set to 1 (see the cache-miss example in Figure 5). If the data does not exist in the database, the new data is processed directly and the result is updated into the values part of the memory cache, with dirty likewise set to 1 (see Figure 6 for an example in which the new data has no record in the database). A combined sketch of the hit and miss paths is given below.
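Continuing the hypothetical MemoryCache sketch above, the hit and miss paths could be expressed roughly as follows; merge() stands in for the actual business processing, LRU victim selection over the tags field is assumed, and the REPLACE INTO write-back is MySQL-flavoured shorthand rather than anything prescribed by the patent:

```java
// (methods added to the hypothetical MemoryCache class above)

// Process one piece of external data identified by its cache lookup key.
void process(Connection conn, String table, String keyCol, String key,
             Map<String, Object> externalData) throws SQLException {
    Integer slot = index.get(key);                 // configured lookup algorithm (hash here)
    if (slot != null) {                            // cache hit
        CacheEntry e = entries[slot];
        e.values = merge(e.values, externalData);  // business processing on cached + new data
        e.dirty = 1;                               // mark the entry as modified
        e.tags = System.currentTimeMillis();       // refresh tags for LRU
        return;
    }
    int victim = selectVictim();                   // cache miss: pick the slot to replace (LRU)
    CacheEntry old = entries[victim];
    if (old != null) {
        if (old.dirty != 0) writeBack(conn, table, keyCol, old); // flush modified data first
        index.remove(old.key);                     // an unmodified entry is simply discarded
    }
    Map<String, Object> original = queryByKey(conn, table, keyCol, key);
    Map<String, Object> base = (original != null) ? original : new HashMap<>();
    entries[victim] = new CacheEntry(key, merge(base, externalData));
    entries[victim].dirty = 1;                     // the result exists only in the cache so far
    index.put(key, victim);
}

// LRU victim: an empty slot if one exists, otherwise the entry with the oldest tags value.
int selectVictim() {
    int victim = 0;
    for (int i = 0; i < entries.length; i++) {
        if (entries[i] == null) return i;
        if (entries[i].tags < entries[victim].tags) victim = i;
    }
    return victim;
}

// Placeholder for the business processing, e.g. accumulating classified log statistics.
Map<String, Object> merge(Map<String, Object> base, Map<String, Object> extra) {
    Map<String, Object> out = new HashMap<>(base);
    out.putAll(extra);
    return out;
}

// Fetch the existing record for a key, or null if the database has no such record.
Map<String, Object> queryByKey(Connection conn, String table, String keyCol, String key)
        throws SQLException {
    String sql = "SELECT * FROM " + table + " WHERE " + keyCol + " = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, key);
        try (ResultSet rs = ps.executeQuery()) {
            if (!rs.next()) return null;
            Map<String, Object> row = new HashMap<>();
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++)
                if (!md.getColumnLabel(i).equals(keyCol)) row.put(md.getColumnLabel(i), rs.getObject(i));
            return row;
        }
    }
}

// Write one modified entry back to the database (MySQL REPLACE INTO used for brevity).
void writeBack(Connection conn, String table, String keyCol, CacheEntry e) throws SQLException {
    List<String> cols = new ArrayList<>(e.values.keySet());
    String placeholders = String.join(", ", Collections.nCopies(cols.size() + 1, "?"));
    String sql = "REPLACE INTO " + table + " (" + keyCol + ", " + String.join(", ", cols)
               + ") VALUES (" + placeholders + ")";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setObject(1, e.key);
        for (int i = 0; i < cols.size(); i++) ps.setObject(i + 2, e.values.get(cols.get(i)));
        ps.executeUpdate();
    }
}
```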
4. Cache commit. According to the configured time period, the cache commit module periodically commits all the memory cache data to the database and resets the dirty bit of all memory cache entries to 0. See the cache commit example in Figure 7.
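A minimal sketch of the periodic commit, again as a hypothetical method of the MemoryCache class above; scheduling via a ScheduledExecutorService is an assumption, and the patent text commits all cache data (flushing only dirty entries would be an obvious optimisation):

```java
// (method added to the hypothetical MemoryCache class above)
// Requires java.util.concurrent.* in addition to the earlier imports.
// Periodically commit the whole cache to the database and clear all dirty bits.
void startCommitTask(Connection conn, String table, String keyCol, long periodSeconds) {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(() -> {
        try {
            conn.setAutoCommit(false);                  // one transaction per commit cycle
            for (CacheEntry e : entries) {
                if (e == null) continue;
                writeBack(conn, table, keyCol, e);      // push cache data to the database
                e.dirty = 0;                            // reset the modification flag
            }
            conn.commit();
        } catch (SQLException ex) {
            ex.printStackTrace();                       // real code would use the log module
        }
    }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
}
```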
5. Log output.
The log output module outputs logs according to the configured log output directory and log output level. The log levels may be divided into 3 to 7 grades; for example, a 3-grade classification could consist of error, general, and detailed levels. The error level outputs only runtime errors, the general level additionally outputs summary information about data processing, and the detailed level outputs detailed statistics about data processing. Detailed-level logs can be used for debugging and for tuning the configuration.
The above modules may also be implemented in hardware such as FPGA devices or application-specific integrated circuits.
Of course, the present invention may also have various other embodiments. Without departing from the spirit and essence of the present invention, those of ordinary skill in the art may make various corresponding changes and modifications according to the present invention, and all such changes and modifications shall fall within the scope of protection of the claims of the present invention.

Claims (10)

1. A memory caching method for a rapidly updating business, characterized by comprising:
S1: reading external data;
S2: searching a memory cache based on the cache lookup key of the external data that was read;
S3: if the cache lookup key is found in the memory cache, performing business processing on the original data stored in the memory cache for that cache lookup key together with the external data, updating the result into the location in the memory cache that stores the original data after processing is complete, and setting the modification flag corresponding to that location to modified.
2. The method according to claim 1, characterized by further comprising:
S4: if the cache lookup key is not found in the memory cache, determining the memory cache location to be replaced and checking whether the modification flag corresponding to the replaced location is unmodified; if it is unmodified, simply clearing the location, otherwise writing the original data stored at that location back to the database; then searching the database according to the cache lookup key of the external data; if the cache lookup key is found, fetching the original data corresponding to that key from the database, performing business processing on the original data together with the external data, updating the result into the memory cache location to be replaced after processing, and setting the corresponding modification flag to modified; if the cache lookup key does not exist in the database, performing business processing based on the external data alone, updating the result into the memory cache location to be replaced, and likewise setting the corresponding modification flag to modified.
3. The method according to claim 2, characterized by further comprising:
S5: according to a configured time period, committing all the data in the memory cache to the database, and resetting the modification flags corresponding to all data storage locations in the memory cache to unmodified.
4. The method according to any one of claims 1 to 3, characterized by further comprising, before step S1:
S0: setting up a configuration file in advance, and, based on the configuration file, extracting part of the data from the database and loading it into the memory cache.
5. The method according to claim 4, characterized in that:
the memory cache comprises: a cache lookup key field, a data field, a modification flag field, and a data update time field.
6. A memory caching apparatus for a rapidly updating business, characterized by comprising:
a data reception module, configured to receive external data;
a data processing module, configured to search a memory cache based on the cache lookup key of the external data that was read; and, if the cache lookup key is found in the memory cache, to perform business processing on the original data stored in the memory cache for that cache lookup key together with the external data, update the result into the location in the memory cache that stores the original data after processing is complete, and set the modification flag corresponding to that location to modified.
7. The apparatus according to claim 6, characterized in that:
the data processing module is further configured to: if the cache lookup key is not found in the memory cache, determine the memory cache location to be replaced and check whether the modification flag corresponding to the replaced location is unmodified; if it is unmodified, simply clear the location, otherwise write the original data stored at that location back to the database; then search the database according to the cache lookup key of the external data; if the cache lookup key is found, fetch the original data corresponding to that key from the database, perform business processing on the original data together with the external data, update the result into the memory cache location to be replaced after processing, and set the corresponding modification flag to modified; if the cache lookup key does not exist in the database, perform business processing based on the external data alone, update the result into the memory cache location to be replaced, and likewise set the corresponding modification flag to modified.
8. The apparatus according to claim 6, characterized by further comprising:
a cache commit module, configured to commit all the data in the memory cache to the database according to a configured time period, and to reset the modification flags corresponding to all data storage locations in the memory cache to unmodified.
9. The apparatus according to any one of claims 6 to 8, characterized by further comprising:
a configuration module, configured to set up a configuration file in advance;
a loading module, configured to extract part of the data from the database and load it into the memory cache based on the configuration file.
10. The apparatus according to claim 9, characterized by further comprising:
a log output module, configured to output the logs produced during data loading, data processing, and cache commit according to the configured log output directory and log output level.
CN201510026026.8A 2015-01-19 2015-01-19 Memory caching method and device for rapidly updating business Pending CN104504158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510026026.8A CN104504158A (en) 2015-01-19 2015-01-19 Memory caching method and device for rapidly updating business

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510026026.8A CN104504158A (en) 2015-01-19 2015-01-19 Memory caching method and device for rapidly updating business

Publications (1)

Publication Number Publication Date
CN104504158A true CN104504158A (en) 2015-04-08

Family

ID=52945555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510026026.8A Pending CN104504158A (en) 2015-01-19 2015-01-19 Memory caching method and device for rapidly updating business

Country Status (1)

Country Link
CN (1) CN104504158A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060161546A1 (en) * 2005-01-18 2006-07-20 Callaghan Mark D Method for sorting data
CN101131673A (en) * 2006-08-22 2008-02-27 中兴通讯股份有限公司 General caching method
WO2013041852A2 (en) * 2011-09-19 2013-03-28 Cloudtran, Inc. Scalable distributed transaction processing system
CN103246696A (en) * 2013-03-21 2013-08-14 宁波公众信息产业有限公司 High-concurrency database access method and method applied to multi-server system

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107710203A (en) * 2015-06-29 2018-02-16 微软技术许可有限责任公司 Transaction database layer on distributed key/value thesaurus
US11301457B2 (en) 2015-06-29 2022-04-12 Microsoft Technology Licensing, Llc Transactional database layer above a distributed key/value store
US10776323B2 (en) 2015-08-24 2020-09-15 Alibaba Group Holding Limited Data storage for mobile terminals
WO2017032240A1 (en) * 2015-08-24 2017-03-02 阿里巴巴集团控股有限公司 Data storage method and apparatus for mobile terminal
CN106897280B (en) * 2015-12-17 2020-07-14 菜鸟智能物流控股有限公司 Data query method and device
CN106897280A (en) * 2015-12-17 2017-06-27 阿里巴巴集团控股有限公司 Data query method and device
CN106055640A (en) * 2016-05-31 2016-10-26 乐视控股(北京)有限公司 Buffer memory management method and system
CN106156334B (en) * 2016-07-06 2019-11-22 益佳科技(北京)有限责任公司 Internal storage data processing equipment and internal storage data processing method
CN106156334A (en) * 2016-07-06 2016-11-23 益佳科技(北京)有限责任公司 Internal storage data processing equipment and internal storage data processing method
CN106326499B (en) * 2016-10-14 2019-10-18 广州市千钧网络科技有限公司 A kind of data processing method and device
CN106326499A (en) * 2016-10-14 2017-01-11 广州市千钧网络科技有限公司 Data processing method and device
CN107301215A (en) * 2017-06-09 2017-10-27 北京奇艺世纪科技有限公司 A kind of search result caching method and device, searching method and device
CN107301215B (en) * 2017-06-09 2020-12-18 北京奇艺世纪科技有限公司 Search result caching method and device and search method and device
CN107679218A (en) * 2017-10-17 2018-02-09 九州通医疗信息科技(武汉)有限公司 Searching method and device based on internal memory
CN109299125A (en) * 2018-10-31 2019-02-01 中国银行股份有限公司 Database update method and device
CN109933312A (en) * 2019-03-25 2019-06-25 南京邮电大学 A method of containerization relevant database I/O consumption is effectively reduced
CN109933312B (en) * 2019-03-25 2021-06-01 南京邮电大学 Method for effectively reducing I/O consumption of containerized relational database

Similar Documents

Publication Publication Date Title
CN104504158A (en) Memory caching method and device for rapidly updating business
US11119997B2 (en) Lock-free hash indexing
CN107273522B (en) Multi-application-oriented data storage system and data calling method
CN105868228B (en) In-memory database system providing lock-free read and write operations for OLAP and OLTP transactions
CN105630864B (en) Forced ordering of a dictionary storing row identifier values
CN105630860B (en) Database system with transaction control block index
CN105630863B (en) Transaction control block for multi-version concurrent commit status
US10552402B2 (en) Database lockless index for accessing multi-version concurrency control data
CN105630865B (en) N-bit compressed versioned column data array for memory columnar storage
US9875024B2 (en) Efficient block-level space allocation for multi-version concurrency control data
US10671594B2 (en) Statement based migration for adaptively building and updating a column store database from a row store database based on query demands using disparate database systems
CN104298760B (en) A kind of data processing method and data processing equipment applied to data warehouse
CN104685498B (en) The hardware implementation mode of polymerization/division operation:Hash table method
CN102799634B (en) Data storage method and device
US10657116B2 (en) Create table for exchange
US20160147786A1 (en) Efficient Database Undo / Redo Logging
US20110137875A1 (en) Incremental materialized view refresh with enhanced dml compression
CN105408895A (en) Latch-free, log-structured storage for multiple access methods
CN103678556A (en) Method for processing column-oriented database and processing equipment
US10860562B1 (en) Dynamic predicate indexing for data stores
US10437688B2 (en) Enhancing consistent read performance for in-memory databases
CN104021145A (en) Mixed service concurrent access method and device
US8386445B2 (en) Reorganizing database tables
CN110109910A (en) Data processing method and system, electronic equipment and computer readable storage medium
CN108536745B (en) Shell-based data table extraction method, terminal, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20150408