CN107180082B - Data updating system and method based on multi-level cache mechanism - Google Patents


Info

Publication number
CN107180082B
CN107180082B
Authority
CN
China
Prior art keywords
data
cache
server
memory cache
request
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710304976.1A
Other languages
Chinese (zh)
Other versions
CN107180082A (en)
Inventor
杜易霖
Current Assignee
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201710304976.1A priority Critical patent/CN107180082B/en
Publication of CN107180082A publication Critical patent/CN107180082A/en
Application granted granted Critical
Publication of CN107180082B publication Critical patent/CN107180082B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G06F 16/2308 Concurrency control
    • G06F 16/2336 Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
    • G06F 16/2343 Locking methods, e.g. distributed locking or locking implementation details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893 Caches characterised by their organisation or structure
    • G06F 12/0897 Caches characterised by their organisation or structure with two or more cache hierarchy levels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/24552 Database cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/15 Use in a specific computing environment
    • G06F 2212/154 Networked environment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/16 General purpose computing application
    • G06F 2212/163 Server or database system

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a data updating system and method based on a multi-level cache mechanism. The system comprises a GPRS module and a server: the GPRS module receives request data sent by an air conditioning unit and reports the request data to the server; the server stores the reported request data in a memory cache; when the memory cache is full, the data in the memory cache is stored into a file cache; when data is generated in the file cache, a Hadoop file system backs up that data; when the data backed up in the Hadoop file system has been analyzed, the analyzed data is stored into a Redis database; and when an update time node is reached, the data updated by at least one data update instruction is written into the Redis database. The technical scheme provided by the invention can keep data updating stable while accommodating big data.

Description

Data updating system and method based on multi-level cache mechanism
Technical Field
The invention relates to the technical field of data processing, in particular to a data updating system and method based on a multi-level cache mechanism.
Background
With the rapid development of the modern Internet, bandwidth has grown and huge volumes of data arrive concurrently. This places higher demands on server performance and drives up maintenance costs; more and more applications face data-access bottlenecks, so how to store and process data has become a serious challenge.
Currently, high concurrency can lead to a large number of requesters simultaneously reading and writing the same piece of data, which can deadlock the database and crash the application. Therefore, as data volumes grow, it is important to provide a system that can accommodate very large amounts of data while performing data updates stably.
Disclosure of Invention
The embodiment of the invention provides a data updating system and method based on a multi-level cache mechanism, which can keep the stability of data updating under the condition of accommodating big data.
In order to achieve the above object, an aspect of the present invention provides a data updating system based on a multi-level cache mechanism. The system includes a GPRS module and a server, and the server includes a memory cache, a file cache, a Hadoop file system, and a Redis database, where: the GPRS module is used for receiving request data sent by an air conditioning unit and reporting the request data to the server; the server is used for storing the request data reported by the GPRS module into the memory cache; when the memory cache is full, storing the data in the memory cache into the file cache; when data is generated in the file cache, the Hadoop file system backs up the data generated in the file cache; when the data backed up in the Hadoop file system has been analyzed, storing the analyzed data into the Redis database, so that when the server is restarted, the data on disk is loaded into memory; when the server receives at least one data update instruction, judging whether an update time node has been reached; and when the update time node is reached, writing the data updated by the at least one data update instruction into the Redis database.
Further, the GPRS module is further configured to determine, after receiving the request data sent by the air conditioning unit, whether the currently accumulated request data volume has reached a preset data volume; and when the preset data volume is reached, to report the request data to the server.
Further, after the file cache is full of data, emptying the data in the file cache for the next data storage.
Further, when data in the server is read or written, the server is further configured to: receive an operation request and determine the type of the operation request; fetch the current data in the memory cache and judge whether the fetched data is dirty data; if the fetched data is dirty data, read the target data corresponding to the operation request from the Redis database and write the target data into the memory cache; and perform an operation matching the type of the operation request on the target data in the memory cache.
Further, the server is further configured to write the fetched data back to the Redis database if the fetched data is non-dirty data.
Further, when performing an operation adapted to the type of the operation request on the target data in the memory cache, the server is further configured to mark the target data as non-dirty data and return the target data to a requester that sends the read operation when the operation request is a read operation; and when the operation request is write operation, updating the target data in the memory cache, and marking the updated data as dirty data.
The application also provides a data updating method based on a multi-level cache mechanism, which comprises the following steps: receiving, through a GPRS module, request data sent by an air conditioning unit, and reporting the request data to a server; storing, by the server, the request data reported by the GPRS module into a memory cache; when the memory cache is full, storing the data in the memory cache into a file cache; when data is generated in the file cache, backing up, by a Hadoop file system, the data generated in the file cache; when the data backed up in the Hadoop file system has been analyzed, storing the analyzed data into a Redis database, so that when the server is restarted, the data on disk is loaded into memory; when the server receives at least one data update instruction, judging whether an update time node has been reached; and when the update time node is reached, writing the data updated by the at least one data update instruction into the Redis database.
Further, the method further comprises: when data in the server is read or written, receiving, by the server, an operation request and determining the type of the operation request; fetching the current data in the memory cache and judging whether the fetched data is dirty data; if the fetched data is dirty data, reading the target data corresponding to the operation request from the Redis database and writing the target data into the memory cache; and performing an operation matching the type of the operation request on the target data in the memory cache.
Further, the method further comprises: and if the fetched data is non-dirty data, the server writes the fetched data back to the Redis database.
Further, the operation of adapting to the type of the operation request on the target data in the memory cache includes: when the operation request is read operation, the server marks the target data as non-dirty data and returns the target data to a requester sending the read operation; and when the operation request is write operation, updating the target data in the memory cache, and marking the updated data as dirty data.
In this way, the server is provided with a multi-level cache, which relieves the pressure on the server when storing massive data. In addition, when the server receives data update instructions, it does not apply each instruction to the database immediately; instead, the instructions are processed in batch when the update time node arrives, which avoids frequently locking the database and thus keeps data updating stable.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of a multi-level caching mechanism in the present application;
FIG. 2 is a schematic diagram of data update in the present application;
FIG. 3 is a diagram illustrating a read/write operation in the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application provides a data updating system based on a multi-level cache mechanism, which includes a GPRS (General Packet Radio Service) module and a server. Referring to Fig. 1 and Fig. 2, the server includes a memory cache, a file cache, a Hadoop file system, and a Redis (Remote Dictionary Server) database, where:
the GPRS module is used for receiving request data sent by an air conditioning unit and reporting the request data to the server;
the server is used for storing the request data reported by the GPRS module into the memory cache; when the memory cache is full of data, storing the data in the memory cache into the file cache; when data are generated in the file cache, the Hadoop file system backs up the data generated in the file cache; when the data backed up in the Hadoop file system is analyzed, storing the analyzed data into the Redis database, so that when the server is restarted, the data in the disk is loaded into the memory;
when the server receives at least one data update instruction, it judges whether an update time node has been reached; when the update time node is reached, the data updated by the at least one data update instruction is written into the Redis database.
Specifically, when a data update instruction is received, the data can be updated and held in the cache first; only when the update time node arrives is the data written into the database.
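This batch write-back can be sketched in a few lines. The sketch below is only an illustration of the scheme, not code from the patent: a plain dict stands in for the Redis database, the update time node is modeled as a fixed interval on an injectable clock, and all names are invented.

```python
import time

class BatchedUpdater:
    """Stage update instructions in a cache; flush them in one batch
    when the update time node is reached (illustrative sketch)."""

    def __init__(self, database, interval_s=60.0, clock=time.monotonic):
        self.database = database          # stand-in for the Redis database
        self.interval_s = interval_s
        self.clock = clock
        self.staged = {}                  # updates held in the cache
        self.next_node = clock() + interval_s

    def update(self, key, value):
        # Each instruction only touches the cache, so the database
        # is not locked once per instruction.
        self.staged[key] = value
        if self.clock() >= self.next_node:    # update time node reached
            self.flush()

    def flush(self):
        self.database.update(self.staged)     # one batch write
        self.staged.clear()
        self.next_node = self.clock() + self.interval_s
```

Batching this way locks the database once per time node rather than once per instruction, which is the stability argument the description makes.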
In an actual application scenario, when the air conditioning unit has a fault or requests to actively report data, the unit's main controller transmits the data to the GPRS module. The GPRS module then preprocesses and stores the unit data, and reports it to the server once a set amount of data has accumulated.
In this embodiment, the memory cache serves as the level-one cache: the GPRS module communicates with the access server via TCP (Transmission Control Protocol), and when the GPRS module reports data to the server, the server puts the data into the memory-level cache. However, the memory-level cache space is very small and cannot hold a huge data volume, and it holds data only briefly, so the memory-level cache alone cannot meet the requirements.
In this embodiment, since the memory cache cannot meet the requirements, a cache with larger space is needed, namely the file cache. The file cache plays only a transitional role in the multi-level caching process: when a data stream is cached from the memory into the file cache on the server, a corresponding file is generated on the server disk to store the data.
In this embodiment, the Hadoop file system (the third-level cache) is a distributed system infrastructure that makes full use of a cluster for high-speed computation and storage. Hadoop implements a distributed file system, the Hadoop Distributed File System (HDFS for short), which provides storage for massive data. Because the file cache places high demands on server performance and storage space, it can easily reach its limit when massive data floods into the server concurrently. Therefore, when the file cache generates a storage file, the Hadoop synchronization program also generates a corresponding file in Hadoop and synchronizes the data into it. Once the file cache has finished storing the data, it is emptied to release storage space for the next round of data storage.
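The memory → file → backup flow just described can be sketched as follows. This is a toy illustration under stated assumptions: a local directory stands in for HDFS (a real deployment would invoke the Hadoop synchronization program instead), the memory cache is a fixed-capacity list, and every name is invented.

```python
import json
import tempfile
from pathlib import Path

class TieredCache:
    """Memory cache spills into a file cache when full; the file is
    backed up (HDFS stand-in) and the file cache is then emptied."""

    def __init__(self, capacity, spill_dir, backup_dir):
        self.capacity = capacity
        self.memory = []                    # level-one memory cache
        self.spill_dir = Path(spill_dir)    # level-two file cache
        self.backup_dir = Path(backup_dir)  # stands in for HDFS
        self.seq = 0

    def put(self, record):
        self.memory.append(record)
        if len(self.memory) >= self.capacity:   # memory cache is full
            self._spill()

    def _spill(self):
        name = f"cache_{self.seq}.json"
        spill_file = self.spill_dir / name
        spill_file.write_text(json.dumps(self.memory))               # file cache
        (self.backup_dir / name).write_text(spill_file.read_text())  # back up
        spill_file.unlink()      # empty the file cache for the next round
        self.memory.clear()
        self.seq += 1
```

Emptying the file cache only after the backup copy exists mirrors the order of operations in the description: the transitional tier never holds the sole copy for long.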
In this embodiment, Redis is an in-memory database that supports data persistence, so the data in memory can be kept on disk and loaded again for use after a restart, which implements data caching and avoids data loss. Compared with relational databases, Redis offers outstanding performance; it supports read-write separation and has high read and write speeds.
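For reference, the persistence behavior relied on here corresponds to standard Redis configuration. A hedged example of the two usual mechanisms in `redis.conf` (the exact thresholds are illustrative, not taken from the patent):

```
# RDB snapshotting: dump the dataset to disk if at least
# 1 key changed within 900 seconds.
save 900 1

# Append-only file: log every write and replay the log on
# restart, syncing it to disk once per second.
appendonly yes
appendfsync everysec
```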
For example, suppose the GPRS modules and the server communicate over TCP, and one million engineering GPRS modules are installed across the country. A GPRS module connects to the server only when its air conditioning unit fails or is being actively monitored; if, say, 10,000 modules are connected and transmitting data at the same time, the concurrency is 10,000. After receiving the data sent by the modules, the access server first stores it in memory, but the memory is very small and cannot hold such a huge data volume, so the data must be stored further into the file cache. While the file cache is being written, the Hadoop synchronization program is called to back up the stored data on Hadoop; when the file cache finishes storing the data, it is emptied to make room for the data reported next, and the analysis program then parses the data stored on Hadoop to obtain the corresponding results. While the analysis program processes the Hadoop data, the user can keep part of the raw data as needed, to allow re-checking after an analysis error; finally, the results are stored in the Redis cache and persisted.
In this embodiment, the GPRS module is further configured to determine, after receiving the request data sent by the air conditioning unit, whether the currently accumulated request data volume has reached a preset data volume; and when the preset data volume is reached, to report the request data to the server.
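The accumulate-then-report behavior can be sketched as below; this is an illustrative reading of the paragraph above, with a callback standing in for the TCP upload and all names invented.

```python
class GprsBuffer:
    """Accumulate request records until a preset amount is reached,
    then report the whole batch to the server (illustrative sketch)."""

    def __init__(self, preset_amount, report):
        self.preset_amount = preset_amount
        self.report = report      # callback standing in for the TCP upload
        self.pending = []

    def receive(self, record):
        self.pending.append(record)
        # Report only once the preset data volume has accumulated.
        if len(self.pending) >= self.preset_amount:
            self.report(list(self.pending))
            self.pending.clear()
```

Batching on the module side reduces how often the server is contacted, which matches the description of reporting only "when the data are stored to a set amount".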
In this embodiment, after the file cache is full of data, the data in the file cache is emptied for the next data storage.
Referring to Fig. 3, in this embodiment, when data in the server is read or written, the server is further configured to: receive an operation request and determine the type of the operation request; fetch the current data in the memory cache and judge whether the fetched data is dirty data; if the fetched data is dirty data, read the target data corresponding to the operation request from the Redis database and write the target data into the memory cache; and perform an operation matching the type of the operation request on the target data in the memory cache.
In this embodiment, the server is further configured to write the fetched data back to the Redis database if the fetched data is non-dirty data.
In this embodiment, when performing an operation matching the type of the operation request on the target data in the memory cache, the server is further configured to: when the operation request is a read operation, mark the target data as non-dirty data and return the target data to the requester that sent the read operation; and when the operation request is a write operation, update the target data in the memory cache and mark the updated data as dirty data.
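One plausible reading of this read/write path is sketched below. This is not the patent's code: a dict stands in for Redis, the memory cache is reduced to a single slot, dirty data evicted from the cache is staged for the batched flush at the update time node rather than written through, and all names are invented.

```python
class CachedStore:
    """Single-slot memory cache in front of a database (dict stands in
    for Redis). Reads mark data non-dirty; writes mark it dirty.
    Non-dirty data is written back on eviction; dirty data is staged
    for the batched flush at the update time node (illustrative)."""

    def __init__(self, database):
        self.database = database
        self.key = None
        self.value = None
        self.dirty = False
        self.staged = {}          # dirty data awaiting the batch flush

    def _prepare(self, key):
        if self.key == key:
            return                                    # target already cached
        if self.key is not None:
            if self.dirty:
                self.staged[self.key] = self.value    # defer to the time node
            else:
                self.database[self.key] = self.value  # write non-dirty back
        self.value = self.database[key]   # read the target data from the DB
        self.key = key
        self.dirty = False

    def read(self, key):
        self._prepare(key)
        self.dirty = False        # reads mark the target non-dirty
        return self.value         # returned to the requester

    def write(self, key, value):
        self._prepare(key)
        self.value = value        # update the target in the memory cache
        self.dirty = True         # mark the updated data dirty

    def flush(self):
        self.database.update(self.staged)  # batch write at the time node
        self.staged.clear()
```

The dirty flag is what lets a later request decide, as the description says, whether the cached copy can be written straight back or must first be reconciled with the database.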
The application also provides a data updating method based on a multi-level cache mechanism, which comprises the following steps:
receiving request data sent by an air conditioning unit through a GPRS module, and reporting the request data to a server;
the server stores the request data reported by the GPRS module into a memory cache; when the memory cache is full of data, storing the data in the memory cache into a file cache; when data are generated in the file cache, the Hadoop file system backs up the data generated in the file cache; when the data backed up in the Hadoop file system is analyzed, storing the analyzed data into a Redis database, so that when the server is restarted, the data in the disk is loaded into a memory;
when the server receives at least one data update instruction, it judges whether an update time node has been reached; when the update time node is reached, the data updated by the at least one data update instruction is written into the Redis database.
In this embodiment, the method further comprises:
when data in the server is read or written, the server receives an operation request and determines the type of the operation request; fetches the current data in the memory cache and judges whether the fetched data is dirty data; if the fetched data is dirty data, reads the target data corresponding to the operation request from the Redis database and writes the target data into the memory cache; and performs an operation matching the type of the operation request on the target data in the memory cache.
In this embodiment, the method further comprises:
and if the fetched data is non-dirty data, the server writes the fetched data back to the Redis database.
In this embodiment, the performing, to the target data in the memory cache, an operation adapted to the type of the operation request includes:
when the operation request is read operation, the server marks the target data as non-dirty data and returns the target data to a requester sending the read operation; and when the operation request is write operation, updating the target data in the memory cache, and marking the updated data as dirty data.
In this way, the server is provided with a multi-level cache, which relieves the pressure on the server when storing massive data. In addition, when the server receives data update instructions, it does not apply each instruction to the database immediately; instead, the instructions are processed in batch when the update time node arrives, which avoids frequently locking the database and thus keeps data updating stable.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described herein. They may also be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes may be made to the embodiment of the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A data updating system based on a multi-level cache mechanism is characterized by comprising a GPRS module and a server, wherein the server comprises a memory cache, a file cache, a Hadoop file system and a Redis database, wherein:
the GPRS module is used for receiving request data sent by an air conditioning unit and reporting the request data to the server;
the server is used for storing the request data reported by the GPRS module into the memory cache; when the memory cache is full of data, storing the data in the memory cache into the file cache; when data are generated in the file cache, the Hadoop file system backs up the data generated in the file cache; when the data backed up in the Hadoop file system is analyzed, storing the analyzed data into the Redis database, so that when the server is restarted, the data in the disk is loaded into the memory;
when the server receives at least one data updating instruction, updating and storing data in the memory cache, the file cache and the Hadoop file system, and judging whether an updating time node is reached; when the update time node is reached, processing the at least one data update instruction in batch, and writing the data updated by the at least one data update instruction into the Redis database;
when the data in the server is subjected to read-write operation, the server is also used for receiving an operation request and determining the type of the operation request; judging whether target data corresponding to the operation request exists in a memory cache, if not, taking out the current data in the memory cache, and judging whether the taken out data is dirty data; if the extracted data is dirty data, reading target data corresponding to the operation request from the Redis database, and writing the target data into the memory cache; performing operation adaptive to the type of the operation request on the target data in the memory cache; wherein the current data in the memory cache at least comprises: and requesting corresponding target data in the last operation.
2. The data updating system of claim 1, wherein the GPRS module is further configured to determine, after receiving the request data sent by the air conditioning unit, whether the currently accumulated request data volume has reached a preset data volume; and when the preset data volume is reached, to report the request data to the server.
3. The data update system of claim 1, wherein after the file cache is full of data, the data in the file cache is emptied for the next data storage.
4. The data update system of claim 1, wherein the server is further configured to write the fetched data back into the Redis database if the fetched data is non-dirty data.
5. The data updating system according to claim 1, wherein when performing an operation adapted to the type of the operation request on the target data in the memory cache, the server is further configured to mark the target data as non-dirty data and return the target data to a requester that sends the read operation when the operation request is a read operation; and when the operation request is write operation, updating the target data in the memory cache, and marking the updated data as dirty data.
6. A data updating method based on a multi-level cache mechanism is characterized by comprising the following steps:
receiving request data sent by an air conditioning unit through a GPRS module, and reporting the request data to a server;
the server stores the request data reported by the GPRS module into a memory cache; when the memory cache is full of data, storing the data in the memory cache into a file cache; when data are generated in the file cache, the Hadoop file system backs up the data generated in the file cache; when the data backed up in the Hadoop file system is analyzed, storing the analyzed data into a Redis database, so that when the server is restarted, the data in the disk is loaded into a memory;
when the server receives at least one data updating instruction, updating and storing data in the memory cache, the file cache and the Hadoop file system, and judging whether an updating time node is reached; when the update time node is reached, processing the at least one data update instruction in batch, and writing the data updated by the at least one data update instruction into the Redis database;
the method further comprises the following steps: when reading and writing data in the server, the server receives an operation request and determines the type of the operation request; judging whether target data corresponding to the operation request exists in a memory cache, if not, taking out the current data in the memory cache, and judging whether the taken out data is dirty data; if the extracted data is dirty data, reading target data corresponding to the operation request from the Redis database, and writing the target data into the memory cache; performing operation adaptive to the type of the operation request on the target data in the memory cache; wherein the current data in the memory cache at least comprises: and requesting corresponding target data in the last operation.
7. The data updating method of claim 6, wherein the method further comprises:
and if the fetched data is non-dirty data, the server writes the fetched data back to the Redis database.
8. The data updating method according to claim 6, wherein performing the operation on the target data in the memory cache according to the type of the operation request comprises:
when the operation request is read operation, the server marks the target data as non-dirty data and returns the target data to a requester sending the read operation; and when the operation request is write operation, updating the target data in the memory cache, and marking the updated data as dirty data.
CN201710304976.1A 2017-05-03 2017-05-03 Data updating system and method based on multi-level cache mechanism Expired - Fee Related CN107180082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710304976.1A CN107180082B (en) 2017-05-03 2017-05-03 Data updating system and method based on multi-level cache mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710304976.1A CN107180082B (en) 2017-05-03 2017-05-03 Data updating system and method based on multi-level cache mechanism

Publications (2)

Publication Number Publication Date
CN107180082A CN107180082A (en) 2017-09-19
CN107180082B true CN107180082B (en) 2020-12-18

Family

ID=59832360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710304976.1A Expired - Fee Related CN107180082B (en) 2017-05-03 2017-05-03 Data updating system and method based on multi-level cache mechanism

Country Status (1)

Country Link
CN (1) CN107180082B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108107792B (en) * 2017-12-29 2019-11-22 美的集团股份有限公司 Loading method, terminal and the computer readable storage medium of LUA script
CN109358805B (en) * 2018-09-03 2021-11-30 中新网络信息安全股份有限公司 Data caching method
CN111338560A (en) * 2018-12-19 2020-06-26 北京奇虎科技有限公司 Cache reconstruction method and device
CN112380067B (en) * 2020-11-30 2023-08-22 四川大学华西医院 Metadata-based big data backup system and method in Hadoop environment
CN116257493A (en) * 2022-12-29 2023-06-13 北京京桥热电有限责任公司 OPC (optical clear control) network gate penetrating interface based on caching mechanism

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699660A (en) * 2013-12-26 2014-04-02 清华大学 Large-scale network streaming data cache-write method
CN104156361A (en) * 2013-05-13 2014-11-19 阿里巴巴集团控股有限公司 Method and system for achieving data synchronization
CN104169892A (en) * 2012-03-28 2014-11-26 华为技术有限公司 Concurrently accessed set associative overflow cache
CN105897915A (en) * 2016-05-23 2016-08-24 珠海格力电器股份有限公司 Data receiving server and data processing system
CN106161644A (en) * 2016-08-12 2016-11-23 珠海格力电器股份有限公司 Distributed data processing system and data processing method thereof
CN106528792A (en) * 2016-11-10 2017-03-22 福州智永信息科技有限公司 Big data acquisition and high-speed processing method and system based on multi-layer caching mechanism

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077231B (en) * 2013-01-07 2015-08-19 阔地教育科技有限公司 Method and system for database synchronization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于物联网的反应釜节能减排监控系统关键技术研究";徐兴玉;《中国优秀硕士学位论文全文数据库 信息科技辑》;20150415;论文正文第5.2.2节 *

Also Published As

Publication number Publication date
CN107180082A (en) 2017-09-19

Similar Documents

Publication Publication Date Title
CN107180082B (en) Data updating system and method based on multi-level cache mechanism
US10642861B2 (en) Multi-instance redo apply
US20110060724A1 (en) Distributed database recovery
EP2673711B1 (en) Method and system for reducing write latency for database logging utilizing multiple storage devices
US9384072B2 (en) Distributed queue pair state on a host channel adapter
US20140372489A1 (en) In-database sharded queue for a shared-disk database
US9875259B2 (en) Distribution of an object in volatile memory across a multi-node cluster
US20180060145A1 (en) Message cache management for message queues
US9798745B2 (en) Methods, devices and systems for caching data items
CN107231395A (en) Date storage method, device and system
CN112084258A (en) Data synchronization method and device
US10528590B2 (en) Optimizing a query with extrema function using in-memory data summaries on the storage server
US11611617B2 (en) Distributed data store with persistent memory
CN105354046B (en) Database update processing method and system based on shared disk
US9990392B2 (en) Distributed transaction processing in MPP databases
US9679004B2 (en) Planned cluster node maintenance with low impact on application throughput
CN112307119A (en) Data synchronization method, device, equipment and storage medium
CN114490141A (en) High-concurrency IPC data interaction method based on shared memory
US20080301372A1 (en) Memory access control apparatus and memory access control method
EP2568386A1 (en) Method for accessing cache and fictitious cache agent
CN113094430A (en) Data processing method, device, equipment and storage medium
CN111371585A (en) Configuration method and device for CDN node
US9223799B1 (en) Lightweight metadata sharing protocol for location transparent file access
CN113311994A (en) Data caching method based on high concurrency
CN114490744B (en) Data caching method, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201218