CN111506261B - Caching method, device, equipment and storage medium based on double cache areas

Caching method, device, equipment and storage medium based on double cache areas

Info

Publication number
CN111506261B
Authority
CN
China
Prior art keywords
cache
preset
entity
thread
mapping structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010211118.4A
Other languages
Chinese (zh)
Other versions
CN111506261A (en)
Inventor
艾可德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd
Priority to CN202010211118.4A
Publication of CN111506261A
Application granted
Publication of CN111506261B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 - Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656 - Data buffering arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 - Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0652 - Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the field of cloud technology, and discloses a caching method, device, equipment and storage medium based on double cache areas, which cache data by using both a thread sharing cache mode and a thread isolation cache mode, thereby improving the flexibility of data caching. The caching method based on double cache areas comprises the following steps: obtaining a cache request of a target terminal; judging whether the cache request belongs to a write cache request, a read cache request or a delete cache request; if the cache request belongs to the write cache request, writing a write mapping structure cache entity into a preset cache area by adopting a thread sharing write mode or a thread isolation write mode; if the cache request belongs to the read cache request, acquiring a target preset mapping structure cache entity from the preset cache area by adopting a thread sharing read mode or a thread isolation read mode; if the cache request belongs to the delete cache request, acquiring the preset cache entities to be deleted by adopting a thread sharing deletion mode or a thread isolation deletion mode, and deleting them.

Description

Caching method, device, equipment and storage medium based on double cache areas
Technical Field
The present invention relates to the field of cloud technologies, and in particular, to a caching method, device, equipment and storage medium based on dual cache regions.
Background
With the development of science and technology, people generate various cached data when browsing the internet; the more web pages are browsed, the more cached data is produced, so how to cache this data better has become a problem that urgently needs to be solved. Currently, the hash map (HashMap) in Java and some open-source local cache frameworks are commonly used as local caches.
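For context, the HashMap-style local cache mentioned above typically amounts to a single process-wide map. The following minimal Java sketch illustrates that baseline; the class and method names are illustrative and are not taken from this patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal process-wide local cache built on a map, as commonly done before
// introducing the dual cache regions described below. Names are illustrative.
public class SimpleLocalCache {
    private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

    public static void put(String key, Object value) {
        CACHE.put(key, value);
    }

    public static Object get(String key) {
        return CACHE.get(key);
    }

    public static void remove(String key) {
        CACHE.remove(key);
    }
}
```

Such a global map is shared by every thread and has no per-thread isolation or timeout handling, which is the limitation the following scheme addresses.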
In the prior art, a local global cache is not suitable for scenarios that mix thread isolation with thread sharing and is therefore quite limited, while open-source cache frameworks are too heavyweight, so caching flexibility is low.
Disclosure of Invention
The invention mainly aims to solve the problems of strong limitations and low flexibility in data caching.
The first aspect of the present invention provides a caching method based on dual cache regions, including: obtaining a cache request of a target terminal; judging whether the cache request belongs to a write cache request, a read cache request or a delete cache request; if the cache request belongs to the write-in cache request, a write-in mapping structure cache entity is obtained, and the write-in mapping structure cache entity is written into a preset cache region by adopting a thread sharing write-in mode or a thread isolation write-in mode, wherein the preset cache region comprises a preset thread sharing cache region and a preset thread isolation cache region; if the cache request belongs to the read cache request, a thread sharing read mode or a thread isolation read mode is adopted to obtain a target preset mapping structure cache entity from a preset cache region, and the target preset mapping structure cache entity is read, wherein the target preset mapping structure cache entity is a first target preset mapping structure cache entity or a second target preset mapping structure cache entity, the first target preset mapping structure cache entity is a preset mapping structure cache entity obtained from the preset thread sharing cache region through the thread sharing read mode, and the second target preset mapping structure cache entity is a preset mapping structure cache entity obtained from the preset thread isolation cache region through the thread isolation read mode; if the cache request belongs to the cache deletion request, a thread sharing deletion mode or a thread isolation deletion mode is adopted to acquire a plurality of preset cache entities to be deleted in a preset cache region, and delete the preset cache entities to be deleted, wherein the preset cache entities to be deleted are a plurality of first preset cache entities to be deleted or a plurality of second preset cache entities to be deleted, the first preset cache entities to be deleted are preset cache entities cached in the preset thread sharing cache region, and the second preset cache entities to be deleted are preset cache entities cached in the preset thread isolation cache region.
Optionally, in a first implementation manner of the first aspect of the present invention, if the cache request belongs to the write cache request, obtaining a write mapping structure cache entity, and writing the write mapping structure cache entity into a preset cache area by adopting a thread sharing write mode or a thread isolation write mode, where the preset cache area includes a preset thread sharing cache area and a preset thread isolation cache area, includes: if the cache request belongs to the write cache request, judging whether a preset cache area exists, and if the preset cache area does not exist, first establishing a preset thread isolation cache area and then establishing a preset thread sharing cache area to obtain the preset cache area; acquiring an initial write mapping structure cache entity according to the write cache request, and determining a write logic type according to the initial write mapping structure cache entity; judging whether the write mode of the write cache request is a thread sharing write mode or a thread isolation write mode according to the write logic type; if the write mode of the write cache request is judged to be the thread sharing write mode, adjusting the state of the preset thread sharing cache area to a temporary read-write state, adjusting the initial write mapping structure cache entity to obtain the write mapping structure cache entity, and writing the write mapping structure cache entity into the preset thread sharing cache area; and if the write mode of the write cache request is judged to be the thread isolation write mode, adjusting the initial write mapping structure cache entity to obtain the write mapping structure cache entity, and writing the write mapping structure cache entity into the preset thread isolation cache area.
Optionally, in a second implementation manner of the first aspect of the present invention, the acquiring an initial write mapping structure cache entity according to the write cache request, and determining a write logic type according to the initial write mapping structure cache entity includes: determining a cache handle according to the write cache request, and matching a corresponding first mapping structure cache entity according to the cache handle; adding a key value in the first mapping structure cache entity to generate a second mapping structure cache entity, where the key value is used for storing the content value of the first mapping structure cache entity; adding an effective duration in the second mapping structure cache entity to generate a third mapping structure cache entity; and adding an update time in the third mapping structure cache entity to generate the initial write mapping structure cache entity, and determining the write logic type according to the initial write mapping structure cache entity, where the effective duration and the update time are used for judging whether the initial write mapping structure cache entity has timed out and become invalid.
Optionally, in a third implementation manner of the first aspect of the present invention, if it is determined that the writing manner of the write cache request is a thread isolation writing manner, adjusting the initial write mapping structure cache entity to obtain a write mapping structure cache entity, and writing the write mapping structure cache entity into a preset thread isolation cache area includes: if the writing mode of the writing cache request is judged to be a thread isolation writing mode, matching the initial writing mapping structure cache entity with a plurality of preset writing threads in the preset thread isolation cache area to obtain a target writing thread; judging whether the target writing thread has a corresponding writing sub-buffer area in the preset thread isolation buffer area; if the write sub-buffer corresponding to the target write thread does not exist in the preset thread isolation buffer, establishing the write sub-buffer corresponding to the target write thread to obtain a target write sub-buffer; and adjusting the updating time of the initial writing mapping structure buffer entity to be the current time to obtain the writing mapping structure buffer entity, and storing the writing mapping structure buffer entity into the target writing sub-buffer area which is arranged in the preset thread isolation buffer area.
Optionally, in a fourth implementation manner of the first aspect of the present invention, if the cache request belongs to the read cache request, acquiring a target preset mapping structure cache entity from a preset cache area by adopting a thread sharing read mode or a thread isolation read mode, and reading the target preset mapping structure cache entity, where the target preset mapping structure cache entity is a first target preset mapping structure cache entity or a second target preset mapping structure cache entity, the first target preset mapping structure cache entity is a preset mapping structure cache entity acquired from the preset thread sharing cache area through the thread sharing read mode, and the second target preset mapping structure cache entity is a preset mapping structure cache entity acquired from the preset thread isolation cache area through the thread isolation read mode, includes: if the cache request belongs to the read cache request, extracting a read handle based on the cache request; determining a read logic type based on the read handle, and judging whether the read mode of the read cache request is a thread sharing read mode or a thread isolation read mode based on the read logic type; if the read mode of the read cache request is the thread sharing read mode, adjusting the state of the preset thread sharing cache area to a temporary read-write state, acquiring a first target preset mapping structure cache entity from the preset thread sharing cache area, and reading the first target preset mapping structure cache entity; and if the read mode of the read cache request is the thread isolation read mode, acquiring a second target preset mapping structure cache entity from the preset thread isolation cache area, and reading the second target preset mapping structure cache entity.
Optionally, in a fifth implementation manner of the first aspect of the present invention, if the read mode of the read cache request is the thread isolation read mode, acquiring a second target preset mapping structure cache entity from the preset thread isolation cache region, and reading the second target preset mapping structure cache entity includes: if the reading mode of the reading cache request is the thread isolation reading mode, determining a target reading thread from a plurality of preset reading threads in the preset thread isolation cache region; extracting a target reading sub-thread according to the target reading thread, and reading a second target preset mapping structure cache entity from the target reading sub-thread; acquiring preset effective time length, preset updating time and current time of the second target preset mapping structure caching entity, summing the preset effective time length and the preset updating time to obtain failure time, and judging whether the current time is greater than the failure time; and if the current time is greater than the failure time, setting a second target preset mapping structure buffer entity as a null value.
Optionally, in a sixth implementation manner of the first aspect of the present invention, if the cache request of the target terminal belongs to the delete cache request, acquiring a plurality of preset cache entities to be deleted in a preset cache area by adopting a thread sharing deletion mode or a thread isolation deletion mode, and deleting the plurality of preset cache entities to be deleted, where the plurality of preset cache entities to be deleted are a plurality of first preset cache entities to be deleted or a plurality of second preset cache entities to be deleted, the plurality of first preset cache entities to be deleted are preset cache entities cached in the preset thread sharing cache area, and the plurality of second preset cache entities to be deleted are preset cache entities cached in the preset thread isolation cache area, includes: if the cache request belongs to the delete cache request and the preset cache entities to be deleted are deleted in the thread sharing deletion mode, adjusting the state of the preset thread sharing cache area to a temporary read-write state, traversing a plurality of preset cache entities in the preset thread sharing cache area based on the update time, the effective duration and the current time, determining a plurality of first preset cache entities to be deleted, and deleting the plurality of first preset cache entities to be deleted; and if the cache request belongs to the delete cache request and the preset cache entities to be deleted are deleted in the thread isolation deletion mode, traversing a plurality of sub-cache areas in the preset thread isolation cache area, determining a plurality of second preset cache entities to be deleted in the plurality of sub-cache areas based on the update time, the effective duration and the current time, and deleting the plurality of second preset cache entities to be deleted.
The second aspect of the present invention provides a buffer device based on dual buffer areas, including: the acquisition module is used for acquiring the cache request of the target terminal; the judging module is used for judging whether the cache request belongs to a write cache request, a read cache request or a delete cache request; the writing module is used for acquiring a writing mapping structure buffer entity if the buffer request belongs to the writing buffer request, and writing the writing mapping structure buffer entity into a preset buffer zone by adopting a thread sharing writing mode or a thread isolation writing mode, wherein the preset buffer zone comprises a preset thread sharing buffer zone and a preset thread isolation buffer zone; the reading module is used for acquiring a target preset mapping structure cache entity from a preset cache region by adopting a thread sharing reading mode or a thread isolation reading mode and reading the target preset mapping structure cache entity if the cache request belongs to the reading cache request, wherein the target preset mapping structure cache entity is a first target preset mapping structure cache entity or a second target preset mapping structure cache entity, the first target preset mapping structure cache entity is a preset mapping structure cache entity acquired from the preset thread sharing cache region by adopting the thread sharing reading mode, and the second target preset mapping structure cache entity is a preset mapping structure cache entity acquired from the preset thread isolation cache region by adopting the thread isolation reading mode; the deleting module is used for acquiring a plurality of preset to-be-deleted cache entities in a preset cache region by adopting a thread sharing deleting mode or a thread isolating deleting mode, and deleting the plurality of preset to-be-deleted cache entities, wherein the plurality of preset to-be-deleted cache entities are a plurality of first preset to-be-deleted cache entities or a plurality of second preset to-be-deleted cache entities, the plurality of first preset to-be-deleted cache entities are preset cache entities cached in the preset thread sharing cache region, and the plurality of second preset to-be-deleted cache entities are preset cache entities cached in the preset thread isolating cache region.
Optionally, in a first implementation manner of the second aspect of the present invention, the writing module specifically includes: the first judging unit is used for judging whether a preset cache area exists or not if the cache request belongs to the write-in cache request, and if the preset cache area does not exist, a preset thread isolation cache area is built firstly, and then a preset thread sharing cache area is built, so that the preset cache area is obtained; the acquisition unit is used for acquiring an initial write mapping structure caching entity according to the write caching request and determining a write logic type according to the initial write mapping structure caching entity; the second judging unit is used for judging whether the writing mode of the writing cache request is a thread sharing writing mode or a thread isolation writing mode according to the writing logic type; the shared writing unit is used for adjusting the state of the preset thread shared cache area to be a temporary read-write state if the writing mode of the writing cache request is judged to be a thread shared writing mode, adjusting the initial writing mapping structure cache entity to obtain a writing mapping structure cache entity, and writing the writing mapping structure cache entity into the preset thread shared cache area; and the isolated writing unit is used for adjusting the initial writing mapping structure buffer entity to obtain the writing mapping structure buffer entity if the writing mode of the writing buffer request is judged to be a thread isolated writing mode, and writing the writing mapping structure buffer entity into a preset thread isolated buffer area.
Optionally, in a second implementation manner of the second aspect of the present invention, the acquiring unit is specifically configured to: determining a cache handle according to the write cache request, and matching a corresponding first mapping structure cache entity according to the cache handle; adding a key value in the first mapping structure buffer entity to generate a second mapping structure buffer entity, wherein the key value is used for storing the content value of the first mapping structure buffer entity; adding effective duration in the second mapping structure buffer entity to generate a third mapping structure buffer entity; and adding an update time in the third mapping structure buffer entity, generating an initial writing mapping structure buffer entity, determining a writing logic type according to the initial writing mapping structure buffer entity, and judging whether the initial writing mapping structure is overtime invalid or not by the effective time and the update time.
Optionally, in a third implementation manner of the second aspect of the present invention, the isolated writing unit is specifically configured to: if the writing mode of the writing cache request is judged to be a thread isolation writing mode, matching the initial writing mapping structure cache entity with a plurality of preset writing threads in the preset thread isolation cache area to obtain a target writing thread; judging whether the target writing thread has a corresponding writing sub-buffer area in the preset thread isolation buffer area; if the write sub-buffer corresponding to the target write thread does not exist in the preset thread isolation buffer, establishing the write sub-buffer corresponding to the target write thread to obtain a target write sub-buffer; and adjusting the updating time of the initial writing mapping structure buffer entity to be the current time to obtain the writing mapping structure buffer entity, and storing the writing mapping structure buffer entity into the target writing sub-buffer area which is arranged in the preset thread isolation buffer area.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the reading module specifically includes: a reading unit, configured to extract a read handle based on the cache request if the cache request belongs to the read cache request; the third judging unit is used for determining a reading logic type based on the reading handle and judging whether the reading mode of the reading cache request is a thread sharing reading mode or a thread isolation reading mode based on the reading logic type; the shared reading unit is used for adjusting the state of the preset thread shared cache area to a temporary read-write state if the reading mode of the read cache request is the thread shared reading mode, and acquiring a first target preset mapping structure cache entity from the preset thread shared cache area and reading the first target preset mapping structure cache entity; and the isolation reading unit is used for acquiring a second target preset mapping structure cache entity from the preset thread isolation cache region and reading the second target preset mapping structure cache entity if the reading mode of the reading cache request is the thread isolation reading mode.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the isolated reading unit is specifically configured to: if the reading mode of the reading cache request is the thread isolation reading mode, determining a target reading thread from a plurality of preset reading threads in the preset thread isolation cache region; extracting a target reading sub-thread according to the target reading thread, and reading a second target preset mapping structure cache entity from the target reading sub-thread; acquiring preset effective time length, preset updating time and current time of a second target preset mapping structure caching entity, summing the preset effective time length and the preset updating time to obtain failure time, and judging whether the current time is larger than the failure time; and if the current time is greater than the failure time, setting a second target preset mapping structure buffer entity as a null value.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the deletion module is specifically configured to: if the cache request belongs to the cache deletion request and a preset cache request entity to be deleted is deleted in a thread sharing deletion mode, the state of the preset thread sharing cache area is adjusted to be a temporary read-write state, a plurality of preset cache entities in the preset thread sharing cache area are traversed based on the update time, the effective time and the current time, a plurality of first preset cache entities to be deleted are determined, and the plurality of first preset cache entities to be deleted are deleted; if the cache request belongs to the cache deletion request and a preset cache request entity to be deleted is deleted in a thread isolation deletion mode, traversing a plurality of sub-cache areas in the preset thread isolation cache area, determining a second preset cache entity to be deleted in the plurality of sub-cache areas based on the update time, the effective time and the current time, and deleting the plurality of second preset cache entities to be deleted.
A third aspect of the present invention provides a dual-buffer-zone based buffer apparatus, including: a memory and at least one processor, the memory having instructions stored therein, the memory and the at least one processor being interconnected by a line; the at least one processor invokes the instructions in the memory to cause the dual-cache-based caching device to perform the dual-cache-based caching method described above.
A fourth aspect of the present invention provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the above-described dual-cache-region based caching method.
In the technical scheme provided by the invention, a cache request of a target terminal is acquired; judging whether the cache request belongs to a write cache request, a read cache request or a delete cache request; if the cache request belongs to the write-in cache request, a write-in mapping structure cache entity is obtained, and the write-in mapping structure cache entity is written into a preset cache region by adopting a thread sharing write-in mode or a thread isolation write-in mode, wherein the preset cache region comprises a preset thread sharing cache region and a preset thread isolation cache region; if the cache request belongs to the read cache request, acquiring a target preset mapping structure cache entity from a preset cache region by adopting a thread sharing read mode or a thread isolation read mode, and reading the target preset mapping structure cache entity, wherein the target preset mapping structure cache entity is a first target preset mapping structure cache entity or a second target preset mapping structure cache entity; if the cache request belongs to the cache deletion request, a thread sharing deletion mode or a thread isolation deletion mode is adopted to acquire a plurality of preset cache entities to be deleted in a preset cache region, and delete the plurality of preset cache entities to be deleted, wherein the plurality of preset cache entities to be deleted are a plurality of first preset cache entities to be deleted or a plurality of second preset cache entities to be deleted. In the embodiment of the invention, the cache entity is cached by using the cache mode shared by the threads and the cache mode isolated by the threads, and the overtime mechanism and the cache mechanism are arranged, so that the data can be stored in different cache areas according to different cache requests and different cache logic types, the flexibility of the cache is improved, the limitation of the data cache is reduced, and the overtime cache entity can be automatically cleared.
Drawings
FIG. 1 is a diagram illustrating an embodiment of a dual buffer based caching method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another embodiment of a dual-buffer based caching method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of a dual-buffer based cache apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another embodiment of a dual-buffer based cache apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of a dual-buffer based cache apparatus according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a caching method, a device, equipment and a storage medium based on double cache areas, which are used for caching a caching entity by utilizing a thread sharing caching mode and a thread isolation caching mode, so that the limitation of data caching is reduced, and the flexibility of data caching is improved.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, the following describes a specific flow of an embodiment of the present invention, referring to fig. 1, and an embodiment of a dual-buffer-based caching method in an embodiment of the present invention includes:
101. Obtaining a cache request of a target terminal;
the server obtains the cache request of the target terminal, and the cache request of the target terminal can be a write cache request, a read cache request or a delete cache request.
For example, the write cache request may be understood as that when the target terminal writes a word document, the server stores the text; the cache reading request can be understood as that when the target terminal browses the webpage, the server reads the text or the picture; the deletion of the cache request can be understood as that when the target terminal clears the memory of the mobile phone, the server clears the data generated by browsing the web page.
It can be understood that the execution body of the present invention may be a buffer device based on dual buffers, and may also be a terminal or a server, which is not limited herein. The embodiment of the invention is described by taking a server as an execution main body as an example.
102. Judging whether the cache request belongs to a write cache request, a read cache request or a delete cache request;
The server judges whether the cache request from the target terminal is specifically a write cache request, a read cache request or a delete cache request.
Caching is a technique for storing data that is frequently needed: commonly used data is temporarily stored in a local hard disk or in memory so that it can be retrieved later. This technique effectively improves access speed when multiple users access one site at the same time, or when one site is accessed by one user many times. Cache requests may be divided into write cache requests, read cache requests and delete cache requests, where a write cache request may be understood as temporarily writing data that is not yet stored locally into a local hard disk or memory; a read cache request may be understood as reading out data that has already been stored locally; and a delete cache request may be understood as deleting data that has been stored locally. In this embodiment, the server determines whether the cache request belongs to a write cache request, a read cache request, or a delete cache request, and performs different cache processing for different cache requests.
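As a rough illustration of this dispatching step, the following Java sketch routes a cache request to write, read, or delete handling according to its type. The enum, method names and parameters are assumptions made for this example and do not come from the patent text.

```java
// Illustrative dispatch of a cache request to write, read, or delete handling.
// RequestType and the handler methods are hypothetical names.
public class CacheRequestDispatcher {
    public enum RequestType { WRITE, READ, DELETE }

    public Object dispatch(RequestType type, String key, Object value) {
        switch (type) {
            case WRITE:
                handleWrite(key, value);   // corresponds to step 103 / 203
                return null;
            case READ:
                return handleRead(key);    // corresponds to step 104 / 204-207
            case DELETE:
                handleDelete();            // corresponds to step 105 / 208
                return null;
            default:
                throw new IllegalArgumentException("Unknown cache request type");
        }
    }

    private void handleWrite(String key, Object value) { /* see write sketch below */ }
    private Object handleRead(String key) { return null;  /* see read sketch below */ }
    private void handleDelete() { /* see delete sketch below */ }
}
```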
103. If the cache request belongs to the write-in cache request, acquiring a write-in mapping structure cache entity, and writing the write-in mapping structure cache entity into a preset cache region by adopting a thread sharing write-in mode or a thread isolation write-in mode, wherein the preset cache region comprises a preset thread sharing cache region and a preset thread isolation cache region;
If the buffer request belongs to the write-in buffer request, the server acquires a write-in mapping structure buffer entity, and writes the write-in mapping structure buffer entity into a preset thread sharing buffer area by adopting a thread sharing writing mode, or writes the write-in mapping structure buffer entity into a preset thread isolation buffer area by adopting a thread isolation writing mode.
For example, assume that multiple persons use online banking to withdraw money; in this case, a thread sharing write mode is adopted to write the write mapping structure cache entities generated by these withdrawals into the preset thread sharing cache area. Assume instead that a person gains experience in a game in which the player progresses independently; then a thread isolation write mode is adopted to write the write mapping structure cache entity generated by the personal game experience into the preset thread isolation cache area.
104. If the cache request belongs to a read cache request, acquiring a target preset mapping structure cache entity from a preset cache region by adopting a thread sharing read mode or a thread isolation read mode, and reading the target preset mapping structure cache entity, wherein the target preset mapping structure cache entity is a first target preset mapping structure cache entity or a second target preset mapping structure cache entity, the first target preset mapping structure cache entity is a preset mapping structure cache entity acquired from the preset thread sharing cache region by adopting the thread sharing read mode, and the second target preset mapping structure cache entity is a preset mapping structure cache entity acquired from the preset thread isolation cache region by adopting the thread isolation read mode;
If the cache request from the target terminal belongs to the read cache request, the server acquires and reads the first target preset mapping structure cache entity from the preset thread shared cache area in a thread sharing and reading mode, or acquires and reads the second target preset mapping structure cache entity from the preset thread isolation cache area in a thread isolation and reading mode.
For example, assuming that an individual's game experience is read from a team game, a thread sharing read mode is adopted to read the first target preset mapping structure cache entity of the game experience from the preset thread sharing cache area; and if a withdrawal record is obtained from the individual's online withdrawal records, the second target preset mapping structure cache entity of the withdrawal record is read from the preset thread isolation cache area in a thread isolation read mode.
105. If the cache request belongs to the cache deletion request, a thread sharing deletion mode or a thread isolation deletion mode is adopted to acquire a plurality of preset cache entities to be deleted in a preset cache region, and delete the plurality of preset cache entities to be deleted, wherein the plurality of preset cache entities to be deleted are a plurality of first preset cache entities to be deleted or a plurality of second preset cache entities to be deleted, the plurality of first preset cache entities to be deleted are preset cache entities cached in the preset thread sharing cache region, and the plurality of second preset cache entities to be deleted are preset cache entities cached in the preset thread isolation cache region.
If the cache request from the target terminal belongs to the cache deletion request, the server adopts a thread sharing deletion mode to delete a plurality of first preset cache entities to be deleted from the preset thread sharing cache area, or adopts a thread isolation deletion mode to delete a plurality of second preset cache entities to be deleted from the preset thread isolation cache area.
For example, assuming that some comment information is deleted from a plurality of news comments, the first preset to-be-deleted cache entities of that comment information are deleted from the preset thread sharing cache area in a thread sharing deletion mode; and if several withdrawal records are deleted from an individual's online withdrawal records, the second preset to-be-deleted cache entities of those withdrawal records are deleted from the preset thread isolation cache area in a thread isolation deletion mode.
In the embodiment of the invention, cache entities are cached by using both a thread sharing cache mode and a thread isolation cache mode, and a timeout-based clearing mechanism is provided, so that data can be stored in different cache areas according to different cache requests and different cache logic types, the flexibility of caching is improved, the limitations of data caching are reduced, and timed-out cache entities can be automatically cleared.
Referring to fig. 2, another embodiment of a dual-buffer based caching method according to an embodiment of the present invention includes:
201. Obtaining a cache request of a target terminal;
the server obtains the cache request of the target terminal, and the cache request of the target terminal can be a write cache request, a read cache request or a delete cache request.
For example, the write cache request may be understood as that when the target terminal writes a word document, the server stores the text; the cache reading request can be understood as that when the target terminal browses the webpage, the server reads the text or the picture; the deletion of the cache request can be understood as that when the target terminal clears the memory of the mobile phone, the server clears the data generated by browsing the web page.
202. Judging whether the cache request belongs to a write cache request, a read cache request or a delete cache request;
The server judges whether the cache request from the target terminal is specifically a write cache request, a read cache request or a delete cache request.
Caching is a technique for storing data that is frequently needed: commonly used data is temporarily stored in a local hard disk or in memory so that it can be retrieved later. This technique effectively improves access speed when multiple users access one site at the same time, or when one site is accessed by one user many times. Cache requests may be divided into write cache requests, read cache requests and delete cache requests, where a write cache request may be understood as temporarily writing data that is not yet stored locally into a local hard disk or memory; a read cache request may be understood as reading out data that has already been stored locally; and a delete cache request may be understood as deleting data that has been stored locally. In this embodiment, the server determines whether the cache request belongs to a write cache request, a read cache request, or a delete cache request, and performs different cache processing for different cache requests.
203. If the cache request belongs to the write-in cache request, acquiring a write-in mapping structure cache entity, and writing the write-in mapping structure cache entity into a preset cache region by adopting a thread sharing write-in mode or a thread isolation write-in mode, wherein the preset cache region comprises a preset thread sharing cache region and a preset thread isolation cache region;
For example, assume that multiple persons use online banking to withdraw money; in this case, a thread sharing write mode is adopted to write the write mapping structure cache entities generated by these withdrawals into the preset thread sharing cache area. Assume instead that a person gains experience in a game in which the player progresses independently; then a thread isolation write mode is adopted to write the write mapping structure cache entity generated by the personal game experience into the preset thread isolation cache area.
Specifically, if the cache request belongs to the write cache request, the server first judges whether a preset cache area exists; if the preset cache area does not exist, the server first establishes a preset thread isolation cache area and then establishes a preset thread sharing cache area, thereby obtaining the preset cache area so that data can be written into it. Next, the server acquires an initial write mapping structure cache entity according to the write cache request, where the initial write mapping structure cache entity can be understood as the initial cache data to be written. The server then determines a write logic type according to the initial write mapping structure cache entity, where the write logic type may be a thread isolation write type or a thread sharing write type, and adjusts the initial write mapping structure cache entity to obtain the write mapping structure cache entity. If the write logic type is the thread isolation write type, the server writes the write mapping structure cache entity into the preset thread isolation cache area in a thread isolation write mode; if the write logic type is the thread sharing write type, the server writes the write mapping structure cache entity into the preset thread sharing cache area in a thread sharing write mode.
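A minimal Java sketch of the two preset cache regions follows, assuming the thread-shared region is a concurrent map visible to all threads and the thread-isolated region is backed by per-thread sub-caches realized with ThreadLocal. The class and field names are assumptions; the CacheEntity entry type is the one sketched a few paragraphs below.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the dual cache regions: one thread-shared map and one
// thread-isolated region with a sub-cache per thread. Regions are created
// lazily, isolated region first and shared region second, mirroring the text.
public class DualCacheRegions {
    private static ThreadLocal<Map<String, CacheEntity>> isolatedRegion;
    private static Map<String, CacheEntity> sharedRegion;

    public static synchronized void ensureRegions() {
        if (isolatedRegion == null) {
            // Each thread gets its own sub-cache on first use.
            isolatedRegion = ThreadLocal.withInitial(HashMap::new);
        }
        if (sharedRegion == null) {
            // One concurrent map shared by all threads.
            sharedRegion = new ConcurrentHashMap<>();
        }
    }

    public static Map<String, CacheEntity> shared() {
        ensureRegions();
        return sharedRegion;
    }

    public static Map<String, CacheEntity> isolatedForCurrentThread() {
        ensureRegions();
        return isolatedRegion.get();
    }
}
```

ThreadLocal is only one way to realize per-thread sub-caches; the patent itself speaks more generally of write sub-buffers matched to preset write threads.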
The specific steps for acquiring the caching entity according to the caching request are as follows:
the server extracts a cache handle from the cache request, acquires the corresponding first mapping structure cache entity according to the cache handle, and sequentially adds a key value, an effective duration and an update time to the first mapping structure cache entity, thereby obtaining the initial write mapping structure cache entity.
It should be noted that the key value is used for storing the content value of the first mapping structure cache entity, and the specific data of the first mapping structure cache entity can be obtained from this content value. The effective duration and the update time are used for judging whether the current initial write mapping structure cache entity has become invalid due to timeout: if the current time is greater than the sum of the update time and the effective duration, the initial write mapping structure cache entity has timed out and is invalid.
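Putting these fields together, a minimal Java sketch of such a mapping structure cache entity might look as follows. The class and field names are assumptions, and times are represented in epoch milliseconds purely for illustration.

```java
// Sketch of a mapping structure cache entity: a key, a content value, an
// effective duration and an update time. It is considered expired once
// updateTime + validDuration is earlier than the current time.
public class CacheEntity {
    private final String key;           // key value identifying the content
    private Object value;               // content value of the cache entity
    private final long validDurationMs; // effective duration in milliseconds
    private long updateTimeMs;          // last update time (epoch milliseconds)

    public CacheEntity(String key, Object value, long validDurationMs) {
        this.key = key;
        this.value = value;
        this.validDurationMs = validDurationMs;
        this.updateTimeMs = System.currentTimeMillis();
    }

    // Timeout rule from the text: expired when the current time exceeds
    // the update time plus the effective duration.
    public boolean isExpired(long nowMs) {
        return nowMs > updateTimeMs + validDurationMs;
    }

    public void touch(long nowMs) { this.updateTimeMs = nowMs; }

    public String getKey() { return key; }
    public Object getValue() { return value; }
}
```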
The specific steps of writing the writing mapping structure buffer entity into the preset thread isolation buffer area are as follows:
If the writing mode of the writing cache request is judged to be a thread isolation writing mode, comparing the initial writing mapping structure cache entity with a plurality of preset writing threads in a preset thread isolation cache area to obtain a target writing thread; judging whether a corresponding writing sub-buffer area exists in the target writing thread or not; if the write sub-buffer corresponding to the target write thread does not exist in the preset thread isolation buffer, establishing a target write sub-buffer; and adjusting the updating time of the initial writing mapping structure buffer entity to be the current time to obtain the writing mapping structure buffer entity, and storing the writing mapping structure buffer entity into a target writing sub-buffer area arranged in the preset thread isolation buffer area.
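A minimal sketch of this thread isolation write path follows, assuming the per-thread write sub-buffer is realized with a Java ThreadLocal map and reusing the CacheEntity class sketched above; all names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the thread isolation write path: each writing thread owns a
// sub-cache, created on first use, and the entity's update time is set to
// the current time before it is stored.
public class IsolatedWriter {
    // withInitial() plays the role of "establish the write sub-buffer
    // if it does not yet exist for the target write thread".
    private static final ThreadLocal<Map<String, CacheEntity>> SUB_CACHE =
            ThreadLocal.withInitial(HashMap::new);

    public static void write(CacheEntity entity) {
        // Adjust the update time of the entity to the current time.
        entity.touch(System.currentTimeMillis());
        SUB_CACHE.get().put(entity.getKey(), entity);
    }
}
```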
204. If the cache request belongs to the read cache request, extracting a read handle based on the cache request;
if the cache request belongs to a read cache request, the server extracts a corresponding read handle from the cache request, where the read handle is used to identify different objects in the server and different instances in the same class, e.g., a window, button, icon, scroll bar, output device, control, or file, etc. The server can access information of the corresponding object through the handle.
205. Based on the read handle, determining a read logic type, and based on the read logic type, judging whether the read mode of the read cache request is a thread sharing read mode or a thread isolation read mode;
The server obtains the read logic type corresponding to the cache request from the read handle and, according to this logic type, either reads the first target preset mapping structure cache entity from the preset thread sharing cache area in a thread sharing read mode, or reads the second target preset mapping structure cache entity from the preset thread isolation cache area in a thread isolation read mode.
206. If the reading mode of the reading cache request is a thread sharing reading mode, the state of the preset thread sharing cache region is adjusted to be a temporary reading and writing state, and a first target preset mapping structure cache entity is obtained from the preset thread sharing cache region and is read;
If the preset mapping structure caching entity is read in a thread sharing reading mode according to the reading logic type, firstly, the server adjusts the state of the preset thread sharing caching area to be a temporary reading and writing state, the first target preset mapping structure caching entity is obtained by matching a caching request in the preset thread sharing caching area, and the first target preset mapping structure caching entity is read.
207. If the reading mode of the reading cache request is a thread isolation reading mode, acquiring a second target preset mapping structure cache entity from the preset thread isolation cache region, and reading the second target preset mapping structure cache entity;
specifically, if the read mode of the read cache request is a thread isolation read mode, the server matches the read mode in the preset thread isolation cache region to obtain a target read thread; the server extracts a target reading sub-thread according to the target reading thread, and reads a second target preset mapping structure buffer entity from the target reading sub-thread; the server obtains the preset effective time length, the preset updating time and the current time of the second target preset mapping structure caching entity, adds the preset effective time length and the preset updating time to obtain the failure time, and judges whether the current time is larger than the failure time; and if the current time is greater than the failure time, indicating that the second target preset mapping structure buffer entity has expired and failed, and setting the second target preset mapping structure buffer entity as a null value.
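The expiry check described here (expiration time equals update time plus effective duration; a stale entity is treated as null) could be sketched as follows, again reusing the CacheEntity class from above. The class name and the choice to also remove the stale entry from the sub-cache are assumptions for illustration.

```java
import java.util.Map;

// Sketch of the thread-isolated read path with the timeout check:
// if the current time is past updateTime + validDuration, the entity is
// treated as invalid and a null value is returned.
public class IsolatedReader {
    public static Object read(Map<String, CacheEntity> threadSubCache, String key) {
        CacheEntity entity = threadSubCache.get(key);
        if (entity == null) {
            return null;
        }
        long now = System.currentTimeMillis();
        if (entity.isExpired(now)) {
            // Timed out: drop the stale entry and report a miss.
            threadSubCache.remove(key);
            return null;
        }
        return entity.getValue();
    }
}
```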
208. If the cache request belongs to the cache deletion request, a thread sharing deletion mode or a thread isolation deletion mode is adopted to acquire a plurality of preset cache entities to be deleted in a preset cache region, and delete the plurality of preset cache entities to be deleted, wherein the plurality of preset cache entities to be deleted are a plurality of first preset cache entities to be deleted or a plurality of second preset cache entities to be deleted.
If the cache request from the target terminal belongs to the cache deletion request, the server adopts a thread sharing deletion mode to delete a plurality of first preset cache entities to be deleted from the preset thread sharing cache area, or adopts a thread isolation deletion mode to delete a plurality of second preset cache entities to be deleted from the preset thread isolation cache area.
For example, assuming that some comment information is deleted from a plurality of news comments, the first preset to-be-deleted cache entities of that comment information are deleted from the preset thread sharing cache area in a thread sharing deletion mode; and if several withdrawal records are deleted from an individual's online withdrawal records, the second preset to-be-deleted cache entities of those withdrawal records are deleted from the preset thread isolation cache area in a thread isolation deletion mode.
Specifically, when deleting a plurality of preset to-be-deleted cache entities in the preset thread sharing cache area, the server first adjusts the state of the preset thread sharing cache area to a temporary read-write state, traverses the plurality of preset cache entities in the preset thread sharing cache area, judges whether the current time is greater than the expiration times corresponding to those preset cache entities, takes the preset cache entities whose expiration times have passed as the plurality of first preset to-be-deleted cache entities, and deletes them. When deleting preset to-be-deleted cache entities in the preset thread isolation cache area, the server traverses the plurality of sub-cache areas in the preset thread isolation cache area, takes the preset cache entities in each sub-cache area whose expiration time is earlier than the current time as the plurality of second preset to-be-deleted cache entities, and finally deletes the plurality of second preset to-be-deleted cache entities.
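A minimal eviction sketch under the same assumptions: the routine traverses one cache region (either the shared map or a single per-thread sub-cache) and removes every entity whose expiration time has passed. It reuses the CacheEntity class sketched earlier; names are assumptions.

```java
import java.util.Iterator;
import java.util.Map;

// Sketch of the deletion step: remove every cache entity whose expiry time
// (update time + effective duration) is before the current time. The same
// routine can be run over the shared region or over each thread's sub-cache.
public class ExpiredEntryCleaner {
    public static int evictExpired(Map<String, CacheEntity> region) {
        long now = System.currentTimeMillis();
        int removed = 0;
        Iterator<Map.Entry<String, CacheEntity>> it = region.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue().isExpired(now)) {
                it.remove();
                removed++;
            }
        }
        return removed;
    }
}
```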
It should be noted that the update time of a preset cache entity changes whenever the entity is updated, so when the server determines whether a preset cache entity should be cleared, it judges whether the entity has expired based on the most recent update time.
In the embodiment of the invention, cache entities are cached by using both a thread sharing cache mode and a thread isolation cache mode, and a timeout-based clearing mechanism is provided, so that data can be stored in different cache areas according to different cache requests and different cache logic types, the flexibility of caching is improved, the limitations of data caching are reduced, and timed-out cache entities can be automatically cleared.
The above description is made on the method for caching based on the dual cache area in the embodiment of the present invention, and the following description is made on the device for caching based on the dual cache area in the embodiment of the present invention, referring to fig. 3, one embodiment of the device for caching based on the dual cache area in the embodiment of the present invention includes:
an obtaining module 301, configured to obtain a cache request of a target terminal;
a judging module 302, configured to judge whether the cache request belongs to a write cache request, a read cache request, or a delete cache request;
The writing module 303 is configured to obtain a writing mapping structure buffer entity if the buffer request belongs to a writing buffer request, and write the writing mapping structure buffer entity into a preset buffer area by adopting a thread sharing writing mode or a thread isolation writing mode, where the preset buffer area includes a preset thread sharing buffer area and a preset thread isolation buffer area;
The reading module 304 is configured to obtain, if the cache request belongs to a read cache request, a target preset mapping structure cache entity from a preset cache area in a thread sharing read mode or a thread isolation read mode, and read the target preset mapping structure cache entity, where the target preset mapping structure cache entity is a first target preset mapping structure cache entity or a second target preset mapping structure cache entity, the first target preset mapping structure cache entity is a preset mapping structure cache entity obtained from the preset thread sharing cache area in the thread sharing read mode, and the second target preset mapping structure cache entity is a preset mapping structure cache entity obtained from the preset thread isolation cache area in the thread isolation read mode;
The deletion module 305 is configured to obtain a plurality of preset to-be-deleted cached entities in a preset cache region by adopting a thread sharing deletion manner or a thread isolation deletion manner if the cache request belongs to the deletion cache request, and delete the plurality of preset to-be-deleted cached entities, where the plurality of preset to-be-deleted cached entities are a plurality of first preset to-be-deleted cached entities or a plurality of second preset to-be-deleted cached entities, the plurality of first preset to-be-deleted cached entities are preset cached entities cached in the preset thread sharing cache region, and the plurality of second preset to-be-deleted cached entities are preset cached entities cached in the preset thread isolation cache region.
In the embodiment of the invention, cache entities are cached by using both a thread sharing cache mode and a thread isolation cache mode, and a timeout-based clearing mechanism is provided, so that data can be stored in different cache areas according to different cache requests and different cache logic types, the flexibility of caching is improved, the limitations of data caching are reduced, and timed-out cache entities can be automatically cleared.
Referring to fig. 4, another embodiment of a dual-buffer-based buffer device according to an embodiment of the present invention includes:
an obtaining module 301, configured to obtain a cache request of a target terminal;
a judging module 302, configured to judge whether the cache request belongs to a write cache request, a read cache request, or a delete cache request;
The writing module 303 is configured to obtain a writing mapping structure buffer entity if the buffer request belongs to a writing buffer request, and write the writing mapping structure buffer entity into a preset buffer area by adopting a thread sharing writing mode or a thread isolation writing mode, where the preset buffer area includes a preset thread sharing buffer area and a preset thread isolation buffer area;
the reading module 304 is configured to obtain a target preset mapping structure cache entity from the preset cache area by using a thread sharing reading mode or a thread isolating reading mode and read the target preset mapping structure cache entity if the cache request belongs to the read cache request, where the target preset mapping structure cache entity is a first target preset mapping structure cache entity or a second target preset mapping structure cache entity;
The deletion module 305 is configured to obtain a plurality of preset to-be-deleted cache entities in the preset cache region by adopting a thread sharing deletion mode or a thread isolation deletion mode if the cache request belongs to the deletion cache request, and delete the plurality of preset to-be-deleted cache entities, where the plurality of preset to-be-deleted cache entities are a plurality of first preset to-be-deleted cache entities or a plurality of second preset to-be-deleted cache entities.
Optionally, the writing module 303 includes:
A first judging unit 3031, configured to judge whether the preset cache area exists if the cache request belongs to the write cache request, and, if the preset cache area does not exist, first establish the preset thread isolation cache area and then establish the preset thread sharing cache area to obtain the preset cache area;
An obtaining unit 3032, configured to obtain an initial write mapping structure cache entity according to the write cache request, and determine a write logic type according to the initial write mapping structure cache entity;
A second judging unit 3033, configured to judge, according to the write logic type, whether the write mode of the write cache request is a thread sharing write mode or a thread isolation write mode;
A shared writing unit 3034, configured to adjust the state of the preset thread sharing cache area to a temporary read-write state if the write mode of the write cache request is judged to be the thread sharing write mode, adjust the initial write mapping structure cache entity to obtain the write mapping structure cache entity, and write the write mapping structure cache entity into the preset thread sharing cache area;
An isolation writing unit 3035, configured to adjust the initial write mapping structure cache entity to obtain the write mapping structure cache entity if the write mode of the write cache request is judged to be the thread isolation write mode, and write the write mapping structure cache entity into the preset thread isolation cache area.
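A minimal sketch of the write path handled by units 3033 to 3035, reusing the CacheEntry and DualCache classes from the earlier sketch; the WriteMode enum and the write method are illustrative assumptions rather than the patent's API.

enum WriteMode { SHARED, ISOLATED }

class CacheWriterSketch {
    void write(DualCache cache, String key, Object value, long validMillis, WriteMode mode) {
        // The adjusted write mapping structure cache entity carries the current time as its update time.
        CacheEntry entry = new CacheEntry(value, validMillis);
        if (mode == WriteMode.SHARED) {
            // Thread sharing write into the shared area (the temporary read-write state adjustment is omitted here).
            cache.shared.put(key, entry);
        } else {
            // Thread isolation write into the sub-cache owned by the calling thread.
            cache.isolated.get().put(key, entry);
        }
    }
}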
Optionally, the obtaining unit 3032 may be further specifically configured to:
Determining a cache handle according to the write cache request, and matching a corresponding first mapping structure cache entity according to the cache handle;
Adding a key value to the first mapping structure cache entity to generate a second mapping structure cache entity, where the key value is used to store the content value of the first mapping structure cache entity;
Adding an effective duration to the second mapping structure cache entity to generate a third mapping structure cache entity;
Adding an update time to the third mapping structure cache entity to generate the initial write mapping structure cache entity, and determining the write logic type according to the initial write mapping structure cache entity, where the effective duration and the update time are used to judge whether the initial write mapping structure cache entity has timed out and become invalid.
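The staged construction above can be pictured roughly as follows; the map-based representation and the field names are assumptions made for illustration only.

import java.util.HashMap;
import java.util.Map;

class EntityBuilderSketch {
    Map<String, Object> buildInitialWriteEntity(Object contentValue, long effectiveMillis) {
        Map<String, Object> entity = new HashMap<>();           // first mapping structure cache entity matched via the cache handle
        entity.put("value", contentValue);                       // add the key value -> second mapping structure cache entity
        entity.put("validMillis", effectiveMillis);              // add the effective duration -> third mapping structure cache entity
        entity.put("updateTime", System.currentTimeMillis());    // add the update time -> initial write mapping structure cache entity
        // validMillis and updateTime are later compared with the current time to decide whether the entity has timed out.
        return entity;
    }
}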
Optionally, the isolated write unit 3035 is specifically configured to:
If the write mode of the write cache request is judged to be the thread isolation write mode, matching the initial write mapping structure cache entity with a plurality of preset write threads in the preset thread isolation cache area to obtain a target write thread;
Judging whether the target write thread has a corresponding write sub-cache area in the preset thread isolation cache area;
If the write sub-cache area corresponding to the target write thread does not exist in the preset thread isolation cache area, establishing the write sub-cache area corresponding to the target write thread to obtain a target write sub-cache area;
Adjusting the update time of the initial write mapping structure cache entity to the current time to obtain the write mapping structure cache entity, and storing the write mapping structure cache entity into the target write sub-cache area in the preset thread isolation cache area.
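One way to realise the lazily created per-thread write sub-cache described above is sketched below; keying the sub-caches by thread id and the method name are assumptions, and CacheEntry is reused from the earlier sketch.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class IsolatedWriterSketch {
    // Preset thread isolation cache area, organised as one write sub-cache per write thread.
    final Map<Long, Map<String, CacheEntry>> subCaches = new ConcurrentHashMap<>();

    void isolatedWrite(String key, Object value, long validMillis) {
        long threadId = Thread.currentThread().getId();   // target write thread
        // Establish the write sub-cache for this thread if it does not exist yet.
        Map<String, CacheEntry> subCache =
                subCaches.computeIfAbsent(threadId, id -> new ConcurrentHashMap<>());
        // The CacheEntry constructor sets the update time of the entity to the current time.
        subCache.put(key, new CacheEntry(value, validMillis));
    }
}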
Optionally, the reading module 304 includes:
a reading unit 3041, configured to extract a read handle based on the cache request if the cache request belongs to the read cache request;
A third judging unit 3042, configured to determine a read logic type based on the read handle, and judge whether the read mode of the read cache request is a thread sharing read mode or a thread isolation read mode based on the read logic type;
The shared reading unit 3043 is configured to adjust the state of the preset thread sharing cache area to a temporary read-write state if the read mode of the read cache request is the thread sharing read mode, obtain the first target preset mapping structure cache entity from the preset thread sharing cache area, and read the first target preset mapping structure cache entity;
The isolation reading unit 3044 is configured to obtain the second target preset mapping structure cache entity from the preset thread isolation cache area and read the second target preset mapping structure cache entity if the read mode of the read cache request is the thread isolation read mode.
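The corresponding read dispatch can be sketched in the same spirit, again reusing the DualCache class from above; the boolean flag standing in for the read logic type is an assumption.

class CacheReaderSketch {
    CacheEntry read(DualCache cache, String key, boolean isolatedMode) {
        if (!isolatedMode) {
            // Thread sharing read from the shared area (the temporary read-write state adjustment is omitted here).
            return cache.shared.get(key);
        }
        // Thread isolation read from the current thread's sub-cache.
        return cache.isolated.get().get(key);
    }
}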
Optionally, the isolated reading unit 3044 is specifically further configured to:
If the read mode of the read cache request is the thread isolation read mode, determining a target read thread from a plurality of preset read threads in the preset thread isolation cache area;
Extracting a target read sub-thread according to the target read thread, and reading the second target preset mapping structure cache entity from the target read sub-thread;
Acquiring the preset effective duration, the preset update time and the current time of the second target preset mapping structure cache entity, summing the preset effective duration and the preset update time to obtain an expiration time, and judging whether the current time is greater than the expiration time;
If the current time is greater than the expiration time, setting the second target preset mapping structure cache entity to a null value.
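The timeout check above amounts to comparing the current time with the sum of the update time and the effective duration; the following hedged sketch (method and variable names are illustrative) reuses the CacheEntry class from the earlier sketch.

import java.util.Map;

class TimeoutReadSketch {
    Object readWithTimeout(Map<String, CacheEntry> subCache, String key) {
        CacheEntry entry = subCache.get(key);
        if (entry == null) {
            return null;
        }
        long now = System.currentTimeMillis();
        long expirationTime = entry.updateTime + entry.validMillis;   // update time + effective duration
        if (now > expirationTime) {
            // The patent sets the timed-out entity to a null value; removing the entry has the same observable effect here.
            subCache.remove(key);
            return null;
        }
        return entry.value;
    }
}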
Optionally, the deletion module 305 is specifically further configured to:
If the cache request belongs to the delete cache request and the thread sharing deletion mode is adopted to delete the preset cache entities to be deleted, adjusting the state of the preset thread sharing cache area to a temporary read-write state, traversing a plurality of preset cache entities in the preset thread sharing cache area based on the update time, the effective duration and the current time, determining the plurality of first preset cache entities to be deleted, and deleting the plurality of first preset cache entities to be deleted;
If the cache request belongs to the delete cache request and the thread isolation deletion mode is adopted to delete the preset cache entities to be deleted, traversing a plurality of sub-cache areas in the preset thread isolation cache area, determining the second preset cache entities to be deleted in the plurality of sub-cache areas based on the update time, the effective duration and the current time, and deleting the plurality of second preset cache entities to be deleted.
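Both deletion branches reduce to the same sweep over a cache area; the removeIf-based helper below is an assumption about how such a sweep could look, not the patent's implementation.

import java.util.Map;

class SweepSketch {
    void sweepExpired(Map<String, CacheEntry> area) {
        long now = System.currentTimeMillis();
        // Delete every preset cache entity whose update time + effective duration lies in the past.
        area.entrySet().removeIf(e -> now > e.getValue().updateTime + e.getValue().validMillis);
    }
}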
In the embodiment of the invention, cache entities are cached in both a thread sharing mode and a thread isolation mode, and a timeout mechanism and a caching mechanism are provided, so that data can be stored in different cache areas according to different cache requests and cache logic types. This improves the flexibility of caching, reduces the limitations of data caching, and allows timed-out cache entities to be cleared automatically.
Figs. 3 and 4 above describe the dual-buffer-based cache device in the embodiment of the present invention in detail from the perspective of modular functional entities; the dual-buffer-based cache device in the embodiment of the present invention is described in detail below from the perspective of hardware processing.
Fig. 5 is a schematic structural diagram of a dual-buffer-based cache apparatus 500 according to an embodiment of the present invention. The dual-buffer-based cache apparatus 500 may vary considerably in configuration and performance, and may include one or more processors (central processing units, CPU) 510 and a memory 520, as well as one or more storage media 530 (e.g., one or more mass storage devices) storing application programs 533 or data 532. The memory 520 and the storage medium 530 may be transitory or persistent storage. The program stored in the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations on the dual-buffer-based cache apparatus 500. Still further, the processor 510 may be configured to communicate with the storage medium 530 to execute the series of instruction operations in the storage medium 530 on the dual-buffer-based cache apparatus 500.
The dual-buffer-based cache apparatus 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input/output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the structure illustrated in fig. 5 does not limit the dual-buffer-based cache apparatus, which may include more or fewer components than illustrated, combine certain components, or have a different arrangement of components.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium or a volatile computer readable storage medium, having instructions stored therein which, when run on a computer, cause the computer to perform the steps of the caching method based on dual cache areas.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A caching method based on double cache areas, characterized by comprising the following steps:
Obtaining a cache request of a target terminal;
judging whether the cache request belongs to a write cache request, a read cache request or a delete cache request;
If the cache request belongs to the write-in cache request, a write-in mapping structure cache entity is obtained, and the write-in mapping structure cache entity is written into a preset cache region by adopting a thread sharing write-in mode or a thread isolation write-in mode, wherein the preset cache region comprises a preset thread sharing cache region and a preset thread isolation cache region;
if the cache request belongs to the write cache request, acquiring a write mapping structure cache entity, and writing the write mapping structure cache entity into a preset cache region by adopting a thread sharing write mode or a thread isolation write mode, wherein the preset cache region comprises a preset thread sharing cache region and a preset thread isolation cache region and comprises:
If the cache request belongs to the write-in cache request, judging whether a preset cache region exists, if the preset cache region does not exist, firstly establishing a preset thread isolation cache region, and then establishing a preset thread sharing cache region to obtain the preset cache region;
Acquiring an initial write mapping structure caching entity according to the write caching request, and determining a write logic type according to the initial write mapping structure caching entity;
judging whether the writing mode of the writing cache request is a thread sharing writing mode or a thread isolation writing mode according to the writing logic type;
If the writing mode of the writing cache request is judged to be a thread sharing writing mode, the state of the preset thread sharing cache area is adjusted to be a temporary reading and writing state, the initial writing mapping structure cache entity is adjusted to obtain the writing mapping structure cache entity, and the writing mapping structure cache entity is written into the preset thread sharing cache area;
If the writing mode of the writing cache request is judged to be a thread isolation writing mode, the initial writing mapping structure cache entity is adjusted to obtain a writing mapping structure cache entity, and the writing mapping structure cache entity is written into a preset thread isolation cache area;
If the cache request belongs to the read cache request, a thread sharing read mode or a thread isolation read mode is adopted to obtain a target preset mapping structure cache entity from a preset cache region, and the target preset mapping structure cache entity is read, wherein the target preset mapping structure cache entity is a first target preset mapping structure cache entity or a second target preset mapping structure cache entity, the first target preset mapping structure cache entity is a preset mapping structure cache entity obtained from the preset thread sharing cache region through the thread sharing read mode, and the second target preset mapping structure cache entity is a preset mapping structure cache entity obtained from the preset thread isolation cache region through the thread isolation read mode;
If the cache request belongs to the read cache request, a thread sharing read mode or a thread isolation read mode is adopted to obtain a target preset mapping structure cache entity from a preset cache region, and the target preset mapping structure cache entity is read, wherein the target preset mapping structure cache entity is a first target preset mapping structure cache entity or a second target preset mapping structure cache entity, the first target preset mapping structure cache entity is a preset mapping structure cache entity obtained from the preset thread sharing cache region through the thread sharing read mode, and the second target preset mapping structure cache entity is a preset mapping structure cache entity obtained from the preset thread isolation cache region through the thread isolation read mode, and the method comprises the following steps:
if the cache request belongs to the read cache request, extracting a read handle based on the cache request;
Based on the read handle, determining a read logic type, and judging whether the read mode of the read cache request is a thread sharing read mode or a thread isolation read mode based on the read logic type;
if the reading mode of the reading cache request is the thread sharing reading mode, the state of the preset thread sharing cache region is adjusted to be a temporary reading and writing state, and a first target preset mapping structure cache entity is obtained from the preset thread sharing cache region and is read;
If the reading mode of the reading cache request is the thread isolation reading mode, acquiring a second target preset mapping structure cache entity from the preset thread isolation cache region, and reading the second target preset mapping structure cache entity;
If the cache request belongs to the cache deletion request, a thread sharing deletion mode or a thread isolation deletion mode is adopted to acquire a plurality of preset cache entities to be deleted in a preset cache region, and delete the preset cache entities to be deleted, wherein the preset cache entities to be deleted are a plurality of first preset cache entities to be deleted or a plurality of second preset cache entities to be deleted, the first preset cache entities to be deleted are preset cache entities cached in the preset thread sharing cache region, and the second preset cache entities to be deleted are preset cache entities cached in the preset thread isolation cache region.
2. The dual-buffer based caching method according to claim 1, wherein the obtaining an initial write mapping structure cache entity according to the write cache request, and determining a write logic type according to the initial write mapping structure cache entity comprises:
determining a cache handle according to the write cache request, and matching a corresponding first mapping structure cache entity according to the cache handle;
adding a key value in the first mapping structure buffer entity to generate a second mapping structure buffer entity, wherein the key value is used for storing the content value of the first mapping structure buffer entity;
Adding effective duration in the second mapping structure buffer entity to generate a third mapping structure buffer entity;
And adding an update time in the third mapping structure buffer entity, generating an initial writing mapping structure buffer entity, determining a writing logic type according to the initial writing mapping structure buffer entity, and judging whether the initial writing mapping structure is overtime invalid or not by the effective time and the update time.
3. The method according to claim 1, wherein if the writing mode of the writing cache request is determined to be a thread isolation writing mode, adjusting the initial writing mapping structure cache entity to obtain a writing mapping structure cache entity, and writing the writing mapping structure cache entity into a preset thread isolation cache area comprises:
if the writing mode of the writing cache request is judged to be a thread isolation writing mode, matching the initial writing mapping structure cache entity with a plurality of preset writing threads in the preset thread isolation cache area to obtain a target writing thread;
judging whether the target writing thread has a corresponding writing sub-buffer area in the preset thread isolation buffer area;
If the write sub-buffer corresponding to the target write thread does not exist in the preset thread isolation buffer, establishing the write sub-buffer corresponding to the target write thread to obtain a target write sub-buffer;
And adjusting the updating time of the initial writing mapping structure buffer entity to be the current time to obtain the writing mapping structure buffer entity, and storing the writing mapping structure buffer entity into the target writing sub-buffer area which is arranged in the preset thread isolation buffer area.
4. The method for caching according to claim 1, wherein if the read mode of the read cache request is the thread isolation read mode, obtaining a second target preset mapping structure cache entity from the preset thread isolation cache, and reading the second target preset mapping structure cache entity includes:
if the reading mode of the reading cache request is the thread isolation reading mode, determining a target reading thread from a plurality of preset reading threads in the preset thread isolation cache region;
Extracting a target reading sub-thread according to the target reading thread, and reading a second target preset mapping structure cache entity from the target reading sub-thread;
Acquiring preset effective time length, preset updating time and current time of the second target preset mapping structure caching entity, summing the preset effective time length and the preset updating time to obtain failure time, and judging whether the current time is greater than the failure time;
And if the current time is greater than the failure time, setting a second target preset mapping structure buffer entity as a null value.
5. The method for dual-cache-region-based caching according to any one of claims 1 to 4, wherein if the cache request of the target terminal belongs to the deletion cache request, a thread sharing deletion mode or a thread isolation deletion mode is adopted to obtain a plurality of preset cache entities to be deleted in a preset cache region, and delete the plurality of preset cache entities to be deleted, where the plurality of preset cache entities to be deleted are a plurality of first preset cache entities to be deleted or a plurality of second preset cache entities to be deleted, the plurality of first preset cache entities to be deleted are preset cache entities cached in a preset thread sharing cache region, and the plurality of second preset cache entities to be deleted are preset cache entities cached in a preset thread isolation cache region, including:
If the cache request belongs to the cache deletion request and a preset cache request entity to be deleted is deleted in a thread sharing deletion mode, the state of the preset thread sharing cache area is adjusted to be a temporary read-write state, a plurality of preset cache entities in the preset thread sharing cache area are traversed based on the update time, the effective time and the current time, a plurality of first preset cache entities to be deleted are determined, and the plurality of first preset cache entities to be deleted are deleted;
If the cache request belongs to the cache deletion request and a preset cache request entity to be deleted is deleted in a thread isolation deletion mode, traversing a plurality of sub-cache areas in the preset thread isolation cache area, determining a second preset cache entity to be deleted in the plurality of sub-cache areas based on the update time, the effective time and the current time, and deleting the plurality of second preset cache entities to be deleted.
6. A dual buffer based caching apparatus for performing the dual buffer based caching method of claim 1, wherein the dual buffer based caching apparatus comprises:
The acquisition module is used for acquiring the cache request of the target terminal;
The judging module is used for judging whether the cache request belongs to a write cache request, a read cache request or a delete cache request;
The writing module is used for acquiring a writing mapping structure buffer entity if the buffer request belongs to the writing buffer request, and writing the writing mapping structure buffer entity into a preset buffer zone by adopting a thread sharing writing mode or a thread isolation writing mode, wherein the preset buffer zone comprises a preset thread sharing buffer zone and a preset thread isolation buffer zone;
The reading module is used for acquiring a target preset mapping structure cache entity from a preset cache region by adopting a thread sharing reading mode or a thread isolation reading mode and reading the target preset mapping structure cache entity if the cache request belongs to the reading cache request, wherein the target preset mapping structure cache entity is a first target preset mapping structure cache entity or a second target preset mapping structure cache entity, the first target preset mapping structure cache entity is a preset mapping structure cache entity acquired from the preset thread sharing cache region by adopting the thread sharing reading mode, and the second target preset mapping structure cache entity is a preset mapping structure cache entity acquired from the preset thread isolation cache region by adopting the thread isolation reading mode;
The deleting module is used for acquiring a plurality of preset to-be-deleted cache entities in a preset cache region by adopting a thread sharing deleting mode or a thread isolating deleting mode, and deleting the plurality of preset to-be-deleted cache entities, wherein the plurality of preset to-be-deleted cache entities are a plurality of first preset to-be-deleted cache entities or a plurality of second preset to-be-deleted cache entities, the plurality of first preset to-be-deleted cache entities are preset cache entities cached in the preset thread sharing cache region, and the plurality of second preset to-be-deleted cache entities are preset cache entities cached in the preset thread isolating cache region.
7. A dual-buffer-based cache apparatus, comprising: a memory and at least one processor, the memory having instructions stored therein, the memory and the at least one processor being interconnected by a line;
The at least one processor invoking the instructions in the memory to cause the dual-cache based caching device to perform the dual-cache based caching method of any one of claims 1-5.
8. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements a dual cache zone based caching method according to any one of claims 1-5.
CN202010211118.4A 2020-03-24 2020-03-24 Caching method, device, equipment and storage medium based on double cache areas Active CN111506261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010211118.4A CN111506261B (en) 2020-03-24 2020-03-24 Caching method, device, equipment and storage medium based on double cache areas

Publications (2)

Publication Number Publication Date
CN111506261A CN111506261A (en) 2020-08-07
CN111506261B true CN111506261B (en) 2024-05-03

Family

ID=71870718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010211118.4A Active CN111506261B (en) 2020-03-24 2020-03-24 Caching method, device, equipment and storage medium based on double cache areas

Country Status (1)

Country Link
CN (1) CN111506261B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115469797B (en) * 2021-09-09 2023-12-29 上海江波龙数字技术有限公司 Data writing method, storage device and computer readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218174A (en) * 2013-03-29 2013-07-24 航天恒星科技有限公司 IO (Input Output) double-buffer interactive multicore processing method for remote sensing image
CN104503707A (en) * 2014-12-24 2015-04-08 华为技术有限公司 Method and device for reading data
CN105912478A (en) * 2016-04-06 2016-08-31 中国航空无线电电子研究所 Dual-buffer mechanism based real-time system multi-task data sharing method
CN107341212A (en) * 2017-06-26 2017-11-10 努比亚技术有限公司 A kind of buffering updating method and equipment
CN109919827A (en) * 2019-02-22 2019-06-21 搜游网络科技(北京)有限公司 A kind of pattern drawing method, device and computer-readable medium, equipment

Also Published As

Publication number Publication date
CN111506261A (en) 2020-08-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant