CN113778330B - Transaction processing method based on Flash memory - Google Patents

Transaction processing method based on Flash memory

Info

Publication number
CN113778330B
CN113778330B (application CN202110898118.0A)
Authority
CN
China
Prior art keywords
transaction
cache
buffer
mode
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110898118.0A
Other languages
Chinese (zh)
Other versions
CN113778330A (en)
Inventor
马佳伟
孙楚昆
余彦飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Rongka Technology Co ltd
Original Assignee
Wuxi Rongka Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Rongka Technology Co ltd
Priority to CN202110898118.0A
Publication of CN113778330A
Application granted
Publication of CN113778330B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614 Improving the reliability of storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a transaction processing method based on a Flash memory, which comprises the following steps: receiving an instruction, and caching the first transaction of the instruction's execution cycle in a first cache mode; judging whether a further transaction needs to be started; when a further transaction needs to be started and the cache space meets the start condition of a second cache mode, caching the current transaction in the second cache mode; when a further transaction needs to be started but the cache space does not meet the start condition of the second cache mode, committing the cached data of the transactions preceding the current transaction, resetting the cache space, and caching the current transaction in the first cache mode; and repeating the judging step until no further transaction needs to be started, then committing the cached data of the current transaction and returning a response. This transaction processing method effectively avoids frequent erasing and writing of the memory chip, thereby improving the endurance of the chip and the transaction performance.

Description

Transaction processing method based on Flash memory
Technical Field
The invention relates to the technical field of data storage, in particular to a transaction processing method based on a Flash memory.
Background
Flash memory (Flash for short) is a non-volatile memory: it retains data for a long time even without a power supply, a storage characteristic comparable to that of a hard disk. This characteristic is the basis for Flash memory becoming the storage medium of various portable digital devices. Flash memory is widely used in digital devices such as mobile phones, tablet computers, digital cameras and communication equipment.
A digital device using Flash memory generally needs to update data in a storage area during use, i.e. write new data to replace original data. If an abnormal situation such as a sudden power failure occurs while the new data is being written, the new data cannot be written completely and the original data may also be damaged, so that the digital device cannot work normally or even becomes unusable. Besides affecting the operational safety and service life of the device, the resulting loss of data may be even more serious for the user. The process of modifying important data therefore needs to be protected by transactions: the original data is backed up and can be restored when necessary. Because transaction protection produces additional data erasing and writing, writing high-frequency, scattered data during a transaction greatly affects the transaction performance and the endurance of the device's memory chip. The transaction caching method currently in use commits the cache when the transaction is committed, so it can cache only one transaction; when several transactions are carried out in succession this causes multiple rounds of data erasing and writing, and the endurance of the memory chip cannot be effectively guaranteed.
Disclosure of Invention
In view of the above, an object of the present invention is to solve the problems of the prior art by continuously caching transactions within a reasonable period.
The invention provides a transaction processing method based on a Flash memory, which comprises the following steps:
receiving an instruction, and caching a first transaction in an execution cycle of the instruction in a first cache mode, wherein in the first cache mode the entire cache space is used for caching the data of the first transaction;
judging whether a further transaction needs to be started;
when a further transaction needs to be started and the cache space meets the start condition of a second cache mode, caching the current transaction in the second cache mode, wherein the cache space is divided into a first cache area and a second cache area, the first cache area is used for caching the data of the current transaction and of the transactions before the current transaction, and the second cache area is used for caching the data of the transactions before the current transaction;
when a further transaction needs to be started and the cache space does not meet the start condition of the second cache mode, committing the cached data of the transactions before the current transaction in the cache space, resetting the cache space, and caching the current transaction in the first cache mode;
and cyclically executing the step of judging whether a further transaction needs to be started until no further transaction needs to be started, then committing the cached data of the current transaction and returning a response.
Optionally, the start condition of the second cache mode is that at least half of the cache space remains free.
Optionally, the execution cycle includes one or more transactions, and when there are several transactions they include the first transaction and the current transaction.
Optionally, the cache space includes an even number of cache units, and the size of each cache unit is the same as the size of the minimum erase unit of the Flash memory.
Optionally, in the second cache mode, the first cache area and the second cache area contain the same number of cache units, and the number of cache units in the first cache area is only half of the number of cache units that the cache space provides in the first cache mode.
Optionally, in the second cache mode, the data in the second cache area is copied from the data in the first cache area, so as to satisfy atomicity, consistency, isolation and durability of the transaction.
Optionally, when a further transaction needs to be started and the cache space does not meet the start condition of the second cache mode, committing the cached data of the transactions before the current transaction in the cache space, resetting the cache space, and caching the current transaction in the first cache mode comprises:
when the cache space is in an open state, judging whether the cache space is in a second cache mode or not;
if the cache space is in the second cache mode, copying uncommitted cache data of a transaction before the current transaction in the first cache area to a second cache area;
if the cache space is in the first cache mode, submitting the cache data in the cache space to a transaction, and emptying the cache space;
and starting the transaction and caching the transaction in the first caching mode.
Optionally, when the transaction does not need to be continuously started, submitting the cached data of the current transaction, and returning the response includes:
judging whether the cache space is in a first cache mode or not;
submitting the cache data in all the cache spaces if the cache spaces are in the first cache mode and the cache data are not submitted;
Submitting the rest cache data in the cache space if the cache space is in the first cache mode and part of cache data is submitted;
submitting the cache data in the first cache region if the cache space is in the second cache mode;
and after the completion of the cache submission, clearing the transaction mark, resetting the cache space, closing the cache, ending the execution period and returning a response.
Optionally, the step of submitting the cached data includes:
writing the original data of the target page into a transaction backup area;
setting a transaction mark;
writing new data in the cache space into the target page;
the transaction tag is cleared.
Optionally, when the caching of the transaction is canceled, a transaction rollback is required, and the step of rolling back the transaction includes:
judging whether the cache space is in a first cache mode or not;
when the cache space is in a first cache mode, judging whether cache data in the cache space are submitted or not;
resetting the buffer space when the buffer space is in a first buffer mode and the buffer data is submitted, and recovering the backup data of the current transaction;
resetting the buffer space when the buffer space is in a first buffer mode and the buffer data is not submitted;
And when the cache space is in the second cache mode, submitting the cache data of the second cache region, and resetting the first cache region.
According to another aspect of the present invention, there is provided a Flash-memory-based storage space structure for executing any one of the above transaction processing methods, the storage space structure comprising:
a data storage area, a transaction processing area and a cache space, wherein the data storage area is used for storing the data of a target page, the transaction processing area is used for backing up the original data of the target page, and the cache space is used for caching the new data of the target page.
Optionally, the transaction processing area includes a transaction header and a transaction body; the transaction header serves as a transaction record page and records page information, a mark, a check value and the like, while the transaction body serves as a transaction backup page and is used for backing up the original data of the target page.
Optionally, one of the transaction record pages of the transaction header corresponds to one or more of the transaction backup pages of the transaction body.
According to the transaction processing method described above, a suitable cache mode is selected according to the usage of the cache space, so that when one execution cycle contains several transactions, the cached data of several erase units is committed only before the cache space becomes insufficient or before the execution cycle ends. Frequent erasing of the memory chip is thus effectively avoided, and the usability and endurance of the memory chip are improved; at the same time, the atomicity, consistency, isolation and durability of the transactions within one instruction cycle are guaranteed, so that write operations are effectively protected.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 shows a schematic block diagram of a memory space according to an embodiment of the invention;
FIG. 2 illustrates the correspondence of transaction headers and transaction bodies within a conventional transaction region;
FIG. 3 illustrates a flow response diagram of a high performance Flash memory based transaction method according to an embodiment of the invention;
FIG. 4 illustrates the correspondence of transaction headers and transaction bodies within a transaction region according to an embodiment of the present invention;
FIG. 5a illustrates a transaction cache spatial relationship diagram in a first cache mode, according to an embodiment of the invention;
FIG. 5b illustrates a transaction cache spatial relationship diagram in a second cache mode, according to an embodiment of the invention;
FIG. 6 shows a flow chart of a transaction initiation process in a Flash memory based transaction method according to an embodiment of the invention;
FIG. 7 shows a flow chart of a Flash memory based transaction method according to an embodiment of the invention;
FIG. 8 illustrates an atomic write flow diagram of a Flash memory based transaction method in a transaction scenario according to an embodiment of the invention;
FIG. 9 is a flow chart illustrating a transaction cache commit process in a Flash memory based transaction method according to an embodiment of the present invention;
FIG. 10 is a flow chart illustrating a transaction commit process in a Flash memory based transaction method in accordance with an embodiment of the present invention;
FIG. 11 shows a flow chart of a transaction rollback procedure in a Flash memory based transaction method according to an embodiment of the invention;
FIG. 12 illustrates a non-atomic write flow diagram of a Flash memory based transaction method in a non-transactional scenario in accordance with embodiments of the invention;
FIG. 13 is a flow chart showing a system read operation procedure in a Flash memory based transaction method according to an embodiment of the present invention.
Detailed Description
The invention will be described in more detail below with reference to the accompanying drawings. Like elements are denoted by like reference numerals throughout the various figures. For clarity, the various features of the drawings are not drawn to scale. Furthermore, some well-known portions may not be shown.
The present invention is described below based on embodiments, but it is not limited to these embodiments. In the following detailed description, certain specific details are set forth; the invention can, however, be fully understood by those skilled in the art without some of these details. Well-known methods, procedures, flows, components and circuits are not described in detail so as not to obscure the nature of the invention.
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples.
Fig. 1 shows a schematic block diagram of a memory space according to an embodiment of the invention.
As shown in fig. 1, the chip 10 according to the embodiment of the present invention includes a main memory RAM (Random Access Memory) 20 and a memory chip 30. The chip 10 may, for example, be an existing combination of RAM and NVM, or a combination of a memory chip and a storage chip, and the memory chip 30 may, for example, be a Flash memory. The memory chip 30 includes a data storage area 310 and a transaction area 320. The data storage area 310 stores the original data of the target page, i.e. the data that needs to be replaced or updated; the transaction area 320 stores backup data of the original data and is used for transaction processing, and it includes a transaction header and a transaction body; the RAM 20 stores the new data of the target page, i.e. the data that will replace the original data in the data storage area 310. After a transaction is started, it is cached in the RAM 20: the new data of the target page is cached, and the original data is backed up in the transaction area 320. When the caching and the backup are complete, the new data in the RAM 20 is written into the data storage area 310; once the new data is confirmed to have been written successfully, the backup data in the transaction area 320 is cleared. If a fault occurs and the write of the new data fails, the backup data is taken from the transaction area 320 and the original data in the data storage area 310 is restored.
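To make the layout of fig. 1 concrete, the following C sketch models the storage areas described above. It is only an illustrative sketch: the page size, the page counts and all identifiers (memory_chip_t, tx_area_t and so on) are assumptions of this description, not part of the patent.

    #include <stdint.h>

    #define PAGE_SIZE        256  /* assumed size of one minimum erase unit ("page") */
    #define DATA_PAGES        64  /* assumed number of pages in the data storage area 310 */
    #define TX_BACKUP_PAGES    8  /* assumed number of backup pages in the transaction body 322 */

    /* One minimum erase unit of the Flash memory. */
    typedef struct {
        uint8_t bytes[PAGE_SIZE];
    } flash_page_t;

    /* Transaction area 320: a transaction header (record pages holding marks,
     * page information and check values) and a transaction body (backup pages
     * holding the original data of target pages). */
    typedef struct {
        flash_page_t header;                  /* 321: transaction record page  */
        flash_page_t body[TX_BACKUP_PAGES];   /* 322: transaction backup pages */
    } tx_area_t;

    /* Memory chip 30: the data storage area 310 plus the transaction area 320.
     * The new data of a target page lives in RAM 20 until the cache is committed. */
    typedef struct {
        flash_page_t data[DATA_PAGES];        /* 310: original (current) data  */
        tx_area_t    tx;                      /* 320: transaction area         */
    } memory_chip_t;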
Fig. 2 shows the correspondence between transaction headers and transaction bodies within a conventional transaction region.
As shown in FIG. 2, a typical transactional memory architecture allocates two groups of minimum erase units, called pages for short, in the non-volatile memory chip (Flash), and the two groups correspond to each other one to one. In connection with fig. 1, the transaction area 320 includes a transaction header 321 and a transaction body 322. During transaction processing, atomic writes follow the backup-before-write principle; in order to identify the progress of a transaction, a mark and page information must also be written after each step, until the mark is cleared after the transaction is committed. These two groups of storage units are therefore commonly referred to as the transaction header 321 and the transaction body 322: the transaction header 321 records information and the transaction body 322 backs up data, so a page of the transaction header 321 is a transaction record page and a page of the transaction body 322 is a transaction backup page, and they correspond one to one.
In electronic devices, in order to prevent power failure, the transaction needs to back up the original data in a nonvolatile memory chip. If power failure occurs, the original data can be restored according to the backup when the power is on next time.
For example, interaction between a smart card and an external device is accomplished using APDUs (Application Protocol Data Units) defined in the ISO 7816-4 specification, and the smart card may use transactions continuously while processing an APDU. If the power is suddenly cut off, the smart card regards all unfinished transactions as failed when power is restored and keeps the original data, thereby satisfying the four properties of transactions: atomicity, consistency, isolation and durability.
Further, when a transaction is started, the transaction cache may also be started, so that atomic writes cache their data in the RAM 20 as early as possible and the data is written into the memory chip 30 only after the transaction is committed; if the cache space is insufficient, a commit is performed first and caching then continues. Using the RAM 20 for transaction caching reduces the erasing and writing of the transaction header and the transaction body and improves transaction performance to a certain extent. However, in the scheme shown in FIG. 2 the cache is committed when the transaction is committed, so the RAM 20 can cache only one transaction. For several successive transactions or several successive atomic writes, the optimization effect of this scheme is therefore limited, and the endurance of the memory chip cannot be effectively guaranteed.
The invention improves the transaction caching and provides a transaction processing method based on a Flash memory: within the successive transactions of one execution cycle, data of several erase units can be cached, and the cache is committed only when the space becomes insufficient or before the execution cycle ends. Frequent erasing caused by transactions is thus effectively avoided, which improves the transaction performance and the endurance of the memory chip. The embodiments are described in detail below with reference to fig. 3 to 13.
FIG. 3 illustrates a flow response diagram of a high performance Flash memory based transaction method according to an embodiment of the invention.
As shown in fig. 3, the terminal device receives an instruction sent by another device, processes the instruction and returns a response; the procedure is thus divided into three stages: receiving the instruction, processing the instruction and returning the response. During the processing of an instruction there may be one or more atomic writes or several transactions, and the entire processing of one instruction can be regarded as one execution cycle. When an execution cycle contains several transactions, several transaction caches are performed while atomicity, consistency, isolation and durability are still guaranteed. Of course, an execution cycle may also contain only one transaction, in which case only one transaction is cached. When one instruction response cycle contains several transactions, the invention caches the transactions in the first cache mode or the second cache mode according to the size of the cache space, and in the second cache mode the uncommitted cache of the previous transaction can continue to be used, which reduces the erasing of the transaction header and the transaction body and improves the endurance of the memory chip.
FIG. 4 illustrates the correspondence of transaction headers and transaction bodies within a transaction region according to an embodiment of the present invention.
As shown in fig. 4, with the transaction processing method of the present invention the relationship between the transaction header 321 and the transaction body 322 is no longer limited to one-to-one, but may be one-to-one or one-to-many. That is, one transaction record page of the transaction header 321 can store the information of several transaction backup pages of the transaction body 322, which reduces the number of erase operations on the transaction header 321 and the transaction body 322 and enhances chip endurance. A transaction write operation first updates the new data of the target page in the RAM; only when the cache is committed is the original data of the target page written into the transaction backup and the new data of the page written from the RAM into the target page. Because one transaction record page of the transaction header 321 can record the information of several transaction backup pages, the cache is committed only after the RAM has cached several transactions, and this way of performing transaction write operations reduces the number of erase operations on the transaction header and the target pages.
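As an illustration of this one-to-many correspondence, the following C sketch shows how a single transaction record page could list several backup pages. All field names and the entry limit are assumptions of this sketch, not definitions taken from the patent.

    #include <stdint.h>

    #define MAX_BACKUPS_PER_RECORD 8   /* assumed upper bound, not taken from the patent */

    /* One entry describing a single backed-up target page. */
    typedef struct {
        uint16_t target_page;   /* page index in the data storage area 310          */
        uint16_t backup_page;   /* page in the transaction body 322 holding the
                                   original data of that target page                */
    } tx_backup_entry_t;

    /* A transaction record page (transaction header 321).  With a one-to-many
     * layout, one record page lists several backup pages, so the header is
     * erased once per cache commit instead of once per backed-up page. */
    typedef struct {
        uint8_t           mark;         /* transaction-in-progress mark             */
        uint8_t           entry_count;  /* number of valid entries below            */
        tx_backup_entry_t entry[MAX_BACKUPS_PER_RECORD];
        uint32_t          checksum;     /* check value over the recorded page info  */
    } tx_record_page_t;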
FIG. 5a illustrates a transaction cache spatial relationship diagram in a first cache mode, according to an embodiment of the invention; FIG. 5b illustrates a transaction cache spatial relationship diagram in a second cache mode, according to an embodiment of the invention.
As shown in fig. 5a, the cache structure of the first cache mode is suitable for scenarios in which the number of pages of the transactional write operation is large. The RAM 20 includes a plurality of cache units, each of which has the same size as a page of the transaction header 321 or the transaction body 322, and these cache units are used for caching the new data of the target pages. In this embodiment, the first cache mode is a cache mode in which all cache units in the RAM 20 are used to cache the same transaction, and all cache units are regarded as a whole, forming the first cache area TC_Cur. In the following embodiments the first cache mode is also denoted M1.
When one execution cycle contains only one transaction, the first cache mode is adopted and all cache units serve as the first cache area TC_Cur. When the number of transaction pages is greater than the number of transaction cache pages, i.e. the cache space is relatively small, the cache may be committed to the transaction in several batches so that the cache space becomes free again and the transaction can continue to be cached. When the cache space is relatively large, the cache only needs to be committed once.
As shown in fig. 5b, the cache structure of the second cache mode is suitable for scenarios in which one execution cycle contains several transactions and the number of pages of the write operations is small. The RAM 20 includes a plurality of cache units, each of which has the same size as a page of the transaction header 321 or the transaction body 322, and these cache units are used for caching the new data of the target pages. In the second cache mode the cache units in the RAM 20 cache several transactions; in order to guarantee data consistency when the cache overflows under several transactions, the cache units are split by data volume into two areas, TC_Cur and TC_Pre. Specifically, the cache units are divided into two parts of equal size: the first part is the first cache area TC_Cur and the second part is the second cache area TC_Pre. The first cache area TC_Cur caches the data of the current transaction and of the transactions before it, while the second cache area TC_Pre stores the data of the transactions before the current transaction, for example the uncommitted cache of the previous transaction. When switching from the first cache mode to the second cache mode, the data of the first cache area TC_Cur is copied into the second cache area TC_Pre. In the following embodiments the second cache mode is also denoted M2.
When one execution cycle contains several transactions, the first transaction is cached in the first cache mode, and when the second transaction starts it is judged whether at least half of the cache space remains free: if at least half remains free, the first cache mode is switched to the second cache mode; if less than half remains free, the cache is committed, the cache space is reset, and caching stays in the first cache mode.
Therefore, when a plurality of transactions are contained in one execution period, the embodiment of the invention can continuously buffer the plurality of transactions and then submit the plurality of transactions under the condition of larger buffer space, thereby avoiding the repeated erasing and writing of the transaction head and the transaction body and effectively improving the durability and the transaction performance of the memory chip.
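The following C sketch models the cache space of figs. 5a and 5b and the switch from M1 to M2. It is a minimal sketch only: the unit count, the page size and all identifiers are assumptions, and it further assumes that the units in use sit at the start of the array; the only behaviour taken from the description above is the split into two equal halves (TC_Cur and TC_Pre), the half-free start condition and the copy of TC_Cur into TC_Pre on the switch.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE   256        /* assumed size of one erase unit                     */
    #define CACHE_UNITS 8          /* assumed (even) number of cache units in RAM 20     */

    typedef enum { MODE_M1, MODE_M2 } cache_mode_t;

    typedef struct {
        int     target_page;               /* -1 when the unit is free                   */
        uint8_t new_data[PAGE_SIZE];       /* cached new data of the target page         */
    } cache_unit_t;

    typedef struct {
        cache_mode_t mode;
        /* In M1 all CACHE_UNITS units form TC_Cur.  In M2, units [0 .. N/2) are
         * TC_Cur and units [N/2 .. N) are TC_Pre (copy of the previous transaction). */
        cache_unit_t unit[CACHE_UNITS];
    } ram_cache_t;

    size_t used_units(const ram_cache_t *c)
    {
        size_t n = 0;
        for (size_t i = 0; i < CACHE_UNITS; i++)
            if (c->unit[i].target_page >= 0)
                n++;
        return n;
    }

    /* Start condition of the second cache mode: at least half of the cache
     * space is still free when the next transaction is opened. */
    int can_switch_to_m2(const ram_cache_t *c)
    {
        return c->mode == MODE_M1 && used_units(c) <= CACHE_UNITS / 2;
    }

    /* Switch M1 -> M2: copy TC_Cur into TC_Pre so the previous transaction's
     * uncommitted data survives, then keep using TC_Cur for the new transaction. */
    void switch_to_m2(ram_cache_t *c)
    {
        memcpy(&c->unit[CACHE_UNITS / 2], &c->unit[0],
               (CACHE_UNITS / 2) * sizeof(cache_unit_t));
        c->mode = MODE_M2;
    }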
Fig. 6 shows a flowchart of a transaction start procedure in a Flash memory-based transaction processing method according to an embodiment of the present invention.
As shown in fig. 6, when a transaction is started, it is judged whether the cache has already been started and what its state is, so that the applicable cache mode can be selected (a code sketch of this flow follows the step-by-step description below). When the transaction is started, if the transaction cache is already started and is in M1 mode, it is judged whether the condition for starting M2 is met: if at least half of the TC_Cur of M1 remains unused, the mode is switched to M2, i.e. TC_Cur is copied to TC_Pre, after which TC_Cur continues to be used during the current transaction. Specifically:
In step S101, a transaction is started.
In step S102, it is determined whether the buffer has been started.
If the cache has not yet been started when the current transaction is about to be started, step S103 is executed; if the cache has already been started, step S104 is executed.
In step S103, the buffer M1 is turned on.
In this step, when the current transaction is started, the cache is not started yet, which means that the current transaction is the first transaction of the current cache, and at this time, the transaction is cached in the first cache mode M1. Then step S108 is executed to successfully open the transaction.
In step S104, it is determined whether the cache is in M1 mode.
In this step, when the current transaction is started, the cache is already started, and then the state of the cache at this time needs to be determined, if in M1 mode, whether it meets the condition of starting M2 needs to be determined, and step S105 is executed; otherwise, the cache is now in M2 mode, then step S107 is performed to open the transaction according to M2 mode.
In step S105, it is determined whether the buffer satisfies the state of turning on M2.
In this step, if the current transaction is about to be started, the cache is already started and is already in M1 mode, then it needs to be determined whether the condition for starting M2 is satisfied, if so, the current transaction is cached in M2 mode, and step S107 is executed; otherwise, the current transaction cannot be cached in M2 mode, but can only be cached in M1 mode, and step S106 is performed.
In step S106, the M1 cache is committed, and the M1 cache is closed.
In this step, when the current transaction is about to be started, the cache is already started and in M1 mode, but the cache space is insufficient to continue caching in M2. The cache accumulated before the current transaction is therefore committed, the M1-mode cache is closed, and step S102 is executed again to judge whether the cache is started. Since the cache has just been closed, it is not started at this point, and the current transaction is cached in M1 mode by executing step S103 again.
In step S107, in the M2 mode, TC_Cur is committed to TC_Pre.
In this step, when the current transaction is about to be started, the cache is already started and is either in M2 mode or meets the start condition of the M2 mode. The data in the first cache area TC_Cur is committed (copied) to the second cache area TC_Pre, after which the transaction is started successfully and step S108 is executed.
In step S108, the transaction is successfully started.
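The following C sketch condenses steps S101-S108 into a single function. It is an illustrative sketch only: the cache is reduced to a pair of counters, the unit count and all identifiers are assumptions, and committing the cache is modelled simply as emptying it.

    #define CACHE_UNITS 8                           /* assumed even number of cache units */

    typedef enum { MODE_OFF, MODE_M1, MODE_M2 } cache_mode_t;

    typedef struct {
        cache_mode_t mode;      /* MODE_OFF while the cache is not started             */
        int used;               /* units holding uncommitted pages of the cache        */
        int pre_used;           /* units held by TC_Pre (previous transaction) in M2   */
    } tx_cache_t;

    /* Committing the cache is modelled only as emptying it in this sketch. */
    void commit_cache(tx_cache_t *tc) { tc->used = 0; tc->pre_used = 0; }

    /* Selects the cache mode for a transaction that is about to start,
     * following steps S101-S108 of Fig. 6. */
    void begin_transaction(tx_cache_t *tc)
    {
        for (;;) {
            if (tc->mode == MODE_OFF) {                 /* S102: cache not started    */
                tc->mode = MODE_M1;                     /* S103: open the cache in M1 */
                return;                                 /* S108: transaction started  */
            }
            if (tc->mode == MODE_M2) {                  /* S104: already in M2        */
                tc->pre_used = tc->used;                /* S107: TC_Cur -> TC_Pre     */
                return;                                 /* S108                       */
            }
            /* Cache is in M1: S105, check the start condition of M2
             * (at least half of the cache space still free).                          */
            if (CACHE_UNITS - tc->used >= CACHE_UNITS / 2) {
                tc->mode = MODE_M2;                     /* switch M1 -> M2            */
                tc->pre_used = tc->used;                /* S107: TC_Cur -> TC_Pre     */
                return;                                 /* S108                       */
            }
            /* S106: not enough space: commit the earlier cache, close it, and loop
             * back to S102, which reopens the cache in M1 for the current transaction. */
            commit_cache(tc);
            tc->mode = MODE_OFF;
        }
    }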
When the cache is already in M2 mode, after the data in the first cache area TC_Cur has been committed to the second cache area TC_Pre, it can again be judged whether the condition for starting M2 is met: if it is met, the current transaction is cached in M2 mode; if not, the current cache is committed and the current transaction is cached in M1 mode. Further, the scenarios with sufficient cache space can be subdivided into scenario A and scenario B, and the scenarios with insufficient cache space into scenario C and scenario D.
Scene a: in one instruction response period, only one transaction is started, and the buffer space is enough in the transaction process, the buffer is submitted to the memory chip before the response is returned, and the process is in an M1 mode.
Scene B: in one instruction response period, n (n is more than or equal to 2) transactions are started, the data volume to be written in each transaction does not exceed the size of the current cache space, when the 1 st transaction is started, the cache is started, and when the 2 nd to n th transactions are started, the current cache data is submitted to the previous transaction cache, and the current cache data is submitted before response is returned. The process is in M1 mode before the 2 nd transaction is started, and in M2 mode after the 2 nd transaction is started.
Scene C: in one instruction response period, only one transaction is started, the number of pages of the transaction writing operation exceeds the size of a cache space, the current cache is directly submitted, the current transaction residual data is continuously cached until the transaction is submitted, all cache data are submitted, and a response is returned, wherein the process is in an M1 mode.
Scene D: in one instruction response period, a plurality of transactions are started, the buffer memory space before the xth transaction is enough (from M1 to M2), when the xth transaction is started, TC_Cur data is submitted (copied) to TC_Pre, in the xth transaction, TC_Cur space is insufficient, at the moment, TC_Pre is submitted, the buffer memory space is increased, the current transaction data is continuously buffered (from M2 to M1), until all buffer memory data are submitted when the transaction is submitted, and then the response is returned. The process switches from M1 mode to M2 mode and then to M1 mode.
FIG. 7 shows a flow chart of a Flash memory based transaction method according to an embodiment of the invention.
As shown in fig. 7, in step S01, an instruction is received to cache a current transaction in a first cache mode. In this step, the first transaction in the instruction is started immediately after receiving the instruction sent from the outside, and the current transaction is cached according to the first cache mode because the cache is not started at this time.
In step S02, it is determined whether a further transaction needs to be started. In this step, it is judged whether other transactions in the instruction remain to be started after the first transaction. If the instruction response cycle contains only one transaction, caching ends, the cache and the current transaction are committed, no further transaction needs to be started, and step S06 is executed; if the instruction response cycle contains several transactions, a further transaction is started and step S03 is executed.
In step S03, it is determined whether the buffer space satisfies the on condition of the second buffer mode. In this step, if the buffer space satisfies the open condition of the second buffer mode after the last transaction is buffered, step S04 is executed; otherwise, step S05 is performed.
In step S04, the subsequent transaction is cached in the second cache mode. In the step, after the last transaction buffering is finished, a new transaction is started again, and the buffering space meets the second buffering mode starting condition, and at the moment, the transaction is buffered in the second buffering mode. Then, the process returns to step S02, and it is determined again whether or not the subsequent transaction needs to be started.
In step S05, the current cache is committed and the following transaction is cached in the first cache mode. In this step, the caching of the previous transaction has ended and the cache space is insufficient to start the second cache mode; the cache of the previous transactions is therefore committed, i.e. the current cache is committed and the cache space is freed, and the current transaction is cached in the first cache mode. The flow then returns to step S02 to judge again whether a further transaction needs to be started.
In step S06, the current cache and transaction are committed and a response is returned. The step is executed under the condition that the subsequent transaction is not required to be started in the step S02, and the subsequent transaction is not required to be started to indicate that all the transactions in one instruction response period are completed, and the current cache and the transaction are submitted at the moment, and the response is returned.
After step S01 has been executed, this embodiment cyclically executes steps S02-S05 until all transactions in the instruction response cycle have been cached. When no further transaction needs to be started, step S06 is executed: the current cache and the transaction are committed and a response is returned, completing the instruction.
The step of opening a transaction is described in detail in fig. 6, and the steps of committing a cache and committing a transaction are described in subsequent embodiments.
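The following C sketch condenses steps S01-S06 of Fig. 7 into one instruction execution cycle. The hook functions are trivial stand-ins invented for this sketch so that it compiles and runs; in real firmware they would drive the transaction cache described above.

    #include <stdbool.h>
    #include <stdio.h>

    static int remaining = 3;                      /* pretend three further transactions follow */
    static bool more_transactions_pending(void)    { return remaining-- > 0; }          /* S02 */
    static bool half_of_cache_free(void)           { return remaining % 2 == 0; }       /* S03 */
    static void cache_transaction_m1(void)         { puts("cache transaction in M1"); }
    static void cache_transaction_m2(void)         { puts("cache transaction in M2"); }
    static void commit_current_cache(void)         { puts("commit cache, reset cache space"); }
    static void commit_cache_and_transaction(void) { puts("final commit of cache and transaction"); }
    static void send_response(void)                { puts("return response"); }

    /* One instruction execution cycle, following steps S01-S06 of Fig. 7. */
    static void handle_instruction(void)
    {
        cache_transaction_m1();                    /* S01: first transaction, mode M1          */
        while (more_transactions_pending()) {      /* S02                                      */
            if (half_of_cache_free())              /* S03: start condition of M2               */
                cache_transaction_m2();            /* S04: continue in mode M2                 */
            else {
                commit_current_cache();            /* S05: free the cache space ...            */
                cache_transaction_m1();            /*      ... and continue in mode M1         */
            }
        }
        commit_cache_and_transaction();            /* S06 */
        send_response();
    }

    int main(void) { handle_instruction(); return 0; }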
FIG. 8 shows an atomic write flow diagram of a Flash memory based transaction method in a transaction scenario according to an embodiment of the invention.
The implementation of the transaction cache is based on the atomicity and consistency of a single instruction, and two scenarios need to be considered: the transaction scenario and the non-transaction scenario. In the transaction scenario the atomic write is the basic operation; it has most of the characteristics of a transaction, except that a transaction may contain several atomic writes while the atomic write itself is the smallest operation. A typical atomic write is implemented as follows: if no transaction is currently open, a transaction is actively opened, the atomic write is replaced by a transaction write, and the transaction is committed immediately; if a transaction is already open, the transaction write is used and the data is committed by the transaction flow.
In this embodiment, in the transaction scenario, if the target page is cached, the new data of the page is updated directly in the RAM cache; in the case that the target page is not cached, the method can be subdivided into two scenes of sufficient space and insufficient space according to the use condition of the cache space of the RAM. Further, the scene with sufficient space is subdivided into a scene A and a scene B; the space-deficient scene is subdivided into a scene C and a scene D (described in fig. 6, and not described here).
As shown in fig. 8, in step S201, in a transaction scenario, atomic writes are turned on.
In step S202, it is checked whether it is in the cache. If the target page has already started caching, then step S203 is directly performed; otherwise, step S204 is performed.
In step S203, the buffered new data is updated. At this point, the cache has already started, and then the new data for the target page is updated directly in the cache.
In step S204, it is determined whether the buffer space is sufficient. In this step, the atomic write does not start a new cache, and at this time, it is first determined whether the cache space is enough to accommodate the cache of the current atomic write, if the cache space is large enough, step S205 is executed, and the current atomic write is continuously cached in the cache; otherwise, step S206 is executed to submit a part of the buffer before buffering.
In step S205, newly added cache page data is written into the cache. The buffer space is sufficient, and the data is written into the buffer at this time.
In step S206, it is determined whether the cache is in the first cache mode. If the cache space is insufficient to perform the current cache operation of atomic write, at this time, judging which mode the cache is in, and if the cache space is in the first cache mode, executing step S207 and step S205; otherwise, step S208 is performed. After the caches in the two different modes are submitted, a cache space can be left, and the current operation of atomic writing is cached.
In step S207, the transaction cache in the first cache mode is committed. At this time, since the buffer space is insufficient to support the current atomic write buffer operation and the buffer is in the first buffer mode, the current buffer in the first buffer mode needs to be submitted at this time, the buffer space is free, and step S205 is executed to buffer new data with a new buffer page.
In step S208, in the second cache mode, the data of TC_Pre is committed. In this step, the remaining cache space is insufficient for the next atomic write to be cached, so the data in the cache must be committed first; because the cache is in the second cache mode, the data in the second cache area TC_Pre, i.e. the cache of the previous transaction, is committed. Step S209 is then executed.
In step S209, the cache mode is set to the first cache mode. In this step, since the buffer space is insufficient to buffer the current atomic write operation and is in the second buffer mode, the data in the second buffer mode is submitted at this time, the buffer space is vacated, the buffer mode is set to the first buffer mode, and the current atomic write operation is continuously buffered, that is, step S205 is executed.
The series of operations of the atomic write cache of the present embodiment is generally consistent with the embodiments of fig. 6-7.
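The following C sketch condenses the atomic write flow of Fig. 8 (steps S201-S209). It is only an illustrative sketch: the cache is reduced to a small array, all identifiers and sizes are assumptions, and committing cached data is modelled simply as emptying the cache (in the real flow only TC_Pre is committed in step S208).

    #include <stddef.h>

    #define CACHE_UNITS 8                              /* assumed even number of cache units */

    typedef enum { MODE_M1, MODE_M2 } cache_mode_t;

    typedef struct {
        cache_mode_t mode;
        size_t       used;                   /* units holding cached target pages            */
        int          page_of[CACHE_UNITS];   /* which target page each used unit caches      */
    } tx_cache_t;

    int find_cached(const tx_cache_t *tc, int page)
    {
        for (size_t i = 0; i < tc->used; i++)
            if (tc->page_of[i] == page)
                return (int)i;
        return -1;
    }

    /* Committing cached data (the flow of Fig. 9) is modelled as emptying the cache. */
    void commit_cached_data(tx_cache_t *tc) { tc->used = 0; }

    /* Atomic write of one target page in the transaction scenario (Fig. 8). */
    void atomic_write(tx_cache_t *tc, int target_page)
    {
        size_t limit = (tc->mode == MODE_M2) ? CACHE_UNITS / 2 : CACHE_UNITS;

        if (find_cached(tc, target_page) >= 0)      /* S202/S203: page already cached:       */
            return;                                 /* update its new data in place          */

        if (tc->used >= limit) {                    /* S204: cache space insufficient        */
            if (tc->mode == MODE_M1) {
                commit_cached_data(tc);             /* S207: commit the M1 cache             */
            } else {
                commit_cached_data(tc);             /* S208: commit TC_Pre (simplified) ...  */
                tc->mode = MODE_M1;                 /* S209: ... and fall back to mode M1    */
            }
        }
        tc->page_of[tc->used++] = target_page;      /* S205: cache the newly added page      */
    }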
FIG. 9 is a flow chart illustrating a transaction cache commit process in a Flash memory based transaction method according to an embodiment of the invention.
As shown in fig. 9, in step S301, the commit of the cache begins. TC (Transaction Cache) denotes the transaction cache.
In step S302, it is determined whether or not it is the first commit cache. In this step, it is determined whether the current cache is submitted for the first time, if yes, step S304 is executed, otherwise step S303 is executed. If the current buffer to be submitted is the first time, it is necessary to select which data to submit according to the buffer mode, submit the data of tc_cur in M1 mode, and submit the data of tc_pre in M2 mode, see steps S304-S306.
In step S303, in the M1 state, TC_Cur is committed to the transaction. In this step, the current buffer is not submitted for the first time, that is, some buffer data has been submitted before, which means that the buffer space is smaller and the number of transaction pages is larger, and then the buffer is in M1 state, and all data of tc_cur is submitted to the transaction. Step S307 is then performed.
In step S304, it is determined whether the cache is in the M2 state. In this step, since the current cache is submitted for the first time, i.e. the cache space is larger. At this time, it is necessary to determine in which cache mode the current cache to be committed is. If in the second cache mode, step S305 is executed; otherwise, step S306 is performed.
In step S305, the current cache is in the M2 state, and the data of TC_Pre is committed to the transaction. In this step, because the current cache is in the M2 state, i.e. the data of the first cache area TC_Cur has been copied into the second cache area TC_Pre, and the current cache is committed before the next cache is opened, the data in the second cache area TC_Pre must be committed. Step S307 is then executed.
In the embodiment of the present invention, TC_Cur denotes the current transaction cache (Current Transaction Cache); it is not a cache inside one transaction, but a group of pages in the M2 state. TC_Pre denotes the previous transaction cache (Previous Transaction Cache), a group of pages in the M2 state whose data is the cache of the previous transaction; before it is committed this data can still be accessed and modified, but a rollback can only restore the cache of the previous transaction, not the data modified afterwards. M1 denotes the first cache mode (Mode 1), in which the entire cache space can be used. M2 denotes the second cache mode (Mode 2), in which only half of the cache space of the M1 state can be used.
In step S306, the current cache is in the M1 state, and the data of TC_Cur is committed to the transaction. In this step, because the current cache is in the M1 state and this is the first commit, all data has been cached in the first cache area TC_Cur; the current cache is committed before the next cache is opened, so all data in the first cache area TC_Cur must be committed. Step S307 is then executed.
In step S307, the commit cache ends.
Further, when the transaction cache is committed, if TC_PN is configured to be greater than or equal to T_PN, it is not necessary to judge whether this is the first commit of the TC; otherwise, it is necessary. TC_PN denotes the transaction cache page number (Transaction Cache Page Number), the number of pages allocated to the transaction cache; T_PN denotes the transaction page number (Transaction Page Number), the number of pages allocated to one transaction. Specifically, when the number of pages in the cache space is no smaller than the number of pages each transaction requires, the RAM can cache several transactions before a commit becomes necessary, so within one transaction the cache has either never been committed or is necessarily being committed for the first time; it is then unnecessary to judge whether this is the first commit, and step S304 can be executed directly after step S301.
The embodiment of the invention provides an implementation of committing the cache to the transaction, which is divided into four steps: backing up the original data of the target page; writing the page information and the mark into the transaction header after calculating the check value; writing the new data into the target page; and erasing the mark. Caching a transaction several times before committing the cache to the transaction reduces the number of backups, and thus the number of erase operations on the transaction header and the transaction body, which improves the endurance of the memory chip.
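The following C sketch illustrates the four commit steps for one cached page. It is only an illustrative sketch: the flash primitive, the header fields and the zero placeholder for the check value are assumptions of this description, and a real driver would erase a Flash page before programming it.

    #include <stdint.h>

    #define PAGE_SIZE 256

    typedef struct { uint8_t bytes[PAGE_SIZE]; } page_t;

    /* Hypothetical flash primitive: in a real driver this would erase and program. */
    void flash_write_page(page_t *dst, const page_t *src) { *dst = *src; }

    /* Transaction header fields assumed for this sketch (cf. transaction header 321). */
    typedef struct {
        uint8_t  mark;           /* non-zero while a commit is in progress            */
        uint16_t target_index;   /* which data page is being replaced                 */
        uint32_t checksum;       /* check value over the recorded page information    */
    } tx_header_t;

    /* Commit one cached page to the transaction, following the four steps above. */
    void commit_cached_page(page_t *target,         /* page in data storage area 310 */
                            page_t *backup,         /* page in transaction body 322  */
                            tx_header_t *header,    /* transaction header 321        */
                            const page_t *new_data, /* cached new data from RAM 20   */
                            uint16_t target_index)
    {
        /* 1. back up the original data of the target page into the transaction body. */
        flash_write_page(backup, target);

        /* 2. record the page information and set the transaction mark with a check value. */
        header->target_index = target_index;
        header->checksum     = 0;            /* placeholder for the calculated check value */
        header->mark         = 1;

        /* 3. write the new data from the cache into the target page. */
        flash_write_page(target, new_data);

        /* 4. erase the transaction mark: the commit is complete. */
        header->mark = 0;
    }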
FIG. 10 is a flow chart illustrating a transaction commit process in a Flash memory based transaction method in accordance with an embodiment of the present invention.
As shown in fig. 10, in step S401, a commit transaction starts.
In step S402, it is checked whether the APDU is ready to return a response. In this step, it is determined whether all transactions within one instruction response cycle have completed, i.e., a response is returned. When the transaction is submitted, if a response is ready to return, the buffer of the current transaction, that is, TC_Cur in M2 or M1 mode, must be submitted, and steps S405-S408 are executed; otherwise, the step S403-S404 is executed as a commit process of a certain transaction in the instruction response process.
In step S403, it is determined whether the cache is in the M1 state. In this step, the current transaction is ready to commit and no response is returned, at which point the state of the cache and the size of the remaining cache space need to be determined. If the current cache is in the M2 state, step S408 is executed, and the commit transaction is ended, where, since the current cache is in the M2 state, it is explained that the cache space is larger, then the transaction is committed first, and then it is judged in which mode the next transaction should be cached, see fig. 6 and 7. If the cache is in the M1 state, it is further determined whether the cache has been committed, and step S404 is performed.
In step S404, it is determined whether the cache has been committed. If the cache has not been committed, step S408 is executed; otherwise step S407 is executed. In this step the current transaction needs to be committed and no response needs to be returned yet, and the cache is still in M1 mode. If the cache belonging to the transaction has not been committed at all in M1 mode, the transaction is committed directly, the next transaction is started, and the judgment of which cache mode is suitable continues. If part of the transaction's cache has already been committed in M1 mode, then in order to guarantee the four properties of the transaction the remaining cache should be committed first and the transaction committed afterwards.
In step S405, it is determined whether the cache is in the M2 state. In this step, all transactions within one instruction response cycle are started, and a response is ready to be returned, at this time, it is determined in which mode the cache of the transaction is located, and the cache of the current transaction is committed. If in the M2 mode, steps S406 and S407 are performed; otherwise, step S407 is directly performed.
In step S406, tc_pre is cleared. In this step, the buffer is in M2 state, and since the response is ready to be returned, the buffer in M2 state needs to be submitted at this time, then the previous buffer data is first cleared, that is, the data in the second buffer area tc_pre is cleared, and then step S407 is performed.
In step S407, the tc_cur is committed to the transaction. In this step, the current buffer needs to be committed, and then, whether in M1 mode or M2 mode, the buffer of the current transaction is buffered in the first buffer area tc_cur, so that the data in tc_cur is committed to the transaction.
In step S408, the commit transaction ends.
Further, regarding step S403: if TC_PN (the number of transaction cache pages) is configured to be greater than or equal to T_PN (the number of transaction pages), the cache is never committed several times within one transaction, whether M1 mode or M2 mode is used, and it is not necessary to judge in step S404 whether the cache has been committed. Only when the next transaction is started is it judged whether the storage space is sufficient; if the cache has to be committed, it can be distinguished according to the cache mode, as in FIG. 9, whether TC_Pre (in M2 mode) or TC_Cur (in M1 mode) is committed.
If TC_PN is configured to be smaller than T_PN, step S404 further judges whether the M1-mode cache has been committed. In that case M1 may switch to M2 when the next transaction is started, or the first commit may already have happened because the M1 cache space was insufficient. Specifically, when step S404 determines that the cache has not been committed and the cache space in M1 mode is sufficient, the next transaction is, according to fig. 6, cached in M2 mode or the M1 cache is restarted; when step S404 determines that the cache has been committed while still in M1 mode, the cache space is insufficient and the current cache needs to be committed.
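The following C sketch condenses the transaction commit flow of Fig. 10 (steps S401-S408). The helper functions are trivial stand-ins invented for this sketch; only the branching structure is taken from the description above.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { MODE_M1, MODE_M2 } cache_mode_t;

    typedef struct {
        cache_mode_t mode;
        bool partially_committed;    /* part of the cache already committed (M1 only) */
    } tx_cache_t;

    /* Trivial stand-ins so the sketch compiles. */
    static void clear_tc_pre(void)  { puts("clear TC_Pre"); }                       /* S406 */
    static void commit_tc_cur(void) { puts("commit TC_Cur to the transaction"); }   /* S407 */

    /* Commit of one transaction, following steps S401-S408 of Fig. 10. */
    static void commit_transaction(tx_cache_t *tc, bool response_ready /* S402 */)
    {
        if (response_ready) {                   /* last transaction of the response cycle   */
            if (tc->mode == MODE_M2)            /* S405                                     */
                clear_tc_pre();                 /* S406: TC_Cur already contains its pages  */
            commit_tc_cur();                    /* S407: flush the current transaction      */
            return;                             /* S408                                     */
        }
        /* Mid-cycle commit: the cache stays open for the following transactions.           */
        if (tc->mode == MODE_M1 && tc->partially_committed)   /* S403/S404                  */
            commit_tc_cur();                    /* S407: flush the remaining cached data    */
        /* S408: otherwise nothing to do here; the cached data is committed later.          */
    }

    int main(void)
    {
        tx_cache_t tc = { MODE_M2, false };
        commit_transaction(&tc, true);
        return 0;
    }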
FIG. 11 shows a flow chart of a transaction rollback procedure in a Flash memory based transaction method according to an embodiment of the invention.
As shown in fig. 11, when a transaction is rolled back, only the current transaction is rolled back; the previous transactions are not. Therefore, if the current cache is in M1 mode, the first cache area TC_Cur is emptied and it is then judged whether the cache has been committed: if the cache has been committed, the backup data must be restored, i.e. the page data already committed by the cache is written back to the corresponding destination addresses, so that the integrity of the transaction is guaranteed; if the cache has not been committed, the rollback of this transaction is complete. If the current cache is in M2 mode, the cache of the previous transaction must be committed first and only the cache of the current transaction is emptied; according to the arrangement of the M2 mode, TC_Pre is committed first and TC_Cur is then emptied. Before the transaction rollback ends, the transaction cache is closed. Specifically:
in step S501, transaction rollback begins.
In step S502, it is determined whether the cache is in the M1 state. If the state is M1, step S503 is executed, otherwise step S507 is executed.
In step S503, tc_cur is emptied. In this step, the current buffer of the current transaction is emptied, so that the transaction is rolled back, and then it is determined whether the transaction has previously submitted a part of the buffer, and step S504 is executed.
In step S504, it is determined whether cached data has been committed in M1 mode. If the transaction is committed, i.e. a part of the cached data has been backed up, the backup data needs to be restored, step S505 is executed, otherwise, the rollback is completed, step S506 is executed.
In step S505, the backup data is restored. In this step, the rollback of the current transaction has started in M1 mode and the cached data in the RAM has been emptied, but part of the cache had already been committed; in order to guarantee the four properties of the current transaction, the data already committed must be undone. The already backed-up data is therefore restored to the target pages, i.e. the original data of the target pages affected by the current transaction is recovered.
In step S506, the transaction rollback ends.
In step S507, TC_Pre is committed. In this step, the current transaction needs to be rolled back and the cache is in M2 mode, so all previous transactions must be preserved first; the cache of the transactions before the current one, i.e. all data in the second cache area TC_Pre, is therefore committed. Step S508 is then executed.
In step S508, TC_Cur is emptied. In this step, the data of the second cache area TC_Pre has been fully committed, and the cache of the current transaction, i.e. the data of the first cache area TC_Cur, is emptied to guarantee the rollback of the current transaction. Step S506 is then executed and the transaction rollback ends.
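The following C sketch condenses the rollback flow of Fig. 11 (steps S501-S508). The helper functions are trivial stand-ins invented for this sketch; only the branching structure is taken from the description above.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { MODE_M1, MODE_M2 } cache_mode_t;

    typedef struct {
        cache_mode_t mode;
        bool committed;    /* has part of this transaction's cache already been committed? */
    } tx_cache_t;

    /* Trivial stand-ins so the sketch compiles. */
    static void clear_tc_cur(void)        { puts("clear TC_Cur"); }
    static void commit_tc_pre(void)       { puts("commit TC_Pre (previous transactions)"); }
    static void restore_backup_data(void) { puts("restore backed-up original data"); }
    static void close_cache(void)         { puts("close the transaction cache"); }

    /* Rollback of the current transaction, following steps S501-S508 of Fig. 11:
     * only the current transaction is rolled back, never the previous ones. */
    static void rollback_transaction(tx_cache_t *tc)
    {
        if (tc->mode == MODE_M1) {        /* S502                                            */
            clear_tc_cur();               /* S503: drop the current transaction's cache      */
            if (tc->committed)            /* S504: part of the cache was already committed   */
                restore_backup_data();    /* S505: write the backups back to the target pages*/
        } else {                          /* M2                                              */
            commit_tc_pre();              /* S507: the previous transactions must survive    */
            clear_tc_cur();               /* S508: drop only the current transaction         */
        }
        close_cache();                    /* the cache is closed before rollback ends (S506) */
    }

    int main(void)
    {
        tx_cache_t tc = { MODE_M1, true };
        rollback_transaction(&tc);
        return 0;
    }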
As can be seen in connection with fig. 6-11: when the M1 mode is switched to the M2 mode, only the content of tc_cur is copied to tc_pre, which means that the content of tc_cur in the M2 mode is consistent with the content of tc_pre before the next transaction buffering starts.
If a new target page needs to be cached after the switch from M1 mode to M2 mode but the cache space turns out to be insufficient, the data of TC_Pre, i.e. the cache of the previous transaction, is committed first, and caching then continues in M1 mode;
if the cache space remains sufficient after the switch from M1 mode to M2 mode, caching continues in M2 mode; since TC_Cur then necessarily contains the target pages cached in TC_Pre, only TC_Cur needs to be committed before the response is returned.
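Under the same assumptions as the rollback sketch, the mode-selection decision described here (switch to M2 and copy TC_Cur to TC_Pre when more than half of the cache space is free, as in claim 2; otherwise commit TC_Pre, reset the cache and fall back to M1) might look roughly as follows. The helper names are again illustrative, not the actual interface of the embodiment.

#include <stdbool.h>

typedef struct tx_cache tx_cache_t;   /* same hypothetical cache object as in the rollback sketch */

bool cache_has_free_half(const tx_cache_t *c);  /* claim-2 style check: more than half the space free */
void copy_tc_cur_to_tc_pre(tx_cache_t *c);
void commit_tc_pre(tx_cache_t *c);
void cache_reset(tx_cache_t *c);
void set_mode_m1(tx_cache_t *c);
void set_mode_m2(tx_cache_t *c);

/* Called when a further transaction must be cached within the same instruction. */
void start_next_transaction_cache(tx_cache_t *c)
{
    if (cache_has_free_half(c)) {
        /* Enough room: switch to (or stay in) M2.  TC_Cur is copied to TC_Pre,
         * so both areas hold the same content until the new transaction writes. */
        copy_tc_cur_to_tc_pre(c);
        set_mode_m2(c);
    } else {
        /* Not enough room: commit the previous transactions held in TC_Pre,
         * reset the cache space and fall back to M1 for the new transaction. */
        commit_tc_pre(c);
        cache_reset(c);
        set_mode_m1(c);
    }
}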
With the Flash memory based transaction processing scheme shown in figs. 3-11, the invention can cache several transactions at a time, and the cache is not necessarily committed every time a transaction is committed. The number of erase operations on the transaction header and the target pages is thereby reduced, improving both the endurance of the memory chip and transaction performance.
The above embodiments all describe transaction processing in a transactional scenario; the flow of a write operation in a non-atomic scenario is described below with reference to fig. 12.
FIG. 12 illustrates a flow diagram of a non-atomic write in a non-transactional scenario of the Flash memory based transaction processing method according to an embodiment of the invention.
A non-atomic write does not need to observe the four (ACID) properties of a transaction: new data is written directly to the target page, and no transactional backup is made. As shown in fig. 12, in the non-atomic scenario, if a transaction is already open, all caches are committed first and the target page is then written; if the target page has a backup in the transaction, the transaction cache is updated using a page-protected write; if the cache is not open, or the target page has no backup in the transaction, the target page is written directly. Specifically:
In step S601, a non-atomic write operation starts.
In step S602, it is checked whether a transaction is open. If a transaction is open, the corresponding caches must be committed and the transaction ended, so step S603 is executed; otherwise step S605 is executed to determine whether the cache is open.
In step S603, all caches are committed. The entire cache of the open transaction is committed and the transaction is ended; the non-atomic write then proceeds to step S604.
In step S604, the target page is written. In this step, new data is written directly to the target page.
In step S605, it is checked whether the cache is open. Since no transaction has been started, it is determined whether a cache exists; if not, step S604 is executed and the new data is written directly to the target page. Otherwise, step S606 is executed to determine whether the cache is in a transaction state.
In step S606, it is checked whether the cache belongs to a transaction. If the cache is not in a transaction state, no further handling is needed and step S604 is executed directly to write the new data to the target page. Otherwise, step S607 is performed.
In step S607, the transactional backup is updated using a page-protected write. Since the cache is in a transaction state, the transaction cache and the backup must be updated with a page-protected write and the transaction is ended; step S604 is then executed, so that non-atomic writes and atomic writes do not interfere with each other.
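The non-atomic write flow of steps S601-S607 can be sketched in C as below. The tx_cache_t type is again the hypothetical one introduced above, and the helpers (transaction_is_open, commit_all_caches, cache_is_open, cache_is_transactional, page_protected_write_update, flash_write_page) are illustrative names rather than the actual interface of the embodiment.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct tx_cache tx_cache_t;   /* hypothetical cache object from the sketches above */

bool transaction_is_open(const tx_cache_t *c);
bool cache_is_open(const tx_cache_t *c);
bool cache_is_transactional(const tx_cache_t *c);
void commit_all_caches(tx_cache_t *c);                   /* commits and ends the open transaction      */
void page_protected_write_update(tx_cache_t *c, uint32_t page,
                                 const uint8_t *data, size_t len);
void flash_write_page(uint32_t page, const uint8_t *data, size_t len);

/* Non-atomic write (steps S601-S607): every path ends with a direct write
 * of the new data to the target page (S604). */
void non_atomic_write(tx_cache_t *c, uint32_t page, const uint8_t *data, size_t len)
{
    if (transaction_is_open(c)) {                        /* S602 */
        commit_all_caches(c);                            /* S603: flush caches, end the transaction    */
    } else if (cache_is_open(c)                          /* S605 */
               && cache_is_transactional(c)) {           /* S606 */
        page_protected_write_update(c, page, data, len); /* S607: update cache and backup, end the tx  */
    }
    flash_write_page(page, data, len);                   /* S604: write new data to the target page    */
}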
FIG. 13 is a flow chart showing the system read operation in the Flash memory based transaction processing method according to an embodiment of the present invention.
In the invention, a transaction write first updates the new data of the target page in the RAM cache; only when the cache is committed is the original data of the target page written to the transaction backup, and the new page data is then written from the RAM cache to the target page. Such transaction caching must avoid degrading the performance of the system read interface, and fig. 13 therefore illustrates the flow of a system read operation.
As shown in fig. 13, a system read normally accesses the RAM 20 in the order in which the system read interface accesses Flash addresses, and each read has to traverse the cache pages once, which is time-consuming.
In this embodiment, after an access, the data just accessed can be swapped with the data of the first page, so that if the system then reads the same page repeatedly it no longer needs to traverse all cache pages but reads the first page and returns the data directly, reducing the impact of the transaction cache on the performance of the system read interface.
In fig. 13, page 1 is accessed during the first X accesses and its data is swapped with page 0, so that when page 0 is accessed during the next two accesses its data can be read directly, reducing the number of traversals over the cache pages.
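A minimal sketch of this swap-to-front lookup is given below. The cache_page_t layout, the 6-page cache size, the 512-byte page size and the function name cached_read are assumptions made for illustration; only the move-to-front behaviour corresponds to the mechanism of fig. 13.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE   512   /* Flash page size, as in the CIU98M25 example    */
#define CACHE_PAGES 6     /* illustrative TC_PN value                       */

typedef struct {
    uint32_t flash_addr;          /* target Flash page address              */
    uint8_t  data[PAGE_SIZE];     /* cached (new) page content              */
    bool     valid;
} cache_page_t;

static cache_page_t cache[CACHE_PAGES];

/* Returns true and copies cached data when the address hits the cache;
 * on a miss the caller reads the page from Flash instead. */
bool cached_read(uint32_t flash_addr, uint8_t out[PAGE_SIZE])
{
    for (int i = 0; i < CACHE_PAGES; i++) {
        if (cache[i].valid && cache[i].flash_addr == flash_addr) {
            if (i != 0) {                     /* swap the hit entry with page 0 */
                cache_page_t tmp = cache[0];
                cache[0] = cache[i];
                cache[i] = tmp;
            }
            memcpy(out, cache[0].data, PAGE_SIZE);
            return true;                      /* later reads of this page hit slot 0 immediately */
        }
    }
    return false;                             /* miss: read the target page from Flash */
}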
In summary, according to the Flash memory based transaction processing method provided by the embodiments of the application, the cache mode can be selected according to the number of transactions within one instruction response period and the size of the cache space, so that when several transactions are executed in succession they can all be cached before the transaction caches are committed, which reduces the number of erase operations for the transactions and improves the endurance of the memory chip.
The following description uses a concrete example. Taking a CIU98M25 chip as an example, the Flash page size is 512 bytes, the page erase time is 3 ms, a single instruction uses 2 transactions, and each transaction performs 5 atomic writes. Three scenarios are given below for reference, numbered in ascending order of the improvement obtained from the transaction cache optimization:
Scenario one: each transaction writes different data to 5 different pages, 10 target pages in total;
Scenario two: each transaction writes different data to 2 different pages and writes 3 pieces of different data to 1 identical page, 5 target pages in total;
Scenario three: each transaction writes only 5 times to 1 identical page, with different data each time, 1 target page in total.
If the transactions are executed without the transaction caching scheme, scenario one requires 2×(5×4)=40 Flash erases, scenario two requires 2×(3×4+2)=28 Flash erases, and scenario three requires 2×(1×4+1×4)=16 Flash erases, taking 120 ms, 84 ms and 48 ms respectively.
If transaction caching is performed with the transaction processing method provided in this embodiment, the performance is mainly determined by TC_PN. This parameter should be configured according to how plentiful the RAM resources of the terminal device are and according to the actual scenario; for example, if, whichever instruction the terminal device executes, a transaction needs to write at least 5 different pages, the two configurations described below can be considered: a cache space of 6 pages and a cache space of 10 pages:
With a 6-page cache: scenario one requires 2×(1×2+5×2)=24 Flash erases, scenario two requires (2×2+4×2)+1×2=14 Flash erases, and scenario three requires (1×2+1×2)=4 Flash erases, taking 72 ms, 42 ms and 12 ms respectively.
With a 10-page cache: scenario one requires 2×(1×2+5×2)=24 Flash erases, scenario two requires (1×2+4×2)+1×2=12 Flash erases, and scenario three requires (1×2+1×2)=4 Flash erases, taking 72 ms, 36 ms and 12 ms respectively.
Thus, in the three scenarios of the above example, the scheme with transaction caching improves performance by a factor of 0.3 to 0.5 relative to the scheme without transaction caching. Comparing the two configurations also shows that, provided TC_PN meets the actual demand, the transaction cache does not need to occupy excessive RAM resources.
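The timings quoted above can be checked with a short back-of-the-envelope program, assuming the stated 3 ms per page erase; the erase counts are taken directly from the text, and the program itself is only an illustration.

#include <stdio.h>

int main(void)
{
    const int erase_ms = 3;                  /* page erase time of the example chip      */
    const int no_cache[3]  = {40, 28, 16};   /* scenarios 1-3 without transaction caching */
    const int six_pages[3] = {24, 14, 4};    /* 6-page cache configuration                */
    const int ten_pages[3] = {24, 12, 4};    /* 10-page cache configuration               */

    for (int s = 0; s < 3; s++) {
        printf("scenario %d: %d ms -> %d ms (6 pages) / %d ms (10 pages)\n",
               s + 1,
               no_cache[s]  * erase_ms,
               six_pages[s] * erase_ms,
               ten_pages[s] * erase_ms);
    }
    return 0;   /* prints 120->72/72, 84->42/36 and 48->12/12 ms, matching the text */
}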
In summary, the key of the present invention is to provide a Flash memory based transaction caching scheme that solves the performance problem of transactions and the Flash endurance problem it causes. Specifically, the invention uses RAM to cache the Flash data operated on during a transaction and commits the transaction cache at appropriate points, which effectively guarantees the atomicity and consistency of transactions while greatly reducing the Flash erases performed by transactions, thereby improving transaction performance and alleviating the Flash endurance problem caused by erasing. With the transaction processing scheme provided by the invention, a single transaction may be cached, or several transactions may optionally be cached within one period. Even for the system read interface, the ordering management of the transaction cache described above avoids the need for frequent traversals and effectively reduces the performance impact of the transaction cache in certain read scenarios.
It should be understood that the words "comprise," "comprising," and the like throughout the specification and claims are to be interpreted in an inclusive rather than an exclusive or exhaustive sense unless the context clearly requires otherwise; that is, it is the meaning of "including but not limited to". In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more.
Embodiments in accordance with the present invention, as described above, are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention and various modifications as are suited to the particular use contemplated. The invention is limited only by the claims and the full scope and equivalents thereof.

Claims (10)

1. A transaction processing method based on a Flash memory comprises the following steps:
receiving an instruction, and caching a first transaction in an execution period of the instruction in a first cache mode, wherein in the first cache mode, all cache spaces are used for caching data of the first transaction;
judging whether the transaction needs to be continuously started or not;
when the transaction needs to be continuously started and the buffer space meets the starting condition of a second buffer mode, the current transaction is buffered in the second buffer mode, the buffer space is divided into a first buffer area and a second buffer area, the first buffer area is used for buffering the data of the current transaction and the transaction before the current transaction, and the second buffer area is used for buffering the data of the transaction before the current transaction;
when the transaction needs to be continuously started and the buffer space does not meet the starting condition of the second buffer mode, submitting the buffer data of the transaction before the current transaction in the buffer space, resetting the buffer space, and buffering the current transaction in the first buffer mode;
and circularly executing the step of judging whether the transaction needs to be started continuously until the current transaction is not required to be started continuously, submitting the cache data of the current transaction, and returning a response.
2. The transaction processing method according to claim 1, wherein the starting condition of the second cache mode is that more than half of the cache space remains free.
3. The transaction processing method of claim 1, wherein one or more transactions are included in the execution cycle, a plurality of the transactions including the first transaction and the current transaction.
4. The transaction processing method according to claim 2, wherein the buffer space includes a plurality of buffer units, the number of buffer units being even, and the size of each buffer unit being identical to the size of the minimum erasing unit of the Flash memory.
5. The transaction processing method according to claim 4, wherein in the second cache mode, the first cache area and the second cache area contain the same number of cache units; and in the second cache mode, the number of the cache units in the first cache area is only half of the number of the cache units in the cache space in the first cache mode.
6. The transaction processing method of claim 5, wherein in the second cache mode, data in the second cache region is copied from data in the first cache region, satisfying atomicity, consistency, isolation, and persistence of a transaction.
7. The transaction processing method as claimed in claim 5, wherein, when the transaction needs to be continuously started and the buffer space does not satisfy the starting condition of the second buffer mode, submitting the buffer data of the transaction before the current transaction in the buffer space, resetting the buffer space, and buffering the current transaction in the first buffer mode comprises:
when the cache space is in an open state, judging whether the cache space is in a second cache mode or not;
if the cache space is in the second cache mode, copying uncommitted cache data of a transaction before the current transaction in the first cache area to a second cache area;
if the cache space is in the first cache mode, submitting the cache data in the cache space to a transaction, and emptying the cache space;
and starting the transaction and caching the transaction in the first caching mode.
8. The transaction processing method of claim 7, wherein submitting the cached data for the current transaction when the transaction does not need to continue to be started, returning a response comprises:
judging whether the cache space is in a first cache mode or not;
submitting all the cache data in the cache space if the cache space is in the first cache mode and the cache data has not been submitted;
submitting the rest cache data in the cache space if the cache space is in the first cache mode and part of cache data is submitted;
submitting the cache data in the first cache region if the cache space is in the second cache mode;
and after the completion of the cache submission, clearing the transaction mark, resetting the cache space, closing the cache, ending the execution period and returning a response.
9. The transaction processing method of claim 8, wherein the step of committing the buffered data comprises:
writing the original data of the target page into a transaction backup area;
setting a transaction mark;
writing new data in the cache space into the target page;
the transaction tag is cleared.
10. The transaction processing method of claim 7, wherein a transaction rollback is required to be employed when the caching of the transaction is canceled, the step of rollback comprising:
judging whether the cache space is in a first cache mode or not;
when the cache space is in a first cache mode, judging whether cache data in the cache space are submitted or not;
resetting the buffer space when the buffer space is in the first buffer mode and the buffer data has been submitted, and recovering the backup data of the current transaction;
resetting the buffer space when the buffer space is in a first buffer mode and the buffer data is not submitted;
and when the cache space is in the second cache mode, submitting the cache data of the second cache region, and resetting the first cache region.
CN202110898118.0A 2021-08-05 2021-08-05 Transaction processing method based on Flash memory Active CN113778330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110898118.0A CN113778330B (en) 2021-08-05 2021-08-05 Transaction processing method based on Flash memory


Publications (2)

Publication Number Publication Date
CN113778330A CN113778330A (en) 2021-12-10
CN113778330B true CN113778330B (en) 2023-04-25

Family

ID=78836971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110898118.0A Active CN113778330B (en) 2021-08-05 2021-08-05 Transaction processing method based on Flash memory

Country Status (1)

Country Link
CN (1) CN113778330B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807169A (en) * 2004-09-30 2010-08-18 英特尔公司 The mixed hardware software of transactional memory visit is realized
CN107992269A (en) * 2017-12-08 2018-05-04 华中科技大学 A kind of affairs wiring method based on duplicate removal SSD
CN110471626A (en) * 2019-08-15 2019-11-19 深圳融卡智能科技有限公司 Nor Flash management level and method applied to Java Card
CN110471617A (en) * 2018-05-10 2019-11-19 Arm有限公司 For managing the technology of buffer structure in the system using transaction memory
CN111008157A (en) * 2019-11-29 2020-04-14 北京浪潮数据技术有限公司 Storage system write cache data issuing method and related components
CN112005222A (en) * 2018-04-24 2020-11-27 Arm有限公司 Robust transactional memory
CN112416368A (en) * 2020-11-25 2021-02-26 中国科学技术大学先进技术研究院 Cache deployment and task scheduling method, terminal and computer readable storage medium
CN112527749A (en) * 2020-12-11 2021-03-19 平安科技(深圳)有限公司 Cache strategy determination method and device, computer equipment and readable storage medium
CN112685337A (en) * 2021-01-15 2021-04-20 浪潮云信息技术股份公司 Method for hierarchically caching read and write data in storage cluster
CN112714906A (en) * 2018-09-28 2021-04-27 英特尔公司 Method and apparatus to use DRAM as a cache for slow byte-addressable memory for efficient cloud applications
CN112748869A (en) * 2019-10-31 2021-05-04 华为技术有限公司 Data processing method and device
CN113190470A (en) * 2021-05-21 2021-07-30 恒宝股份有限公司 FLASH chip storage area and high-performance power-off-prevention read-write method thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8112585B2 (en) * 2009-04-30 2012-02-07 Netapp, Inc. Method and apparatus for dynamically switching cache policies
US8583868B2 (en) * 2011-08-29 2013-11-12 International Business Machines Storage system cache using flash memory with direct block access
US9262328B2 (en) * 2012-11-27 2016-02-16 Nvidia Corporation Using cache hit information to manage prefetches




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant