CN115793954A - Data management method, intelligent terminal and computer readable storage medium - Google Patents


Info

Publication number
CN115793954A
Authority
CN
China
Prior art keywords
mapping, updated, mapping block, mapping relation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111056765.3A
Other languages
Chinese (zh)
Inventor
段星辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiangbolong Digital Technology Co ltd
Original Assignee
Shanghai Jiangbolong Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiangbolong Digital Technology Co ltd filed Critical Shanghai Jiangbolong Digital Technology Co ltd
Priority to CN202111056765.3A
Publication of CN115793954A
Legal status: Pending


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a data management method, an intelligent terminal and a computer-readable storage medium. The data management method comprises: writing the mapping relationships to be updated into a mapping relationship log in the cache area; obtaining, for each mapping relationship to be updated, its mapping block number in a dirty mapping block table of the cache area based on its logical address in the mapping relationship log; and sequentially writing each mapping relationship to be updated into the mapping relationship linked list of its corresponding mapping block number. By writing the mapping relationships to be updated into the linked lists of the dirty mapping block table in the cache area, the scheme effectively reduces the number of flash memory accesses needed to obtain user data from the flash memory of the storage device, and the mapping relationship log is traversed far fewer times when the mapping table is updated, so queries and updates are faster and the random write performance of the corresponding storage device is higher.

Description

Data management method, intelligent terminal and computer readable storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a data management method, an intelligent terminal, and a computer-readable storage medium.
Background
Nowadays, mainstream flash-memory-based storage devices, such as SSDs (Solid State Drives), UFS (Universal Flash Storage) and eMMC (embedded MultiMediaCard) devices, generally adopt page mapping in the FTL (Flash Translation Layer). Compared with conventional block mapping, a page-mapped storage device has better random write performance, but the mapping table it requires occupies a large storage space, generally 1/1024 of the device capacity; for example, the mapping table of a 1 TB storage device is 1 GB.
A conventional SSD is generally equipped with a DRAM (Dynamic Random Access Memory) of corresponding size to hold the mapping table during operation. However, some consumer SSDs and mobile storage devices (such as UFS and eMMC) omit the DRAM for reasons of cost and power consumption and instead adopt a DRAM-less firmware architecture in their software design.
Meanwhile, the CPU (central processing unit) of a current mainstream mobile phone is usually quite powerful and its DRAM resources are abundant (generally in the range of 4 GB-12 GB), so the performance bottleneck usually lies in the storage device. Stuttering during phone use is often caused by the latency of accessing the storage device. The performance of the storage device therefore determines the user experience, and improving it improves the experience of mobile phone users.
However, compared with a storage device equipped with DRAM, a conventional DRAM-less storage device needs to access the flash memory more times to obtain user data, so its read/write performance, especially its random read/write performance, is much worse. Moreover, when the mapping table is updated, queries are slow and the mapping table update itself is slow, which seriously affects random write performance.
Disclosure of Invention
The present application mainly provides a data management method, an intelligent terminal and a computer-readable storage medium, to solve the problems in the prior art that the data management method of a storage device has poor read/write performance and that, when the mapping table is updated, queries are slow and the mapping table is updated slowly, seriously affecting random write performance.
In order to solve the above problems, a first aspect of the present application provides a data management method for a storage device, the data management method comprising: writing the mapping relationships to be updated into a mapping relationship log in the cache area; obtaining, for each mapping relationship to be updated, its mapping block number in a dirty mapping block table of the cache area based on its logical address in the mapping relationship log; and sequentially writing each mapping relationship to be updated into the mapping relationship linked list of its corresponding mapping block number.
After writing the mapping relationships to be updated into the mapping relationship log of the cache area, and before obtaining the mapping block number of each mapping relationship to be updated in the dirty mapping block table of the cache area based on its logical address in the mapping relationship log, the method further includes: constructing, in the cache area, a dirty mapping block table of fixed length that comprises a hash table and an extension table.
After obtaining the mapping block number of each mapping relationship to be updated in the dirty mapping block table of the cache area based on its logical address in the mapping relationship log, and before writing each mapping relationship to be updated into the mapping relationship linked list of its corresponding mapping block number, the method further includes: obtaining the index number of each mapping relationship to be updated in the dirty mapping block table based on its mapping block number and the length of the hash table. Sequentially writing each mapping relationship to be updated into the mapping relationship linked list of its corresponding mapping block number comprises: sequentially writing each mapping relationship to be updated into the mapping relationship linked list of its corresponding mapping block number, wherein, when the index number of the mapping relationship currently to be written is the same as that of a previously written mapping relationship but their mapping block numbers differ, the mapping block number of the mapping relationship currently to be written is written into the extension table, and the mapping relationship currently to be written is written into the mapping relationship linked list of its corresponding mapping block number in the extension table.
After writing the mapping relationship currently to be written into the mapping relationship linked list of its corresponding mapping block number in the extension table, the method further includes: acquiring a logical address to be read; obtaining, based on the logical address to be read, the read mapping block number and the read index number corresponding to it in the dirty mapping block table; judging whether the storage location corresponding to the read index number in the dirty mapping block table is empty; and, if the storage location corresponding to the read index number in the dirty mapping block table is empty, determining that the mapping relationship log contains no mapping relationship to be updated for the logical address to be read, and ending the current read operation.
If the storage location corresponding to the read index number in the dirty mapping block table is not empty, judging whether the read mapping block number is consistent with the mapping block number currently stored at that storage location; and, if the read mapping block number is consistent with the mapping block number currently stored at that storage location, searching the mapping relationship linked list of the read mapping block number for a mapping relationship corresponding to the logical address to be read.
If the read mapping block number is not consistent with the mapping block number currently stored at that storage location, searching the extension table for a mapping block number consistent with the read mapping block number; and, if the extension table contains a mapping block number consistent with the read mapping block number, searching the mapping relationship linked list of that mapping block number in the extension table for a mapping relationship to be updated corresponding to the logical address to be read.
The length of the hash table is a positive integer multiple of four times the length of the extension table.
After sequentially writing each mapping relationship to be updated into the mapping relationship linked list of its corresponding mapping block number, the method further includes: based on the mapping relationship linked list of each mapping block number, finding each mapping block to be updated in the mapping table of the flash memory area and loading it into the cache area; and updating each mapping block to be updated according to the mapping relationships to be updated in the mapping relationship linked list of its mapping block number, and writing the updated mapping blocks back into the mapping table.
In order to solve the above problem, a second aspect of the present application provides a data management apparatus, wherein the data management apparatus includes: the writing module is used for writing the mapping relation to be updated into the mapping relation log of the cache area; the processing module is used for obtaining mapping block numbers in a dirty mapping block table of each mapping relation to be updated, which corresponds to the cache region, based on the logic address of each mapping relation to be updated in the mapping relation log; the writing module is further configured to sequentially write each mapping relationship to be updated into the mapping relationship linked list of the mapping block number corresponding to the mapping relationship to be updated.
In order to solve the above problem, a third aspect of the present application provides an intelligent terminal, wherein the intelligent terminal includes a memory and a processor coupled to each other, the memory stores program data; the processor is used for executing program data to realize the data management method.
In order to solve the above problem, a fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the data management method as any one of the above.
The beneficial effects of the present application are as follows. Different from the prior art, the data management method of the present application writes the mapping relationships to be updated into the mapping relationship log of the cache area and, based on the logical address of each mapping relationship to be updated in that log, obtains its mapping block number in the dirty mapping block table of the cache area, so that each mapping relationship to be updated can be written in turn into the mapping relationship linked list of its mapping block number. This amounts to classifying and storing the many mapping relationships in the mapping relationship log according to their logical addresses, so that a mapping relationship to be updated can be found quickly by class in the dirty mapping block table and the number of accesses to the mapping relationship log is effectively reduced. As a result, the number of flash memory accesses needed to obtain user data from the flash memory of the storage device is effectively reduced, and the cache is traversed fewer times when the mapping table is updated, so the storage device queries and updates data faster and its random write performance is higher.
Drawings
FIG. 1 is a schematic diagram of writing mapping relationships to be updated in a mapping relationship log of a cache area of a storage device;
FIG. 2 is a schematic diagram illustrating a storage device updating a mapping relationship to be updated according to the prior art;
FIG. 3 is a schematic flow chart diagram of a first embodiment of the data management method of the present application;
FIG. 4 is a schematic diagram illustrating a storage of mapping relationships to be updated according to the present application;
FIG. 5 is a schematic flow chart diagram of a second embodiment of the data management method of the present application;
FIG. 6 is a schematic flow chart diagram of a third embodiment of the data management method of the present application;
FIG. 7 is a schematic flow chart diagram of a fourth embodiment of the data management method of the present application;
FIG. 8 is a block diagram of an embodiment of a smart terminal according to the present application;
FIG. 9 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
Through long-term research, the inventor has found that in current mainstream flash-memory-based storage devices (such as SSD, UFS and eMMC) the FTL adopts page mapping. Compared with the conventional block mapping, a page-mapped storage device has better random write performance, but its mapping table (which stores the physical address of each logical block in the flash memory) occupies a large storage space, generally 1/1024 of the device capacity; for example, the mapping table of a 1 TB storage device is 1 GB.
A conventional SSD is generally equipped with a DRAM of corresponding size to hold the mapping table during operation, but some consumer-level SSDs and mobile storage devices (e.g., UFS, eMMC) are not equipped with a DRAM and adopt a DRAM-less firmware architecture in their software design. Specifically, most of the mapping table is stored in the flash memory of the storage device, and during operation the device loads mapping table data on demand into a mapping table cache (typically tens to hundreds of KB of SRAM, Static Random-Access Memory). For example, when a logical block address (LBA) needs to be read, the firmware first searches the mapping table cache; on a hit it directly obtains the physical address (PPA) of the logical block and then reads the flash memory area of the storage device according to that physical address to obtain the final user data. However, because the mapping table cache is usually very small, lookups miss with very high probability, and the mapping table data must first be loaded from the flash memory into the cache before the user data can be read according to the physical address. Therefore, compared with a storage device with DRAM, a DRAM-less storage device requires more flash memory accesses, so its read/write performance, especially its random read/write performance, is much worse.
A logical block is a general mechanism for describing the block where data is located on a computer storage device, and is generally used for auxiliary storage devices such as hard disks. An LBA can refer to the address of a data block or to the data block that an address points to; a logical block in today's computers is typically 512 or 1024 bytes. A physical address is the actual address of a storage unit and corresponds to the logical address. For example, the physical address of a network card is usually written into an EPROM (Erasable Programmable Read-Only Memory) on the card by its manufacturer and identifies the sending and receiving hosts when data is transmitted.
Further, in a DRAM-less storage device, updating the mapping table is also a problem. Writing user data, erasing it (e.g., trim) and garbage collection inside the device all cause the mapping to be updated. In a storage device with DRAM, updating the mapping table only requires updating it in the DRAM; but in a DRAM-less storage device the mapping table resides in the flash memory, and the corresponding mapping block cannot be loaded from the flash memory, updated and written back every time a single mapping relationship is generated, otherwise the write performance (especially the random write performance) would be very poor.
To solve the problem of mapping table update, as shown in fig. 1, which is a schematic diagram of writing mapping relationships to be updated into the mapping relationship log of the cache area of a storage device, a DRAM-less storage device often adopts a log mode: each newly generated mapping relationship (LBA, PPA) is first recorded in a mapping relationship log in SRAM, and after a certain number of entries have accumulated, the mapping table is updated in a batch.
When the mapping table is updated, as shown in fig. 2, which is a schematic diagram of a prior-art storage device updating the mapping relationships to be updated, the corresponding mapping block (the basic unit in which mapping table data is loaded and updated, generally 2 KB or 4 KB) is loaded into the cache according to the mapping relationship log, the old mapping relationships are replaced with the new ones, and the block is written back to the flash memory after the update is completed. This delayed batch update avoids frequently loading and updating the flash memory and improves the performance and lifetime of the storage device.
However, this approach has two problems. First, queries are slow: when a read command arrives, the firmware must first search the mapping relationship log to check whether the mapping relationship of the LBA to be read is in the log. Because the mapping relationships in the log are not sorted, searching a large log seriously affects read performance; if instead the log is kept sorted as mapping relationships are inserted, queries become fast (binary search can be used) but insertion becomes slow, since in the worst case inserting one mapping relationship requires moving back all previously inserted ones. Second, the mapping table is updated slowly: when a mapping block is loaded into the cache during an update, the whole log must be traversed to find all mapping relationships belonging to that block. Assuming N mapping blocks need to be updated and the log holds M mapping relationships, updating these blocks requires accessing the log N × M times. When writes are fairly random (N is large) and the log M is large, this overhead is large and random write performance is seriously affected.
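For illustration only (this sketch is not part of the patent text), the following C fragment makes the N × M cost of the log-only scheme concrete; the entry layout, the 1024-LBA mapping block size and the example counts are assumptions.

```c
/* Naive log-only update: for each of the N dirty mapping blocks the whole
 * log of M entries is scanned, so the log is touched N * M times in total. */
typedef struct {
    unsigned int lba;   /* logical block address  */
    unsigned int ppa;   /* physical page address  */
} map_entry_t;

void naive_update(const map_entry_t *log, int m,
                  const unsigned int *dirty_blocks, int n)
{
    for (int i = 0; i < n; i++) {                 /* N mapping blocks        */
        unsigned int blk = dirty_blocks[i];
        for (int j = 0; j < m; j++) {             /* full log scan: M entries */
            if ((log[j].lba >> 10) == blk) {
                /* apply log[j] to the mapping block loaded in the cache */
            }
        }
    }   /* total log accesses: N * M, e.g. 64 blocks x 4096 entries = 262144 */
}
```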
In order to improve the read-write performance of the storage device and improve the query speed when the mapping table is updated and the updating speed of the mapping table, the application provides a data management method of the storage device. The present application will be described in further detail with reference to the following drawings and examples. It is to be noted that the following examples are only illustrative of the present application, and do not limit the scope of the present application. Likewise, the following examples are only some examples and not all examples of the present application, and all other examples obtained by a person of ordinary skill in the art without any inventive work are within the scope of the present application.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Please refer to fig. 3 and fig. 4, wherein fig. 3 is a schematic flowchart of a first embodiment of a data management method of the present application, and fig. 4 is a schematic diagram of storing a mapping relationship to be updated according to the present application. Specifically, the method may include the steps of:
S11: writing the mapping relationships to be updated into the mapping relationship log of the cache area.
With the rapid development of the storage device industry, and of DRAM-less storage devices in particular, such devices are increasingly popular in the market thanks to their advantages in cost and power consumption. For a storage device, effectively reducing the number of flash memory accesses when reading a logical block and when querying and updating the corresponding mapping table, and improving the query speed and update speed of the mapping table, are key factors that determine its read/write performance and whether it can be accepted by the market. In this embodiment, the data management method covers any reasonable data processing for the storage device, such as reading, querying, writing and mapping table updating, and can be applied to any reasonable storage device, for example SSD, UFS or eMMC, and especially a DRAM-less storage device.
The flash memory area of the storage device stores a mapping table integrating all mapping relationships, so that random writes and reads of data in the storage device are handled according to the mapping table; a mapping relationship log is correspondingly constructed in the cache area to record each newly generated mapping relationship (LBA, PPA), and after a certain number of entries have accumulated in the mapping relationship log, the mapping relationships stored in the mapping table are updated in a batch.
Specifically, as shown in fig. 4, when the mapping table in the storage device needs to be updated, each mapping relationship to be updated that is generated in turn, for example a, b, c, ..., z, is first written in order into the mapping relationship log of the cache area of the storage device, so that the data of the mapping relationships to be updated accumulate in the mapping relationship log.
S12: obtaining the mapping block number of each mapping relationship to be updated in the dirty mapping block table of the cache area based on its logical address in the mapping relationship log.
It can be understood that each mapping relationship to be updated corresponds to a pair (LBA, PPA), where the LBA is the logical address of that mapping relationship, and that different mapping relationships to be updated may correspond to the same mapping block. As shown in fig. 2, a mapping relationship log usually contains several mapping relationships to be updated that correspond to the same mapping block, and mapping relationships to be updated with the same logical address have different physical addresses.
Therefore, in order to reduce the total number of accesses to the mapping relationship log when the mapping relationships are subsequently updated, all mapping relationships to be updated in the mapping relationship log can be classified according to their logical addresses, i.e., mapping relationships whose logical addresses belong to the same mapping block are grouped into one class and stored together. For example, a dirty mapping block table is constructed in the cache area of the storage device so that each class of mapping relationships to be updated is stored separately, and mapping block numbers can be set in the dirty mapping block table to establish an association with each class of mapping relationships to be updated, which facilitates subsequent classified queries.
Specifically, based on the logical address of each mapping relationship to be updated in the mapping relationship log, its mapping block number in the dirty mapping block table of the cache area is obtained through a preset function or a logical comparison calculation.
S13: sequentially writing each mapping relationship to be updated into the mapping relationship linked list of its corresponding mapping block number.
It can be understood that the dirty mapping block table is built on top of the mapping relationship log to record which mapping blocks have generated new mapping relationships and to associate the different mapping relationships to be updated that belong to one class. The mapping relationships in the mapping relationship log that belong to the same mapping block number are linked through a linked list, so each newly generated mapping relationship to be updated can be written in turn into the mapping relationship linked list of its corresponding mapping block number in the dirty mapping block table.
A linked list is a storage structure that is neither contiguous nor sequential on the physical storage unit; the logical order of its data elements is realized through the pointer links in the list. A linked list consists of a series of nodes (each element in the list is called a node), which can be generated dynamically at run time. Each node has two parts: a data field that stores the data element, and a pointer field that stores the address of the next node. A linked list overcomes the drawback of an array, whose size must be known in advance, and makes full use of the computer's memory for flexible dynamic memory management. Unlike an array, a linked list allows nodes to be inserted and removed anywhere in the list, but it does not allow random access. There are several kinds of linked list, such as singly linked, doubly linked and circular lists, and they can be implemented in many programming languages.
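As a minimal data-structure sketch of the scheme described above (not taken from the patent text): the field names, sizes and the singly linked list below are assumptions for illustration only.

```c
/* Illustrative layout of the mapping relationship log entries and the dirty
 * mapping block table (hash part + extension part); all sizes are assumed. */
#define HASH_LEN   64          /* M: length of the hash part (assumed)       */
#define EXT_LEN    16          /* N: length of the extension part (assumed)  */
#define SLOT_EMPTY 0xFFFFFFFFu /* marker for an unused slot (assumed)        */

typedef struct map_node {      /* one mapping relationship to be updated     */
    unsigned int lba;          /* logical block address                      */
    unsigned int ppa;          /* physical page address                      */
    struct map_node *next;     /* next entry belonging to the same block     */
} map_node_t;

typedef struct {               /* one slot of the dirty mapping block table  */
    unsigned int block_no;     /* mapping block number, SLOT_EMPTY if unused */
    map_node_t  *head;         /* linked list of this block's log entries    */
} dirty_slot_t;

typedef struct {
    dirty_slot_t hash[HASH_LEN]; /* addressed by the index X = Y % M         */
    dirty_slot_t ext[EXT_LEN];   /* searched linearly on an index collision  */
} dirty_block_table_t;
```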
For ease of understanding, in one embodiment, as shown in fig. 1 and fig. 4, take the newly generated mapping relationships to be updated to be, in order, a (LBA1, PPA1), b (LBA2, PPA2), c (LBA3, PPA3), ..., l (LBA12, PPA12). Each newly generated mapping relationship to be updated (assumed to be mapping relationship b in fig. 4) is first appended to the mapping relationship log. Then, according to LBA2 of mapping relationship b, the mapping block it belongs to in the dirty mapping block table is calculated to be Y, and mapping relationship b is added to the linked list of mapping block number Y.
As mapping relationships continue to be added, when mapping relationship g (LBA7, PPA7) is added, it is likewise appended to the mapping relationship log first. According to LBA7, the mapping block it belongs to is calculated to be Y, so mapping relationship g is also added to the mapping relationship linked list of mapping block number Y.
By analogy, mapping relationship i is also added to the mapping relationship linked list of mapping block number Y; in the same way, mapping relationships a, c, e and j are added in turn to the mapping relationship linked list of another mapping block number, and mapping relationships d, f, h, k and l are added in turn to the mapping relationship linked list of yet another mapping block number, which is not repeated here.
In the above scheme, the mapping relationship linked lists are established in the dirty mapping block table of the cache area according to the mapping block number of each mapping relationship to be updated, so that the mapping relationships to be updated in the mapping relationship log are further classified and stored according to their logical addresses. A mapping relationship to be updated can therefore be found by class from the dirty mapping block table more quickly, the number of accesses to the mapping relationship log is effectively reduced, and hence the number of flash memory accesses when obtaining user data from the flash memory of the storage device is effectively reduced; the cache is also traversed fewer times when the mapping table is updated, so the storage device queries and updates data faster and its random write performance is higher.
Further, in an embodiment, after S11 and before S12 (or before S11), the method may further include: constructing, in the cache area, a dirty mapping block table of fixed length that comprises a hash table and an extension table.
It can be understood that, before calculating the mapping block number of each mapping relationship to be updated in the dirty mapping block table, a dirty mapping block table of a set length needs to be constructed in the cache area. The set length of the dirty mapping block table may be determined according to the number of mapping block numbers that the mapping relationships to be updated would normally generate in the actual scenario, which is not limited in the present application.
In order to facilitate subsequent indexing of the mapping relationships to be updated in the dirty mapping block table, the dirty mapping block table is addressed in a hash manner; specifically, it comprises a hash table and an extension table that does not use hashing.
A hash table is a data structure that is accessed directly according to a key value: it maps the key value to a location in the table in order to speed up lookups. The mapping function is called a hash function, and the array storing the records is called a hash table.
For example, given a table M and a function f(key), if substituting any given key value key into the function yields the address of the record containing that key, then the table M is called a hash table and the function f(key) is the hash function.
Referring to fig. 5, fig. 5 is a flowchart of a second embodiment of the data management method of the present application. The data management method of this embodiment is a more detailed embodiment of the data management method in fig. 3 and includes the following steps:
S21: writing the mapping relationships to be updated into the mapping relationship log of the cache area.
S22: obtaining the mapping block number of each mapping relationship to be updated in the dirty mapping block table of the cache area based on its logical address in the mapping relationship log.
S21 and S22 are the same as S11 and S12 in fig. 3, and please refer to S11 and S12 and the related text description thereof, which are not repeated herein.
S23: obtaining the index number of each mapping relationship to be updated in the dirty mapping block table based on its mapping block number and the length of the hash table.
It can be understood that, to facilitate indexing the mapping relationships to be updated in the dirty mapping block table, after the dirty mapping block table containing the hash table has been established in the cache area, the mapping block number of each mapping relationship to be updated in the dirty mapping block table can be obtained based on its logical address in the mapping relationship log. After the mapping block number has been obtained, the index number of each mapping relationship to be updated in the dirty mapping block table can be obtained from the mapping block number and the length of the hash table, so that when a mapping relationship is queried later, the corresponding mapping block number and mapping relationship linked list can be found from the index number.
S24: sequentially writing each mapping relationship to be updated into the mapping relationship linked list of its corresponding mapping block number, wherein, when the index number of the mapping relationship currently to be written is the same as that of a previously written mapping relationship but their mapping block numbers differ, the mapping block number of the mapping relationship currently to be written is written into the extension table.
It can be understood that, to avoid mapping relationships to be updated with different LBAs being mapped to the same index, the dirty mapping block table further includes, in addition to the hash table, an extension table for storing mapping relationships whose index is the same but which belong to different mapping block numbers. The extension table does not use hashing; insertion and query are performed directly.
Specifically, each mapping relationship to be updated in the mapping relationship log is written in turn into the mapping relationship linked list of its corresponding mapping block number; whenever the mapping relationship currently to be written has the same index number as a previously written mapping relationship but a different mapping block number, its mapping block number is written directly into the extension table, which avoids ambiguity when a mapping relationship is queried later.
Optionally, as shown in fig. 4, in order to keep the number of mapping relationship linked lists attached to the hash table reasonable, the length M of the hash table is a positive integer multiple of four times the length N of the extension table, for example 4 times, 8 times or 16 times, which is not limited in the present application.
S25: writing the mapping relationship currently to be written into the mapping relationship linked list of its corresponding mapping block number in the extension table.
Further, after the mapping block number of the mapping relationship currently to be written has been written into the extension table, the mapping relationship itself is written into the mapping relationship linked list of that mapping block number in the extension table.
Further, in an embodiment, S22 may specifically include: calculating the mapping block number of each mapping relationship to be updated in the dirty mapping block table using the preset function Y = LBA >> 10.
Here LBA denotes the logical address of a mapping relationship to be updated in the mapping relationship log, and Y is the mapping block number it belongs to; Y = LBA >> 10 means Y = LBA / 1024.
Further, in an embodiment, S23 may specifically include: calculating the index number of each mapping relationship to be updated in the dirty mapping block table using the preset function X = Y % M.
Here Y is the mapping block number calculated above, M is the length of the hash table, and X is the index number of the mapping relationship to be updated in the dirty mapping block table; X = Y % M means that X is the remainder of Y divided by M.
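Written out in C, the two preset functions of this embodiment look as follows; the shift amount (10, i.e. 1024 LBAs per mapping block) and the hash length M follow the text above, while the function names are illustrative assumptions.

```c
/* Y = LBA >> 10: mapping block number of a logical block address. */
static inline unsigned int map_block_no(unsigned int lba)
{
    return lba >> 10;              /* equivalent to Y = LBA / 1024 */
}

/* X = Y % M: index of a mapping block number in the hash part. */
static inline unsigned int hash_index(unsigned int block_no, unsigned int m)
{
    return block_no % m;
}
```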
Similarly, for ease of understanding, in an embodiment, as shown in fig. 1 and fig. 4, take the newly generated mapping relationships to be updated to be, in order, a (LBA1, PPA1), b (LBA2, PPA2), c (LBA3, PPA3), ..., z (LBA26, PPA26). Each newly generated mapping relationship to be updated (assumed to be mapping relationship b in fig. 4) is first appended to the mapping relationship log. Then, according to the LBA of mapping relationship b, the mapping block number it belongs to in the dirty mapping block table is calculated to be Y, and its index in the dirty mapping block table is calculated to be X. The corresponding position is found to be empty, so it occupies position X of the dirty mapping block table and records the mapping block number Y, and mapping relationship b is added to the mapping relationship linked list of mapping block number Y.
As mapping relationships continue to be added, when mapping relationship g (LBA7, PPA7) is added, it is likewise appended to the mapping relationship log first. According to LBA7, the mapping block number it belongs to is calculated to be Y and its index in the dirty mapping block table to be X; that position is found to be already allocated and occupied by the same mapping block number Y, so mapping relationship g is added to the mapping relationship linked list of mapping block number Y.
If a mapping relationship z is then added to the mapping relationship log and, according to LBA26, its index in the dirty mapping block table is also calculated to be X while it belongs to mapping block number Z, then, since that position is already occupied by mapping block number Y, a position is found for it in the extension table of the dirty mapping block table.
By analogy, mapping relationship i is also added to the mapping relationship linked list of mapping block number Y; in the same way, mapping relationships a, c, e and j are added in turn to the mapping relationship linked list of another mapping block number, and mapping relationships d, f, h, k and l are added in turn to the mapping relationship linked list of yet another mapping block number, which is not repeated here.
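A C sketch of the insertion just walked through is given below; it continues the assumed structures and helper functions of the earlier sketches and is illustrative only, not the patent's implementation.

```c
/* Hook one log entry into the linked list of its mapping block,
 * falling back to the extension table on an index collision. */
int dirty_table_insert(dirty_block_table_t *t, map_node_t *node)
{
    unsigned int y = map_block_no(node->lba);
    unsigned int x = hash_index(y, HASH_LEN);
    dirty_slot_t *slot = &t->hash[x];

    if (slot->block_no == SLOT_EMPTY) {        /* slot free: claim it for Y  */
        slot->block_no = y;
    } else if (slot->block_no != y) {          /* index collision: extension */
        dirty_slot_t *ext_slot = NULL;
        int free_idx = -1;
        for (int i = 0; i < EXT_LEN; i++) {
            if (t->ext[i].block_no == y) { ext_slot = &t->ext[i]; break; }
            if (free_idx < 0 && t->ext[i].block_no == SLOT_EMPTY)
                free_idx = i;
        }
        if (ext_slot == NULL && free_idx >= 0) {
            ext_slot = &t->ext[free_idx];      /* first unused extension slot */
            ext_slot->block_no = y;
        }
        if (ext_slot == NULL)
            return -1;                         /* extension table is full    */
        slot = ext_slot;
    }

    node->next = NULL;                         /* append: keep arrival order */
    if (slot->head == NULL) {
        slot->head = node;
    } else {
        map_node_t *p = slot->head;
        while (p->next != NULL)
            p = p->next;
        p->next = node;
    }
    return 0;
}
```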
Referring to fig. 6, fig. 6 is a flowchart of a third embodiment of the data management method of the present application. The data management method of this embodiment is a more detailed embodiment of the data management method in fig. 5 and includes the following steps:
S31: writing the mapping relationships to be updated into the mapping relationship log of the cache area.
S32: obtaining the mapping block number of each mapping relationship to be updated in the dirty mapping block table of the cache area based on its logical address in the mapping relationship log.
S33: obtaining the index number of each mapping relationship to be updated in the dirty mapping block table based on its mapping block number and the length of the hash table.
S34: sequentially writing each mapping relationship to be updated into the mapping relationship linked list of its corresponding mapping block number, wherein, when the index number of the mapping relationship currently to be written is the same as that of a previously written mapping relationship but their mapping block numbers differ, the mapping block number of the mapping relationship currently to be written is written into the extension table.
S35: writing the mapping relationship currently to be written into the mapping relationship linked list of its corresponding mapping block number in the extension table.
S31, S32, S33, S34, and S35 are the same as S21, S22, S23, S24, and S25 in fig. 5, and for details, please refer to S21, S22, S23, S24, and S25 and the related text description thereof, which is not repeated herein.
S36: acquiring a logical address to be read.
Understandably, when the storage device reads a logical address through the dirty mapping block table, it first obtains the corresponding logical address to be read.
S37: obtaining the read mapping block number and the read index number corresponding to the logical address to be read in the dirty mapping block table.
Further, based on the logical address to be read, for example using the functions Y = LBA >> 10 and X = Y % M given above, the read mapping block number and the read index number corresponding to it in the dirty mapping block table are calculated in turn.
S38: judging whether the storage location corresponding to the read index number in the dirty mapping block table is empty.
It can be understood that each index number corresponds to a unique storage location in the dirty mapping block table, and that storage location is used to hold a mapping block number for which the corresponding mapping relationship linked list is established.
After the read mapping block number and the read index number corresponding to the logical address to be read in the dirty mapping block table have been obtained, it is further judged whether the storage location corresponding to the read index number in the dirty mapping block table is empty, i.e., whether a mapping relationship to be updated whose index number matches the read index number has previously been stored in the mapping relationship linked list of the mapping block number at that storage location.
S39 is performed if the storage location corresponding to the read index number in the dirty mapping block table is empty, and S310 is performed if it is not empty.
S39: determining that the mapping relationship log contains no mapping relationship to be updated for the logical address to be read, and ending the current read operation.
It can be understood that, since every mapping relationship to be updated in the mapping relationship log has already been stored, classified by LBA, in the mapping relationship linked list of its mapping block number in the dirty mapping block table, an empty storage location at the read index number means that the mapping relationship log contains no mapping relationship to be updated for the logical address to be read, and the current read operation can end directly. In other embodiments, after it is determined that the mapping relationship log contains no mapping relationship to be updated for the logical address to be read, the mapping table in the flash memory may be loaded into the cache to query the mapping table further.
S310: judging whether the read mapping block number is consistent with the mapping block number currently stored at the storage location.
When the storage location corresponding to the read index number in the dirty mapping block table is determined not to be empty, it indicates that a mapping block number and its mapping relationship linked list are stored at that location. It is then further judged whether the read mapping block number is consistent with the mapping block number currently stored there.
If the read mapping block number is consistent with the mapping block number currently stored at the storage location, S311 is executed; if not, S312 is executed.
S311: searching the mapping relationship linked list of the read mapping block number for a mapping relationship to be updated corresponding to the logical address to be read.
It can be understood that each mapping block number in the dirty mapping block table corresponds to a unique mapping relationship linked list, and the logical addresses of the mapping relationships to be updated held in that linked list all belong to the same mapping block. Therefore, once the read mapping block number has been determined from the logical address to be read, only the mapping relationship linked list of that mapping block number may contain a mapping relationship to be updated matching the logical address to be read, and it suffices to search that linked list directly, without addressing the linked lists of other mapping block numbers. In other words, after the logical address to be read is obtained and the read mapping block number is calculated from it, only one of the mapping relationship linked lists established in advance by mapping block number needs to be queried, which effectively reduces the number of accesses to the dirty mapping block table and the mapping relationship log.
S312: searching the extension table for a mapping block number consistent with the read mapping block number.
When the read mapping block number is not consistent with the mapping block number currently stored at the storage location, the extension table can be further searched for a mapping block number consistent with the read mapping block number; if such a mapping block number exists in the extension table, its mapping relationship linked list in the extension table is searched for a mapping relationship corresponding to the logical address to be read.
It can be understood that, when neither the mapping relationship linked list of the read mapping block number nor the corresponding linked list in the extension table contains a mapping relationship to be updated for the logical address to be read, it can be determined that the mapping relationship log contains no mapping relationship to be updated consistent with the logical address to be read; the current read operation can then end directly, or the mapping table in the flash memory can be loaded into the cache to query the mapping table further.
In one embodiment, for ease of understanding, suppose the logical address to be read is LBA. First, according to the LBA, the mapping block number it belongs to is calculated as Y = LBA >> 10, and the index number of the dirty mapping block table is calculated as X = Y % M.
If position X of the dirty mapping block table is empty, the mapping relationship of this LBA does not exist in the mapping relationship log and no further operation is needed. If storage position X of the dirty mapping block table is not empty and the mapping block number recorded there is Y, the mapping relationship linked list of mapping block number Y is traversed to search for the mapping relationship of this LBA.
If the mapping block number recorded at that position is not Y, the extension table is searched for a record with mapping block number Y; if one exists, the linked list of that mapping block number is traversed to search for the mapping relationship of this LBA; if not, the mapping relationship of this LBA does not exist in the mapping relationship log.
Therefore, by setting up the dirty mapping block table and establishing its mapping relationship linked lists, a query no longer needs to search the whole mapping relationship log, so queries become simpler and faster, and the read/write performance of the storage device, especially a DRAM-less storage device, can be improved.
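The read-path lookup just described can be sketched in C as follows, continuing the assumed structures and helpers of the earlier sketches (again an illustration, not the patent's implementation).

```c
/* Returns 1 and fills *ppa if the LBA has a pending mapping in the log;
 * returns 0 if the log holds nothing for it (the flash mapping table is
 * consulted instead). */
int dirty_table_lookup(const dirty_block_table_t *t, unsigned int lba,
                       unsigned int *ppa)
{
    unsigned int y = map_block_no(lba);
    unsigned int x = hash_index(y, HASH_LEN);
    const dirty_slot_t *slot = &t->hash[x];
    int found = 0;

    if (slot->block_no == SLOT_EMPTY)          /* S39: nothing pending       */
        return 0;

    if (slot->block_no != y) {                 /* S312: look in extension    */
        slot = NULL;
        for (int i = 0; i < EXT_LEN; i++) {
            if (t->ext[i].block_no == y) { slot = &t->ext[i]; break; }
        }
        if (slot == NULL)
            return 0;                          /* not in the log at all      */
    }

    for (const map_node_t *p = slot->head; p != NULL; p = p->next) { /* S311 */
        if (p->lba == lba) {
            *ppa = p->ppa;                     /* later entries override     */
            found = 1;
        }
    }
    return found;
}
```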
Referring to fig. 7, fig. 7 is a flowchart of a fourth embodiment of the data management method of the present application. The data management method of this embodiment is a more detailed embodiment of the data management method in fig. 3 and includes the following steps:
S41: writing the mapping relationships to be updated into the mapping relationship log of the cache area.
S42: obtaining the mapping block number of each mapping relationship to be updated in the dirty mapping block table of the cache area based on its logical address in the mapping relationship log.
S43: sequentially writing each mapping relationship to be updated into the mapping relationship linked list of its corresponding mapping block number.
S41, S42, and S43 are the same as S11, S12, and S13 in fig. 3, and for details, refer to S11, S12, and S13 and their related text descriptions, which are not repeated herein.
S44: finding each mapping block to be updated in the mapping table of the flash memory area based on the mapping relationship linked list of each mapping block number, and loading each mapping block to be updated into the cache area.
It can be understood that, when the mapping relationships to be updated in the mapping relationship log need to be applied, each mapping block to be updated can be found in the mapping table of the flash memory area based on the mapping relationship linked list of each mapping block number, and only those mapping blocks are loaded into the cache area.
S45: updating each mapping block to be updated according to the mapping relationships to be updated in the mapping relationship linked list of its mapping block number, and writing the updated mapping blocks into the mapping table.
Further, after each mapping block to be updated has been loaded from the mapping table of the flash memory area into the cache area, the mapping blocks can be updated in turn according to the mapping relationships to be updated in the mapping relationship linked lists of their mapping block numbers, and the updated mapping blocks are written back into the mapping table of the flash memory area.
Compared with a scheme that has only the mapping relationship log, setting up the dirty mapping block table makes it possible to find out quickly which mapping blocks need to be updated and, for a given mapping block, to find quickly which mapping relationships need to be applied without traversing the whole mapping relationship log, which greatly increases the update speed of the mapping table.
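A C sketch of this update flow is given below, continuing the assumed structures of the earlier sketches; load_map_block()/store_map_block() stand in for the real flash accessors, which the patent does not name, so the whole fragment is illustrative only.

```c
extern void load_map_block(unsigned int block_no, unsigned int *block);        /* 1024 PPAs */
extern void store_map_block(unsigned int block_no, const unsigned int *block);

/* Flush one dirty slot: load its mapping block once, walk only its own
 * linked list (instead of the whole log), then write the block back. */
static void flush_slot(const dirty_slot_t *slot, unsigned int *block_buf)
{
    if (slot->block_no == SLOT_EMPTY || slot->head == NULL)
        return;
    load_map_block(slot->block_no, block_buf);          /* S44 */
    for (const map_node_t *p = slot->head; p != NULL; p = p->next)
        block_buf[p->lba & 0x3FF] = p->ppa;             /* S45: apply in order */
    store_map_block(slot->block_no, block_buf);
}

void dirty_table_flush(const dirty_block_table_t *t)
{
    unsigned int buf[1024];                             /* one mapping block */
    for (int i = 0; i < HASH_LEN; i++) flush_slot(&t->hash[i], buf);
    for (int i = 0; i < EXT_LEN;  i++) flush_slot(&t->ext[i],  buf);
}
```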
Referring to fig. 8, fig. 8 is a schematic diagram of a framework of an embodiment of an intelligent terminal according to the present application. The intelligent terminal 51 comprises a memory 511 and a processor 512 coupled to each other, wherein the processor 512 is configured to execute program instructions stored in the memory 511 to implement the steps of any of the embodiments of the data management method described above.
In a specific implementation scenario, the intelligent terminal 51 may include, but is not limited to: any reasonable terminal equipment including a flash memory-based storage device, such as a solid state disk, a mobile phone, a tablet computer, etc., is not limited in this application.
In particular, the processor 512 is configured to control itself and the memory 511 to implement the steps of any of the above embodiments of the data management method. The processor 512 may also be referred to as a CPU (Central Processing Unit). The processor 512 may be an integrated circuit chip with signal processing capability. The processor 512 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 512 may be implemented jointly by integrated circuit chips.
Referring to fig. 9, fig. 9 is a block diagram illustrating an embodiment of a computer readable storage medium according to the present application. The computer readable storage medium 61 stores program instructions 611 executable by the processor, the program instructions 611 being for implementing the steps of any of the data management method embodiments described above.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only one kind of logical division, and an actual implementation may adopt another division; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the couplings, direct couplings, or communication connections shown or discussed may be implemented through certain interfaces as indirect couplings or communication connections between devices or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code.

Claims (10)

1. A data management method of a storage device, the data management method comprising:
writing the mapping relation to be updated into a mapping relation log of the cache region;
obtaining mapping block numbers in a dirty mapping block table of the cache region corresponding to each mapping relation to be updated based on the logical address of each mapping relation to be updated in the mapping relation log;
and sequentially writing each mapping relation to be updated into the mapping relation linked list of the mapping block number corresponding to the mapping relation to be updated.
2. The data management method according to claim 1, wherein after writing the mapping relationship to be updated into the mapping relationship log of the cache region, before obtaining the mapping block number in the dirty mapping block table of the cache region corresponding to each mapping relationship to be updated based on the logical address of each mapping relationship to be updated in the mapping relationship log, the method further comprises:
and constructing the dirty mapping block table with a fixed length and comprising a hash table and an expansion table in the cache region.
3. The data management method according to claim 2, wherein after obtaining, based on the logical address of each mapping relationship to be updated in the mapping relationship log, a mapping block number in a dirty mapping block table of the cache region corresponding to each mapping relationship to be updated, and before writing each mapping relationship to be updated into the mapping relationship linked list of the mapping block number corresponding to each mapping relationship in sequence, the method further comprises:
obtaining the index number of each mapping relation to be updated in the dirty mapping block table based on the mapping block number and the length of the hash table;
the writing of each mapping relationship to be updated into the mapping relationship linked list of the mapping block number corresponding to the mapping relationship in sequence includes:
writing each mapping relation to be updated, in sequence, into the mapping relation linked list of the mapping block number corresponding to that mapping relation, wherein when the index number corresponding to the mapping relation to be updated that is currently to be written is consistent with the index number corresponding to a previously written mapping relation to be updated, but the corresponding mapping block numbers are inconsistent, the mapping block number corresponding to the mapping relation to be updated that is currently to be written is written into the extended table;
and writing the mapping relation to be updated that is currently to be written into the mapping relation linked list of its corresponding mapping block number in the extended table.
4. The data management method according to claim 3, wherein after writing the mapping relation to be updated that is currently to be written into the mapping relation linked list of its corresponding mapping block number in the extended table, the method further comprises:
acquiring a logical address to be read;
based on the logical address to be read, obtaining a reading mapping block number and a reading index number which correspond to the logical address to be read in the dirty mapping block table;
judging whether the storage position corresponding to the read index number in the dirty mapping block table is empty;
and if the storage position in the dirty mapping block table corresponding to the read index number is empty, determining that the mapping relation to be updated corresponding to the logical address to be read does not exist in the mapping relation log, and ending the current reading operation.
5. The data management method according to claim 4,
if the storage position corresponding to the read index number in the dirty mapping block table is not empty, judging whether the read mapping block number is consistent with the mapping block number currently stored in the storage position;
if the read mapping block number is consistent with the mapping block number currently stored in the storage position, searching whether the mapping relation to be updated corresponding to the logical address to be read exists in the mapping relation linked list corresponding to the read mapping block number.
6. The data management method according to claim 5,
if the read mapping block number is not consistent with the mapping block number currently stored in the storage position, searching whether the mapping block number consistent with the read mapping block number exists in the extended table or not;
if the mapping block number consistent with the read mapping block number exists in the extended table, searching whether the mapping relation corresponding to the logical address to be read exists in a mapping relation linked list corresponding to the read mapping block number in the extended table or not.
7. The data management method according to any one of claims 2 to 6, wherein the length of the hash table is a positive integer multiple of 4 of the length of the extended table.
8. The data management method according to claim 1, wherein after writing each mapping relationship to be updated into the mapping relationship linked list of the mapping block number corresponding to the mapping relationship in sequence, the method further comprises:
searching and obtaining each mapping block to be updated in a mapping table of a flash memory area based on a mapping relation linked list of each mapping block number, and loading each mapping block to be updated to the cache area;
and updating the mapping block to be updated according to each mapping relation to be updated in the mapping relation linked list of the mapping block number corresponding to each mapping block to be updated, and writing the updated mapping block to be updated into the mapping table.
9. An intelligent terminal, characterized in that the intelligent terminal comprises a memory and a processor coupled to each other;
the memory stores program data;
the processor is configured to execute the program data to implement the data management method of any one of claims 1-8.
10. A computer readable storage medium having stored thereon program instructions, which when executed by a processor implement the data management method of any of claims 1-8.
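Purely as an illustrative reading of claims 3 to 6, and reusing the structures from the sketch above (the modulo hash, addr_to_block_no, and find_in_list are assumptions; the claims only state that the index number is obtained from the mapping block number and the length of the hash table), the read-side check against the dirty mapping block table could look as follows:

/* Sketch of the read path described in claims 4-6; all names and the modulo hash are assumptions. */
extern uint32_t addr_to_block_no(uint32_t lpn);               /* logical address -> mapping block number */
extern const map_relation_t *find_in_list(const map_relation_t *head, uint32_t lpn);

const map_relation_t *lookup_pending_update(const dirty_map_block_table_t *t, uint32_t lpn)
{
    uint32_t block_no = addr_to_block_no(lpn);                /* read mapping block number               */
    uint32_t index    = block_no % HASH_LEN;                  /* read index number (assumed hash)        */

    if (t->hash[index].block_no < 0)                          /* empty slot: no pending update (claim 4) */
        return NULL;

    if ((uint32_t)t->hash[index].block_no == block_no)        /* slot holds this very block (claim 5)    */
        return find_in_list(t->hash[index].head, lpn);

    for (size_t i = 0; i < EXT_LEN; i++)                      /* otherwise search the extension table (claim 6) */
        if (t->ext[i].block_no >= 0 && (uint32_t)t->ext[i].block_no == block_no)
            return find_in_list(t->ext[i].head, lpn);

    return NULL;                                              /* not recorded in the dirty mapping block table  */
}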
CN202111056765.3A 2021-09-09 2021-09-09 Data management method, intelligent terminal and computer readable storage medium Pending CN115793954A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111056765.3A CN115793954A (en) 2021-09-09 2021-09-09 Data management method, intelligent terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111056765.3A CN115793954A (en) 2021-09-09 2021-09-09 Data management method, intelligent terminal and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115793954A true CN115793954A (en) 2023-03-14

Family

ID=85473502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111056765.3A Pending CN115793954A (en) 2021-09-09 2021-09-09 Data management method, intelligent terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115793954A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination