KR20140042520A - Bitmap used segment cleaning apparatus and storage device storing the bitmap - Google Patents

Info

Publication number
KR20140042520A
Authority
KR
South Korea
Prior art keywords
block
segment
area
bitmap
live
Prior art date
Application number
KR1020120109376A
Other languages
Korean (ko)
Inventor
Jaegeuk Kim (김재극)
Changman Lee (이창만)
Chul Lee (이철)
Jooyoung Hwang (황주영)
Original Assignee
Samsung Electronics Co., Ltd. (삼성전자주식회사)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. (삼성전자주식회사)
Priority to KR1020120109376A
Publication of KR20140042520A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/064Management of blocks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0652Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Provided are a segment cleaning method using a bitmap and a host device in which the method is applied. According to the present invention, the host device comprises: an interface which relays data communication with a storage device; and a file system module which identifies a live block included in a victim segment among a plurality of segments stored in the storage device by referring to a bitmap stored in the storage device, writes back the identified live block through the interface, and performs segment cleaning by recovering the victim segment into a free area. The bitmap is composed of bits indicating if a corresponding block is live or not, and one segment is composed of a plurality of blocks.

Description

Segment cleaning apparatus using a bitmap and a storage device storing the bitmap {Bitmap used segment cleaning apparatus and storage device storing the bitmap}

The present invention relates to a segment cleaning apparatus using a bitmap and a storage device storing the bitmap. More specifically, the present invention relates to a host device which performs segment cleaning applicable to a file system based on a log structured file system, a segment cleaning method thereof, and a storage device storing a bitmap used to quickly determine whether each block included in a segment is a live block.

Log structured file systems were proposed for server storage systems that use hard disk drives. Because hard disk drives use rotating motors, they incur seek latency and rotational latency. A log structured file system therefore organizes the entire disk as one log and performs only sequential writes: when a file is modified, the modified data is appended to the end of the log instead of being modified at its original location.

As a result, the log continuously grows in one direction, and eventually the modification and addition of data lead to a situation in which the log can no longer be extended. At this point, segment cleaning is required to reclaim the space occupied by invalid data in the log as a free area in which data can be stored. Invalid data means data that has been deleted or updated and is no longer valid. A live block containing valid data within a segment to be returned to the free area during segment cleaning is written back to the end of the log, to prevent data loss due to the segment cleaning.

Accordingly, for every segment to be returned to the free area during segment cleaning, a process of identifying the live blocks among the blocks included in the segment is required. However, identifying a live block conventionally requires many operations, such as checking the parent node of each block to determine whether the parent node still points to the block. Therefore, in order to perform segment cleaning quickly, it is desirable to provide a method capable of quickly determining whether a block is a live block.

SUMMARY OF THE INVENTION

An object of the present invention is to provide a host device which performs segment cleaning and quickly identifies the live blocks in a segment to be returned to a free area, thereby increasing the processing speed of the segment cleaning.

Another object of the present invention is to provide a storage device which stores a bitmap used to quickly identify live blocks in a segment to be returned to a free area during the segment cleaning process.

The problems to be solved by the present invention are not limited to those mentioned above, and other problems not mentioned will be clearly understood by those skilled in the art from the following description.

In accordance with an aspect of the present invention, a host device includes an interface for relaying data transmission and reception with a storage device, and a file system module which identifies a live block included in a victim segment among a plurality of segments stored in the storage device by referring to a bitmap stored in the storage device, writes back the identified live block through the interface, and performs segment cleaning by returning the victim segment to a free area. Here, the bitmap is composed of bits indicating whether each corresponding block is live, and one segment is composed of a plurality of blocks.

According to an embodiment, the bits constituting the bitmap and the blocks may correspond one-to-one.

According to an embodiment, the file system module may update the bitmap when the segment cleaning is performed: the bit corresponding to the old location of the identified live block in the victim segment may be set to a value representing an invalid block, and the bit corresponding to the location to which the live block is written back may be set to a value representing a live block.

According to an embodiment, the file system module may divide the storage area of the storage device into a first area written in a random access method and a second area in which the plurality of segments are written in a sequential access method, and may perform the segment cleaning such that the live block is written back to a free area located at the end of the log area.

According to an embodiment, the file system module may divide the storage area of the storage device into a first area written in a random access method and a second area in which the plurality of segments are written in a sequential access method. The second area is further divided into a log area in which the plurality of segments are stored and a free area in which segments can be stored, the first area stores metadata about the data stored in the second area, and the metadata may include the bitmap.

The file system module may write a first block to a free area located at the end of the log area and set the bit corresponding to the first block in the bitmap to a value representing a live block. The file system module may also delete or update the data of a second block included in the log area and set the bit corresponding to the second block in the bitmap to a value representing an invalid block.

The host device may further include a write-back cache used to write data to the storage device, and a cache management module managing the write-back cache, and the file system module may load the bitmap into the write-back cache. The file system module may request the cache management module to set a dirty flag for the bitmap at predetermined intervals, and may request setting of a dirty flag for the bitmap upon power-off of the host device.

According to an embodiment of the present disclosure, the file system module may select the victim segment from among the plurality of segments with reference to the bitmap. For example, the victim segment may be selected by referring to the ratio of live blocks in each segment obtained from the bitmap, or by referring to the number of live blocks in each segment obtained from the bitmap.

According to another aspect of the present invention, there is provided a storage device including a controller, which receives a control signal from a host device and controls a nonvolatile memory device, and a nonvolatile memory device, which reads/writes data under the control of the controller and whose storage area is divided into a first area written in a random access method and a second area in which a plurality of segments, each composed of a plurality of blocks, are written in a sequential access method. In this case, the second area includes a log area in which the plurality of segments are stored and a free area in which segments can be stored, and the first area stores metadata about the data stored in the second area. The metadata includes a bitmap indicating whether each corresponding block is live, and the bitmap may be data that is referred to in order to identify live blocks in a victim segment during a segment cleaning process.

According to an embodiment, the bits constituting the bitmap and the blocks may correspond one-to-one.

According to one embodiment, the controller is provided with an internal buffer used for the random access, and the storage device may be a solid state drive (SSD).

According to one embodiment, during the segment cleaning process, the controller may write back a clean segment composed of the live blocks to a free area located at the end of the log area under the control of the host device, and update the bitmap such that, among the bits included in the bitmap, the bit corresponding to a live block position in the victim segment is set to a value representing an invalid block and the bit corresponding to a live block position in the clean segment is set to a value representing a live block.

Other specific details of the invention are included in the detailed description and drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating a computing system according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating the host device of FIG. 1.
FIG. 3 is a logical module hierarchy diagram for describing the host device of FIG. 1.
FIG. 4 is a diagram for describing the configuration of the storage area of the storage device of FIG. 1.
FIG. 5 is a diagram for describing the relationship between the structural units of data stored in the storage device of FIG. 1 and a bitmap according to the present invention.
FIG. 6 is a diagram for describing the structure of a file stored in the storage device of FIG. 1.
FIGS. 7A to 7D are diagrams for describing the arrangement of blocks and bitmaps stored in the storage device of FIG. 1.
FIG. 8 is a diagram for explaining a node address table.
FIGS. 9A and 9B are conceptual views illustrating a process of updating data stored in the storage device of FIG. 1 and updating a bitmap accordingly.
FIGS. 10A to 10E are conceptual views illustrating a process of performing segment cleaning on data stored in the storage device of FIG. 1 and updating a bitmap accordingly.
FIGS. 11 to 13 are block diagrams illustrating other specific examples of computing systems according to some embodiments of the present disclosure.
FIG. 14 is a flowchart illustrating a segment cleaning method using a bitmap according to an embodiment of the present invention.

The advantages and features of the present invention and the manner of achieving them will become apparent with reference to the embodiments described in detail below together with the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art, and the invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout the specification.

When one element is referred to as being "connected to" or "coupled to" another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when one element is referred to as being "directly connected to" or "directly coupled to" another element, no intervening elements are present.

For example, when one component "transmits," "provides," "sends," or "outputs" data or a signal to another component, the component may transmit, provide, send, or output the data or signal to the other component directly, or may do so through at least one intervening component.

"And / or" include each and every combination of one or more of the mentioned items.

Although the terms first, second, etc. are used to describe various elements, components, and/or sections, these elements, components, and/or sections are not limited by these terms. These terms are only used to distinguish one element, component, or section from another. Therefore, a first element, component, or section mentioned below could be termed a second element, component, or section within the technical spirit of the present invention.

The terminology used herein is for the purpose of describing embodiments and is not intended to limit the present invention. In this specification, singular forms include plural forms unless the context clearly indicates otherwise. The terms "comprises" and/or "comprising," as used in this specification, specify the presence of stated components, steps, and/or operations, but do not preclude the presence or addition of one or more other components, steps, and/or operations.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the meaning commonly understood by one of ordinary skill in the art to which this invention belongs. Commonly used terms defined in dictionaries are not to be interpreted ideally or excessively unless explicitly defined otherwise.

Referring to FIG. 1, the computing system 1 according to an embodiment of the present invention includes a host device 10 and a storage device 20.

The host device 10 may be a computer, an ultra mobile PC (UMPC), a workstation, a netbook, a personal digital assistant (PDA), a portable computer, a web tablet, a wireless phone, a mobile phone, a smart phone, an e-book, a portable multimedia player (PMP), a portable game console, a navigation device, a black box, a digital camera, a 3-dimensional television, a digital audio recorder, a digital audio player, a digital picture recorder, a digital picture player, a digital video recorder, a digital video player, a device capable of transmitting and receiving information in a wireless environment, one of various electronic devices constituting a home network, one of various electronic devices constituting a computer network, one of various electronic devices constituting a telematics network, an RFID device, or one of various components constituting a computing system.

The host device 10 and the storage device 20 exchange data with each other using a specific protocol. For example, the host device 10 and the storage device 20 may communicate according to at least one of various interface protocols such as the universal serial bus (USB) protocol, the multimedia card (MMC) protocol, the peripheral component interconnection (PCI) protocol, the PCI-express (PCI-E) protocol, the advanced technology attachment (ATA) protocol, the serial-ATA protocol, the parallel-ATA protocol, the small computer system interface (SCSI) protocol, the enhanced small disk interface (ESDI) protocol, and the integrated drive electronics (IDE) protocol, but are not limited thereto.

The host device 10 controls the storage device 20. For example, the host device 10 may write data to the storage device 20 or read data from the storage device 20.

The storage device 20 may be, but is not limited to, a solid state drive (SSD), a hard disk drive (HDD), an eMMC, various types of card storage, a data server, and the like.

FIG. 2 is a block diagram of the host device 10 shown in FIG. 1.

As illustrated in FIG. 2, the host device 10 may include a write-back cache (WB CACHE) 104 used to write data to the storage device 20, a cache management module 102 for managing the write-back cache 104, a file system module 103, and an interface 105 for relaying data transmission and reception with the storage device 20.

Each component illustrated in FIG. 2 may refer to software, or to hardware such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). However, the components are not limited to software or hardware; each may be configured to reside in an addressable storage medium and to execute on one or more processors. The functions provided by the components may be implemented by further subdivided components, or by a single component that combines a plurality of components to perform a specific function.

The interface 105 supports the protocol used for data transmission and reception between the host device 10 and the storage device 20, and may include a connector for data cable connection and logic for processing data transmission and reception.

The file system module 103 identifies a live block included in a victim segment among the plurality of segments stored in the storage device 20 by referring to a bitmap stored in the storage device 20, writes back the identified live block through the interface, and performs segment cleaning by returning the victim segment to the free area. The operation of the file system module 103 will be described in more detail below.

The bitmap is stored in the storage device 20 and updated by the file system module 103. The bitmap may be stored in an area in which metadata about data stored in the storage device 20 is stored. Bits constituting the bitmap and the block may correspond one-to-one. The bitmap will be described in more detail later.

The file system module 103 selects a victim segment among the plurality of segments stored in the storage device 20. The victim segment is a segment to be returned to the free area. A victim segment includes not only live blocks holding valid data but also invalid blocks that are no longer valid due to deletion, update, or the like. The live blocks included in the victim segment may be reorganized into a new clean segment and written back to the storage device 20.

The file system module 103 may identify the live blocks in each segment by referring to the bitmap when selecting the victim segment, and select the victim segment based on the result, as sketched below. For example, the file system module 103 may select the victim segment based on whether it includes live blocks at no more than a predetermined ratio, or no more than a predetermined number of live blocks. However, the present invention is not limited in the manner of selecting the victim segment.
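
For illustration only, the following C sketch shows victim selection by live-block count, on the assumption that the bitmap is held in memory as one bit per block and that a segment contains 512 blocks; the names (sit_bitmap, BLKS_PER_SEG, VICTIM_THRESHOLD) are hypothetical, not symbols of an actual implementation.

    #include <stdint.h>

    #define BLKS_PER_SEG     512   /* blocks per segment (see FIG. 5)    */
    #define VICTIM_THRESHOLD 128   /* illustrative: few live blocks left */

    extern uint8_t sit_bitmap[];   /* one bit per block: 1 = live        */

    /* Count the live blocks of a segment by scanning its bitmap bits. */
    static uint32_t live_blocks_in(uint32_t segno)
    {
        uint32_t live = 0;
        for (uint32_t b = 0; b < BLKS_PER_SEG; b++) {
            uint32_t blk = segno * BLKS_PER_SEG + b;
            live += (sit_bitmap[blk / 8] >> (blk % 8)) & 1;
        }
        return live;
    }

    /* A segment qualifies as a victim when it retains few live blocks,
     * so cleaning it returns almost a whole segment of free space.     */
    static int is_victim(uint32_t segno)
    {
        return live_blocks_in(segno) <= VICTIM_THRESHOLD;
    }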

The file system module 103 identifies the live blocks included in the victim segment. Whether each block included in the victim segment is live or invalid may be determined by querying the bitmap stored in the storage device 20.

According to one embodiment, the file system module 103 does not write back the clean segment directly; rather, by loading the live blocks into the write-back cache 104 and flushing the write-back cache 104, the clean segment composed of the live blocks may be written back to the storage device 20.

Write-back through the write-back cache 104 is described in more detail below. First, the file system module 103 obtains a file identifier (FILE ID) of the file to which the identified live block belongs and a block offset indicating the location of the live block within the file, by querying the metadata stored in the storage device 20.

Next, the file system module 103 inquires whether the live block is preloaded in the write-back cache 104 using the file identifier and the block offset of the live block.

Next, after the inquiry, the file system module 103 reads the live block from the storage device 20 and loads it into the write-back cache 104 only when the live block is not already loaded in the write-back cache 104.

Next, the file system module 103 requests the cache management module 102 to set a dirty flag for the live block, so that the cache management module 102 flushes the live block loaded in the cache back to the storage device 20 through the file system module 103.

That is, the cache management module 102 may control the file system module 103 such that a clean segment including one or more live blocks loaded in the write-back cache 104 is written back to the storage device 20.

According to this embodiment, in the segment cleaning process, the file system module 103 does not always read the live block from the storage device; it reads the live block for loading into the write-back cache 104 only when the live block is not already loaded there. That is, according to the present invention, the read overhead is reduced during the segment cleaning process, thereby saving I/O bandwidth.
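
The steps above can be summarized in the following C sketch. The helpers (cache_lookup, read_block, cache_insert, set_dirty) and the block_key type are hypothetical stand-ins for the metadata query, cache inquiry, device read, and dirty-flag request described in the text, not actual interfaces.

    #include <stdint.h>
    #include <stddef.h>

    struct block_key {
        uint32_t file_id;  /* identifier of the file owning the live block */
        uint32_t offset;   /* block offset of the live block in that file  */
    };

    struct cache_entry;                                   /* opaque entry  */
    extern struct cache_entry *cache_lookup(uint32_t id, uint32_t off);
    extern struct cache_entry *cache_insert(uint32_t id, uint32_t off,
                                            void *data);
    extern void *read_block(uint32_t id, uint32_t off);
    extern void set_dirty(struct cache_entry *e);

    /* Load one live block into the write-back cache during cleaning. */
    void load_live_block(struct block_key key)
    {
        /* Inquire whether the block is already loaded in the cache. */
        struct cache_entry *e = cache_lookup(key.file_id, key.offset);

        /* Read from the storage device only on a miss, saving read I/O. */
        if (e == NULL)
            e = cache_insert(key.file_id, key.offset,
                             read_block(key.file_id, key.offset));

        /* Set the dirty flag so the next flush writes the block back
         * to the end of the log as part of a clean segment.           */
        set_dirty(e);
    }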

The write-back occurs when the write-back cache 104 flushes the data with the dirty flag set to the storage device 20. Therefore, the time at which a clean segment including the live block is stored in the storage device 20 may differ from the time at which the live block is loaded into the write-back cache 104. When the segment cleaning is of a background type that is performed periodically, the probability that the live block will be read immediately is not high, so this time difference may be tolerated.

As mentioned, since the victim segment has to be returned to the free area, the file system module 103 may designate the victim segment as free area after the write-back is performed.

The host device 10 will now be described in more detail with reference to FIG. 3, which is a logical module hierarchy diagram for describing the host device 10.

Referring to FIG. 3, the host device 10 includes a user space 11 and a kernel space 13.

The user space 11 is an area in which a user application 12 is executed, and the kernel space 13 is an area dedicated to kernel execution. In order to access the kernel space 13 from the user space 11, a system call provided by the kernel may be used.

The kernel space 13 includes a virtual file system 14 that connects I/O system calls from the user space 11 to the appropriate file system 16, a memory management module 15 that manages the memory of the host device 10, one or more file systems 16, device drivers 18 that provide hardware control calls for controlling the storage device 20, and the like. For example, the file systems 16 may be ext2, ntfs, smbfs, proc, and the like. Further, one of the file systems 16 may be the log structured file system based F2FS file system according to the present invention. The F2FS file system will be described later with reference to FIGS. 4 through 9B.

The virtual file system 14 allows one or more file systems 16 to interact with each other. In order to read / write to different file systems 16 on different media, standardized system calls can be used. For example, system calls such as open (), read (), write () can be used regardless of the type of file system 16. In other words, the virtual file system 14 is an abstraction layer existing between the user space 11 and the file system 16.

The device driver 18 is responsible for the interface between the hardware and the user application (or operating system). The device driver 18 is a program necessary for the hardware to operate normally under a specific operating system. The device driver 18 may control the interface 105.

The file system module 103 shown in FIG. 2 may operate as the F2FS file system. In addition, the cache management module 102 illustrated in FIG. 2 may be a sub-module included in the virtual file system 14 or the memory management module 15 illustrated in FIG. 3.

Hereinafter, how the F2FS file system controls the storage device 20 will be described with reference to FIGS. 4 through 9B.

The storage device 20 has storage means. The storage area of the storage means may include a first area 30 and a second area 40, as shown in FIG. 4. The first area 30 is an area written in a random access method, and the second area 40 is an area written in a sequential access method. A sequential access write writes data at sequentially increasing addresses, whereas a random access write writes data at a designated address regardless of address adjacency.

In the first region 30, a bitmap 640 including bits indicating whether a corresponding block is a live block may be stored.

When the F2FS file system is formatted, the storage device 20 may be divided into the first area 30 and the second area 40, but is not limited thereto. The first area 30 is an area in which various kinds of information managed by the system are stored; for example, it may include the number of currently allocated files, the number of valid pages, and their positions. The second area 40 is a space storing the directory information, data, and file information actually used by the user.

When the F2FS file system is formatted, all bits included in the bitmap 640 may be initialized to a value representing an invalid block.

The storage device 20 may include a buffer utilized for the random access. In order to utilize the buffer optimally, the first area 30 may be stored at the front of the storage device 20 and the second area 40 at the rear. Here, the front means the part whose physical addresses precede those of the rear.

If the storage device 20 is, for example, an SSD, there may be a buffer inside the SSD. The buffer may be, for example, a single-level cell (SLC) memory having a high read/write speed. Such a buffer can speed up random access writes to a limited space. Therefore, by placing the first area 30, which is accessed randomly, at the front of the storage device 20 where the buffer is utilized, deterioration of the I/O speed of the storage device 20 due to random access can be prevented.

The second area 40 may be composed of a log area 41 and a free area 42. As shown in FIG. 4, the log area 41 may be one connected area. However, as victim segments included in the log area 41 are returned to the free area 42 during the segment cleaning process, the log area 41 may also become a disconnected area.

The log area 41 is an area where data is written, and the free area 42 is an area where data can be written. Since the second area 40 is written in a sequential approach, data may be written only to the free area located at the end of the log area 41.

Even when data previously stored in the log area 41 is modified, the modified data is written to the free area located at the end of the log area 41 rather than at its original position in the log area 41. At this time, the previously stored data becomes invalid.

As data is newly written or previously stored data is modified, the end point of the log area gradually moves toward the rear of the second area 40, and the free area 42 becomes insufficient. At this time, segment cleaning is performed. The segment cleaning method according to the present invention will be described in detail later.

FIG. 5 is a diagram for describing the constituent units of data stored in the storage device 20.

A segment 53 includes a plurality of blocks (BLK) 51, a section 55 includes a plurality of segments 53, and a zone 57 includes a plurality of sections 55. For example, a block 51 may be 4 Kbytes, and a segment 53 comprising 512 blocks 51 may be 2 Mbytes. Such a configuration may be determined when the storage device 20 is formatted, but is not limited thereto; the sizes of the sections 55 and the zones 57 may be modified at format time. The F2FS file system reads/writes all data in 4-Kbyte pages. That is, one page is stored per block 51, and a plurality of pages are stored in a segment 53.

As shown in FIG. 5, each bit of the bitmap 640 may correspond to each block 51. That is, each bit of the bitmap 640 may indicate whether each corresponding block 51 is a live block or an invalid block.
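
In C terms, the correspondence between blocks, segments, and bitmap bits could be expressed as in the sketch below, using the 4-Kbyte block and 512-block segment sizes given above; the symbol names and the total capacity are illustrative assumptions.

    #include <stdint.h>

    #define BLOCK_SIZE    4096u                  /* 4-Kbyte block / page  */
    #define BLKS_PER_SEG  512u                   /* 512 blocks = 2 Mbytes */
    #define TOTAL_BLOCKS  (1u << 20)             /* illustrative capacity */

    static uint8_t sit_bitmap[TOTAL_BLOCKS / 8]; /* one bit per block     */

    /* Test the bit of block number blkno: 1 = live, 0 = invalid. */
    static int block_is_live(uint32_t blkno)
    {
        return (sit_bitmap[blkno / 8] >> (blkno % 8)) & 1;
    }

    /* Set or clear the bit when a block is written or invalidated. */
    static void mark_block(uint32_t blkno, int live)
    {
        if (live)
            sit_bitmap[blkno / 8] |= (uint8_t)(1u << (blkno % 8));
        else
            sit_bitmap[blkno / 8] &= (uint8_t)~(1u << (blkno % 8));
    }

    /* The segment containing a block follows directly from its number. */
    static uint32_t seg_of(uint32_t blkno) { return blkno / BLKS_PER_SEG; }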

Meanwhile, a file stored in the storage device 20 may have an indexing structure, as shown in FIG. 6. A file consists of a plurality of data and a plurality of nodes associated with the data. The data blocks 70 are parts for storing data, and the node blocks 80, 81 to 88, and 91 to 95 are parts for storing nodes.

The node blocks 80, 81 to 88, and 91 to 95 may include direct node blocks 81 to 88, indirect node blocks 91 to 95, and an inode block 80. In the F2FS file system, one file has one inode block 80.

Meanwhile, each bit constituting the bitmap 640 may indicate the liveness not only of the data blocks 70 but also of the node blocks 80, 81 to 88, and 91 to 95.

The direct node blocks 81 to 88 include an identifier of the inode block 80 and as many data pointers, each directly pointing to a data block 70, as there are data blocks that are child blocks of the direct node block. The direct node blocks 81 to 88 also store information about the position of each data block 70 within the file corresponding to the inode block 80, that is, the offset information of the blocks.

The indirect node blocks 91 to 95 include pointers indicating a direct node block or another indirect node block. The indirect node blocks 91 to 95 may include, for example, first indirect node blocks 91 to 94, second indirect node blocks 95, and the like. The first indirect node blocks 91 to 94 include a first node pointer pointing to the direct node blocks 83 to 88. The second indirect node block 95 includes a second node pointer pointing to the first indirect node blocks 93 and 94.

The inode block 80 may include at least one of data pointers, first node pointers pointing to the direct node blocks 81 and 82, second node pointers pointing to the first indirect node blocks 91 and 92, and a third node pointer pointing to the second indirect node block 95. One file may be up to, for example, 3 Tbytes, and such a large file may have the following index structure. For example, the inode block 80 holds 994 data pointers, each of which may point to one of 994 data blocks 70. There are two first node pointers, each of which may point to one of the two direct node blocks 81 and 82. There are two second node pointers, each of which may point to one of the two first indirect node blocks 91 and 92. There is one third node pointer, which may point to the second indirect node block 95.
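
As a rough picture of this index structure, the following C struct sketch uses the pointer counts given above (994 data pointers, two first node pointers, two second node pointers, one third node pointer); the field names and the per-node pointer count ADDRS_PER_DNODE are assumptions for illustration, not an on-disk layout.

    #include <stdint.h>

    typedef uint32_t block_addr_t;  /* physical block address                */
    typedef uint32_t node_id_t;     /* node identifier, resolved via the NAT */

    #define ADDRS_PER_INODE 994     /* data pointers held in the inode       */
    #define ADDRS_PER_DNODE 1018    /* per direct node; illustrative value   */

    struct inode_block {
        block_addr_t data[ADDRS_PER_INODE]; /* -> data blocks 70             */
        node_id_t    direct[2];             /* -> direct node blocks 81, 82  */
        node_id_t    indirect[2];           /* -> first indirect nodes 91, 92 */
        node_id_t    dbl_indirect;          /* -> second indirect node 95    */
    };

    struct direct_node_block {
        node_id_t    ino;                   /* inode the block belongs to    */
        block_addr_t data[ADDRS_PER_DNODE]; /* -> child data blocks          */
    };

    struct indirect_node_block {
        node_id_t nid[ADDRS_PER_DNODE];     /* -> child node identifiers,
                                               resolved through the NAT      */
    };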

FIGS. 7A to 7D illustrate in more detail the storage area configuration that the F2FS file system establishes on the storage device 20, according to some embodiments of the present invention.

First, according to an embodiment, the F2FS file system may configure the storage area of the storage device 20 to include a first area 30 of a random access method and a second area 40 of a sequential access method, as shown in FIG. 7A.

In detail, the first area 30 includes superblocks 61 and 62, a checkpoint area (CP) 63, a segment information table (SIT) 64, a node address table (NAT) 65, a segment summary area (SSA) 66, and the like.

First, the superblocks 61 and 62 store default information of the file system 16, for example, the size of the blocks 51, the number of blocks 51, and the status flags of the file system 16 (clean, stable, active, logging, unknown). As shown, there may be two superblocks 61 and 62 storing the same contents, so that even if a problem occurs in one of them, the other can be used.

The checkpoint area 63 stores checkpoints. A checkpoint is a logical break point, and the state up to that break point is completely preserved. If an accident (for example, a sudden shutdown) occurs during operation of the computing system, the file system 16 may recover data using the preserved checkpoint. A checkpoint may be generated, for example, periodically, at umount time, or at system shutdown time, but is not limited thereto.

As shown in FIG. 8, the node address table 65 may include a plurality of node identifiers (NODE IDs), each corresponding to a node, and a plurality of physical addresses, each corresponding to one of the node identifiers. For example, the node block corresponding to node identifier N0 may correspond to physical address a, the node block corresponding to node identifier N1 to physical address b, and the node block corresponding to node identifier N2 to physical address c. Every node (inode, direct node, indirect node, etc.) has its own unique node identifier; in other words, every node may be assigned a unique node identifier from the node address table 65. The node address table 65 may store the node identifiers of inodes, direct nodes, indirect nodes, and the like, and the physical address corresponding to each node identifier may be updated.
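
A minimal in-memory model of the node address table, matching FIG. 8's identifier-to-address mapping, might look as follows; the array representation and function names are assumptions for illustration.

    #include <stdint.h>

    typedef uint32_t node_id_t;
    typedef uint32_t block_addr_t;

    #define NAT_ENTRIES 65536              /* illustrative capacity       */

    /* Index = node identifier (N0, N1, ...), value = physical address. */
    static block_addr_t nat[NAT_ENTRIES];

    /* Resolve a node identifier to the current location of its block. */
    static block_addr_t nat_lookup(node_id_t nid)
    {
        return nat[nid];
    }

    /* When a node block is rewritten at the end of the log, only its NAT
     * entry changes; parents keep referring to the unchanged identifier. */
    static void nat_update(node_id_t nid, block_addr_t new_addr)
    {
        nat[nid] = new_addr;
    }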

The segment information table 64 includes the number of live blocks in each segment and a bitmap 640 indicating whether each block is a live block. Each bit constituting the bitmap 640 indicates whether the corresponding block is a live block. The segment information table 64 can be used in a segment cleaning operation: the file system module 103 may refer to the bitmap included in the segment information table 64 to identify the live blocks included in the victim segment.

The segment information table 64 may also be referred to when selecting a victim segment. That is, a victim segment may be selected according to the number of live blocks, or the ratio of live blocks, among the blocks included in each segment.

The segment summary area 66 describes an identifier of a parent node to which each block included in each segment of the second area 40 belongs.

The direct node blocks 81 to 88 have address information of each data block 70 for access to the data block 70 which is a child block thereof. On the other hand, the indirect node blocks 91 to 95 have an identifier list of their respective child nodes for access to their child node blocks. Once the identifier of a particular node block is known, the node address table 65 can be queried to know its physical address.

On the other hand, in a log structured file system, when data in a data block is updated, a new data block holding the updated data is written to the end of the log instead of overwriting the existing data block in place. The parent node block of the existing data block must therefore also modify its address for that data block. Thus, when a specific data block is updated, or is written back to the end of the log in the segment cleaning step, information about the parent node of the data block is needed. However, it is difficult for a data block or node block itself to know information about its parent node. Accordingly, the F2FS file system according to the present invention provides the segment summary area 66, which describes, for each data block or node block, the identifier of its parent node block, so that the identifier of the parent node block can be known easily.

One segment summary block has information about one segment located in the second area 40. In addition, the segment summary block includes a plurality of summary information, and one summary information corresponds to one data block or one node block.
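
Combining the segment summary area with the node address table, finding the owner of a block during cleaning could be sketched as follows; ssa_parent_of, read_node_block, node_file_id, offset_in_parent, and the struct layout are hypothetical stand-ins for the lookups described above.

    #include <stdint.h>

    typedef uint32_t node_id_t;
    typedef uint32_t block_addr_t;

    extern node_id_t    ssa_parent_of(block_addr_t blkaddr);  /* SSA 66  */
    extern block_addr_t nat_lookup(node_id_t nid);            /* NAT 65  */
    extern const void  *read_node_block(block_addr_t addr);
    extern uint32_t     node_file_id(const void *node);
    extern uint32_t     offset_in_parent(const void *node,
                                         block_addr_t blkaddr);

    struct owner_info { uint32_t file_id; uint32_t offset; };

    /* Identify the file and in-file offset of a block to be written back. */
    struct owner_info resolve_owner(block_addr_t blkaddr)
    {
        node_id_t    parent = ssa_parent_of(blkaddr);  /* 1. SSA lookup  */
        block_addr_t paddr  = nat_lookup(parent);      /* 2. NAT lookup  */
        const void  *node   = read_node_block(paddr);  /* 3. parent node */

        struct owner_info info = {
            .file_id = node_file_id(node),
            .offset  = offset_in_parent(node, blkaddr),
        };
        return info;
    }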

As illustrated in FIG. 7A, the second area 40 may include data segments DS0 and DS1 and node segments NS0 and NS1 that are separated from each other. A plurality of data may be stored in the data segments DS0 and DS1, and a plurality of nodes may be stored in the node segments NS0 and NS1. Separating the areas where data and nodes are stored allows segments to be managed efficiently and data to be read more quickly and effectively.

In the drawing, the first area 30 is arranged in the order of the superblocks 61 and 62, the checkpoint area 63, the segment information table 64, the node address table 65, and the segment summary area 66, but is not limited thereto. For example, the positions of the segment information table 64 and the node address table 65 may be exchanged, and the positions of the node address table 65 and the segment summary area 66 may be exchanged.

The F2FS file system may also configure the storage area of the storage device 20 as shown in FIG. 7B. Referring to FIG. 7B, in the storage device of a computing system according to another exemplary embodiment of the present disclosure, the second area 40 may include a plurality of segments S1 to Sn (where n is a natural number). Unlike FIG. 7A, in which data segments and node segments are managed separately, each segment S1 to Sn may store data and nodes without division.

The F2FS file system may configure the storage area of the storage device 20 as shown in FIG. 7C. Referring to FIG. 7C, the first area 30 does not include the segment summary area (see 66 of FIG. 7A). That is, the first area 30 includes the superblocks 61 and 62, the checkpoint area 63, the segment information table 64, and the node address table 65. Instead, the segment summary information may be stored in the second area 40. In detail, the second area 40 includes a plurality of segments S0 to Sn, each segment S0 to Sn is divided into a plurality of blocks, and the segment summary information may be stored in at least one block SS0 to SSn of each segment S0 to Sn.

The F2FS file system may configure the storage area of the storage device 20 as shown in FIG. 7D. Referring to FIG. 7D, as in FIG. 7C, the first area 30 does not include the segment summary area (see 66 of FIG. 7A). That is, the first area 30 includes the superblocks 61 and 62, the checkpoint area 63, the segment information table 64, and the node address table 65, while the segment summary information may be stored in the second area 40. The second area 40 includes a plurality of segments 53, each segment 53 is divided into a plurality of blocks BLK0 to BLKm, and each of the blocks BLK0 to BLKm has an out-of-band (OOB) area OOB1 to OOBm (where m is a natural number). The segment summary information may be stored in the OOB areas OOB1 to OOBm.

Referring to FIGS. 9A and 9B, a data update operation of the F2FS file system will be described. FIGS. 9A and 9B are conceptual views illustrating a process in which data stored in the storage device of FIG. 1 is updated and the bitmap 640 is updated accordingly. FIG. 9A shows FILE 0 storing the data "ABC" before the update, and FIG. 9B shows how the data of FILE 0 is stored after an application commands FILE 0 to be updated to "ADC". In F2FS, the nodes and data blocks of a file are configured as shown in FIG. 6, but in FIGS. 9A and 9B it is assumed, for convenience of description, that FILE 0 is configured such that the inode points to one direct node N0 and the direct node N0 points to three data blocks.

First, referring to FIG. 9A, the first to third data blocks BLK 0, BLK 1, and BLK 2 are included in the first data segment DS0 in the log area 41. It is assumed that "A" is stored in the first data block BLK 0, "B" is stored in the second data block BLK 1, and "C" is stored in the third data block BLK 2, respectively. Unlike in FIG. 9A, the first data segment DS0 may include not only the first to third data blocks but also more data blocks.

The direct node N0 block may be included in the first node segment NS0 in the log area 41. At least the identifier N0 of the node and the physical address information of the first to third data blocks may be stored in the direct node N0 block. As shown in FIG. 9A, the physical address of the direct node N0 block is "a".

As shown in FIG. 9A, the bitmap 640 may include a bit 6400 corresponding to the first data block BLK 0, a bit 6401 corresponding to the second data block BLK 1, a bit 6402 corresponding to the third data block BLK 2, and a bit 6403 corresponding to the direct node N0. Since all of the first to third data blocks store valid data, the bits 6400, 6401, and 6402 of the bitmap 640 corresponding to the first to third data blocks may be set to a value indicating a live block (for example, "1"; the same applies below). In addition, since the direct node N0 block also stores valid data and is a live block, the bit 6403 of the bitmap 640 corresponding to the direct node block may also be set to the value indicating a live block.

Meanwhile, the NAT may store N0, which is an identifier of the direct node block, and "a", which is a physical address of N0.

Referring to FIG. 9B, the configuration of FILE 0 with updated data will be described.

Since the application commanded an update of "ABC" of FILE 0 to "ADC", the second data block BLK 1 storing "B" must be updated. Instead of updating "B" stored in the second data block BLK 1 to "D" in place, the F2FS file system writes a new fourth data block BLK 3 storing "D" into the second data segment DS1 located at the end of the log area 41. In other words, the updated FILE 0 is composed of the first data block BLK 0, the fourth data block BLK 3, and the third data block BLK 2. Accordingly, the address information in the node that pointed to the second data block also needs to be updated with the physical address information of the fourth data block. Instead of updating the child block address information of the previously stored node N0 in place, the F2FS file system generates a new node block that keeps the same node identifier N0 but holds the physical address information of the first data block BLK 0, the fourth data block BLK 3, and the third data block BLK 2 as its child block address information. The new node block may be included in the second node segment located at the end of the log area 41.

As the fourth data block BLK 3 and the node N0 block included in the second node segment are newly written, the bit 6404 corresponding to the fourth data block BLK 3 and the bit 6405 corresponding to the newly written node block in the second node segment are also set to the value indicating a live block.

Meanwhile, since the second data block BLK 1 and the node N0 block included in the first node segment are no longer valid data, the bit 6401 corresponding to the second data block and the bit 6403 corresponding to the node N0 block in the first node segment are set to a value indicating an invalid block (for example, "0"; the same applies below).

The physical address of the node block N0 is changed from "a" to "f". In a conventional log structured file system, the physical address information of the node block N0 contained in the indirect node that is the parent of the node block N0 would also have to be modified. Since that indirect node would in turn be written to a new node block, the update of node blocks would propagate up through the parent node blocks all the way to the inode. This problem is called the wandering tree problem. The wandering tree problem causes too many nodes to be rewritten unnecessarily, diminishing the write efficiency gained from sequential writes.

In the F2FS file system according to the present invention, when a direct node block needs to be rewritten due to an update of a data block, only the physical address corresponding to that direct node in the node address table 65 needs to be modified (from "a" to "f"); the update operation of a node block does not propagate beyond the direct node. Thus, the F2FS file system according to the present invention solves the wandering tree problem that occurs in conventional log structured file systems.
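
The update path of FIGS. 9A and 9B can be sketched in C as follows, reusing the NAT operations sketched earlier; append_to_log, read_dnode, child_of, set_child, and mark_block are hypothetical helpers standing in for the log append, node read, and bitmap update described in the text.

    #include <stdint.h>

    typedef uint32_t node_id_t;
    typedef uint32_t block_addr_t;

    struct direct_node_block;                      /* as sketched earlier */
    extern block_addr_t nat_lookup(node_id_t nid);
    extern void         nat_update(node_id_t nid, block_addr_t addr);
    extern block_addr_t append_to_log(const void *payload);
    extern void         read_dnode(block_addr_t addr,
                                   struct direct_node_block *out);
    extern block_addr_t child_of(const struct direct_node_block *n, int slot);
    extern void         set_child(struct direct_node_block *n, int slot,
                                  block_addr_t addr);
    extern void         mark_block(block_addr_t addr, int live);

    void update_data_block(node_id_t dnode_id, int slot, const void *new_data,
                           struct direct_node_block *scratch)
    {
        block_addr_t old_node_addr = nat_lookup(dnode_id);
        read_dnode(old_node_addr, scratch);
        block_addr_t old_data_addr = child_of(scratch, slot);

        /* 1. Write the new data block ("D") at the end of the log. */
        block_addr_t new_data_addr = append_to_log(new_data);

        /* 2. Rewrite direct node N0 with the updated child pointer, also
         *    at the end of the log, keeping the same node identifier.   */
        set_child(scratch, slot, new_data_addr);
        block_addr_t new_node_addr = append_to_log(scratch);

        /* 3. Only the NAT entry changes ("a" -> "f"); indirect nodes and
         *    the inode are untouched: no wandering tree.                 */
        nat_update(dnode_id, new_node_addr);

        /* 4. Bitmap: new locations become live, old ones invalid. */
        mark_block(new_data_addr, 1);
        mark_block(new_node_addr, 1);
        mark_block(old_data_addr, 0);
        mark_block(old_node_addr, 0);
    }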

Hereinafter, a process of performing segment cleaning by the F2FS file system according to the present invention will be described with reference to FIGS. 10A to 10E.

Referring to FIG. 10A, segments S0 to S9 are included in the log area 41. The segment cleaning operation is started because the free area 42 has become smaller than a reference value. First, victim segments are selected among the segments included in the log area 41. As already mentioned, a victim segment is a segment to be returned to the free area. For example, the F2FS file system may select a victim segment based on whether it includes live blocks at no more than a predetermined ratio, or no more than a predetermined number of live blocks, but the present invention is not limited to a specific selection method. FIG. 10A illustrates a situation in which segments S1, S3, S5, and S6 are selected as victim segments.

A write-back cache 104 is also shown in FIG. 10A. The write-back cache 104 is shown loaded with the live blocks LB3, LB5, and LB6 included in the segments S3, S5, and S6, together with the dirty bit corresponding to the data stored in each entry of the write-back cache 104. When flushing, the write-back cache 104 writes back to the storage device 20 the data loaded in the entries whose dirty bit is set.

Next, the F2FS file system inquires whether the live blocks included in the victim segments are preloaded in the write-back cache 104.

The live blocks of each victim segment can be quickly identified by referring to the bitmap 640 included in the segment information table (SIT) 64. As shown in FIG. 10B, the bitmap 640 includes bits indicating whether each block included in segment S1 is a live block, so the F2FS file system can immediately identify which live blocks segment S1 includes merely by referring to the bitmap 640. In the case shown in FIG. 10B, only blocks 1A, 1C, and 1E among the blocks 1A to 1G included in segment S1 are live blocks, and the F2FS file system may load only the blocks 1A, 1C, and 1E into the write-back cache 104.

In addition, since the write-back cache 104 manages the identifier of the file to which the data block stored in each entry belongs and offset information indicating the position of the data block within the file, the F2FS file system may use the file identifier and offset information of a live block to query the memory management module 15 or the virtual file system 14 managing the write-back cache 104 as to whether the live block is loaded. The F2FS file system looks up the parent node identifier of each live block in the segment summary area 66, obtains the physical address corresponding to the parent node identifier from the node address table 65, and obtains, from the parent node block stored at that physical address, the identifier of the file to which the live block belongs and the offset information of the live block.

As shown in FIG. 10A, among the live blocks included in victim segments S1, S3, S5, and S6, the live blocks LB3, LB5, and LB6 of S3, S5, and S6 are preloaded in the write-back cache 104, but LB1, the live block of S1, is not yet loaded in the write-back cache 104. Thus, the F2FS file system requests the memory management module 15 or the virtual file system 14 managing the write-back cache 104 to read LB1 from the storage device 20 and load the read LB1 into the write-back cache 104. This process is illustrated in FIG. 10B.

Next, the F2FS file system requests the memory management module 15 or the virtual file system 14 managing the write-back cache 104 to set the dirty flag corresponding to the previously loaded or newly loaded live blocks.

As already mentioned, the write-back cache 104 writes back to the storage device 20 the data stored in entries whose dirty flag is set at the time of a flush. Therefore, there may be a time difference between the time when the live blocks included in the victim segments are loaded into the write-back cache 104 and the time when the write-back is performed. As shown in FIG. 10C, a data append command for a file may be issued by an application before the write-back is performed. According to the data append command, the memory management module 15 or the virtual file system 14 managing the write-back cache 104 may put the blocks BLK A and BLK B to be appended into the write-back cache 104 instead of writing them directly to the storage device 20. As shown in FIG. 10C, since BLK A and BLK B are not yet stored in the storage device 20, their dirty bits are set.

As shown in FIG. 10D, when the flush time of the write-back cache 104 arrives, the memory management module 15 or the virtual file system 14 managing the write-back cache 104 writes back to the storage device 20 the data whose dirty bit is set among the data loaded in the write-back cache 104.

The memory management module 15 or the virtual file system 14 managing the write-back cache 104 may write back the clean segment S10, composed only of the live blocks, to the end of the log area 41, and may include the blocks BLK A and BLK B appended according to the data append command in segment S11 for write-back. Unlike in FIG. 10D, if there is room in the number of blocks that one segment can include, segment S10 may include not only the live blocks but also BLK A and BLK B.

The memory management module 15 or the virtual file system 14 managing the write-back cache 104 may update the bitmap 640 to reflect the writing of the clean segments S10 and S11. That is, the bits corresponding to the live blocks included in the clean segment S10 and the bits corresponding to BLK A and BLK B included in segment S11 may be set to the value indicating a live block.

Meanwhile, with the write-back, the victim segments S1, S3, S5, and S6 may be returned to the free area 42. The memory management module 15 or the virtual file system 14 managing the write-back cache 104 may update the bitmap 640 to reflect that the victim segments have been returned to the free area 42. That is, all bits corresponding to all blocks included in the victim segments S1, S3, S5, and S6 may be set to the value indicating an invalid block.

When gathering the data to be written back, the memory management module 15 or the virtual file system 14 that manages the write-back cache 104 collects together the data included in the same file. Therefore, after the write-back, data belonging to one file can be arranged adjacently.

Referring to FIGS. 10A and 10D, four victim segments were included in the log area 41 before the segment cleaning according to the present invention; as a result of the cleaning, the live blocks included in the four victim segments are gathered into one clean segment, so that a net three segments' worth of space can be returned to the free area.

Prior to the segment cleaning according to the present invention, the log area 41 was configured as one connected area. However, as a result of the segment cleaning, the victim segments included in the log area 41 are returned to the free area, so the log area 41 becomes composed of two or more divided areas, as illustrated in FIG. 10E. In the state where the log area 41 and the free area 42 are configured as shown in FIG. 10E, when a segment needs to be written, it is first written to the free area 42 located at the end of the log; when the end of the storage area of the storage device 20 is reached, a free area 42 is again sought from the beginning of the second area 40. Since segments S1, S3, S5, and S6 have been returned to the free area 42, new data or node blocks may be written to segments S1, S3, S5, and S6 in a sequential access manner.
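
Putting FIGS. 10A to 10E together, one background cleaning pass might be organized as in the C sketch below; select_victims, owner_of, load_live_block, flush_cache, and free_segment are hypothetical stand-ins for the victim selection, owner resolution, cache loading, flush, and free-area return described above.

    #include <stdint.h>

    #define BLKS_PER_SEG 512
    #define MAX_VICTIMS  16

    struct owner_info { uint32_t file_id; uint32_t offset; };

    extern int  select_victims(uint32_t *out, int max);  /* via SIT bitmap */
    extern int  block_is_live(uint32_t blkno);           /* bitmap test    */
    extern struct owner_info owner_of(uint32_t blkno);   /* SSA + NAT      */
    extern void load_live_block(struct owner_info o);    /* cache + dirty  */
    extern void flush_cache(void);                       /* FIG. 10D       */
    extern void free_segment(uint32_t segno);            /* clear its bits */

    void clean_segments(void)
    {
        uint32_t victims[MAX_VICTIMS];
        int n = select_victims(victims, MAX_VICTIMS);

        /* Load every live block of every victim into the write-back
         * cache and mark it dirty; invalid blocks are simply skipped. */
        for (int i = 0; i < n; i++)
            for (uint32_t b = 0; b < BLKS_PER_SEG; b++) {
                uint32_t blk = victims[i] * BLKS_PER_SEG + b;
                if (block_is_live(blk))
                    load_live_block(owner_of(blk));
            }

        /* The flush writes the dirty live blocks back as clean segments
         * at the end of the log (FIG. 10D).                             */
        flush_cache();

        /* Return the victim segments to the free area (FIG. 10E). */
        for (int i = 0; i < n; i++)
            free_segment(victims[i]);
    }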

Hereinafter, specific systems to which a computing system according to some embodiments of the present invention can be applied will be described. The systems described below are merely exemplary, and the present invention is not limited thereto.

11 to 13 are block diagrams illustrating another specific example of a computing system according to some embodiments of the present disclosure.

First, referring to FIG. 11, the storage device 20 may include a nonvolatile memory device 22 and a controller 21.

The nonvolatile memory device 22 reads/writes data under the control of the controller 21, and its storage area is configured as a first area 30 written in a random access method and a second area 40 in which a plurality of segments, each composed of a plurality of blocks, are written in a sequential access method. The second area 40 includes a log area in which the plurality of segments are stored and a free area in which segments can be stored, and the first area 30 stores metadata about the data stored in the second area 40. The metadata includes a bitmap 640 indicating whether each corresponding block is live. The bitmap 640 is data that is referred to in order to identify live blocks within a victim segment during the segment cleaning process.

In the first area 30 of the nonvolatile memory device 22, the above-described superblocks 61 and 62, checkpoint area 63, segment information table 64, node address table 65, and the like may be stored as the metadata. The segment information table 64 includes the bitmap 640.

Bits constituting the bitmap 640 and the blocks correspond one-to-one.

The controller 21 is connected to the host device 10 and the nonvolatile memory device 22. In response to a request from the host device 10, the controller 21 is configured to access the nonvolatile memory device 22. For example, the controller 21 is configured to control read, write, erase, and background operations of the nonvolatile memory device 22. The controller 21 is configured to provide an interface between the nonvolatile memory device 22 and the host. The controller 21 is configured to drive firmware for controlling the nonvolatile memory device 22.

During the segment cleaning process, the controller 21 writes back a clean segment composed of the live blocks to the free area located at the end of the log area under the control of the host device 10, and then updates the bitmap 640. Among the bits included in the bitmap 640, the bit corresponding to the live block position in the victim segment is set to a value representing an invalid block, and the bit corresponding to the live block position in the clean segment is set to a value representing a live block.

In exemplary embodiments, the controller 21 may further include well-known components such as a random access memory (RAM), a processing unit, a host interface, and a memory interface. The RAM may be used as at least one of an operating memory of the processing unit, a cache memory between the nonvolatile memory device 22 and the host device 10, and a buffer memory between the nonvolatile memory device 22 and the host device 10. The processing unit controls the overall operation of the controller 21.

The controller 21 may be provided with a buffer 210. The buffer 210 may be composed of, for example, random access memory (RAM), in particular, DRAM.

The controller 21 and the nonvolatile memory device 22 may be integrated into one semiconductor device. In exemplary embodiments, the controller 21 and the nonvolatile memory device 22 may be integrated into one semiconductor device to configure a memory card, such as a PC card (PCMCIA, Personal Computer Memory Card International Association), a compact flash card (CF), a smart media card (SM, SMC), a memory stick, a multimedia card (MMC, RS-MMC, MMCmicro), an SD card (SD, miniSD, microSD, SDHC), or a universal flash storage (UFS).

The controller 21 and the nonvolatile memory device 22 may be integrated into one semiconductor device to configure a solid state drive (SSD). The SSD includes a storage device configured to store data in a semiconductor memory. When the SSD is connected to the host device 10, the operating speed of the host device 10 may be significantly improved.

In exemplary embodiments, the nonvolatile memory device 22 or the storage device 20 may be mounted in various types of packages. For example, the nonvolatile memory device 22 or the storage device 20 may be packaged as a package on package (PoP), ball grid array (BGA), chip scale package (CSP), plastic leaded chip carrier (PLCC), plastic dual in-line package (PDIP), die in waffle pack, die in wafer form, chip on board (COB), ceramic dual in-line package (CERDIP), plastic metric quad flat pack (MQFP), thin quad flat pack (TQFP), small outline integrated circuit (SOIC), shrink small outline package (SSOP), thin small outline package (TSOP), system in package (SIP), multi chip package (MCP), wafer-level fabricated package (WFP), or wafer-level processed stack package (WSP).

Subsequently, referring to FIG. 12, the storage device 20 may include a nonvolatile memory device 24 and a controller 23. The nonvolatile memory device 24 includes a plurality of nonvolatile memory chips. The plurality of non-volatile memory chips are divided into a plurality of groups. Each group of the plurality of nonvolatile memory chips is configured to communicate with the controller 23 through one common channel. For example, the plurality of nonvolatile memory chips are shown to communicate with the controller 23 through the first through kth channels CH1 through CHk.

In FIG. 12, a plurality of nonvolatile memory chips are described as being connected to one channel. However, it will be understood that the storage device 20 may be modified so that one nonvolatile memory chip is connected to one channel.

Subsequently, referring to FIG. 13, the system 3000 may include a central processing unit 3100, a random access memory (RAM) 3200, a user interface 3300, a power supply 3400, and the storage device 20 of FIG. 12.

The storage device 20 is electrically connected to the central processing unit 3100, the RAM 3200, the user interface 3300, and the power supply 3400 through the system bus 3500. Data provided through the user interface 3300 or processed by the central processing unit 3100 may be stored in the storage device 20.

In FIG. 13, the nonvolatile memory device 24 is shown to be connected to the system bus 3500 via the controller 23. However, the nonvolatile memory device 24 may be configured to be directly connected to the system bus 3500.

Hereinafter, a segment cleaning method using a bitmap according to an embodiment of the present invention will be described with reference to FIG. 14, which is a flowchart of the segment cleaning method using a bitmap according to the present embodiment.

First, a victim segment to be returned to the free area is selected from among the segments included in the log area (S102). As described above, the criterion for selecting the victim segment is not particularly limited, but one possible criterion is to select a segment having fewer live blocks as the victim segment.

Next, among the blocks included in the victim segment, the live blocks in which valid data is stored are identified (S104). Whether a specific block is a live block can be quickly determined by referring to the bitmap data, included in the metadata stored in an area other than the log area, that indicates whether each block is live. A sketch of steps S102 and S104 follows.
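The following is a minimal sketch of steps S102 and S104, reusing struct seg_info, test_block_live, and BLOCKS_PER_SEG from the bitmap sketch above. The greedy "fewest live blocks" policy is only one example criterion, as the text notes; count_live, select_victim, and collect_live_blocks are hypothetical names.

```c
/* Assumes struct seg_info, test_block_live, and BLOCKS_PER_SEG from the
 * earlier bitmap sketch. */

static int count_live(const struct seg_info *si)
{
    int live = 0;
    for (unsigned int b = 0; b < BLOCKS_PER_SEG; b++)
        live += test_block_live(si, b);
    return live;
}

/* S102: choose the segment with the fewest live blocks as the victim. */
static int select_victim(const struct seg_info *segs, int nsegs)
{
    int victim = -1, min_live = BLOCKS_PER_SEG + 1;

    for (int s = 0; s < nsegs; s++) {
        int live = count_live(&segs[s]);
        if (live < min_live) {
            min_live = live;
            victim = s;
        }
    }
    return victim;
}

/* S104: record the in-segment offsets of the victim's live blocks. */
static int collect_live_blocks(const struct seg_info *victim,
                               unsigned int *out, int max)
{
    int n = 0;

    for (unsigned int b = 0; b < BLOCKS_PER_SEG && n < max; b++)
        if (test_block_live(victim, b))
            out[n++] = b;
    return n;
}
```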

Next, it is checked whether each identified live block is already loaded in the write-back cache (S106). The file identifier and offset information of each live block may be obtained from a parent node of the live block, and information about the parent node may be obtained from the metadata. The preloading state may then be queried using the file identifier and the offset information.

If a live block is not preloaded in the write-back cache, the live block is read (S108) and loaded into the write-back cache (S110), as sketched below.
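The following is a hedged sketch of steps S106 to S110: each live block is looked up in the write-back cache by its (file identifier, offset) key and loaded only on a miss. cache_lookup, read_block, and cache_insert are assumed helpers standing in for the cache manager's interface, not a real kernel API.

```c
#include <stdint.h>
#include <stddef.h>

struct cache_key {
    uint32_t ino; /* file identifier, obtained from the parent node */
    uint32_t off; /* block offset within the file */
};

/* Assumed helpers provided by the (hypothetical) cache manager. */
extern void *cache_lookup(struct cache_key key);
extern void *read_block(uint32_t ino, uint32_t off);
extern void *cache_insert(struct cache_key key, void *data);

/* Returns the cached page for the live block, loading it if needed. */
static void *ensure_cached(uint32_t ino, uint32_t off)
{
    struct cache_key key = { ino, off };
    void *page = cache_lookup(key);        /* S106: already preloaded? */

    if (page == NULL) {
        void *data = read_block(ino, off); /* S108: read from storage */
        page = cache_insert(key, data);    /* S110: load into the cache */
    }
    return page;
}
```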

Next, the module that manages the write-back cache is requested to set the dirty bit for the live blocks loaded in the cache (S112). As a result, when the write-back cache is flushed, the loaded live blocks are written back to the end of the log area (S118). In other words, the write-back required for segment cleaning need not be performed directly; the write-back cache itself can be used to write back blocks of the same file adjacently. A minimal sketch follows.
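The point of step S112 is that segment cleaning does not copy blocks itself; it only marks the cached copies dirty and lets the normal flush path perform the sequential write-back. The sketch below assumes a hypothetical mark_page_dirty helper exposed by the cache manager.

```c
/* Assumed helper: marks a cached page dirty so that the cache manager's
 * flush (S118) will write it back to the end of the log area. */
extern void mark_page_dirty(void *page);

/* S112: stage all live blocks for write-back via the cache flush. */
static void stage_live_blocks(void **pages, int n)
{
    for (int i = 0; i < n; i++)
        mark_page_dirty(pages[i]); /* flushed later with same-file data */
}
```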

Due to the nature of the write-back cache, there may be a time difference between the time a live block is loaded and the time it is flushed. This time difference can be tolerated when the segment cleaning is of the background type. However, in the case of the on-demand type, which is performed due to a lack of free area, a write operation may be delayed while waiting for the flush. Therefore, according to the present embodiment, when the segment cleaning is of the on-demand type, the write-back cache may be requested to write back immediately (S116). After the write-back (S116) is performed, the return of the victim segment to the free area and the write-back of the live blocks may be reflected in the bitmap. That is, among the bits included in the bitmap, the bit corresponding to the live block position in the victim segment is set to a value representing an invalid block, and the bit corresponding to the live block position in the clean segment is set to a value representing a live block, as sketched below.
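The bitmap update after the write-back can be expressed with the set_block_live helper from the earlier sketch. The function below is a hypothetical illustration: it clears the live bits at the old positions in the victim segment and sets them at the new positions in the clean segment.

```c
/* Assumes struct seg_info and set_block_live from the bitmap sketch.
 * old_off[i] is a live block's offset in the victim segment and
 * new_off[i] its offset in the clean segment after write-back. */
static void update_bitmap_after_cleaning(struct seg_info *victim_si,
                                         const unsigned int *old_off,
                                         struct seg_info *clean_si,
                                         const unsigned int *new_off,
                                         int n)
{
    for (int i = 0; i < n; i++) {
        set_block_live(victim_si, old_off[i], 0); /* now an invalid block */
        set_block_live(clean_si, new_off[i], 1);  /* live at its new home */
    }
}
```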

Even when the write-back (S118) is performed by flushing the write-back cache, the return of the victim segment to the free area and the write-back of the live blocks are reflected in the bitmap in the same way.

While the present invention has been described in connection with what is presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive.

Claims (10)

An interface for relaying data with a storage device; and
a file system module for performing segment cleaning in which a live block included in a victim segment among a plurality of segments stored in the storage device is identified with reference to a bitmap stored in the storage device, the identified live block is written back through the interface, and the victim segment is returned to a free area,
wherein the bitmap includes a bit indicating whether each corresponding block is live, and one segment includes a plurality of blocks.
The host device according to claim 1,
wherein the file system module,
when the segment cleaning is performed, updates the bitmap by
setting a bit corresponding to the identified live block in the victim segment to a value representing an invalid block, and setting a bit corresponding to the block at the location to which the live block is written back to a value representing a live block.
The host device according to claim 1,
wherein the file system module
uses the storage device as divided into a first area written in a random access method and a second area in which the plurality of segments are written in a sequential access method, and performs the segment cleaning such that the live block is written back to a free area located at the end of a log area of the second area.
The host device according to claim 1,
wherein the file system module
uses the storage device as divided into a first area written in a random access method and a second area in which the plurality of segments are written in a sequential access method, the second area being further divided into a log area in which the plurality of segments are stored and a free area in which segments can be stored,
and the first area stores metadata about the data stored in the second area, the metadata including the bitmap.
The host device of claim 4,
wherein the file system module
writes a first block to a free area located at the end of the log area, and sets a bit corresponding to the first block in the bitmap to a value representing a live block.
The host device of claim 4,
wherein the file system module
deletes or updates data of a second block included in the log area, and sets a bit corresponding to the second block in the bitmap to a value representing an invalid block.
The host device according to claim 1, further comprising:
a write-back cache used to write data to the storage device; and
a cache management module for managing the write-back cache,
wherein the file system module loads the bitmap into the write-back cache.
A controller configured to receive a control signal from a host device and to control a nonvolatile memory device; and
a nonvolatile memory device which reads/writes data under the control of the controller and which includes a storage area configured as a first area to which data is written in a random access method and a second area to which a plurality of segments each composed of a plurality of blocks are written in a sequential access method,
wherein the second area includes a log area in which the plurality of segments are stored and a free area in which segments may be stored,
metadata about data stored in the second area is stored in the first area,
the metadata includes a bitmap indicating whether each corresponding block is live,
and the bitmap is data that is referenced to identify a live block within a victim segment in a segment cleaning process.
The storage device of claim 8,
wherein the controller has an internal buffer used for the random access, and
the storage device is a solid state drive (SSD) device.
The storage device of claim 8,
wherein, during the segment cleaning process, the controller writes back a clean segment composed of the live blocks to the free area located at the end of the log area under the control of the host device, and updates the bitmap such that,
among the bits included in the bitmap, a bit corresponding to a live block position in the victim segment is set to a value representing an invalid block,
and a bit corresponding to a live block position in the clean segment is set to a value representing a live block.
KR1020120109376A 2012-09-28 2012-09-28 Bitmap used segment cleaning apparatus and storage device stroing the bitmap KR20140042520A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120109376A KR20140042520A (en) 2012-09-28 2012-09-28 Bitmap used segment cleaning apparatus and storage device stroing the bitmap

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120109376A KR20140042520A (en) 2012-09-28 2012-09-28 Bitmap used segment cleaning apparatus and storage device stroing the bitmap

Publications (1)

Publication Number Publication Date
KR20140042520A true KR20140042520A (en) 2014-04-07

Family

ID=50651677

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120109376A KR20140042520A (en) 2012-09-28 2012-09-28 Bitmap used segment cleaning apparatus and storage device stroing the bitmap

Country Status (1)

Country Link
KR (1) KR20140042520A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10552377B2 (en) 2016-05-26 2020-02-04 Research & Business Foundation Sungkyunkwan University Data discard method for journaling file system and memory management apparatus thereof



Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination