US20170031832A1 - Storage device and storage virtualization system

Storage device and storage virtualization system

Info

Publication number
US20170031832A1
Authority
US
United States
Prior art keywords
virtual
storage
address
host
memory
Prior art date
Legal status
Abandoned
Application number
US15/216,312
Other languages
English (en)
Inventor
Joo-young Hwang
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Assigned to SAMSUNG ELECTRONICS CO., LTD. Assignment of assignors interest (see document for details). Assignors: HWANG, JOO-YOUNG
Publication of US20170031832A1
Priority to US16/810,500 (US11397607B2)

Classifications

    • G06F 12/10 Address translation
    • G06F 12/0246 Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F 12/109 Address translation for multiple virtual address spaces, e.g. segmentation
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45579 I/O management, e.g. providing access to device drivers or storage
    • G06F 2009/45583 Memory management, e.g. access or allocation
    • G06F 2212/1016 Performance improvement
    • G06F 2212/1032 Reliability improvement, data loss prevention, degraded operation etc
    • G06F 2212/1036 Life time enhancement
    • G06F 2212/151 Emulated environment, e.g. virtual machine
    • G06F 2212/2022 Flash memory
    • G06F 2212/65 Details of virtual memory and virtual address translation
    • G06F 2212/651 Multi-level translation tables
    • G06F 2212/7201 Logical to physical mapping or translation of blocks or pages

Definitions

  • the inventive concept relates to virtualization systems and control methods thereof, and more particularly, to storage devices and storage virtualization systems.
  • the inventive concept may provide a storage device capable of reducing software overhead in a virtualization system.
  • the inventive concept also may provide a storage virtualization system capable of reducing software overhead in a virtualization system.
  • a storage device including a non-volatile memory device, and a memory controller configured to generate at least one virtual device corresponding to a physical storage area of the non-volatile memory device and convert a virtual address for the virtual device into a physical address in response to an access request.
  • a storage virtualization system including a host configured to communicate with devices connected through an input/output (I/O) adapter and process data in a virtualization environment, and at least one storage device connected to the I/O adapter, wherein the storage device generates at least one virtual device in response to a device virtualization request received from the host, performs a resource mapping process converting a virtual address for the virtual device into a logical address in response to an access request from the host, and performs an address conversion process converting the converted logical address into a physical address.
  • FIG. 1 is a block diagram illustrating a computing system according to an exemplary embodiment of the inventive concept
  • FIG. 2 is a block diagram illustrating a computing system according to another exemplary embodiment of the inventive concept
  • FIG. 3 is a block diagram illustrating a storage virtualization system according to an exemplary embodiment of the inventive concept
  • FIG. 4 is a block diagram illustrating a storage virtualization system according to another exemplary embodiment of the inventive concept
  • FIG. 5 is a diagram illustrating a flow of an access-control operation in a storage virtualization system according to an exemplary embodiment of the inventive concept
  • FIG. 6 is a diagram illustrating a mapping table update operation according to an access-control operation in a storage virtualization system, according to an exemplary embodiment of the inventive concept
  • FIG. 7 is a block diagram illustrating a storage device according to an exemplary embodiment of the inventive concept.
  • FIG. 8 is a block diagram illustrating a detailed configuration of a memory controller in FIG. 7 , according to an exemplary embodiment of the inventive concept;
  • FIG. 9 is a block diagram illustrating a detailed configuration of a non-volatile memory chip configuring a memory device in FIG. 7 , according to an exemplary embodiment of the inventive concept;
  • FIG. 10 is a diagram illustrating a memory cell array in FIG. 9 , according to an exemplary embodiment of the inventive concept
  • FIG. 11 is a circuit diagram of a first memory block included in a memory cell array in FIG. 9 , according to an exemplary embodiment of the inventive concept;
  • FIG. 12 is a schematic view illustrating an out of band (OOB) sequence between a host and a device in a storage virtualization system and a device-recognition method, according to an embodiment of the inventive concept;
  • FIG. 13 is a flowchart of a device virtualization method in a storage device according to an embodiment of the inventive concept
  • FIG. 14 is a flowchart illustrating a method of processing a device recognition command in a storage device according to an embodiment of the inventive concept
  • FIG. 15 is a flowchart illustrating an initialization and a device-recognition method in a storage virtualization system according to an exemplary embodiment of the inventive concept
  • FIG. 16 is a flowchart illustrating a virtualization method in a storage virtualization system according to another exemplary embodiment of the inventive concept
  • FIG. 17 is a flowchart illustrating an access control method in a storage virtualization system according to an exemplary embodiment of the inventive concept.
  • FIG. 18 is a block diagram illustrating an electronic device to which a storage device is applied according to an exemplary embodiment of the inventive concept.
  • circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like.
  • circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block.
  • Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the inventive concepts.
  • the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the inventive concepts.
  • FIG. 1 is a block diagram illustrating a computing system according to an exemplary embodiment of the inventive concept.
  • the computing system 1000 A includes a host 100 A and a storage device 200 .
  • the computing system 1000 A may be a personal computer (PC), a set-top box, a modem, a mobile device, or a server.
  • the host 100 A includes a processor 110 , a memory 120 , an input/output (I/O) adapter 130 A, and a bus 140 . Components of the host 100 A may exchange signals and data through the bus 140 .
  • the processor 110 may include a circuit, interfaces, or program code for processing data and controlling operations of the components of the computing system 1000 A.
  • the processor 110 may include a central processing unit (CPU), an advanced RISC machine (ARM) processor, or an application specific integrated circuit (ASIC).
  • the memory 120 may include a static random access memory (SRAM) or dynamic random access memory (DRAM), which stores data, commands, or program codes which may be needed for operations of the computing system 1000 A. Furthermore, the memory 120 may include a non-volatile memory. The memory 120 may store executable program code for operating at least one operating system (OS) and virtual machines (VMs). The memory 120 may also store program code that executes a VM monitor (VMM) for managing the VMs. The VMs and the virtualization program code executing the VMM may be included in host virtualization software (HV SW) 120 - 1 .
  • the processor 110 may execute at least one operating system and the VMs by executing the HV SW 120 - 1 stored in the memory 120 . Furthermore, the processor 110 may execute the VMM for managing the VMs. The processor 110 may control the components of the computing system 1000 A by the above method.
  • the I/O adapter 130 A is an adapter for connecting I/O devices to the host 100 A.
  • the I/O adapter 130 A may include a peripheral component interconnect (PCI) or PCI express (PCIe) adapter, a small computer system interface (SCSI) adapter, a fiber channel adapter, a serial advanced technology attachment (ATA), or the like.
  • the I/O adapter 130 A may include a circuit, interfaces, or code capable of communicating information with devices connected to the computing system 1000 A.
  • the I/O adapter 130 A may include at least one standardized bus and at least one bus controller. Therefore, the I/O adapter 130 A may recognize devices connected to the bus 140 , assign identifiers to them, and allocate resources to the various devices connected to the bus 140 .
  • the I/O adapter 130 A may manage communications along the bus 140 .
  • the I/O adapter 130 A may be a PCI or PCIe system, and the I/O adapter 130 A may include a PCIe root complex and at least one PCIe switch or bridge.
  • the I/O adapter 130 A may be controlled by the VMM.
  • PCI defines a bus protocol used for connecting I/O devices to the processor 110 .
  • PCIe defines a physical communication layer as a high-speed serial interface while retaining the programming model defined by the PCI standard.
  • the storage device 200 may be realized as a solid state drive (SSD) or a hard disk drive (HDD).
  • the storage device 200 may be connected to the host 100 A by a directly attached storage (DAS) method.
  • the storage device 200 may be connected to the host 100 A by a network attached storage (NAS) method or a storage area network (SAN) method.
  • the storage device 200 includes a memory controller 210 and a memory device 220 .
  • the memory controller 210 may control the memory device 220 based on a command received from the host 100 A.
  • the memory controller 210 may control program (or write), read and erasure operations with respect to the memory device 220 by providing address, command and control signals to the memory device 220 .
  • the memory controller 210 may store device virtualization software (DV SW) 210 - 1 , access control software (AC SW) 210 - 2 , and virtual device mapping table information (VD MT) 210 - 3 .
  • the memory controller 210 may generate at least one virtual device by operating the DV SW 210 - 1 .
  • the memory controller 210 may generate specific identify device (ID) data for each virtual device during a virtualization process.
  • the memory controller 210 may divide the memory device 220 into a plurality of storage areas and generate specific ID data for each virtual device corresponding to the storage areas.
  • the ID data may be used to identify each device in the I/O adapter 130 A of the host 100 A.
  • the memory controller 210 divides a logical address or a virtual address corresponding to the storage area of the memory device 220 into a plurality of regions, and may generate specific ID data for each virtual device corresponding to a divided logical address region or a virtual address region.
  • the memory controller 210 may set a storage capacity of the virtual device to be larger than an amount of storage space assigned to an actual device. Furthermore, the storage capacity may be set differently for each virtual device.
  • the memory controller 210 may perform a virtualization process so that each virtual device may have an independent virtual device abstraction.
  • the VD MT 210 - 3 includes pieces of information required for retrieving a logical address corresponding to a virtual address for each of the virtual devices.
  • the VD MT 210 - 3 may include read or write access authority setting information of the VMs in each storage area.
  • the memory controller 210 may operate the DV SW 210 - 1 and generate a physical function device and at least one virtual device based on a virtualization request received from the host 100 A.
  • the physical function device may be set as a virtual device having access authority to a VMM of the host 100 A
  • the virtual device may be set as a virtual device assigned to a VM of the host 100 A.
  • the memory controller 210 may operate the AC SW 210 - 2 and convert a virtual address corresponding to a virtual device into a physical address corresponding to a physical storage area in response to an access request received from the host 100 A.
  • the memory controller 210 may perform a resource mapping process converting the virtual address into a logical address by using the VD MT 210 - 3 .
  • the memory controller 210 may perform an address conversion process converting the logical address resulting from the resource mapping process into the physical address.
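  • A minimal C sketch of the two-stage conversion described above (a resource mapping process followed by an address conversion process) is given below. The table layouts, fixed sizes, and function names are illustrative assumptions and are not taken from the disclosure; a real controller would keep these tables per virtual device and in non-volatile form.

      #include <stdint.h>
      #include <stdbool.h>

      #define NUM_VIRT_BLOCKS  1024u   /* assumed table sizes, for illustration only */
      #define NUM_LOG_BLOCKS   1024u

      /* virtual device mapping table (VD MT): virtual block -> logical block */
      static uint32_t vd_mapping_table[NUM_VIRT_BLOCKS];
      /* address conversion table (e.g., an L2P table): logical block -> physical block */
      static uint32_t l2p_table[NUM_LOG_BLOCKS];

      /* Resource mapping process: convert a virtual address into a logical address. */
      static bool resource_map(uint32_t virt_blk, uint32_t *log_blk)
      {
          if (virt_blk >= NUM_VIRT_BLOCKS)
              return false;
          *log_blk = vd_mapping_table[virt_blk];
          return true;
      }

      /* Address conversion process: convert the logical address into a physical address. */
      static bool address_convert(uint32_t log_blk, uint32_t *phys_blk)
      {
          if (log_blk >= NUM_LOG_BLOCKS)
              return false;
          *phys_blk = l2p_table[log_blk];
          return true;
      }

      /* Full path taken when an access request arrives from the host. */
      bool translate(uint32_t virt_blk, uint32_t *phys_blk)
      {
          uint32_t log_blk;
          return resource_map(virt_blk, &log_blk) && address_convert(log_blk, phys_blk);
      }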
  • FIG. 1 illustrates an example of a configuration in which the single storage device 200 is connected to the host 100 A.
  • a plurality of storage devices 200 may be connected to the host 100 A.
  • FIG. 2 is a block diagram illustrating a computing system according to another exemplary embodiment of the inventive concept.
  • the computing system 1000 B includes a host 100 B and a storage device 200 .
  • the computing system 1000 B may be a PC, a set-top box, a modem, a mobile device, or a server.
  • the host 100 B may include a processor 110 , a memory 120 , an I/O adapter 130 B, and a bus 140 . Components of the host 100 B may exchange signals and data through the bus 140 .
  • Since the processor 110 and the memory 120 of FIG. 2 are substantially the same as the processor 110 and memory 120 of FIG. 1 , the repeated descriptions are omitted herein. Furthermore, since the storage device 200 of FIG. 2 is also substantially the same as the storage device 200 of FIG. 1 , the repeated descriptions are omitted herein.
  • From among the components of the computing system 1000 B of FIG. 2 , the I/O adapter 130 B will be mainly explained.
  • the I/O adapter 130 B connects I/O devices to the host 100 B.
  • the I/O adapter 130 B may include a PCI or PCIe adapter, an SCSI adapter, a fiber channel adapter, a serial ATA, or the like.
  • the I/O adapter 130 B may include a circuit, interfaces, or code capable of communicating information with devices connected to the computing system 1000 B.
  • the I/O adapter 130 B may include at least one standardized bus and at least one bus controller. Therefore, the I/O adapter 130 B may recognize devices connected to the bus 140 , assign identifiers to them, and allocate resources to the various devices connected to the bus 140 . That is, the I/O adapter 130 B may manage communications along the bus 140 .
  • the I/O adapter 130 B may be a PCI or PCIe system, and the I/O adapter 130 B may include a PCIe root complex and at least one PCIe switch or bridge.
  • the I/O adapter 130 B may be controlled by a VMM.
  • the I/O adapter 130 B may include a single root I/O virtualization (SR-IOV) function 130 B- 1 .
  • the SR-IOV function 130 B- 1 has been developed to improve the I/O performance of a storage device in a server virtualization environment, and the SR-IOV function 130 B- 1 directly connects a VM of a server virtualization system to the storage device. Accordingly, in the computing system 1000 B including the SR-IOV function 130 B- 1 , at least one storage device or virtual device needs to be assigned to a single VM.
  • the SR-IOV function 130 B- 1 is defined by a standard that enables a single PCIe physical device under a single root port to be presented as several individual physical devices to the VMM or a guest OS.
  • a PCIe device that supports the SR-IOV function 130 B- 1 presents several instances of the PCIe device to the guest OS and the VMM. The number of virtual functions presented may vary according to the device.
  • the I/O adapter 130 B may directly connect the VMs with virtual devices of the storage device 200 , rather than via the VMM. Therefore, VMs of the host 100 B may be directly connected to the virtual function devices virtualized in the storage device 200 , rather than via the VMM, by using the SR-IOV function 130 B- 1 .
  • FIG. 3 is a block diagram illustrating a storage virtualization system 2000 A according to an exemplary embodiment of the inventive concept.
  • the storage virtualization system 2000 A of FIG. 3 may be a virtualization system corresponding to the computing system 1000 A of FIG. 1 .
  • the storage virtualization system 2000 A includes a VMM 300 , an I/O adapter 130 A, a control virtual monitor (CVM) 400 , a plurality of VMs (VM1 through VMj) 410 - 1 through 410 -j, and a storage device 200 A.
  • the VMM 300 , the I/O adapter 130 A, the CVM 400 , and the VMs 410 - 1 through 410 -j are software and/or hardware included in a host of a computing system.
  • Each of the VMs 410 - 1 through 410 -j may operate an OS and application programs to act like a physical computer.
  • the storage device 200 A includes a physical function device (PF) 201 and a plurality of virtual function devices (VF1 and VFj) 202 and 203 .
  • the PF 201 includes DV SW 201 - 1 , AC SW 201 - 2 , and physical function meta data (PF MD) 201 - 3 .
  • the VF1 202 includes AC SW 202 - 1 and VF1 MD 202 - 2
  • the VFj 203 includes AC SW 203 - 1 and VFj MD 203 - 2 .
  • the PF 201 controls a physical function of the storage device 200 A and controls the VF1 and VFj 202 and 203 .
  • the PF 201 may generate or delete the VF1 and VFj 202 and 203 by operating the DV SW 201 - 1 .
  • the PF 201 and the VF1 and VFj 202 and 203 have an independent configuration space, a memory space, and a message space.
  • the PF 201 may operate the AC SW 201 - 2 and convert a virtual address into a physical address corresponding to a physical storage area in response to an access request received from the host 100 A.
  • the PF 201 may perform a resource mapping process for converting the virtual address into a logical address by using the PF MD 201 - 3 .
  • the PF 201 may perform an address conversion process for converting the logical address resulting from the resource mapping process into the physical address.
  • the PF MD 201 - 3 may include virtual device mapping table information for retrieving a logical address corresponding to a virtual address and address conversion table information for retrieving a physical address corresponding to a logical address.
  • the PF MD 201 - 3 may include pieces of read or write access authority setting information corresponding to the VMs in each storage area.
  • the VF1 202 may operate the AC SW 202 - 1 and convert a virtual address into a physical address corresponding to a physical storage area in response to an access request received from the host 100 A.
  • the VF1 202 may perform a resource mapping process for converting the virtual address into a logical address by using the VF1 MD 202 - 2 .
  • the VF1 202 may perform an address conversion process for converting the logical address resulting from the resource mapping process into the physical address.
  • the VF1 MD 202 - 2 may include virtual device mapping table information for retrieving a logical address corresponding to a virtual address assigned to the VF1 202 and address conversion table information for retrieving a physical address corresponding to a logical address.
  • the VF1 MD 202 - 2 may include pieces of read or write access authority setting information corresponding to the VMs in each storage area.
  • the VFj 203 may also perform a resource mapping process and an address conversion process by using the same method used by the VF1 202 .
  • the I/O adapter 130 A transmits respective ID data about the PF 201 and the VF1 and VFj 202 and 203 , the configuration space, the memory space, and the message space to the VMM 300 or the CVM 400 .
  • the CVM 400 includes an interface for managing the VMM 300 and the VMs (VM1 through VMj) 410 - 1 through 410 -j.
  • the CVM 400 may assign the VF1 and VFj 202 and 203 to the VM1 through VMj 410 - 1 through 410 -j.
  • the VMM 300 may also assign the VF1 and VFj 202 and 203 to the VM1 through VMj 410 - 1 through 410 -j.
  • the CVM 400 or the VMM 300 may perform resource mapping and access authority setting with respect to the PF 201 and the VF1 and VFj 202 and 203 .
  • the PF 201 may provide an interface for the resource mapping and the access authority setting with respect to the VF1 and VFj 202 and 203 to the CVM 400 or the VMM 300 .
  • an initialization operation with respect to the VF1 and VFj 202 and 203 may be performed through the PF 201 .
  • The host manages the PF 201 so that only software with admin/root authority, such as the VMM 300 or the CVM 400 , may access the PF 201 . Otherwise, security issues may occur due to improperly set access authority.
  • Each VF1 and VFj 202 and 203 has independent virtual device abstraction.
  • Capacity of a virtual device may be set at an initialization time of the virtual device.
  • the capacity of the virtual device may be set to be larger than an amount of storage space assigned to an actual device.
  • This function may be called a "thin provisioning function".
  • With the "thin provisioning function", an actual block is assigned only when a write operation is performed on it. Therefore, the total capacity of all virtual devices provided by a storage device may be greater than the physical storage capacity of an actual (e.g., physical) storage device.
  • a storage space of the virtual device may be as large as the capacity of the virtual device and may be set differently for each virtual device.
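  • The allocate-on-write behavior behind thin provisioning can be sketched as follows; the table layout, pool sizes, and the trivial free-space allocator are assumptions made only to show why the advertised virtual capacity may exceed the physical capacity.

      #include <stdint.h>
      #include <stdbool.h>

      #define UNMAPPED       0xFFFFFFFFu
      #define NUM_VIRT_BLKS  4096u      /* advertised (virtual) capacity, assumed value */
      #define NUM_LOG_BLKS   1024u      /* actually available capacity, assumed value   */

      static uint32_t vd_map[NUM_VIRT_BLKS];   /* virtual block -> logical block */
      static uint32_t next_free_log_blk;       /* trivial free-space allocator   */

      void vd_init(void)
      {
          for (uint32_t i = 0; i < NUM_VIRT_BLKS; i++)
              vd_map[i] = UNMAPPED;            /* no backing block until first write */
          next_free_log_blk = 0;
      }

      /* A logical block is assigned only when a write actually targets the virtual block. */
      bool vd_write(uint32_t virt_blk)
      {
          if (virt_blk >= NUM_VIRT_BLKS)
              return false;
          if (vd_map[virt_blk] == UNMAPPED) {
              if (next_free_log_blk >= NUM_LOG_BLKS)
                  return false;                /* physical space exhausted */
              vd_map[virt_blk] = next_free_log_blk++;
          }
          /* ... program the data to the mapped logical block ... */
          return true;
      }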
  • read/write access authority may be set for each virtual block.
  • a virtual block permitting only a read operation may be set as a copy-on-write block.
  • a virtual block permitting only a write operation may also be set in the storage device 200 A.
  • the read/write access authority may be used for data transmission between two VMs by setting a block of one VM as “read-only” and a block of another VM as “write-only”. In general, block-access authority may be set so that both read/write operations may be permitted.
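  • A sketch of the per-virtual-block access-authority check described above is shown below. The bit encoding of the authority values is an assumption for illustration; setting one VM's block as "write-only" and another VM's block as "read-only" over the same logical block would rely on exactly this kind of check for data transmission between the two VMs.

      #include <stdint.h>
      #include <stdbool.h>

      enum access_authority {
          AUTH_RO = 1,                      /* read-only: read permitted, write denied  */
          AUTH_WO = 2,                      /* write-only: write permitted, read denied */
          AUTH_RW = AUTH_RO | AUTH_WO       /* read/write: both permitted               */
      };

      struct virt_block_entry {
          uint32_t logical_blk;             /* mapped logical block address             */
          uint8_t  authority;               /* one of the enum access_authority values  */
          bool     copy_on_write;           /* set for read-only blocks shared by VMs   */
      };

      /* Returns true if the requested operation is permitted on the virtual block. */
      bool access_permitted(const struct virt_block_entry *e, bool is_write)
      {
          if (is_write)
              return (e->authority & AUTH_WO) != 0;
          return (e->authority & AUTH_RO) != 0;
      }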
  • FIG. 4 is a block diagram illustrating a storage virtualization system 2000 B according to another exemplary embodiment of the inventive concept.
  • the storage virtualization system 2000 B of FIG. 4 may be a virtualization system corresponding to the computing system 1000 B of FIG. 2 .
  • the storage virtualization system 2000 B includes a VMM 300 , an I/O adapter 130 B, a CVM 400 , a plurality of VM1 through VMj 410 - 1 through 410 -j, and a storage device 200 A.
  • VMM 300 , the CVM 400 , the VM1 through VMj 410 - 1 through 410 -j, and the storage device 200 A of FIG. 4 are substantially the same as the VMM 300 , the CVM 400 , the VM1 through VMj 410 - 1 through 410 -j, and the storage device 200 A included in the storage virtualization system 2000 B of FIG. 3 , repeated descriptions are omitted.
  • An I/O adapter 130 B including an SR-IOV function 130 B- 1 is applied to the storage virtualization system 2000 B.
  • the CVM 400 and the VMs 410 - 1 through 410 -j may be directly connected to virtual devices 201 to 203 of the storage device 200 A, rather than via the VMM 300 . That is, the CVM 400 and the VMs 410 - 1 through 410 -j of a host may be directly connected to the PF 201 and the VF1 and VFj 202 and 203 of the storage device 200 A, rather than via the VMM 300 .
  • FIG. 5 is a diagram illustrating a flow of an access-control operation in a storage virtualization system according to an exemplary embodiment of the inventive concept.
  • FIG. 5 illustrates a main configuration to explain an access-control operation in the storage virtualization system 2000 B of FIG. 4 .
  • access authority of a virtual block address V 1 is set as “read-only (RO)” and that of a virtual block address V 2 is set as “read/write (RW)” in an address region assigned to a VF1 202 .
  • access authority of a virtual block address V 3 is set as “read-only (RO)” and that of a virtual block address V 4 is set as “read/write (RW)” in an address region assigned to a VFj 203 .
  • The virtual address mentioned above may be referred to as a "virtual block address", a "virtual logical block address", or a "pseudo logical block address". Furthermore, a "logical address" may be referred to as a "logical block address".
  • the virtual block addresses V 1 and V 3 in which the access authority is set as “read-only (RO)” permit only a read operation and not a write operation.
  • the virtual block addresses V 2 and V 4 in which the access authority is set as “read/write (RW)” permit both read and write operations.
  • In the storage device 200 A, a storage area L 0 of the physical layout in which data is actually stored represents a logical address region whose access authority is set as "read-only (RO)".
  • The remaining storage areas other than the storage area L 0 represent logical address regions whose access authority is set as "read/write (RW)". Therefore, the logical block addresses L 1 and L 2 belong to the "read-only (RO)" logical address region, and the logical block addresses L 3 and L 4 belong to the "read/write (RW)" logical address regions.
  • mapping information M 1 in the virtual device mapping table information of the VF1 MD 202 - 2 and mapping information M 2 in that of the VFj MD 203 - 2 show that the logical block address L 1 is mapped to the virtual block addresses V 1 and V 3 , respectively. Furthermore, mapping information M 3 and M 4 shows that the logical block address L 2 is respectively mapped to the virtual block addresses V 2 and V 4 .
  • When a write request with respect to a logical block address in which access authority is set as "read-only (RO)" is generated while the logical block address is set to copy-on-write, a new block is assigned and added to the mapping table after data is copied to a storage area corresponding to the new block.
  • When the VF1 202 receives a write request to the virtual block address V 2 corresponding to the logical block address L 2 and a copy-on-write option is set, the VF1 202 operates as described below.
  • mapping information M 3 which maps the logical block address L 2 to the virtual block address V 2 is changed to mapping information M 3 which maps the logical block address L 3 to the virtual block address V 2 .
  • When the VFj 203 receives a write request to the virtual block address V 4 corresponding to the logical block address L 2 and a copy-on-write option is set, the VFj 203 operates as described below.
  • mapping information M 4 which maps the logical block address L 2 to the virtual block address V 4 is changed to mapping information M 4 which maps the logical block address L 4 to the virtual block address V 4 .
  • FIG. 6 is a diagram illustrating a mapping table update operation according to an access-control operation in a storage virtualization system, according to an exemplary embodiment of the inventive concept.
  • FIG. 6 illustrates a mapping table update operation according to an access-control operation when access authority is set as in FIG. 5 .
  • The mapping information indicating the respective virtual logical block addresses (virtual LBAs), logical block addresses (LBAs), and physical block addresses (PBAs) of the virtual function devices VF1 and VFj before write requests for the virtual LBAs V 2 and V 4 are generated is as described below.
  • mapping information indicating virtual LBAs V 1 to V 4 of virtual function devices VF1 and VFj is (V 1 , L 1 , P 2 ), (V 2 , L 2 , P 1 ), (V 3 , L 1 , P 2 ), and (V 4 , L 2 , P 1 ).
  • the mapping information may be divided into a piece of mapping table information corresponding to the virtual LBA and the LBA and a piece of mapping table information corresponding to mapping of the LBA and the PBA.
  • the virtual function device VF1 is assigned, through the physical function device, a new LBA L 3 and a PBA P 3 for which read/write access authority is set, and the data stored in the PBA P 1 is copied to the PBA P 3 . Afterwards, the mapping information (V 2 , L 2 , P 1 ) is changed to (V 2 , L 3 , P 3 ).
  • Likewise, the virtual function device VFj is assigned, through the physical function device, a new LBA L 4 and a PBA P 4 for which read/write access authority is set, and the data stored in the PBA P 1 is copied to the PBA P 4 .
  • the mapping information (V 4 , L 2 , P 1 ) is changed to (V 4 , L 4 , P 4 ).
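  • The copy-on-write update of FIG. 6 can be sketched as follows: a write to a virtual LBA whose current mapping is read-only triggers assignment of a new block pair with read/write authority, a copy of the old data, and a rewrite of the mapping entry, e.g. (V2, L2, P1) becoming (V2, L3, P3). The in-memory block pool, the simple allocator, and the use of a single index for the new LBA and PBA are simplifying assumptions, not the disclosed implementation.

      #include <stdint.h>
      #include <stdbool.h>
      #include <string.h>

      #define BLOCK_SIZE       4096u
      #define NUM_PHYS_BLOCKS  64u                /* assumed small pool for illustration */

      static uint8_t  phys_store[NUM_PHYS_BLOCKS][BLOCK_SIZE];
      static uint32_t next_free = 3;              /* e.g. P3 is the next free block in FIG. 6 */

      struct vmap_entry {                         /* one row of the mapping tables in FIG. 6 */
          uint32_t virt_lba;                      /* e.g. V2                                 */
          uint32_t lba;                           /* e.g. L2, remapped to L3                 */
          uint32_t pba;                           /* e.g. P1, remapped to P3                 */
          bool     read_only;                     /* authority of the currently mapped block */
      };

      /* Handle a write to a virtual LBA when the copy-on-write option is set. */
      bool cow_write(struct vmap_entry *e, const uint8_t *data, uint32_t len)
      {
          if (e->read_only) {
              if (next_free >= NUM_PHYS_BLOCKS)
                  return false;                                /* pool exhausted           */
              uint32_t new_blk = next_free++;                  /* new LBA/PBA pair         */
              memcpy(phys_store[new_blk], phys_store[e->pba], BLOCK_SIZE); /* copy old data */
              e->lba = new_blk;                   /* (V2, L2, P1) -> (V2, L3, P3)           */
              e->pba = new_blk;
              e->read_only = false;               /* new block has read/write authority     */
          }
          if (len > BLOCK_SIZE)
              len = BLOCK_SIZE;
          memcpy(phys_store[e->pba], data, len);  /* perform the requested write            */
          return true;
      }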
  • FIG. 7 is a block diagram illustrating a storage device according to an exemplary embodiment of the inventive concept.
  • the storage device 200 of the computing system 1000 A of FIGS. 1 and 2 or the storage device 200 A of the storage virtualization system of FIGS. 3 and 4 may be implemented by a storage device 200 B of FIG. 7 .
  • the storage device 200 B includes a memory controller 210 and a memory device 220 B.
  • the storage device 200 B may be implemented by using an SSD.
  • the memory controller 210 may perform a control operation with respect to the memory device 220 B based on a command received from a host.
  • the memory controller 210 may control program (or write), read and erasure operations with respect to the memory device 220 B by providing address, command and control signals through a plurality of channels CH1 to CHM.
  • the memory controller 210 may store a DV SW 210 - 1 , an AC SW 210 - 2 , and a VD MT 210 - 3 .
  • the memory controller 210 may perform an operation for generating identify device (ID) data about each virtual device so that a single physical storage device may be recognized as a plurality of virtual devices.
  • the memory controller 210 may divide a storage area of the memory device 220 B into a plurality of storage areas that are initialized and may generate specific ID data about each virtual device corresponding to the storage areas. Furthermore, the memory controller 210 generates address, command and control signals for writing a plurality of pieces of the ID data generated in each storage area to the memory device 220 B. Furthermore, the memory controller 210 generates address, command and control signals for writing information about storage capacity of each virtual device and a physical address region to the memory device 220 B.
  • the memory controller 210 may generate a plurality of pieces of ID data so that a single physical storage device may be recognized as a plurality of virtual devices based on an initialized device virtualization command. Furthermore, the memory controller 210 may control the plurality of pieces of the generated ID data so as to write the ID data to the memory device 220 B.
  • a device virtualization command may be provided to the memory controller 210 through a manufacturer management tool during a storage device manufacturing process.
  • a device virtualization command may be provided to the memory controller 210 through the host.
  • ID data may include information about a model name, a firmware revision, a serial number, a worldwide name (WWN), a physical/logical sector size, features, and the like based on the serial advanced technology attachment (SATA) standard.
  • At least the information about the serial number and the WWN, from among the pieces of information included in the ID data, may be set differently for each virtual device corresponding to a physical storage device.
  • the storage capacity of each virtual device may be set to be a capacity obtained by dividing a maximum number of logical block addresses (max LBAs) of the physical storage device by N.
  • capacity of a virtual device may be set to be greater than a storage space assigned to an actual device. For example, only a block in which an actual write operation is performed is assigned when the write operation is performed. Therefore, the total capacity of all virtual devices provided by a storage device may be greater than physical storage capacity of an actual storage device.
  • a storage space of the virtual device is as large as the capacity of the virtual device and may be set differently for each virtual device.
  • a block size of the virtual device does not need to be equal to that of the logical device, but may not be smaller than that of a logical device. That is, a size of a virtual LBA is set to be equal to or larger than that of an LBA.
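  • The per-virtual-device ID data and capacity assignment described above might look like the sketch below: every virtual device receives a distinct serial number and WWN and a capacity equal to the maximum LBA count of the physical device divided by N, while the model name and firmware revision are simply reused. The record layout and field names are simplifying assumptions, not the SATA identify-device layout.

      #include <stdint.h>
      #include <stdio.h>

      struct id_data {                      /* simplified identify-device record       */
          char     model[16];
          char     firmware_rev[8];
          char     serial[20];              /* must differ for each virtual device     */
          uint64_t wwn;                     /* must differ for each virtual device     */
          uint64_t max_lba;                 /* capacity assigned to the virtual device */
      };

      /* Generate N pieces of ID data so one physical device is recognized as N devices. */
      void generate_id_data(struct id_data out[], uint32_t n,
                            uint64_t phys_max_lba, uint64_t base_wwn)
      {
          for (uint32_t i = 0; i < n; i++) {
              snprintf(out[i].model, sizeof out[i].model, "VIRT-SSD");
              snprintf(out[i].firmware_rev, sizeof out[i].firmware_rev, "FW01");
              snprintf(out[i].serial, sizeof out[i].serial, "SN-%08u", (unsigned)i);
              out[i].wwn     = base_wwn + i;         /* unique WWN per virtual device */
              out[i].max_lba = phys_max_lba / n;     /* max LBAs divided by N         */
          }
      }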
  • read/write access authority may be set with respect to a virtual block.
  • a virtual block permitting only a read operation may be set as a copy-on-write block.
  • a virtual block permitting only a write operation may also be set in the storage device 200 B.
  • the information about setting the access authority may be included in virtual device mapping table information.
  • When an identify device (ID) command is transmitted from the host to the memory controller 210 after a plurality of pieces of the ID data have been set, the memory controller 210 transmits the ID data to the host.
  • the memory controller 210 may read a plurality of pieces of the ID data from the memory device 220 B and transmit a plurality of pieces of the read ID data to the host.
  • the memory controller 210 may perform the access-control operation described in FIGS. 5 and 6 by using the AC SW 210 - 2 and the VD MT 210 - 3 .
  • the memory device 220 B may include at least one non-volatile memory chip (NVM) 220 - 1 .
  • the NVM 220 - 1 included in the memory device 220 B may be not only a flash memory chip, but may also be a phase change RAM (PRAM) chip, a ferroelectric RAM (FRAM) chip, a magnetic RAM (MRAM) chip, or the like.
  • the memory device 220 B may include at least one non-volatile memory chip and at least one volatile memory chip, or may include at least two types of non-volatile memory chips.
  • FIG. 8 is a block diagram illustrating a detailed configuration of the memory controller 210 in FIG. 7 , according to an exemplary embodiment of the inventive concept.
  • the memory controller 210 includes a processor 211 , a random access memory (RAM) 212 , a host interface 213 , a memory interface 214 , and a bus 215 .
  • the components of the memory controller 210 are electrically connected to each other via the bus 215 .
  • the processor 211 may control an overall operation of the storage device 200 B by using program code and pieces of data that are stored in the RAM 212 .
  • the processor 211 may read from the memory device 220 B program code and data which are necessary for controlling operations performed by the storage device 200 B, and may load the read program code and data into the RAM 212 .
  • the processor 211 may read from the memory device 220 B a DV SW 210 - 1 , an AC SW 210 - 2 , and a VD MT 210 - 3 and load the read DV SW 210 - 1 , AC SW 210 - 2 , and VD MT 210 - 3 into the RAM 212 .
  • the processor 211 loads one piece of ID data into the RAM 212 before executing the DV SW 210 - 1 . After executing the DV SW 210 - 1 , the processor 211 loads a plurality of pieces of ID data into the RAM 212 .
  • When the processor 211 receives the device virtualization command via the host interface 213 , the processor 211 divides a physical storage device into a plurality of virtual devices. For example, the processor 211 may set a plurality of pieces of ID data for one physical storage device.
  • the processor 211 may read the plurality of pieces of ID data from the RAM 212 and transmit the same to the host via the host interface 213 .
  • the RAM 212 stores data that is received via the host interface 213 or data that is received from the memory device 220 B via the memory interface 214 .
  • the RAM 212 may also store data that has been processed by the processor 211 .
  • the RAM 212 may store the plurality of pieces of ID data set in response to the device virtualization command.
  • the host interface 213 includes a protocol for exchanging data with a host that is connected to the memory controller 210 , and the memory controller 210 may interface with the host via the host interface 213 .
  • the host interface 213 may be implemented by using, but not limited to, an ATA interface, a SATA interface, a parallel advanced technology attachment (PATA) interface, a universal serial bus (USB) interface, a serial attached SCSI (SAS) interface, an SCSI interface, an embedded multimedia card (eMMC) interface, or a universal flash storage (UFS) interface.
  • the host interface 213 may receive a command, an address, and data from the host under the control of the processor 211 or may transmit data to the host.
  • the memory interface 214 is electrically connected to the memory device 220 B.
  • the memory interface 214 may transmit a command, an address, and data to the memory device 220 B under the control of the processor 211 or may receive data from the memory device 220 B.
  • the memory interface 214 may be configured to support NAND flash memory or NOR flash memory.
  • the memory interface 214 may be configured to perform software or hardware interleaving operations via a plurality of channels.
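  • One common way to realize such multi-channel interleaving is to stripe consecutive logical blocks across the channels so that sequential accesses proceed in parallel; the modulo policy and channel count below are assumptions for illustration, not the controller's actual scheduling.

      #include <stdint.h>

      #define NUM_CHANNELS 8u        /* CH1 to CHM in FIG. 7; value assumed for illustration */

      struct chip_location {
          uint32_t channel;          /* which channel services the block        */
          uint32_t offset;           /* block index within that channel's chips */
      };

      /* Stripe consecutive logical blocks across the channels. */
      struct chip_location interleave(uint32_t logical_blk)
      {
          struct chip_location loc;
          loc.channel = logical_blk % NUM_CHANNELS;
          loc.offset  = logical_blk / NUM_CHANNELS;
          return loc;
      }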
  • FIG. 9 is a block diagram illustrating a detailed configuration of a non-volatile memory chip configuring the memory device 220 B in FIG. 7 , according to an exemplary embodiment of the inventive concept.
  • the NVM 220 - 1 may be a flash memory chip.
  • the NVM 220 - 1 may include a memory cell array 11 , control logic 12 , a voltage generator 13 , a row decoder 14 , and a page buffer 15 .
  • the components included in the NVM 220 - 1 will now be described in detail.
  • the memory cell array 11 may be connected to at least one string selection line SSL, a plurality of word lines WL, and at least one ground selection line GSL, and may also be connected to a plurality of bit lines BL.
  • the memory cell array 11 may include a plurality of memory cells MC (see FIG. 11 ) that are disposed at intersections of the plurality of bit lines BL and the plurality of word lines WL.
  • each memory cell MC may have one selected from an erasure state and first through n-th programmed states P 1 through Pn that are distinguished from each other according to a threshold voltage.
  • n may be a natural number equal to or greater than 2.
  • For example, when each memory cell MC is a 2-bit level cell, n may be 3.
  • When each memory cell MC is a 3-bit level cell, n may be 7.
  • When each memory cell MC is a 4-bit level cell, n may be 15.
  • the plurality of memory cells MC may include multi-level cells.
  • embodiments of the inventive concept are not limited thereto, and the plurality of memory cells MC may include single-level cells.
  • the control logic 12 may receive a command signal CMD, an address signal ADDR, and a control signal CTRL from the memory controller 210 to output various control signals for writing the data to the memory cell array 11 or for reading the data from the memory cell array 11 . In this way, the control logic 12 may control overall operations of the NVM 220 - 1 .
  • control logic 12 may provide a voltage control signal CTRL_vol to the voltage generator 13 , may provide a row address signal X_ADDR to the row decoder 14 , and may provide a column address signal Y_ADDR to the page buffer 15 .
  • the voltage generator 13 may receive the voltage control signal CTRL_vol to generate various voltages for executing a program operation, a read operation and an erasure operation with respect to the memory cell array 11 .
  • the voltage generator 13 may generate a first drive voltage VWL for driving the plurality of word lines WL, a second drive voltage VSSL for driving the at least one string selection line SSL, and a third drive voltage VGSL for driving the at least one ground selection line GSL.
  • the first drive voltage VWL may be a program (or write) voltage, a read voltage, an erasure voltage, a pass voltage, or a program verification voltage.
  • the second drive voltage VSSL may be a string selection voltage, namely, an on voltage or an off voltage.
  • the third drive voltage VGSL may be a ground selection voltage, namely, an on voltage or an off voltage.
  • the row decoder 14 may be connected to the memory cell array 11 through the plurality of word lines WL and may activate some of the plurality of word lines WL in response to the row address signal X_ADDR received from the control logic 12 . In detail, during a read operation, the row decoder 14 may apply a read voltage to a word line selected from the plurality of word lines WL and apply a pass voltage to the remaining unselected word lines.
  • the row decoder 14 may apply a program voltage to the selected word line and apply the pass voltage to the unselected word lines. According to the present embodiment, the row decoder 14 may apply a program voltage to the selected word line and an additionally selected word line, in at least one selected from a plurality of program loops.
  • the page buffer 15 may be connected to the memory cell array 11 via the plurality of bit lines BL.
  • the page buffer 15 may operate as a sense amplifier so as to output data DATA stored in the memory cell array 11 .
  • the page buffer 15 may operate as a write driver so as to input data DATA to be stored in the memory cell array 11 .
  • FIG. 10 is a view of the memory cell array 11 in FIG. 9 , according to an exemplary embodiment of the inventive concept.
  • the memory cell array 11 may be a flash memory cell array.
  • the memory cell array 11 may include a plurality of memory blocks BLK 1 , . . . , and BLKa (where “a” is a positive integer which is equal to or greater than two) and each of the memory blocks BLK 1 , . . . , and BLKa may include a plurality of pages PAGE 1 , . . . , and PAGEb (where “b” is a positive integer which is equal to or greater than two).
  • each of the pages PAGE 1 , . . . , and PAGEb may include a plurality of sectors SEC1, . . .
  • FIG. 11 is a circuit diagram of a first memory block BLK 1 a included in the memory cell array in FIG. 9 , according to an exemplary embodiment of the inventive concept.
  • the first memory block BLK 1 a may be a NAND flash memory having a vertical structure.
  • a first direction is referred to as an x direction
  • a second direction is referred to as a y direction
  • a third direction is referred to as a z direction.
  • embodiments of the inventive concept are not limited thereto, and the first through third directions may vary.
  • the first memory block BLK 1 a may include a plurality of cell strings CST, a plurality of word lines WL, a plurality of bit lines BL, a plurality of ground selection lines GSL 1 and GSL 2 , a plurality of string selection lines SSL 1 and SSL 2 , and a common source line CSL.
  • the number of cell strings CST, the number of word lines WL, the number of bit lines BL, the number of ground selection lines GSL 1 and GSL 2 , and the number of string selection lines SSL 1 and SSL 2 may vary according to embodiments.
  • Each of the cell strings CST may include a string selection transistor SST, a plurality of memory cells MC, and a ground selection transistor GST that are serially connected to each other between a bit line BL corresponding to the cell string CST and the common source line CSL.
  • each cell string CST may further include at least one dummy cell.
  • each cell string CST may include at least two string selection transistors SST or at least two ground selection transistors GST.
  • Each cell string CST may extend in the third direction (z direction).
  • each cell string CST may extend in a vertical direction (z direction) perpendicular to the substrate.
  • the first memory block BLK 1 a including the cell strings CST may be referred to as a vertical-direction NAND flash memory.
  • the integration density of the memory cell array 11 may be increased.
  • the plurality of word lines WL may each extend in the first direction x and the second direction y, and each word line WL may be connected to memory cells MC corresponding thereto. Accordingly, a plurality of memory cells MC arranged adjacent to each other on the same plane in the second direction y may be connected to each other by the same word line WL. In detail, each word line WL may be connected to gates of memory cells MC to control the memory cells MC. In this case, the plurality of memory cells MC may store data and may be programmed, read, or erased via the connected word line WL.
  • the plurality of bit lines BL may extend in the first direction x and may be connected to the string selection transistors SST. Accordingly, a plurality of string selection transistors SST arranged adjacent to each other in the first direction x may be connected to each other by the same bit line BL. In detail, each bit line BL may be connected to drains of the plurality of string selection transistors SST.
  • the plurality of string selection lines SSL 1 and SSL 2 may each extend in the second direction y and may be connected to the string selection transistors SST. Accordingly, a plurality of string selection transistors SST arranged adjacent to each other in the second direction y may be connected to each other by string selection line SSL 1 or SSL 2 . In detail, each string selection line SSL 1 or SSL 2 may be connected to gates of the plurality of string selection transistors SST to control the plurality of string selection transistors SST.
  • the plurality of ground selection lines GSL 1 and GSL 2 may each extend in the second direction y and may be connected to the ground selection transistors GST. Accordingly, a plurality of ground selection transistors GST arranged adjacent to each other in the second direction y may be connected to each other by ground selection line GSL 1 or GSL 2 . In detail, each ground selection line GSL 1 or GSL 2 may be connected to gates of the plurality of ground selection transistors GST to control the plurality of ground selection transistors GST.
  • the ground selection transistors GST respectively included in the cell strings CST may be connected to each other by the common source line CSL.
  • the common source line CSL may be connected to sources of the ground selection transistors GST.
  • a plurality of memory cells MC connected to the same word line WL and to the same string selection line, for example, string selection line SSL 1 or SSL 2 and arranged adjacent to each other in the second direction y may be referred to as a page PAGE.
  • a plurality of memory cells MC connected to a first word line WL 1 and to a first string selection line SSL 1 and arranged adjacent to each other in the second direction y may be referred to as a first page PAGE 1 .
  • a plurality of memory cells MC connected to the first word line WL 1 and to a second string selection line SSL 2 and arranged adjacent to each other in the second direction y may be referred to as a second page PAGE 2 .
  • During a program operation, 0V may be applied to a bit line BL, an on voltage may be applied to a string selection line SSL, and an off voltage may be applied to a ground selection line GSL.
  • The on voltage may be equal to or greater than the threshold voltage so that a string selection transistor SST is turned on, and the off voltage may be smaller than the threshold voltage so that the ground selection transistor GST is turned off.
  • a program voltage may be applied to a memory cell selected from the memory cells MC, and a pass voltage may be applied to the remaining unselected memory cells. In response to the program voltage, electric charges may be injected into the memory cells MC due to F-N tunneling. The pass voltage may be greater than the threshold voltage of the memory cells MC.
  • During an erasure operation, an erasure voltage may be applied to the body of the memory cells MC, and 0V may be applied to the word lines WL. Accordingly, data stored in the memory cells MC may be temporarily erased.
  • FIG. 12 is a schematic view illustrating an out of band (OOB) sequence between a host and a device in a storage virtualization system and a device-recognition method, according to an embodiment of the inventive concept.
  • a host 2100 may be the host 100 A or 100 B of FIG. 1 or 2
  • a device 2200 may be the storage device 200 of FIG. 2 .
  • the host 2100 transmits a COMRESET signal, which is an analog signal, to the device 2200 .
  • the device 2200 transmits a COMINIT signal, which is an analog signal, to the host 2100 .
  • the host 2100 transmits a COMWAKE signal, which is an analog signal, to the device 2200 .
  • the device 2200 transmits a COMWAKE signal, which is an analog signal, to the host 2100 .
  • In operations S 5 and S 6 , the host 2100 and the device 2200 adjust a communication speed while exchanging a primitive align signal ALIGN. In this way, the initialization process is completed.
  • the host 2100 transmits an ID command ID CMD to the device 2200 .
  • the device 2200 transmits ID data ID_DATA set by the device 2200 to the host 2100 .
  • the device 2200 transmits a plurality of pieces of ID data to the host 2100 . Accordingly, the host 2100 recognizes the physical device 2200 as a plurality of virtual devices.
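  • The host-side ordering of FIG. 12 can be summarized by the sketch below. COMRESET, COMINIT, COMWAKE and ALIGN are electrical/primitive signals of the SATA link; the send and wait_for functions here are hypothetical stand-ins used only to show the sequence that ends with the ID command returning one piece of ID data per virtual device.

      #include <stdio.h>

      /* Hypothetical stand-ins for the OOB signalling and the command/data exchange. */
      static void send(const char *signal)     { printf("host -> device: %s\n", signal); }
      static void wait_for(const char *signal) { printf("device -> host: %s\n", signal); }

      int main(void)
      {
          send("COMRESET");         /* host resets the link                        */
          wait_for("COMINIT");      /* device requests initialization              */
          send("COMWAKE");          /* host wakes the device                       */
          wait_for("COMWAKE");      /* device answers                              */
          send("ALIGN");            /* operations S5 and S6: speed negotiation     */
          wait_for("ALIGN");        /* initialization is now complete              */
          send("ID CMD");           /* identify-device command                     */
          wait_for("ID_DATA x N");  /* one piece of ID data per virtual device, so */
          return 0;                 /* the host recognizes N virtual devices       */
      }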
  • a device virtualization method performed in a computing system and a device recognition method performed in a storage virtualization system according to an embodiment of the inventive concept will now be described.
  • the methods of FIGS. 13 and 14 may be performed by the memory controller 210 of FIG. 7 . In detail, the methods of FIGS. 13 and 14 may be performed under the control of the processor 211 of the memory controller 210 of FIG. 8 .
  • FIG. 13 is a flowchart of a device virtualization method in the storage device 200 B according to an embodiment of the inventive concept.
  • the memory controller 210 of the storage device 200 B determines whether a device virtualization command is received.
  • the device virtualization command may be received from the host 100 A of FIG. 1 or the host 100 B of FIG. 2 .
  • the device virtualization command may be received via a manufacturer management tool during a device manufacturing process.
  • in response to the device virtualization command, the memory controller 210 of the storage device 200 B generates a plurality of pieces of ID data so that one physical storage device is recognized as a plurality of storage devices.
  • the memory controller 210 divides a storage area into a plurality of storage areas and generates respective ID data for each storage area.
  • the capacity of a virtual device may be set to be greater than the storage space actually assigned to that device. For example, a physical block is assigned only when a write operation is actually performed on it. Therefore, the total capacity of all virtual devices provided by the storage device 200 B may be greater than the physical storage capacity of the actual storage device.
  • the capacity of a virtual device, that is, the size of its storage space, may be set differently for each virtual device.
  • the memory controller 210 may divide the storage region into storage regions the number of which is indicated by the device virtualization command, and may generate different pieces of ID data for the storage regions. In another example, the memory controller 210 may divide the storage region into storage regions the number of which is set by default, and may generate different pieces of ID data for the storage regions.
  • read/write access authority may be set with respect to a virtual block assigned to a virtual device by the memory controller 210 .
  • a virtual block permitting only a read operation may be set as a copy-on-write block.
  • a virtual block permitting only a write operation may also be set by the memory controller 210 .
  • the read/write access authority set with respect to the virtual block assigned to the virtual device may be included in storage area data.
  • the memory controller 210 stores information about the storage regions and the plurality of pieces of ID data in the memory device 220 B.
  • FIG. 14 is a flowchart illustrating a method of processing a device recognition command in the storage device 200 B according to an embodiment of the inventive concept.
  • the memory controller 210 of the storage device 200 B determines whether the ID command ID CMD is received.
  • the ID command ID CMD may be received from the host 100 A of FIG. 1 or the host 100 B of FIG. 2 .
  • the memory controller 210 of the storage device 200 B transmits to the host a plurality of pieces of ID data read from the memory device 220 B.
  • the plurality of pieces of ID data are the pieces of ID data for the virtual devices that are derived from one physical storage device via the device virtualization of FIG. 13 .
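  • a minimal sketch of the flows of FIGS. 13 and 14 is given below: a device virtualization command splits one physical device into several storage regions, each with its own ID data and access authority, and an ID command then reports one piece of ID data per virtual device. The structure layout, field names, and region limit are assumptions for illustration, not the controller's actual metadata format.

```c
#include <stdio.h>

/* Hypothetical per-region metadata kept by the memory controller.
 * Field names, sizes, and the region limit are assumptions. */
#define MAX_REGIONS 8

enum access_authority { ACCESS_RW, ACCESS_RO_COW, ACCESS_WO };

struct region_info {
    char                  id_data[32];  /* ID data reported to the host            */
    unsigned              capacity_mb;  /* virtual capacity (may exceed physical)  */
    enum access_authority authority;    /* read/write access authority             */
};

static struct region_info g_regions[MAX_REGIONS];
static int g_region_count;

/* Device virtualization command (FIG. 13): split the device into 'count'
 * storage regions and give each region its own ID data and authority.     */
static int handle_device_virtualization(int count, unsigned virt_capacity_mb)
{
    if (count < 1 || count > MAX_REGIONS)
        return -1;
    for (int i = 0; i < count; i++) {
        snprintf(g_regions[i].id_data, sizeof g_regions[i].id_data,
                 "VIRT-DEV-%02d", i);                 /* distinct ID data per region */
        g_regions[i].capacity_mb = virt_capacity_mb;  /* thin-provisioned size       */
        g_regions[i].authority   = ACCESS_RW;
    }
    g_region_count = count;
    /* A real controller would now persist this table in the memory device. */
    return 0;
}

/* ID command (FIG. 14): report one piece of ID data per virtual device. */
static void handle_id_command(void)
{
    for (int i = 0; i < g_region_count; i++)
        printf("ID_DATA[%d] = %s (%u MB)\n", i,
               g_regions[i].id_data, g_regions[i].capacity_mb);
}

int main(void)
{
    handle_device_virtualization(2, 4096);  /* two virtual devices, 4 GiB each */
    handle_id_command();
    return 0;
}
```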
  • FIG. 15 is a flowchart illustrating an initialization and a device-recognition method in a storage virtualization system according to an exemplary embodiment of the inventive concept.
  • the initialization and device-recognition method of FIG. 15 may be performed in the host 100 A or 100 B of the computing system 1000 A or 1000 B of FIG. 1 or 2 .
  • a host, for example, the host 100 A or 100 B, performs an initialization operation for transmission and reception with the storage device 200 connected to the host.
  • the host to which the storage device 200 is connected may perform the initialization operation for transmission and reception by using an OOB sequence based on the SATA standard.
  • the initialization operation for transmission and reception may be performed based on operations S 1 through S 6 of FIG. 12 .
  • the host 100 A or 100 B determines whether the initialization operation for transmission and reception has been successfully completed. For example, the host 100 A or 100 B determines whether operations S 1 through S 6 of FIG. 12 have been successfully completed.
  • based on the ID command ID CMD, the host 100 A or 100 B receives a plurality of pieces of ID data from the storage device 200 for which the plurality of pieces of ID data have been set via device virtualization.
  • the host 100 A or 100 B allocates virtual devices to VMs, based on the received plurality of pieces of ID data.
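  • the host-side flow of FIG. 15 can be sketched as follows: initialize the link, verify that initialization succeeded, collect the pieces of ID data, and hand one virtual device to each VM. The helpers below (oob_initialize, read_id_data, assign_to_vm) are placeholders standing in for the real transport and VMM interfaces.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical host-side flow of FIG. 15. The helpers below stand in for
 * the real transport and VMM interfaces and are placeholders only. */
#define MAX_VIRT_DEVS 8

static bool oob_initialize(void)  { return true; }   /* operations S1 through S6 */

static int read_id_data(char ids[][16], int max)
{
    /* Stand-in for the ID command: pretend the device reports two IDs. */
    if (max < 2)
        return 0;
    snprintf(ids[0], sizeof ids[0], "VIRT-DEV-00");
    snprintf(ids[1], sizeof ids[1], "VIRT-DEV-01");
    return 2;
}

static void assign_to_vm(int vm, const char *id)
{
    printf("VM%d <- virtual device %s\n", vm, id);
}

int main(void)
{
    if (!oob_initialize()) {                      /* initialization for Tx/Rx   */
        fprintf(stderr, "link initialization failed\n");
        return 1;
    }
    char ids[MAX_VIRT_DEVS][16];
    int n = read_id_data(ids, MAX_VIRT_DEVS);     /* plurality of ID data       */
    for (int i = 0; i < n; i++)
        assign_to_vm(i, ids[i]);                  /* one virtual device per VM  */
    return 0;
}
```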
  • FIG. 16 is a flowchart illustrating a virtualization method in a storage virtualization system according to another exemplary embodiment of the inventive concept.
  • the virtualization method of FIG. 16 may be performed in the storage virtualization system 2000 A of FIG. 3 or the storage virtualization system 2000 B of FIG. 4 .
  • the storage device 200 A generates a physical function (PF) and at least one virtual function device in response to a virtualization request received from the VMM 300 or the CVM 400 , and provides information about the PF and the at least one virtual function device to the VMM 300 or the CVM 400 .
  • the VMM 300 or CVM 400 assigns the virtual function devices VF1 and VFj to the virtual machines VMs based on the information about the virtual function devices VF1 and VFj received from the storage device 200 A.
  • the VMM 300 or CVM 400 controls access setting corresponding to the at least one virtual device.
  • the VMM 300 or CVM 400 may set resource mapping and access authority corresponding to the virtual function device through an interface provided to the storage device 200 A.
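  • a schematic rendering of the FIG. 16 flow, as seen from the VMM or CVM, is shown below: request virtualization, receive the reported virtual function devices, assign each one to a VM, and configure its resource mapping and access authority. The structures and calls are illustrative assumptions and are not an SR-IOV or vendor API.

```c
#include <stdio.h>

/* Schematic rendering of FIG. 16 from the VMM/CVM side. The structures and
 * function names are illustrative assumptions, not an SR-IOV or vendor API. */
struct vf_info {
    int      vf_id;
    unsigned lba_start;
    unsigned lba_count;
};

/* Step 1: ask the device to create a PF and virtual function devices; the
 * device reports information about each virtual function device.          */
static int request_virtualization(struct vf_info *vfs, int requested)
{
    for (int i = 0; i < requested; i++) {
        vfs[i].vf_id     = i + 1;
        vfs[i].lba_start = (unsigned)i * 1000000u;
        vfs[i].lba_count = 1000000u;
    }
    return requested;
}

/* Step 3: configure resource mapping and access authority for one VF
 * through the interface the storage device exposes for that purpose.      */
static void configure_vf(const struct vf_info *vf, int vm_id, const char *authority)
{
    printf("VF%d: LBA %u..%u mapped to VM%d, authority=%s\n",
           vf->vf_id, vf->lba_start, vf->lba_start + vf->lba_count - 1,
           vm_id, authority);
}

int main(void)
{
    struct vf_info vfs[2];
    int n = request_virtualization(vfs, 2);   /* VF1 .. VFj (here j = 2) */
    for (int i = 0; i < n; i++)
        configure_vf(&vfs[i], i, i == 0 ? "RW" : "RO");  /* Step 2: assign to VMs */
    return 0;
}
```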
  • FIG. 17 is a flowchart illustrating an access control method in a storage virtualization system according to an exemplary embodiment of the inventive concept.
  • the access control method of FIG. 17 may be performed in the virtual function device VF1 or VFj of the storage virtualization system 2000 A of FIG. 3 or the storage virtualization system 2000 B of FIG. 4 .
  • the virtual function device VF1 or VFj determines whether a write request is received from a VM.
  • the virtual function device VF1 or VFj determines whether the VM generating the write request has access authority. For example, the virtual function device VF1 or VFj determines whether identification information of the VM generating the write request matches identification information of a VM assigned to the virtual function device VF1 or VFj.
  • the virtual function device VF1 or VFj determines whether the write request corresponds to a storage area which may not be written to. For example, the virtual function device VF1 or VFj determines whether the write request is for writing to an LBA for which the access authority is set to "read-only (RO)".
  • when the virtual function device VF1 or VFj determines in operation S 540 that the write request corresponds to a storage area which may not be written to, the virtual function device VF1 or VFj assigns a new PBA and LBA and copies the data. That is, the virtual function device VF1 or VFj copies the data stored in the PBA corresponding to the LBA for which the write request was received to the newly assigned PBA.
  • the mapping information of the virtual function device VF1 or VFj is changed according to the copy operation. That is, as described with reference to FIG. 6 , the mapping information that was assigned to the virtual function device before the copy operation is changed to the newly assigned PBA and LBA according to the copy operation.
  • the virtual function device VF1 or VFj then performs the requested write operation on the newly assigned PBA and LBA. In this way, data is copied only when a write is actually requested to a storage area shared by VMs. Therefore, it is possible to improve performance and extend the lifespan of the storage virtualization system by reducing unnecessary writing and copying of data.
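  • the copy-on-write decision path of FIG. 17 can be sketched as a small write handler: reject writes from a VM without access authority, and for a write to a read-only (copy-on-write) LBA, assign a new PBA, copy the old data, update the mapping, and only then perform the write. Table sizes, field names, and the simple free-block allocator below are assumptions for illustration.

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Hypothetical copy-on-write write path for a virtual function device,
 * following the decisions of FIG. 17. Table sizes, field names, and the
 * simple free-block allocator are assumptions for illustration only.     */
#define NUM_LBAS 8

struct vf_state {
    int  owner_vm;              /* VM assigned to this virtual function device */
    int  l2p[NUM_LBAS];         /* LBA -> PBA mapping                          */
    bool read_only[NUM_LBAS];   /* RO blocks are treated as copy-on-write      */
};

static char g_flash[NUM_LBAS * 2][16];   /* toy physical blocks                 */
static int  g_next_free_pba = NUM_LBAS;  /* next unassigned PBA (assumption)    */

static int vf_write(struct vf_state *vf, int vm, int lba, const char *data)
{
    if (vm != vf->owner_vm)                  /* requesting VM lacks access authority */
        return -1;
    if (vf->read_only[lba]) {                /* write hits a read-only (COW) area    */
        if (g_next_free_pba >= NUM_LBAS * 2)
            return -1;                       /* no free physical block left          */
        int new_pba = g_next_free_pba++;     /* assign a new PBA                     */
        memcpy(g_flash[new_pba], g_flash[vf->l2p[lba]], sizeof g_flash[0]);
        vf->l2p[lba] = new_pba;              /* update the VF's mapping information  */
        vf->read_only[lba] = false;          /* the private copy is now writable     */
    }
    /* Perform the requested write on the (possibly newly assigned) PBA. */
    snprintf(g_flash[vf->l2p[lba]], sizeof g_flash[0], "%s", data);
    return 0;
}

int main(void)
{
    struct vf_state vf1 = { .owner_vm = 1 };
    for (int i = 0; i < NUM_LBAS; i++) { vf1.l2p[i] = i; vf1.read_only[i] = true; }

    vf_write(&vf1, 1, 3, "hello");   /* triggers copy-on-write for LBA 3 */
    printf("LBA 3 -> PBA %d : %s\n", vf1.l2p[3], g_flash[vf1.l2p[3]]);
    return 0;
}
```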
  • FIG. 18 is a block diagram illustrating an electronic device 3000 to which a storage device is applied according to an exemplary embodiment of the inventive concept.
  • the electronic device 3000 includes a processor 3010 , a RAM 3020 , a storage device 3030 , an input/output (I/O) device 3040 , and a bus 3050 .
  • the electronic device 3000 may further include ports which are capable of communicating with a video card, a sound card, a memory card, a USB device or other electronic devices.
  • the electronic device 3000 may be implemented by using a personal computer, a laptop computer, a mobile device, a personal digital assistant (PDA), a digital camera, or the like.
  • the bus 3050 refers to a transmission channel via which data, a command signal, an address signal, and control signals are transmitted between components of the electronic device 3000 other than the bus 3050 .
  • the processor 3010 may execute specific calculations or specific tasks.
  • the processor 3010 may be a micro-processor or a central processing unit (CPU).
  • the processor 3010 may communicate with the RAM 3020 , the storage device 3030 , and the I/O device 3040 through the bus 3050 , such as an address bus, a control bus, or a data bus.
  • the processor 3010 may be connected to an expansion bus such as a peripheral component interconnect (PCI) bus.
  • the RAM 3020 may operate as a main memory, and may be implemented by using a DRAM or SRAM.
  • the storage device 3030 includes a memory controller 3031 and a memory device 3032 .
  • the storage device 3030 may be the storage device 200 B of FIG. 11 .
  • the memory controller 3031 and the memory device 3032 may be the memory controller 210 and the memory device 220 B of FIG. 7 , respectively.
  • the I/O device 3040 may include an input device, such as a keyboard, a keypad or a mouse, and an output device, such as a printer or a display.
  • the processor 3010 may perform a calculation or process data in accordance with a user command input via the I/O device 3040 . To perform a calculation or process data in accordance with a user command, the processor 3010 may transmit to the storage device 3030 a request to read data from the storage device 3030 or write data to the storage device 3030 .
  • the storage device 3030 may perform a read operation or a write operation according to the request received from the processor 3010 .

US15/216,312 2015-07-28 2016-07-21 Storage device and storage virtualization system Abandoned US20170031832A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/810,500 US11397607B2 (en) 2015-07-28 2020-03-05 Storage device and storage virtualization system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2015-0106773 2015-07-28
KR1020150106773A KR102473665B1 (ko) Storage device and storage virtualization system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/810,500 Continuation US11397607B2 (en) 2015-07-28 2020-03-05 Storage device and storage virtualization system

Publications (1)

Publication Number Publication Date
US20170031832A1 true US20170031832A1 (en) 2017-02-02

Family

ID=57883442

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/216,312 Abandoned US20170031832A1 (en) 2015-07-28 2016-07-21 Storage device and storage virtualization system
US16/810,500 Active 2036-08-02 US11397607B2 (en) 2015-07-28 2020-03-05 Storage device and storage virtualization system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/810,500 Active 2036-08-02 US11397607B2 (en) 2015-07-28 2020-03-05 Storage device and storage virtualization system

Country Status (2)

Country Link
US (2) US20170031832A1 (ko)
KR (1) KR102473665B1 (ko)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11068203B2 (en) * 2018-08-01 2021-07-20 Micron Technology, Inc. NVMe direct virtualization with configurable storage
CN110346295B (zh) * 2019-07-15 2022-03-04 北京神州同正科技有限公司 Defect composite locating method and apparatus, device, and storage medium
KR102568906B1 (ko) 2021-04-13 2023-08-21 에스케이하이닉스 주식회사 PCIe device and operating method thereof
KR102570943B1 (ko) 2021-04-13 2023-08-28 에스케이하이닉스 주식회사 PCIe device and operating method thereof
US11928070B2 (en) 2021-04-13 2024-03-12 SK Hynix Inc. PCIe device
CN114625484A (zh) * 2022-03-31 2022-06-14 苏州浪潮智能科技有限公司 Virtualization implementation method and apparatus, electronic device, medium, and ARM platform
US20240168905A1 (en) 2022-11-18 2024-05-23 Samsung Electronics Co., Ltd. Centralized storage device, in-vehicle electronic system including the same, and method of operating the same

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090144731A1 (en) * 2007-12-03 2009-06-04 Brown Aaron C System and method for distribution of resources for an i/o virtualized (iov) adapter and management of the adapter through an iov management partition
US20100138592A1 (en) * 2008-12-02 2010-06-03 Samsung Electronics Co. Ltd. Memory device, memory system and mapping information recovering method
US20130097377A1 (en) * 2011-10-18 2013-04-18 Hitachi, Ltd. Method for assigning storage area and computer system using the same
US20140281040A1 (en) * 2013-03-13 2014-09-18 Futurewei Technologies, Inc. Namespace Access Control in NVM Express PCIe NVM with SR-IOV
US20140281150A1 (en) * 2013-03-12 2014-09-18 Macronix International Co., Ltd. Difference l2p method
US20160077975A1 (en) * 2014-09-16 2016-03-17 Kove Corporation Provisioning of external memory
US20160098367A1 (en) * 2014-09-07 2016-04-07 Technion Research And Development Foundation Ltd. Logical-to-physical block mapping inside the disk controller: accessing data objects without operating system intervention
US20160231929A1 (en) * 2015-02-10 2016-08-11 Red Hat Israel, Ltd. Zero copy memory reclaim using copy-on-write

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4574315B2 (ja) 2004-10-07 2010-11-04 株式会社日立製作所 Storage apparatus and configuration management method for the storage apparatus
US8650342B2 (en) 2006-10-23 2014-02-11 Dell Products L.P. System and method for distributed address translation in virtualized information handling systems
WO2008120325A1 (ja) 2007-03-28 2008-10-09 Fujitsu Limited Switch, information processing apparatus, and address translation method
US8010763B2 (en) * 2007-08-02 2011-08-30 International Business Machines Corporation Hypervisor-enforced isolation of entities within a single logical partition's virtual address space
US8443156B2 (en) 2009-03-27 2013-05-14 Vmware, Inc. Virtualization system using hardware assistance for shadow page table coherence
US8646028B2 (en) 2009-12-14 2014-02-04 Citrix Systems, Inc. Methods and systems for allocating a USB device to a trusted virtual machine or a non-trusted virtual machine
US8473947B2 (en) 2010-01-18 2013-06-25 Vmware, Inc. Method for configuring a physical adapter with virtual function (VF) and physical function (PF) for controlling address translation between virtual disks and physical storage regions
GB2478727B (en) 2010-03-15 2013-07-17 Advanced Risc Mach Ltd Translation table control
US8386749B2 (en) 2010-03-16 2013-02-26 Advanced Micro Devices, Inc. Address mapping in virtualized processing system
US9372812B2 (en) 2011-12-22 2016-06-21 Intel Corporation Determining policy actions for the handling of data read/write extended page table violations
EP2831715A1 (en) * 2012-04-26 2015-02-04 Hitachi, Ltd. Information storage system and method of controlling information storage system
TWI514140B (zh) * 2013-02-05 2015-12-21 Via Tech Inc 非揮發性記憶裝置及其操作方法
US9639476B2 (en) 2013-09-26 2017-05-02 Cavium, Inc. Merged TLB structure for multiple sequential address translations
KR101564293B1 (ko) 2013-10-02 2015-10-29 포항공과대학교 산학협력단 Method and apparatus for device virtualization
US10031767B2 (en) * 2014-02-25 2018-07-24 Dynavisor, Inc. Dynamic information virtualization


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109032965A (zh) * 2017-06-12 2018-12-18 华为技术有限公司 Data reading method, host, and storage device
US10353826B2 (en) 2017-07-14 2019-07-16 Arm Limited Method and apparatus for fast context cloning in a data processing system
US10467159B2 (en) 2017-07-14 2019-11-05 Arm Limited Memory node controller
US10489304B2 (en) 2017-07-14 2019-11-26 Arm Limited Memory address translation
US10534719B2 (en) 2017-07-14 2020-01-14 Arm Limited Memory system for a data processing network
US10565126B2 (en) * 2017-07-14 2020-02-18 Arm Limited Method and apparatus for two-layer copy-on-write
CN110869916A (zh) * 2017-07-14 2020-03-06 Arm有限公司 Method and apparatus for two-layer copy-on-write
US10592424B2 (en) 2017-07-14 2020-03-17 Arm Limited Range-based memory system
US10613989B2 (en) 2017-07-14 2020-04-07 Arm Limited Fast address translation for virtual machines
TWI764503B (zh) * 2018-03-14 2022-05-11 美商超捷公司 Decoders for analog neural memory in a deep learning artificial neural network
US10884850B2 (en) 2018-07-24 2021-01-05 Arm Limited Fault tolerant memory system
CN117369734A (zh) * 2023-12-08 2024-01-09 浪潮电子信息产业股份有限公司 Storage resource management system and method, and storage virtualization system

Also Published As

Publication number Publication date
US20200201669A1 (en) 2020-06-25
KR20170013713A (ko) 2017-02-07
US11397607B2 (en) 2022-07-26
KR102473665B1 (ko) 2022-12-02

Similar Documents

Publication Publication Date Title
US11397607B2 (en) Storage device and storage virtualization system
US9817717B2 (en) Stripe reconstituting method performed in storage system, method of performing garbage collection by using the stripe reconstituting method, and storage system performing the stripe reconstituting method
US20210382864A1 (en) Key-value storage device and operating method thereof
KR102094334B1 (ko) 비휘발성 멀티-레벨 셀 메모리 시스템 및 상기 시스템에서의 적응적 데이터 백업 방법
KR102565895B1 (ko) 메모리 시스템 및 그것의 동작 방법
US9619176B2 (en) Memory controller, storage device, server virtualization system, and storage device recognizing method performed in the server virtualization system
US20160179422A1 (en) Method of performing garbage collection and raid storage system adopting the same
KR102491624B1 (ko) 데이터 저장 장치의 작동 방법과 상기 데이터 저장 장치를 포함하는 시스템의 작동 방법
US11226895B2 (en) Controller and operation method thereof
US10296233B2 (en) Method of managing message transmission flow and storage device using the method
US11543986B2 (en) Electronic system including host, memory controller and memory device and method of operating the same
KR20190083148A (ko) 데이터 저장 장치 및 그것의 동작 방법 및 그것을 포함하는 데이터 처리 시스템
CN110888597A (zh) 存储设备、存储系统以及操作存储设备的方法
US20170171106A1 (en) Quality of service management method in fabric network and fabric network system using the same
KR101515621B1 (ko) 반도체 디스크 장치 및 그것의 랜덤 데이터 처리 방법
CN116027965A (zh) 存储装置和电子系统
US11900102B2 (en) Data storage device firmware updates in composable infrastructure
US11366725B2 (en) Storage device and method of operating the same
US12001413B2 (en) Key-value storage device and operating method thereof
CN110851382A (zh) 存储控制器及其操作方法和具有存储控制器的存储器系统
US11210223B2 (en) Storage device and operating method thereof
KR20230172729A (ko) 스토리지 장치 및 이의 동작 방법
KR20220167996A (ko) 메모리 컨트롤러 및 그 동작 방법
KR20240058541A (ko) 파워 오프 요청을 처리할 때 라이트 버퍼를 제어하는 스토리지 장치 및 그 동작 방법
KR20210083081A (ko) 메모리 컨트롤러 및 그 동작 방법

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HWANG, JOO-YOUNG;REEL/FRAME:039215/0095

Effective date: 20160329

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION