US20140365539A1 - Performing direct data manipulation on a storage device - Google Patents


Info

Publication number
US20140365539A1
US20140365539A1 (application Ser. No. 14/265,173)
Authority
US
Grant status
Application
Patent type
Prior art keywords
file
command
data
source
destination
Prior art date
Legal status
Abandoned
Application number
US14265173
Inventor
Don Alvin Trimmer
Sandeep Yadav
Pratap Singh
Current Assignee
NetApp Inc
Original Assignee
NetApp Inc
Priority date
Filing date
Publication date


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/30 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 17/30067 File systems; File servers
    • G06F 17/30115 File and folder operations
    • G06F 17/30129 Details of further file system functionalities
    • G06F 17/30138 Details of free space management performed by the file system
    • G06F 17/30182 File system types
    • G06F 17/30194 Distributed file systems
    • G06F 17/30197 Distributed file systems implemented using NAS architecture
    • G06F 17/302 Details of management specifically adapted to network area storage [NAS]

Abstract

A method and system for performing data manipulation on a storage device is disclosed. A data manipulation command is created on a computing device, wherein the computing device is separate from the storage device. The computing device is a client or a server that requests services of a storage system to store data on a storage medium. The computing device and the storage device are connected over a network. The computing device executes a host application, and its data is stored on the medium. The computing device issues a command to the storage device to be performed on the data. The storage device executes the command and sends the result to the computing device. As a result, the data is not sent to the computing device for manipulation.

Description

    PRIORITY CLAIM
  • This application is a continuation of U.S. patent application Ser. No. 11/740,471 entitled “PERFORMING DIRECT DATA MANIPULATION ON A STORAGE DEVICE” and filed on Apr. 26, 2007, which is expressly incorporated by reference herein.
  • FIELD OF INVENTION
  • The present invention generally relates to networked storage, and more particularly, to a method and system for directly manipulating data on a storage device.
  • BACKGROUND
  • A data storage system is a computer and related storage medium that enables storage or backup of large amounts of data. Storage systems, also known as storage appliances or storage servers, may support a network attached storage (NAS) computing environment. A NAS is a computing environment where file-based access is provided through a network, typically in a client/server configuration. A storage server can provide clients with block-level access to data stored in a set of mass storage devices, such as magnetic or optical storage disks.
  • A file server (also known as a “filer”) is a computer that provides file services relating to the organization of information on storage devices, such as disks. The filer includes a storage operating system that implements a file system to logically organize the information as a hierarchical structure of directories and files on the disks. Each “on-disk” file may be implemented as a set of disk blocks configured to store information, whereas the directory may be implemented as a specially-formatted file in which information about other files and directories is stored. A filer may be configured to operate according to a client/server model of information delivery to allow many clients to access files stored on the filer. In this model, the client may include an application, such as a file system protocol, executing on a computer that connects to the filer over a computer network. The computer network can include, for example, a point-to-point link, a shared local area network (LAN), a wide area network (WAN), or a virtual private network (VPN) implemented over a public network such as the Internet. Each client may request filer services by issuing file system protocol messages (in the form of packets) to the filer over the network.
  • A common file system type is a “write in-place” file system, in which the locations of the data structures (such as inodes and data blocks) on a disk are typically fixed. An inode is a data structure used to store information, such as metadata, about a file, whereas the data blocks are structures used to store the actual data for the file. The information contained in an inode may include information relating to ownership of the file, access permissions for the file, the size of the file, the file type, and references to locations on disk of the data blocks for the file. The references to the locations of the file data are provided by pointers, which may further reference indirect blocks. Indirect blocks, in turn, reference the data blocks, depending upon the quantity of data in the file. Changes to the inodes and data blocks are made “in-place” in accordance with the write in-place file system. If an update to a file extends the quantity of data for the file, an additional data block is allocated and the appropriate inode is updated to reference that data block.
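The inode layout described above can be sketched as follows. This is an illustrative model only, assuming hypothetical field names; it is not the on-disk format of any particular file system. The `append_block` helper shows the "write in-place" update described above: extending a file allocates a new data block and updates the inode to reference it.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Inode:
    owner: str                 # ownership of the file
    permissions: int           # access permissions, e.g. 0o644
    size: int                  # size of the file in bytes
    file_type: str             # "file" or "directory"
    direct_blocks: List[int] = field(default_factory=list)  # disk locations of data blocks
    indirect_block: Optional[int] = None  # location of a block of further pointers

def append_block(inode: Inode, new_block_addr: int, block_size: int) -> None:
    """Extend the file: allocate an additional data block and update the inode."""
    inode.direct_blocks.append(new_block_addr)
    inode.size += block_size

ino = Inode(owner="alice", permissions=0o644, size=4096, file_type="file",
            direct_blocks=[120])
append_block(ino, 121, 4096)  # the inode now references both blocks
```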
  • Another file system type is a write-anywhere file system that does not overwrite data on disks. If a data block on a disk is read from the disk into memory and “dirtied” with new data, the data block is written to a new location on the disk to optimize write performance. A write-anywhere file system may initially assume an optimal layout, such that the data is substantially contiguously arranged on the disks. The optimal disk layout results in efficient access operations, particularly for sequential read operations. A particular example of a write-anywhere file system is the Write Anywhere File Layout (WAFL®) file system available from Network Appliance, Inc. The WAFL file system is implemented within a microkernel as part of the overall protocol stack of the filer and associated disk storage. This microkernel is supplied as part of Network Appliance's Data ONTAP® storage operating system, residing on the filer that processes file service requests from network-attached clients.
  • As used herein, the term “storage operating system” generally refers to the computer-executable code operable on a storage system that manages data access. The storage operating system may, in the case of a filer, implement file system semantics, as the Data ONTAP® storage operating system does. The storage operating system can also be implemented as an application program operating on a general-purpose operating system, such as UNIX® or Windows®, or as a general-purpose operating system with configurable functionality, which is configured for storage applications as described herein.
  • Disk storage is typically implemented as one or more storage “volumes” that comprise physical storage disks, defining an overall logical arrangement of storage space. Currently available filer implementations can serve a large number of discrete volumes.
  • The disks within a volume can be organized as a Redundant Array of Independent (or Inexpensive) Disks (RAID). RAID implementations enhance the reliability and integrity of data storage through the writing of data “stripes” across a given number of physical disks in the RAID group, and the appropriate storing of parity information with respect to the striped data. In the example of a WAFL® file system, a RAID-4 implementation is advantageously employed, which entails striping data across a group of disks, and storing the parity within a separate disk of the RAID group. As described herein, a volume typically comprises at least one data disk and one associated parity disk (or possibly data/parity partitions within a single disk) arranged according to a RAID-4, or equivalent high-reliability, implementation.
  • NAS devices provide access to stored data using standard protocols, e.g., Network File System (NFS), Common Internet File System (CIFS), Internet Small Computer System Interface (iSCSI), etc. To manipulate the data stored on these devices, clients have to fetch the data using an access protocol, modify the data, and then write back the resulting modified data. Bulk data processing sometimes requires small manipulations of the data that need to be processed as fast as possible. This process (fetch-modify-write) is inefficient for bulk data processing, as it wastes processor time on protocol and network processing and increases network utilization. The closer the processing is to the stored data, the less time the data processing will take.
  • Traditional file systems are not particularly adept at handling large numbers (e.g., more than one million) of small objects (e.g., one kilobyte (KB) files). The typical way of addressing this problem is to use a container to hold several of the small objects. However, this solution leads to the problems of how to manage the containers and how to manage the objects within the container. Managing the containers presents the typical file system problems from a higher level in the containers.
  • In applications that use files for storing a list of records, a deleted record is often marked as “deleted” instead of being physically removed from the file. The file is periodically repacked to purge all of the deleted records and to reclaim space. This process is traditionally carried out by reading the file by an application via NFS, for example; packing the records by the application; and writing the file back to storage via NFS, for example. Again, this process uses the typical fetch-modify-write pattern, which makes the entire repacking process inefficient for the storage device.
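The repacking step described above can be sketched in a few lines. This is a minimal sketch, assuming a hypothetical record format in which each record is a `(deleted, payload)` pair; real applications would carry richer metadata.

```python
def repack(records):
    """Purge records marked deleted and pack the live records contiguously,
    reclaiming the space the deleted records occupied."""
    return [rec for rec in records if not rec[0]]

# A mail-folder file with one message marked "deleted" rather than removed.
mailbox = [(False, b"msg1"), (True, b"msg2"), (False, b"msg3")]
packed = repack(mailbox)  # only the live messages remain
```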
  • Another example of this type of IO-intensive task is reading a file and rewriting the data to another file, with the data being relocated within the destination file. In addition to using resources on the NAS device, this task also incurs a load on the network (sending the file back and forth) and a load on the client that is processing the data.
  • FIG. 1 is a flow diagram of an existing fetch-modify-write method 100 for manipulating data stored on a storage device. The method 100 operates between a server 102 and a data storage media 104. The server 102 and the data storage media 104 communicate over a network connection. The server 102 requests data to be manipulated from the storage media 104 (step 110). The storage media 104 retrieves the data and sends the data over the network to the server 102 (step 112). The server 102 manipulates the requested data (step 114) and sends the manipulated data back over the network to the storage media 104 (step 116).
  • As can be seen from FIG. 1, the method 100 requires that the data be sent over the network twice: once from the storage media 104 to the server 102 (step 112) and a second time from the server 102 to the storage media 104 (step 116).
  • Accordingly, there is a need for a technique for manipulating data on a storage device that avoids the limitations of the prior art solutions.
  • SUMMARY
  • The present invention describes a method and system for performing data manipulation on a storage device. A data manipulation command is created on a computing device, wherein the computing device is separate from the storage device. The computing device is a client or a server that requests services of a storage system to store data on a storage medium. The computing device and the storage device are connected over a network. The computing device executes a host application, and its data is stored on the medium. The computing device issues a command to the storage device to be performed on the data. The storage device executes the command and sends the result to the computing device.
  • The present invention provides advantages over existing solutions. Several of these advantages are described below by way of example. First, data manipulation performance is accelerated by moving the command execution as close to the data as possible. Second, because all of the data remains on the storage device, there is no network utilization in transmitting the data to and from the computer that requested the manipulation. Third, the requesting computer is not required to expend processing power to manipulate the data.
  • The present invention describes a set of high level commands that can be built for data manipulation and a mechanism to send the commands to the storage device. An exemplary command set may include input/output (IO) instructions (e.g., relocate, remove, etc.) that can be executed on the storage device. Each instruction has its own descriptor and a set of parameters that are relevant to it, e.g., a relocate instruction requires the following inputs: a range of data to relocate, the source of the data, and the destination for the data. A logical data manipulation event can be composed of many such instructions. The instructions are composed and packed by the initiator of the event and sent to the target storage device over the network. The target storage device unpacks the instructions and executes the instructions in a data optimized manner to arrive at the final result. The set of commands is evaluated for correctness, the commands are executed, and the results are returned.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A more detailed understanding of the invention may be had from the following description of preferred embodiments, given by way of example, and to be understood in conjunction with the accompanying drawings, wherein:
  • FIG. 1 is a flow diagram of an existing method for manipulating data stored on a storage device;
  • FIG. 2 is a block diagram of a network environment in which the present invention can be implemented;
  • FIG. 3 is a block diagram of the file server shown in FIG. 2;
  • FIG. 4 is a block diagram of the storage operating system shown in FIG. 3;
  • FIG. 5 is a flow diagram of a method for directly manipulating data on a storage device;
  • FIG. 6 is a flow diagram of a method for file repacking to be performed on a storage device; and
  • FIG. 7 is a diagram of a data file that is repacked according to the method shown in FIG. 6.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Network Environment
  • FIG. 2 is a block diagram of an exemplary network environment 200 in which the principles of the present invention are implemented. The environment 200 includes a number of clients 204 connected to a file server 206 over a network 202. The network 202 can be a local area network (LAN), a wide area network (WAN), a virtual private network (VPN) using communication links over the Internet, for example, or any combination of the three network types. For the purposes of this description, the term “network” includes any acceptable network architecture.
  • The file server 206, described further below, is configured to control storage of data and access to data that is located on a set 208 of interconnected storage volumes or disks 210. It is noted that the terms “storage volumes” and “disks” can be used interchangeably herein, without limiting the term “storage volumes” to disks. The term “storage volumes” can include any type of storage media, such as tapes or non-volatile memory.
  • Each of the devices attached to the network 202 includes an appropriate conventional network interface connection (not shown) for communicating over the network 202 using a communication protocol, such as Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hyper Text Transport Protocol (HTTP), Simple Network Management Protocol (SNMP), or Virtual Interface (VI) connections.
  • File Server
  • FIG. 3 is a detailed block diagram of an exemplary file server (“filer”) 206. It will be understood by one skilled in the art that the inventive concepts described herein apply to any type of file server, wherever implemented, including on a special-purpose computer, a general-purpose computer, or a standalone computer.
  • The file server 206 includes a processor 302, a memory 304, a network adapter 306, a nonvolatile random access memory (NVRAM) 308, and a storage adapter 310, all of which are interconnected by a system bus 312. Contained within the memory 304 is a storage operating system 314 that implements a file system to logically organize the information as a hierarchical structure of directories and files on the disks 210. In an exemplary embodiment, the memory 304 is addressable by the processor 302 and the adapters 306, 310 for storing software program code. The operating system 314, portions of which are typically resident in the memory 304 and executed by the processing elements, functionally organizes the filer by invoking storage operations in support of a file service implemented by the filer.
  • The network adapter 306 includes mechanical, electrical, and signaling circuitry needed to connect the filer 206 to clients 204 over the network 202. The clients 204 may be general-purpose computers configured to execute applications, such as database applications. Moreover, the clients 204 may interact with the filer 206 in accordance with a client/server information delivery model. That is, the client 204 requests the services of the filer 206, and the filer 206 returns the results of the services requested by the client 204 by exchanging packets defined by an appropriate networking protocol.
  • The storage adapter 310 interoperates with the storage operating system 314 and the disks 210 of the set of storage volumes 208 to access information requested by the client 204. The storage adapter 310 includes input/output (I/O) interface circuitry that couples to the disks 210 over an I/O interconnect arrangement, such as a Fibre Channel link. The information is retrieved by the storage adapter 310 and, if necessary, is processed by the processor 302 (or the adapter 310 itself) prior to being forwarded over the system bus 312 to the network adapter 306, where the information is formatted into appropriate packets and returned to the client 204.
  • In one exemplary implementation, the filer 206 includes a non-volatile random access memory (NVRAM) 308 that provides fault-tolerant backup of data, enabling the integrity of filer transactions to survive a service interruption based upon a power failure or other fault.
  • Storage Operating System
  • To facilitate the generalized access to the disks 210, the storage operating system 314 implements a write-anywhere file system that logically organizes the information as a hierarchical structure of directories and files on the disks. As noted above, in an exemplary embodiment described herein, the storage operating system 314 is the NetApp® Data ONTAP® operating system available from Network Appliance, Inc., that implements the WAFL® file system. It is noted that any other appropriate file system can be used, and as such, where the terms “WAFL®” or “file system” are used, those terms should be interpreted broadly to refer to any file system that is adaptable to the teachings of this invention.
  • Referring now to FIG. 4, the storage operating system 314 includes a series of software layers, including a media access layer 402 of network drivers (e.g., an Ethernet driver). The storage operating system 314 further includes network protocol layers, such as an Internet Protocol (IP) layer 404 and its supporting transport mechanisms, a Transmission Control Protocol (TCP) layer 406 and a User Datagram Protocol (UDP) layer 408.
  • A file system protocol layer 410 provides multi-protocol data access and includes support for the Network File System (NFS) protocol 412, the Common Internet File System (CIFS) protocol 414, and the Hyper Text Transfer Protocol (HTTP) 416. In addition, the storage operating system 314 includes a disk storage layer 420 that implements a disk storage protocol, such as a redundant array of independent disks (RAID) protocol, and a disk driver layer 422 that implements a disk access protocol such as, e.g., a Small Computer System Interface (SCSI) protocol.
  • Bridging the disk software layers 420-422 with the network and file system protocol layers 402-416 is a file system layer 430. Generally, the file system layer 430 implements a file system having an on-disk format representation that is block-based using data blocks and inodes to describe the files.
  • In the storage operating system 314, a data request follows a path 432 between the network 202 and the disks 210 through the various layers of the operating system. In response to a transaction request, the file system layer 430 first determines whether the requested data is resident in the filer's memory 304. If the data is not in the memory 304, the file system layer 430 indexes into an inode file using the inode number to access an appropriate entry and retrieve a logical volume block number. The file system layer 430 then passes the logical volume block number to the disk storage layer 420. The disk storage layer 420 maps the logical number to a disk block number and sends the disk block number to an appropriate driver (for example, an encapsulation of SCSI implemented on a Fibre Channel disk interconnection) in the disk driver layer 422. The disk driver accesses the disk block number on the disks 210 and loads the requested data in the memory 304 for processing by the filer 206. Upon completing the request, the filer 206 (and storage operating system 314) returns a reply, e.g., an acknowledgement packet defined by the CIFS specification, to the client 204 over the network 202.
  • It is noted that the storage access request data path 432 through the storage operating system layers described above may be implemented in hardware, software, or a combination of hardware and software. In an alternate embodiment of this invention, the storage access request data path 432 may be implemented as logic circuitry embodied within a field programmable gate array (FPGA) or in an application specific integrated circuit (ASIC). This type of hardware implementation increases the performance of the file services provided by the filer 206 in response to a file system request issued by a client 204.
  • By way of introduction, the present invention provides advantages over existing solutions. Several of these advantages are described below by way of example. First, data manipulation performance is accelerated by moving the command execution as close to where the data is stored as possible. Second, because all of the data remains on the storage device, there is no network utilization in transmitting the data to and from the computer that requested the manipulation. Third, the requesting computer is not required to expend processing power to manipulate the data.
  • The present invention describes a set of high level commands that can be built for data manipulation and a mechanism to send the commands to the storage device. An exemplary command set may include IO instructions (e.g., relocate, remove, etc.) that can be executed on the storage device. Each instruction has its own descriptor and a set of parameters that are relevant to it, e.g., a relocate instruction requires the following inputs: a range of data to relocate, the source of the data, and the destination for the data. A logical data manipulation event can be composed of many such instructions. The instructions are composed and packed by the initiator of the event and sent to the target storage device over the network. The target storage device unpacks the instructions and executes the instructions in a data optimized manner to arrive at the final result. As discussed in greater detail below, the concept of a “data optimized manner” can include the storage device reordering the instructions to improve (e.g., speed up) the performance of the instructions. The set of commands is evaluated for correctness, the commands are executed, and the results are returned.
  • In an exemplary embodiment, the present invention is implemented as an application executing on a computer operating system. For example, the storage device can include the NearStore® storage system running the NetApp® Data ONTAP® operating system available from Network Appliance, Inc. It is noted that the principles of the present invention are applicable to any type of storage device running any type of operating system.
  • FIG. 5 shows a flow diagram of a method 500 for directly manipulating data on a storage device. The method 500 utilizes a client 502 and a storage device 504, which communicate with each other over a network connection. A person of ordinary skill in the art would understand that the client 502 can be a client 204 as shown in FIG. 2 and that the storage device 504 can be a file server 206 shown in FIG. 2.
  • While the method 500 is described as using a client 502, any suitable computing device capable of communicating with the storage device 504 may be used. Client 502 utilizes services of the storage device 504 to store and manage data, such as, for example, files on a storage media 508, which can be a set of mass storage devices, such as magnetic or optical storage based disks or tapes. As used herein, the term “file” encompasses a container, an object, or any other storage entity. Interaction between the client 502 and the storage device 504 can enable the provision of storage services. That is, the client 502 may request the services of the storage device 504 and the storage device 504 may return the results of the services requested by the client 502, by exchanging packets over the connection system (not shown in FIG. 5). The client 502 may issue packets using file-based access protocols, such as the Common Internet File System (CIFS) protocol or the Network File System (NFS) protocol, over the Transmission Control Protocol/Internet Protocol (TCP/IP) when accessing information in the form of files and directories. Alternatively, the client 502 may issue packets including block-based access protocols, such as the Small Computer Systems Interface (SCSI) protocol encapsulated over TCP (iSCSI) and SCSI encapsulated over Fibre Channel (FCP), when accessing information in the form of blocks. The client 502 executes one or more host applications (not shown in FIG. 5).
  • The storage device 504 includes a storage manager 506 and a data storage media 508. In a preferred embodiment, the storage manager 506 is the file system layer 430 of the storage operating system 314 as shown in FIG. 4. Referring back to FIG. 5, a command with associated inputs (e.g., the source file name) to manipulate data is created at the client 502 (step 510) and the client 502 sends the command over the network to the storage manager 506 in the storage device 504 (step 512). The storage manager 506 unpacks the command and the associated inputs (step 514) and requests a source file from the storage media 508 that contains the data to be manipulated (step 516). The storage media 508 retrieves the source file (step 518) and the storage manager 506 reads the source file (step 520). The storage manager 506 manipulates the requested data from the source file as specified by the instructions in the command (step 522) and writes the manipulated data back to the storage media 508 (step 524). The storage manager 506 then sends the manipulation result back over the network to the client 502 (step 526).
  • One specific example of using the method 500 is in connection with Internet-based electronic mail. In these scenarios, electronic mail folders are often stored as single files, where each file contains all of the concatenated mail messages. When a user deletes a message, it is simply marked as “deleted” in the file. At a later time, the files need to be “repacked” in order to reclaim the space freed by the deleted messages.
  • A file repacking method 600 (described in connection with FIG. 6) can assist in repacking these files. The mail repacking application (not shown in FIG. 6), which is executed at a client 602, sends a list of valid offsets, the source file name, and destination file name to operate on to a storage device 604. The storage device 604 reads the list of offsets from the source file and copies the data from those offsets to the specified destination file. Once the entire list of offsets is processed, the storage device 604 returns an indication of success or failure to the client 602. The mail application can then update its internal system for tracking individual mail messages to reflect the newly packed file and delete the old file. Those of skill in the art would understand that client 602 can correspond to client 502 shown in FIG. 5; that storage device 604 can correspond to storage device 504; that data storage media 608 can correspond to data storage media 508; and that storage manager 606 can correspond to storage manager 506.
  • Sending a list of commands to the storage device 604 to repack the data directly on the storage device 604 provides the following advantages: the amount of data sent to the storage device 604 is small (only the commands are sent, and not the data), the storage device 604 can optimize the set of instructions and execute them in an efficient manner, and no protocol overhead is needed because the data is never moved off the storage device 604.
  • FIG. 6 is a flow diagram of a method 600 for file repacking to be performed on the storage device 604. The method 600 utilizes the client 602 and the storage device 604, which communicate with each other over a network connection. While the method 600 is described as using a client 602, any suitable computing device capable of communicating with the storage device 604 may be used. The storage device 604 includes a storage manager 606 and a data storage media 608. The client 602 identifies the source file to be repacked, the destination file, a list of segments in the source file to copy to the destination file, data to be inserted into the destination file (this inserted data is optional), and regions to skip in the destination file (holes) (step 610). It is noted that in the example of the mail repacking application, the client 602 identifies this information by using information already maintained by the mail repacking application. As noted above, a benefit of the method 600 is to transfer processing from the client 602 to the storage device 604; the basic operation of the underlying mail repacking application is not changed. One reason that a user may want to leave holes in the destination file is to leave space to write metafile information, such as the number of records, the time of the repacking, and similar information which would be useful at a later time. The client 602 then packs the information into a command (step 612).
  • Each command executed by the method 600 may consist of a single call to the storage device 604 that contains all the details to repack a file, such as the list of segments to be copied, any data to be inserted into the destination file, and any regions to skip in the destination file. It is noted that the list of segments to be copied from the source file to the destination file could alternatively be a list of segments from the source file that are not to be copied to the destination file, wherein all other segments of the source file are to be copied to the destination file. The choice is implementation-specific and does not affect the general operation of the method 600. Whether the list of segments indicates segments to include or segments to exclude from the destination file can be indicated by a flag, for example. One skilled in the art can readily identify other types of indicators for identifying these segments; all such indicators are within the scope of the present invention.
  • Furthermore, if the list of segments indicates a list of segments to be included in the destination file, a particular ordering for the inclusion list could be specified, whereby the segments in the destination file would be reordered from how the segments appear in the source file. For purposes of discussion of the method 600, the list of segments includes a list of segments to copy from the source file to the destination file.
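Whether the list names segments to include or segments to exclude can be normalized to a single copy list before execution. A sketch, in which the `exclude` flag and the helper name are assumptions:

```python
def segments_to_copy(file_size, segments, exclude=False):
    """Return the (offset, length) ranges to copy from the source file.

    If exclude is False, `segments` directly lists the ranges to copy.
    If exclude is True, `segments` lists ranges to skip, and every other
    part of the source file is copied.
    """
    if not exclude:
        return list(segments)
    ranges, pos = [], 0
    for offset, length in sorted(segments):
        if offset > pos:
            ranges.append((pos, offset - pos))  # gap before the excluded range
        pos = max(pos, offset + length)
    if pos < file_size:
        ranges.append((pos, file_size - pos))   # tail after the last exclusion
    return ranges
```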
  • The client 602 sends the packed command over the network to the storage manager 606 in the storage device 604 (step 614). The storage manager 606 unpacks the command (step 616) and requests a source file from the storage media 608 (step 618). The storage media 608 retrieves the source file (step 620) and the storage manager 606 reads the source file (step 622). The storage manager 606 copies the segments from the list of segments of the source file to the destination file (step 624).
  • The storage manager 606 can choose to reorder and optimize the set of instructions in the command. Whether the instructions are reordered depends on the implementation and the layout of the data. For example, data can be pre-fetched for the next instruction while the current instruction is in progress. The storage manager 606 knows best how to execute the instructions. The client 602 does not know where the data is physically located in the storage media 608. However, the storage manager 606 knows where the data is located in the storage media 608, and can use this information to accelerate the method 600. For example, the storage manager 606 could read blocks out of the order specified in the file repacking command in order to obtain better performance from the storage device 604.
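The reordering described above amounts to reading in physical-media order while returning results in the order the command specified. A sketch, where every name is an assumption and `physical_block` stands in for the mapping that only the storage manager has:

```python
def execute_reordered(instructions, physical_block, read_segment):
    """Read source segments in physical-media order to reduce seeks,
    then emit the results in the order the command specified.

    instructions:   list of (offset, length) reads from the command.
    physical_block: maps a logical offset to its physical location.
    read_segment:   performs one device read.
    """
    # Sort reads by physical location, remembering each one's original slot.
    by_location = sorted(enumerate(instructions),
                         key=lambda item: physical_block(item[1][0]))
    results = [None] * len(instructions)
    for slot, (offset, length) in by_location:
        results[slot] = read_segment(offset, length)
    return results
```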
  • If additional data was provided (in step 610) to be inserted into the destination file, the storage manager 606 inserts the data (step 626; this optional step is shown in dashed outline). The storage manager 606 writes the destination file to the storage media 608 (step 628). The storage manager 606 then sends the result of the file repacking command back over the network to the client 602 (step 630).
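Steps 622 through 628 amount to assembling the destination file from the listed segments, any requested holes, and any inserted data. A minimal sketch, assuming the source file is read through a callable:

```python
def repack(read_source, segments, holes=(), insert_data=b""):
    """Assemble the destination file contents (steps 622-628): copy the
    listed source segments in order, open zero-filled holes where
    requested, and append any data supplied with the command."""
    out = bytearray()
    for offset, length in segments:
        out += read_source(offset, length)
    # Holes reserve space in the destination, e.g. for metadata such as
    # the record count or the time of the repacking.
    for offset, length in sorted(holes):
        out[offset:offset] = b"\x00" * length
    out += insert_data
    return bytes(out)

# Segments kept from the source land back to back in the destination,
# followed by the new data:
src = b"AAAAxxBBBByyCCCC"   # xx and yy play the regions to be deleted
dst = repack(lambda o, n: src[o:o + n], [(0, 4), (6, 4), (12, 4)],
             insert_data=b"NEW!")
# dst == b"AAAABBBBCCCCNEW!"
```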
• FIG. 7 is a diagram of a data file that is repacked according to the method shown in FIG. 6. A source file 702 includes a plurality of data segments 710, 712, 716, 718, 722, and several regions to be deleted 714, 720, 724. The method 600 copies the data segments 710, 712, 716, 718, 722 to a destination file 704, removes the regions to be deleted 714, 720, 724, and adds new data 730 to the destination file 704.
  • Another example of using the method 500 is in connection with database table repacking. In one implementation, the client 502 may execute a database management system, such as Microsoft™ SQL Server, by Microsoft Corporation of Redmond, Wash. Databases within database management systems maintain tables as files which contain fixed-size records. When a record is deleted, it is simply marked as “deleted” and is removed from the table index. Periodically, databases repack the table file to improve performance and to free up space held by deleted records. In particular, the repacking method 600 can also be used for repacking database tables. As described using the components shown in FIG. 6, the database management system generates a range of valid offsets in each table file (step 610) and sends the range of offsets to the storage device 604 (step 614). The storage device 604 uses the list of offsets to repack the table file (steps 616-624). Once completed, the database can update its indices and delete or archive the old table file.
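For the database example, step 610 reduces to walking the fixed-size records and emitting the offsets of those not marked deleted. A sketch, where the record size and all names are assumptions:

```python
def valid_offsets(record_count, is_deleted, record_size=64):
    """List (offset, length) pairs for the live records in a table file
    of fixed-size records (step 610 of the database example)."""
    return [(i * record_size, record_size)
            for i in range(record_count)
            if not is_deleted(i)]
```

The resulting list would be sent as the repacking command (step 614); once the storage device 604 reports success, the database updates its indices and deletes or archives the old table file.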
  • While the method 600 was described in connection with repacking a file, other IO commands can be performed using a similar method, as generally shown by the method 500. The other IO commands can include, but are not limited to, the commands shown in Table 1.
• TABLE 1
  IO Commands

  Command     Corresponding Inputs
  read        file, offset, length of read
  write       file, offset, length of write
  resize      file, offset
  delete      file
  rename      file1, file2
  relocate    file1, offset, length of read, file2, offset, length of write
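The commands of Table 1 could be dispatched on the storage device through a table mapping each command name to its expected inputs. A sketch; a real implementation would also validate types and ranges:

```python
# Expected inputs per command, mirroring Table 1.
IO_COMMANDS = {
    "read":     ("file", "offset", "length"),
    "write":    ("file", "offset", "length"),
    "resize":   ("file", "offset"),
    "delete":   ("file",),
    "rename":   ("file1", "file2"),
    "relocate": ("file1", "offset1", "length1", "file2", "offset2", "length2"),
}

def validate_command(name, args):
    """Check that a received command names a known operation and carries
    the right number of inputs before it is executed."""
    if name not in IO_COMMANDS:
        raise ValueError(f"unknown command: {name}")
    expected = IO_COMMANDS[name]
    if len(args) != len(expected):
        raise ValueError(f"{name} expects {len(expected)} inputs, got {len(args)}")
    return dict(zip(expected, args))
```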
  • The present invention can be implemented in a computer program tangibly embodied in a computer-readable storage medium containing a set of instructions for execution by a processor or a general purpose computer; and method steps of the invention can be performed by a processor executing a program of instructions to perform functions of the invention by operating on input data and generating output data. Suitable processors include, by way of example, both general and special purpose processors. Typically, a processor will receive instructions and data from a ROM, a random access memory (RAM), and/or a storage device. Storage devices suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs). In addition, while the illustrative embodiments may be implemented in computer software, the functions within the illustrative embodiments may alternatively be embodied in part or in whole using hardware components such as Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), or other hardware, or in some combination of hardware components and software components.
  • While specific embodiments of the present invention have been shown and described, many modifications and variations could be made by one skilled in the art without departing from the scope of the invention. The above description serves to illustrate and not limit the particular invention in any way.

Claims (21)

1-30. (canceled)
31. A method, comprising:
    receiving, at a storage server, a command in a network storage communication protocol from a client device, the command comprising information identifying a source file stored at the storage server and a list of file segments to include, the source file comprising the file segments identified by the list;
    executing, at the storage server, the command by copying the file segments identified by the list from the source file to a destination file stored at the storage server, without transferring any portion of the source file to the client; and
    in response to the command, sending a confirmation of execution of the command from the storage server to the client device.
32. The method of claim 31, wherein the receiving comprises:
    receiving, at the storage server, a command in a network storage communication protocol from a client device, the command comprising information identifying a source file stored at the storage server, a list of file segments to include and an instruction to add a new data segment, the source file comprising the file segments identified by the list.
33. The method of claim 32, further comprising:
    retrieving, at the storage server from the client device, data of the new data segment;
    and wherein the executing comprises:
executing the command, at the storage server, by copying the file segments identified by the list from the source file to a destination file stored at the storage server, and by inserting the data of the new data segment into the destination file, without transferring any portion of the source file to the client.
34. The method of claim 31, wherein the receiving comprises:
    receiving, at the storage server, a command in a network storage communication protocol from a client device, the command comprising information identifying a source file stored at the storage server, a list of file segments to include and file offsets of the file segments identified by the list within the destination file, the source file comprising the file segments identified by the list.
35. The method of claim 31, wherein the executing further comprises:
    retrieving from the source file the file segments identified by the list; and
    inserting data of the file segments identified by the list to the destination file at the file offsets indicated by the command.
36. The method of claim 35, wherein the executing further comprises:
in an event that the command includes an instruction to remove a file segment of the destination file, removing the file segment from the destination file stored in the storage server.
37. The method of claim 31, wherein the receiving comprises:
    receiving, at the storage server, a command in a network storage communication protocol from a client device, the command comprising information identifying a source file and a destination file stored at the storage server and a list of file segments to include in the destination file, the source file comprising the file segments identified by the list.
38. The method of claim 37, further comprising:
    avoiding copying file segments of the source file that are not identified by the list to the destination file.
39. The method of claim 37, wherein the receiving comprises:
receiving, at the storage server, a command in a network storage communication protocol from a client device, the command comprising information identifying a source file and a destination file stored at the storage server, a list of file segments to include in the destination file, and a reorder instruction specifying an order in which the file segments to include appear in the destination file, the source file comprising the file segments identified by the list;
    and wherein the executing comprises:
    executing the command, at the storage server, by copying the file segments to include identified by the list from the source file to a destination file stored at the storage server, without transferring any portion of the source file to the client, and by reordering the file segments to include in the destination file based on the reorder instruction.
40. A non-transitory machine readable medium having stored thereon instructions for performing a method of manipulating data files, comprising machine executable code which when executed by at least one machine, causes the machine to:
    receive at a storage server a command from a client device over a network, the command comprising information identifying a source file stored at the storage server and a destination file stored at the storage server, the command further comprising an exclusion list identifying file segments of the source file that are to be excluded from the destination file; and
    copy file segments of the source file that are not identified by the exclusion list from the source file to the destination file at the storage server, without transferring any portion of the source file to the client device.
41. The non-transitory machine readable medium of claim 40, wherein the machine executable code which when executed by at least one machine, further causes the machine to:
    send a message notifying a result of executing the command from the storage server to the client device, in response to the command.
42. The non-transitory machine readable medium of claim 40, wherein the command further comprises new data to be inserted into the destination file;
    and wherein the machine executable code which when executed by at least one machine, further causes the machine to:
    insert the new data into the destination file.
43. The non-transitory machine readable medium of claim 40, wherein the command comprises multiple file system operations in an order; and wherein the machine executable code which when executed by at least one machine, further causes the machine to:
determine a reordering of executing the file system operations of the command by the storage server to improve the performance of the instructions, prior to executing the file system operations according to the reordering.
44. A computing device, comprising:
    a memory containing machine readable medium comprising machine executable code having stored thereon instructions for performing a method of manipulating data sets;
    a processor coupled to the memory, the processor configured to execute the machine executable code to:
    receive from a client a repacking command identifying a source data set comprising multiple segments and a destination data set stored at the computing device, the command including information identifying at least a segment of the multiple segments of the source data set; and
    manipulate the destination data set based on the repacking command by using at least the segment of the source data set identified by the command, without transferring data of the destination data set or the source data set to the client.
45. The computing device of claim 44, wherein the source data set is a database table, and at least one segment of the database table comprises a database record marked as deleted.
46. The computing device of claim 45, wherein the repacking command instructs the computing device to copy database records of the database table to the destination data set, except database records of the database table that are marked as deleted.
47. The computing device of claim 45, wherein the processor is further configured to execute the machine executable code to update a database index to point to the destination data set and to remove the source data set from the storage media, after the repacking command has been executed.
48. The computing device of claim 44, wherein the source data set is a file storing a folder of electronic mails, and at least one segment of the file comprises electronic mails that are marked as deleted.
49. The computing device of claim 48, wherein the repacking command instructs the computing device to copy data of the electronic mails stored in the source data set to the destination data set, except electronic mails that are marked as deleted.
50. The computing device of claim 48, wherein the repacking command comprises a list of file offsets identifying locations at which the electronic mails are stored in the file.
US14265173 2007-04-26 2014-04-29 Performing direct data manipulation on a storage device Abandoned US20140365539A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11740471 US8768898B1 (en) 2007-04-26 2007-04-26 Performing direct data manipulation on a storage device
US14265173 US20140365539A1 (en) 2007-04-26 2014-04-29 Performing direct data manipulation on a storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14265173 US20140365539A1 (en) 2007-04-26 2014-04-29 Performing direct data manipulation on a storage device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11740471 Continuation US8768898B1 (en) 2007-04-26 2007-04-26 Performing direct data manipulation on a storage device

Publications (1)

Publication Number Publication Date
US20140365539A1 (en) 2014-12-11

Family

ID=50982213

Family Applications (2)

Application Number Title Priority Date Filing Date
US11740471 Active 2029-10-13 US8768898B1 (en) 2007-04-26 2007-04-26 Performing direct data manipulation on a storage device
US14265173 Abandoned US20140365539A1 (en) 2007-04-26 2014-04-29 Performing direct data manipulation on a storage device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11740471 Active 2029-10-13 US8768898B1 (en) 2007-04-26 2007-04-26 Performing direct data manipulation on a storage device

Country Status (1)

Country Link
US (2) US8768898B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107463333A (en) * 2016-06-03 2017-12-12 杭州海康威视数字技术股份有限公司 Network hard disk space recovery method, apparatus and system

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5845280A (en) * 1995-09-25 1998-12-01 Microsoft Corporation Method and apparatus for transmitting a file in a network using a single transmit request from a user-mode process to a kernel-mode process
US6356863B1 (en) * 1998-09-08 2002-03-12 Metaphorics Llc Virtual network file server
US20030084075A1 (en) * 2001-11-01 2003-05-01 Verisign, Inc. Method and system for updating a remote database
US6574657B1 (en) * 1999-05-03 2003-06-03 Symantec Corporation Methods and apparatuses for file synchronization and updating using a signature list
US6631514B1 (en) * 1998-01-06 2003-10-07 Hewlett-Packard Development, L.P. Emulation system that uses dynamic binary translation and permits the safe speculation of trapping operations
US20030208529A1 (en) * 2002-05-03 2003-11-06 Sreenath Pendyala System for and method of real-time remote access and manipulation of data
US20050188151A1 (en) * 2004-02-21 2005-08-25 Samsung Electronics Co., Ltd. Method and apparatus for optimally write reordering
US20050216492A1 (en) * 2001-05-03 2005-09-29 Singhal Sandeep K Technique for enabling remote data access and manipulation from a pervasive device
US20060080370A1 (en) * 2004-09-29 2006-04-13 Nec Corporation Switch device, system, backup method and computer program
US20070094354A1 (en) * 2000-12-22 2007-04-26 Soltis Steven R Storage area network file system
US20070156842A1 (en) * 2005-12-29 2007-07-05 Vermeulen Allan H Distributed storage system with web services client interface
US20070220027A1 (en) * 2006-03-17 2007-09-20 Microsoft Corporation Set-based data importation into an enterprise resource planning system
US20070294308A1 (en) * 2006-06-12 2007-12-20 Megerian Mark G Managing Data Retention in a Database Operated by a Database Management System
US20080208806A1 (en) * 2007-02-28 2008-08-28 Microsoft Corporation Techniques for a web services data access layer

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5983239A (en) * 1997-10-29 1999-11-09 International Business Machines Corporation Storage management system with file aggregation supporting multiple aggregated file counterparts
US7581077B2 (en) * 1997-10-30 2009-08-25 Commvault Systems, Inc. Method and system for transferring data in a storage operation
WO2002069160A3 (en) 2001-02-28 2003-01-09 Crossroads Sys Inc Method and system for reconciling extended copy command target descriptor lengths
US6779063B2 (en) * 2001-04-09 2004-08-17 Hitachi, Ltd. Direct access storage system having plural interfaces which permit receipt of block and file I/O requests
US7016982B2 (en) 2002-05-09 2006-03-21 International Business Machines Corporation Virtual controller with SCSI extended copy command
US8255548B2 (en) * 2002-06-13 2012-08-28 Salesforce.Com, Inc. Offline web services API to mirror online web services API
US20050188248A1 (en) * 2003-05-09 2005-08-25 O'brien John Scalable storage architecture
US7707374B2 (en) * 2003-10-22 2010-04-27 International Business Machines Corporation Incremental data storage method, apparatus, interface, and system
US7661103B2 (en) * 2005-04-27 2010-02-09 Jerry Glade Hayward Apparatus, system, and method for decentralized data conversion
US20070061540A1 (en) * 2005-06-06 2007-03-15 Jim Rafert Data storage system using segmentable virtual volumes
US7984084B2 (en) * 2005-08-03 2011-07-19 SanDisk Technologies, Inc. Non-volatile memory with scheduled reclaim operations
US7395416B1 (en) * 2006-09-12 2008-07-01 International Business Machines Corporation Computer processing system employing an instruction reorder buffer


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Notenboom, "Why isn't my outlook PST getting smaller after deleting emails" as of 2/26/2007, http://web.archive.org/web/20070226205647/http://ask-leo.com/why_isnt_my_outlook_pst_getting_smaller_after_deleting_emails.html *

Also Published As

Publication number Publication date Type
US8768898B1 (en) 2014-07-01 grant


Legal Events

Date Code Title Description
AS Assignment

Owner name: NETAPP, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TRIMMER, DON ALVIN;YADAV, SANDEEP;SINGH, PRATAP;SIGNING DATES FROM 20070420 TO 20070425;REEL/FRAME:036077/0936