WO2004105011A1 - Information storage based on carbon nanotubes - Google Patents

Information storage based on carbon nanotubes

Info

Publication number
WO2004105011A1
WO2004105011A1 PCT/NL2003/000387 NL0300387W WO2004105011A1 WO 2004105011 A1 WO2004105011 A1 WO 2004105011A1 NL 0300387 W NL0300387 W NL 0300387W WO 2004105011 A1 WO2004105011 A1 WO 2004105011A1
Authority
WO
WIPO (PCT)
Prior art keywords
storage
information
shelf
data
archive
Prior art date
Application number
PCT/NL2003/000387
Other languages
French (fr)
Inventor
Nardy Cramm
Wim Versteegen
Original Assignee
Nardy Cramm
Wim Versteegen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nardy Cramm, Wim Versteegen filed Critical Nardy Cramm
Priority to AU2003237713A priority Critical patent/AU2003237713A1/en
Priority to PCT/NL2003/000387 priority patent/WO2004105011A1/en
Priority to PCT/NL2004/000377 priority patent/WO2004105012A2/en
Publication of WO2004105011A1 publication Critical patent/WO2004105011A1/en

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B11/00Recording on or reproducing from the same record carrier wherein for these two operations the methods are covered by different main groups of groups G11B3/00 - G11B7/00 or by different subgroups of group G11B9/00; Record carriers therefor
    • G11B11/16Recording on or reproducing from the same record carrier wherein for these two operations the methods are covered by different main groups of groups G11B3/00 - G11B7/00 or by different subgroups of group G11B9/00; Record carriers therefor using recording by mechanical cutting, deforming or pressing
    • G11B11/22Recording on or reproducing from the same record carrier wherein for these two operations the methods are covered by different main groups of groups G11B3/00 - G11B7/00 or by different subgroups of group G11B9/00; Record carriers therefor using recording by mechanical cutting, deforming or pressing with reproducing by capacitive means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B82NANOTECHNOLOGY
    • B82YSPECIFIC USES OR APPLICATIONS OF NANOSTRUCTURES; MEASUREMENT OR ANALYSIS OF NANOSTRUCTURES; MANUFACTURE OR TREATMENT OF NANOSTRUCTURES
    • B82Y10/00Nanotechnology for information processing, storage or transmission, e.g. quantum computing or single electron logic
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B11/00Recording on or reproducing from the same record carrier wherein for these two operations the methods are covered by different main groups of groups G11B3/00 - G11B7/00 or by different subgroups of group G11B9/00; Record carriers therefor
    • G11B11/16Recording on or reproducing from the same record carrier wherein for these two operations the methods are covered by different main groups of groups G11B3/00 - G11B7/00 or by different subgroups of group G11B9/00; Record carriers therefor using recording by mechanical cutting, deforming or pressing
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B9/00Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor
    • G11B9/12Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor
    • G11B9/14Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor using microscopic probe means, i.e. recording or reproducing by means directly associated with the tip of a microscopic electrical probe as used in Scanning Tunneling Microscopy [STM] or Atomic Force Microscopy [AFM] for inducing physical or electrical perturbations in a recording medium; Record carriers or media specially adapted for such transducing of information
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B9/00Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor
    • G11B9/12Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor
    • G11B9/14Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor using microscopic probe means, i.e. recording or reproducing by means directly associated with the tip of a microscopic electrical probe as used in Scanning Tunneling Microscopy [STM] or Atomic Force Microscopy [AFM] for inducing physical or electrical perturbations in a recording medium; Record carriers or media specially adapted for such transducing of information
    • G11B9/1409Heads
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B9/00Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor
    • G11B9/12Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor
    • G11B9/14Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor using microscopic probe means, i.e. recording or reproducing by means directly associated with the tip of a microscopic electrical probe as used in Scanning Tunneling Microscopy [STM] or Atomic Force Microscopy [AFM] for inducing physical or electrical perturbations in a recording medium; Record carriers or media specially adapted for such transducing of information
    • G11B9/1418Disposition or mounting of heads or record carriers
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B9/00Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor
    • G11B9/12Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor
    • G11B9/14Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor using microscopic probe means, i.e. recording or reproducing by means directly associated with the tip of a microscopic electrical probe as used in Scanning Tunneling Microscopy [STM] or Atomic Force Microscopy [AFM] for inducing physical or electrical perturbations in a recording medium; Record carriers or media specially adapted for such transducing of information
    • G11B9/1418Disposition or mounting of heads or record carriers
    • G11B9/1427Disposition or mounting of heads or record carriers with provision for moving the heads or record carriers relatively to each other or for access to indexed parts without effectively imparting a relative movement
    • G11B9/1436Disposition or mounting of heads or record carriers with provision for moving the heads or record carriers relatively to each other or for access to indexed parts without effectively imparting a relative movement with provision for moving the heads or record carriers relatively to each other
    • G11B9/1454Positioning the head or record carrier into or out of operative position or across information tracks; Alignment of the head relative to the surface of the record carrier
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B9/00Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor
    • G11B9/12Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor
    • G11B9/14Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor using microscopic probe means, i.e. recording or reproducing by means directly associated with the tip of a microscopic electrical probe as used in Scanning Tunneling Microscopy [STM] or Atomic Force Microscopy [AFM] for inducing physical or electrical perturbations in a recording medium; Record carriers or media specially adapted for such transducing of information
    • G11B9/1463Record carriers for recording or reproduction involving the use of microscopic probe means
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B9/00Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor
    • G11B9/12Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor
    • G11B9/14Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor using microscopic probe means, i.e. recording or reproducing by means directly associated with the tip of a microscopic electrical probe as used in Scanning Tunneling Microscopy [STM] or Atomic Force Microscopy [AFM] for inducing physical or electrical perturbations in a recording medium; Record carriers or media specially adapted for such transducing of information
    • G11B9/1463Record carriers for recording or reproduction involving the use of microscopic probe means
    • G11B9/1472Record carriers for recording or reproduction involving the use of microscopic probe means characterised by the form
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B9/00Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor
    • G11B9/12Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor
    • G11B9/14Recording or reproducing using a method not covered by one of the main groups G11B3/00 - G11B7/00; Record carriers therefor using near-field interactions; Record carriers therefor using microscopic probe means, i.e. recording or reproducing by means directly associated with the tip of a microscopic electrical probe as used in Scanning Tunneling Microscopy [STM] or Atomic Force Microscopy [AFM] for inducing physical or electrical perturbations in a recording medium; Record carriers or media specially adapted for such transducing of information
    • G11B9/1463Record carriers for recording or reproduction involving the use of microscopic probe means
    • G11B9/149Record carriers for recording or reproduction involving the use of microscopic probe means characterised by the memorising material or structure
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B11/00Recording on or reproducing from the same record carrier wherein for these two operations the methods are covered by different main groups of groups G11B3/00 - G11B7/00 or by different subgroups of group G11B9/00; Record carriers therefor
    • G11B11/16Recording on or reproducing from the same record carrier wherein for these two operations the methods are covered by different main groups of groups G11B3/00 - G11B7/00 or by different subgroups of group G11B9/00; Record carriers therefor using recording by mechanical cutting, deforming or pressing
    • G11B11/18Recording on or reproducing from the same record carrier wherein for these two operations the methods are covered by different main groups of groups G11B3/00 - G11B7/00 or by different subgroups of group G11B9/00; Record carriers therefor using recording by mechanical cutting, deforming or pressing with reproducing by optical means
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B5/00Recording by magnetisation or demagnetisation of a record carrier; Reproducing by magnetic means; Record carriers therefor
    • G11B2005/0002Special dispositions or recording techniques
    • G11B2005/0005Arrangements, methods or circuits
    • G11B2005/0021Thermally assisted recording using an auxiliary energy source for heating the recording layer locally to assist the magnetization reversal
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C2213/00Indexing scheme relating to G11C13/00 for features not covered by this group
    • G11C2213/70Resistive array aspects
    • G11C2213/81Array wherein the array conductors, e.g. word lines, bit lines, are made of nanowires
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C23/00Digital stores characterised by movement of mechanical parts to effect storage, e.g. using balls; Storage elements therefor

Definitions

  • the invention refers generally to information mass storage and mass memory.
  • the invention also refers to a method of information mass storage and an apparatus creating information mass storage devices, based on Carbon Nanotubes.
  • the main objectives of the information technology and computer manufacturers are to provide viable storage-capacity solutions for electronic and digital devices that achieve low cost, low power consumption, high track density and high volumetric density.
  • Carbon Nanotubes are extremely durable, both mechanically and electrically. So far, no one has been able to create storage out of Carbon Nanotubes. This invention will describe and prove the working of an information mass storage device, utilizing Carbon Nanotubes as semiconductor, transmitter and mechanical devices.
  • Carbon Nanotubes are directed in contact to the storage medium, enforcing bit-writing, bit-reading and positioning.
  • the present invention will unveil an information storage and/or memory device that can be used with a computational system as a type of 'Carbon Nanotube-based', information storage and/or memory device.
  • the dimensions and housing of the memory and/or storage device may vary broadly.
  • the same storage device can easily be adjusted for use for data and/or information mass storage in PCs, supercomputers, servers, minicomputers, (video) cameras, communicators, mobile and cellular phones, notebooks, PDAs, watches, audio equipment, as well as any other technology that requires digital electronic or electro-magnetic information mass storage and/or mass memory. It has always been the scope of the inventors to provide a unified, general information storage and/or memory device aimed at cost reduction, functionality and ease of use.
  • the mass storage device uses Carbon Nanotubes according to the hereafter-mentioned qualifications: an outer single wall Carbon Nanotube, with a molecular weight of preferably > 850, either left or right chiral, and a (twisted) Carbon Nanotube housed within the outer single wall Nanotube. Utilization advantages of these Carbon Nanotubes as semiconductors are that they are ideal transistors, very small compared to the currently used silicon transistors, and consume very little power. The last feature benefits low cost and is responsible for a minimal temperature rise.
  • Carbon Nanotubes will be called 'Smartey Tubes' to distinguish them from other sorts of Carbon Nanotubes.
  • the mass storage consists of a layered structure system.
  • the 'layer structure' comprises six independent layers, each layer equipped with certain features and functions, of which a detailed description is enclosed. It consists of the following layers:
  • stepper motors composed of an equal number of lateral and correction motors for fine-tuning, temperature sensors, voltage sensors, physical access sensors, laser diodes, connectors to the outside of the device, dedicated processors, memory, amplifiers, tri-state wiring to the Silicon wafer and wiring to the stepper motors, and boards containing memory, processors, amplifiers and parts of the internal EMU (all of these can exist multiple times in one embodiment to improve reliability (mirroring) as well as transfer, access, positioning and erasing speed, and the speed of keeping up the whole system while erasing and reading cycles are performed at the same time);
  • the storage medium consists of dual polymer foil, one film damped with a thin film of carbon and another polymer film, together molded in a frame of duraluminium.
  • the wafer of Silicon is internally divided into a large number of isolatable electrical areas, also important for the identification determination of the Smartey Tubes.
  • Deleting: the deleting process is established by logically moving the data to the deleted data queue by the 'General Unified Storage' software.
  • Erasing is the real destroying of deleted info on the medium to make room for new data.
  • the firmware keeps track of the existence of unused places, non-deleted data and deleted data on the storage medium and continuously arranges to sort it as geographically near as possible.
  • the erasing process is done by temporarily increasing the foil temperature with laser diodes on sufficiently large local areas that do not contain non-deleted data. After heating that part of the storage medium with the laser diodes to a temperature of approximately 180 °C for a short time, it will pull back tight to its original proportions, making the area available for reuse. Storing is established and managed by the firmware.
  • the medium is continuously moving in x and y directions, driven by stepper motors in a frame, using software corrections for movement anomalies.
  • a problem foreseen with mass storage of information is achieving a fast transfer time, which is also related to the mode of recovery of the information and/or data stored.
  • the management firmware, responsible for access to certain parts of the written data as well as for controlling and keeping up communications to the user-owner, will allow for high-speed access and recovery of stored data and/or information.
  • the limiting factor for fast transfer rate right now is the input/output device.
  • utilizing a SCSI interface allows for a transfer rate of 160 MB/s
  • using a fiber optic connector allows for a transfer rate of 1 GB/s.
  • the 'layer structure' comprises six independent layers, each layer equipped with certain features and functions, of which a detailed description is enclosed. The independence is obvious from the fact that one layer is allowed to communicate only with the layer directly above or below. The many features, functions and advantages of the 'layer structure' system will become apparent from the written description. Advantages of the layer structure include a clear overview, flexibility, ecological soundness (because of the possibility of development in clean environments) and easy adjustability and correctability. The definition, in sequence order from top down:
  • for each concept, GUS provides a configuration option that you use to manage the GUS environment.
  • GUS concepts and configuration options are structured around the following managed entities in the system: Facility; Shelf; Archive class; Device; Volume; Cache; Policy; and Schedule.
  • the relationships among the managed entities are described, and guidelines are provided for their definition to create an optimal GUS environment.
  • the GUS environment consists of the definitions you create and the relationships that exist among the definitions.
  • the definitions described in the following sections are maintained in definition databases.
  • the GUS facility entity allows you to control GUS functions across the entire storage fabric. You can control the following functions at the facility level: GUS mode; GUS operations ; Shelf servers; Event logging.
  • GUS Mode You can specify whether GUS operates in Basic or Plus mode.
  • the Basic mode provides shelving, pre-shelving, and un-shelving functionality using simple devices. All interaction occurs through commands.
  • the Plus mode provides shelving, pre-shelving, and un-shelving functionality using the full suite of devices, including robotically controlled devices. Considerations for choosing GUS Operating Mode: When deciding whether to operate in Basic or Plus mode, consider the following: If you use GUS Plus mode, you then have one interface for media and device management across the storage management products. If you require support for large automated tape libraries, use GUS Plus mode. If you do not require additional device support and are not using other product functionality, use GUS Basic mode.
  • GUS Operations You can specify whether shelving or un-shelving operations are enabled across the storage fabric as a whole. This includes operations initiated as result of policy triggers, cache flush operations, and manually initiated GUS commands.
  • the shelving parameter controls shelving, pre-shelving and cache flush operations.
  • the un-shelving parameter controls un-shelving and automatically generated file faults. Under normal circumstances, you should enable both shelving and un-shelving across your storage fabric. This allows GUS to maintain desired storage usage through automatic policy operations and also allows users access to shelved data at all times.
  • a shelf server is a single GUS node in a fabric that performs all operations to near line and offline devices on behalf of all nodes in the fabric.
  • the shelf server consolidates requests from all nodes and optimizes operations to minimize loading and positioning, as well as to support dedicated device access. Eligible Servers; Although many nodes can be authorized for shelf server operation, only one GUS node functions as the shelf server at any given time. This way, if the current shelf server node fails, operations are immediately transferred and recovered by another authorized shelf server node. You can specify up to 10 specific nodes to be authorized for shelf server operation.
  • the shelf server undertakes the bulk of shelving operations for the fabric. To support transparent operations when a node fails, multiple shelf servers should be authorized. Scheduled policy execution should be run on an authorized shelf server node for optimal performance (unless a cache is defined). Using the default authorization of all nodes is acceptable if the above conditions are met and all your nodes have similar capabilities.
  • Catalog Server GUS gives you the option of directing all GUS operations and all catalog updates through the shelf server by enabling the Catalog Server option. With this option, all cache operations and catalog updates are performed by the shelf server node in a similar manner to tape operations. There are two main reasons you may want to enable this feature: If you choose to protect your catalogs using after-image Journaling, enabling the catalog server allows it only for the eligible server nodes.
  • the catalog server option allows you to mount the devices on only the eligible shelf server nodes.
  • caching speed is somewhat reduced due to extra intra-fabric communications, and possible delays in shelf server response time.
  • Event Logging; GUS provides four event log files that enable you to monitor and tune the GUS environment, as well as to detect errors in GUS operation:
  • SHP_AUDIT The shelf handler audit log, containing information on the parameters and final status of all requests.
  • PEP_AUDIT The policy audit log, containing information on the parameters, number of files processed, and final status of all policy executions.
  • SHP_ERROR The shelf handler error log, containing detailed information about any serious errors encountered during request processing, including exception information.
  • PEP_ERROR The policy error log, containing detailed information about any serious errors encountered during policy execution, including exception information.
  • Event logging can be enabled and disabled within the following categories: Audit log: Records all GUS requests; Error log: Provides information on important errors; Exception log: Provides error information that is useful in the error logs. We recommend that you enable all logging at all times to keep track of all activity. This is especially important when you have to report a problem.
  • a shelf is a named entity that relates a set of online volumes, on which shelving is enabled, to a set of archive classes that contains the shelved file data for those storage volumes. For each shelf, you can control the following: Shelf copies; Shelving operations; Shelf catalog; Delete save time; Number of updates to retain. You can define any number of shelves, but any specific online storage volume can be associated with only one shelf.
  • the Default Shelf: GUS provides a default shelf, to which all volumes are associated if no other associations are defined. If your data reliability requirements are the same across all storage volumes, you can simply use the default shelf and specify the desired number of copies to use on that shelf. All volumes acquire the data reliability specified by the default shelf. If your data reliability requirements differ from volume to volume, you can define multiple shelves, each of which can contain different numbers of copies for data reliability purposes. You can then relate each volume to the shelf that has the appropriate number of copies. We recommend that you specify at least two copies for each volume.
  • Shelved data is not normally backed up in the normal backup regimen because most backup utilities work in the following way: an image backup saves only the headers of shelved files; an incremental backup does save the entire file, but the files that are selected for backup are those that have been recently modified, and not the files that usually are shelved. In other words, after a file is shelved, it is likely that its data will not be backed up again. A typical backup strategy recycles the backup tapes when a certain number of more recent copies have been made.
  • This cycle may be anywhere from a few days to several years. However, there eventually will come a time when all of the backup tapes contain only the headers of shelved files. Unless the tapes are never recycled, the shelved file data on the backup media will eventually be lost. As such, the easy way to enhance reliability of shelved file data is to make duplicate copies of the data by using multiple shelf copies.
  • Shelf copies are defined using a concept called an archive class.
  • An archive class is a named entity that represents a single copy of shelf data. Identical copies of the data are written to each archive class when a file is shelved. For each shelf, you can specify the archive classes to be used for shelf copies for all volumes associated with the shelf. The minimum recommended number of copies (archive classes) for each shelf is two. Archive classes are represented by both an archive name and an archive identifier. Archive identifiers are used in Shelf Management Utility commands for ease of use.
  • GUS Basic mode supports 36 archive classes named GUS$ARCHIVE01 to GUS$ARCHIVE36, with associated archive identifiers of 1 to 36 respectively.
  • GUS Plus mode supports up to 9999 archive classes, named GUS$ARCHIVE01 through GUS$ARCHIVE9999, with associated archive identifiers of 1 to 9999.
  • Archive Lists and Restore Archive Lists For each shelf, you must specify two lists of archive identifiers: the archive list, representing the desired number of shelf copies (up to 10 archive identifiers can be specified in this list), and the restore archive list, representing an ordered list of archive classes from which restore attempts are made (up to 36 archive identifiers can be specified in this list). The archive and restore archive lists are defined using the 'set shelf' command with the 'archive' and 'restore' qualifiers. Restore archive classes are used for un-shelving files in the order specified in the restore archive list. The first attempt to restore a file's data is made from the first archive class specified in the restore list. If this fails, an attempt is made from the next archive class, and so on. A configuration sketch illustrating these constraints appears after this list.
  • archive classes Although only 10 archive classes are supported for shelf copies, up to 36 are supported for restore, because the restore list must contain a complete list of all archive classes that have ever been used for shelving on the shelf. This enables files to be restored not only from the current list of shelf archive classes, but also from all previously defined shelf archive classes. In this way, you can add or change archive classes for a shelf by: changing the archive classes in the archive list, which affects subsequent shelving operations only; adding new archive classes to the restore list, while keeping the existing definitions in place, so that files shelved under those definitions can still be restored. Archive classes also are related to media types and devices.
  • Shelving Operations You can control the same operations for a shelf as you can for the facility, except that the operations defined for the shelf affect only the volumes associated with the shelf. This gives you a finer level of shelving control, which might be useful if certain classes of volumes are not regularly accessed at certain times, and you want to disable shelving activity. However, as with the facility control, it is expected that shelving and unshelving operations usually are enabled.
  • the shelf catalog contains information regarding the location of near-line and off-line data for all volumes associated with the shelf. We recommend that you define a separate catalog for each shelf, but it is possible for several shelves to share a catalog, or for all shelves to use the default catalog. Defining a separate catalog for each shelf has the following advantages: it restricts the impact of a temporary loss of a catalog to a known set of volumes associated with the shelf; it reduces the size of the catalog file, allowing more flexible placement in your storage sub-system; it increases catalog access performance, since the catalog is smaller and there are fewer records to scan; and it reduces the time for a restoration of a catalog from backup tapes.
  • we recommend that each shelf be associated with between 10 and 50 volumes, and that each shelf has its own catalog.
  • a shelf catalog needs to be protected with a similar level of protection as the default catalog, namely: The catalog should be in a shadow-set or RAID-set; • The catalog should be backed up on a regular basis. It is also recommended that the catalog for a shelf be placed on a storage volume other than one associated with the shelf itself. In very large environments, it might be appropriate to dedicate one or more shadowed storage sets for GUS catalogs, and to disable shelving on those storages. When defining a new catalog for a shelf, or a new shelf for a volume, GUS automatically splits all associated shelving data from the old catalog, and merges it into the new catalog.
  • Delete Save Time You can specify a delete save option for shelved files that have been deleted. This option allows the specification of a delta time, which keeps a file's shelved data in the GUS subsystem for this period after the file is deleted. The actual purging of deleted files (after the specified delay) is performed by the 'repack' function.
  • Number of Updates to Retain This option allows the specification of a number of updates to a shelved file that will be kept in the GUS subsystem. This option applies to files that have been updated in place, not to new versions of files that have been created after an update. New versions are controlled by online maintenance outside the scope of GUS. The actual purging of obsolete shelf data is performed by the 'repack' function.
  • the “Management layer” is the layer between the “General Unified Storage” (GUS) layer and the “Interfacing layer”.
  • the Management layer has many responsibilities and functions: It passes all storage data between GUS and Interfacing; It communicates via "interfacing" with the "Environment Monitoring Unit" (EMU), the Human Command Interface (HCI) and the other storage devices in a fabric; It accepts and processes the user-
  • HCI Human Command Interface
  • EMU Environment Monitoring Unit
  • An EMU should minimally signal the following events: Environment temperature is out of bounds; Environment electrical power is out of bounds; A physical (human) access to the storage device; An out-of-bounds condition of a power supply; A failure of an internal device.
  • the EMU should minimally accept and handle: an audio or visual alarm, logging of events to a connected printer or terminal, and a storage system shutdown and startup.
  • the events for the EMU are handled by the event routines, setup by the ML and governed by the Technical Rules and possibly added User Management Rules.
  • Human Command Interface The Human Command Interface consists of the following elements: An interface to a terminal, connected via the SCSI port to the storage unit; access to it is protected by an RSA-scrambled password and a timeout period; acceptance, interpreting and handling of Technical Rules, as defined in the TR language definition; acceptance, interpreting and handling of Human Command Interface (HCI) rules, as defined in the HCI language definition; an LALR, extendable, language parser and interpreter, accompanied by an inline compiler to compile and set up possible Event Handling routines; a database of accepted living and sleeping rules and language elements that can be investigated by the user; a definition of the HCI language; a definition of the TR language; a debugging tool (protected by a special password).
  • Intra Fabric Communication conforms to the standard IEEE-ENSA rules for RAID storage devices, extended with the "General Unified Storage" communication between management layers of other (Unified) Storage devices.
  • Events The ML is totally "event driven", which means that every action or decision the management system takes stems from other sources. Because of that event-driven nature, the ML is implemented as a huge series of interrupt-driven, multi-threaded routines, of which the top level is started by a significant event. Any event will put some data describing the event on a stack (LiFo) and determine whether an event trap should be taken or a generic interpretation of accepted rules should be applied.
  • the number of wired-in event routines is kept to a minimum (14) and the majority of the event routines in the storage system are the ones that stem from the user defined HCI rules.
  • the number of wired-in generic interpretation rules (17) is also kept to the bare minimum, because wired-in routines contribute to inflexibility of the whole system.
  • the Timer Queue There are many functions in storage systems that are time dependent. For example, there should be some back-up system that saves data on a regular, time-scheduled basis.
  • the ML has, and governs, a double-ended "timer queue", in which clock-based functions are kept, sorted on time for future time-based events (see the sketch after this list).
  • clock-based functions can, and will, be generated by HCI rules
  • the nature of clock-based requests is exactly the same as "significant events" used on other places in the system.
  • the storage system has a hardware time clock, which ticks in intervals less than the time an "atomic" operation in the system takes.
  • the current implementation of the storage system therefore has a clock that ticks every 10 nanoseconds. As storage systems develop, future implementations could require a shorter tick interval.
  • When a queued timed event is started, its entry in the timer queue is deleted. It is the responsibility of the corresponding event routine to generate any needed new time-based entries and the events that will warn the ML that an event has finished.
  • the Interfacing Layer consists of a number of routines for communicating with the Management Layer and the Firmware Layer. Interfacing has a number of communication functions: communicating with the interface to the host computer; communicating with the EMU hardware; communicating with the Smartey Tubes, via the driving voice-coil and driving capacitor plates; communicating with the silicon wafer chip and switching the barriers between the areas on and off to simulate a "voice coil"; switching the heating laser beam on and off (for the erasing phase) on demand of "management"; getting the multiplexed info of the VERTICAL position of all the tubes (a read); putting force on the "1" tubes on writing; moving the foil to the right position (x, y) with corrections for elasticity deformations in the foil by switching the correction stepper motors; continuously keeping the coordinates of the foil in the "topographic table". On startup, this layer reads out the system parameters (number of detected, usable, tubes, deformation parameters of the foil of a previous 'life
  • layer 3 Firmware, consisting of tables to direct the driving of the mechanical and electronic parts.
  • stepper motors composed of an equal number of lateral and correction motors for fine-tuning, temperature sensors, voltage sensors, physical access sensors, laser diodes, connectors to the outside of the device, dedicated processors, memory, amplifiers, tri-state wiring to the Silicon wafer and wiring to the stepper motors, and boards containing memory, processors, amplifiers and parts of the internal EMU (all of these can exist multiple times in one embodiment to improve reliability (mirroring) as well as transfer, access, positioning and erasing speed, and the speed of keeping up the whole system while erasing and reading cycles are performed at the same time).
  • layer 1 The storage medium and the Smartey Tubes.
  • a layer consisting of a storage medium and several Smartey Tubes, integrated in a wafer of Silicon.
  • the storage medium consists of dual polymer foil, the top-film damped with a thin film of carbon, together molded in a frame of duraluminium.
  • the wafer of Silicon is internally divided into a large number of isolatable electrical areas, also important for the identification determination of the Smartey Tubes.
  • Carbon Nanotubes are cheap to buy and widely available. Their solid structure ensures that mechanical wear is minimal or eliminated. Used as transmitters and semiconductors, they consume minimal energy. In addition, advances in nanotechnology are progressing rapidly, increasingly reducing manufacturing and labor cost.
  • Another advantage of Carbon Nanotubes is that the molecules can arrange themselves into patterns like snowflakes; individual chip circuits to contain the Smartey Tubes, necessary within the storage device, would no longer have to be drawn: a drastic change that would dramatically drop the labor, factory and equipment costs in the semiconductor industry.
  • the stepper motors, in conjunction with the firmware, used to move the 'Carbon Nanotubes' to the storage medium, as well as the Carbon Nanotubes and the storage medium itself, can be relatively inexpensively mass-produced.
  • the stepper motors and storage medium can be improved or replaced by even better performing items.
  • since the specific Smartey Tubes can be manufactured in the same fabrication plants as silicon-based semiconductors with just a handful of modifications, the storage system could well be very competitively priced.
  • the Carbon Nanotubes, the total packaging, the storage medium as well as the architectural layered structure, including the soft- and firmware, as well as the housing, could all be microfabricated.
  • Multi-purpose applicability also includes (micro) processors and other information storage and/or nonvolatile memory needing devices, such as chips to be used in or on passports, driver's licenses, identification cards, credit cards and banking cards. Making this storage technology unified will logically lead to mass production of different sizes of storage devices.
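The archive-list and restore-archive-list rules described above (identical copies written to every archive class in the archive list, at most 10 identifiers in the archive list, at most 36 in the restore list, and restore attempts made in restore-list order) can be illustrated with the following minimal sketch. It is an illustration only: the class, function and variable names are assumptions and do not reflect the actual GUS commands or interfaces.

    MAX_ARCHIVE_LIST = 10   # shelf copies written on every shelving operation
    MAX_RESTORE_LIST = 36   # every archive class ever used, tried in order

    class Shelf:
        """Illustrative shelf definition; names and methods are assumptions."""

        def __init__(self, name, archive_list, restore_list):
            if len(archive_list) > MAX_ARCHIVE_LIST:
                raise ValueError("at most 10 archive identifiers in the archive list")
            if len(restore_list) > MAX_RESTORE_LIST:
                raise ValueError("at most 36 archive identifiers in the restore list")
            self.name = name
            self.archive_list = list(archive_list)   # copies made when a file is shelved
            self.restore_list = list(restore_list)   # ordered restore attempts

        def shelve(self, file_data, write_copy):
            # An identical copy is written to each archive class in the archive list.
            for archive_id in self.archive_list:
                write_copy(archive_id, file_data)

        def unshelve(self, read_copy):
            # Restore attempts are made in restore-list order; the first success wins.
            for archive_id in self.restore_list:
                data = read_copy(archive_id)
                if data is not None:
                    return data
            raise IOError("file data could not be restored from any archive class")

    # Hypothetical example: two shelf copies (archive classes 1 and 2); class 3 was
    # used in the past and stays in the restore list so old data can still be restored.
    finance_shelf = Shelf("FINANCE", archive_list=[1, 2], restore_list=[1, 2, 3])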
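The event-driven Management Layer and its timer queue, as described above (a LiFo stack of event descriptors, a queue of clock-based functions sorted on time, and a hardware clock ticking every 10 nanoseconds), might be modelled roughly as in the sketch below. The patent describes the timer queue as double-ended; a heap stands in for it here for brevity, and all names are assumptions rather than disclosed interfaces.

    import heapq
    import itertools
    from collections import deque

    TICK_NS = 10                # hardware clock tick of the current implementation (10 ns)
    _order = itertools.count()  # tie-breaker for entries due at the same time

    class ManagementLayer:
        """Rough sketch of the event-driven ML; names are assumptions."""

        def __init__(self):
            self.event_stack = deque()  # LiFo: the most recent event descriptor on top
            self.timer_queue = []       # (due_time_ns, order, routine), kept sorted on time

        def post_event(self, descriptor, trap=None):
            # Every significant event pushes a descriptor; an event trap may be taken,
            # otherwise the generic interpretation of accepted rules applies.
            self.event_stack.append(descriptor)
            if trap is not None:
                trap(descriptor)

        def schedule(self, due_time_ns, routine):
            # Clock-based requests behave exactly like other significant events.
            heapq.heappush(self.timer_queue, (due_time_ns, next(_order), routine))

        def on_tick(self, now_ns):
            # When a queued timed event is started its entry is removed; the event
            # routine itself must queue any follow-up entries it needs.
            while self.timer_queue and self.timer_queue[0][0] <= now_ns:
                _, _, routine = heapq.heappop(self.timer_queue)
                routine(self)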

Abstract

An information mass storage and non-volatile mass memory device and the method, apparatus and system hereof. An information mass storage device, that can also be used as non-volatile mass memory, having a very fast response time of at least 1 GB/s, that can reach storage or memory densities of at least 50 TB/inch2, consuming minimal power and which is not subject to near-future limitations, for digital electronic or electro-magnetic devices that require storage and/or memory. In accordance with the invention, a revolutionary new nanoscale technology, based on Carbon Nanotubes, as the foundation of the mass storage and memory is enclosed. Also revealed is a 'layered structure' storage system consisting of several layers to deal with high transfer rates and high volumes and high density of information, and the management of it.

Description

TITLE OF THE INVENTION
INFORMATION STORAGE BASED ON CARBON NANOTUBES
CROSS-REFERENCE TO RELATED APPLICATION
Not applicable.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not applicable.
REFERENCE TO SEQUENCE LISTING, A TABLE, OR A COMPUTER PROGRAM LISTING COMPACT DISC APPENDIX
Not applicable.
BACKGROUND OF THE INVENTION
The invention refers generally to information mass storage and mass memory. The invention also refers to a method of information mass storage and an apparatus for creating information mass storage devices, based on Carbon Nanotubes. The main objectives of the information technology and computer manufacturers are to provide viable storage-capacity solutions for electronic and digital devices that achieve low cost, low power consumption, high track density and high volumetric density.
It is a general aim for the computer and information technology industry to increase the storage density of information storage devices used by electronic or electro-magnetic (digital) devices. For decades researchers have been working to increase storage density and reduce cost of data storage devices such as magnetic hard-drives, chips, optical drives, and random access memory. However, increasing the storage density is becoming increasingly difficult because conventional technologies appear to be approaching fundamental limits on storage density. For instance, information and/or data storage based on magnetic recording is rapidly approaching fundamental physical limits such as the super paramagnetic limit, below which magnetic bits are not stable at room temperature.
In addition, even if one has increased the storage density, one still has to overcome another major hurdle, which is the time required to access (a certain part of) the information and/or data. The storage device's usefulness is limited if it takes too long to store or retrieve information. In other words, in addition to high storage density, one must also find a way to achieve fast access times.
Last but certainly not least, every new technology that is a good candidate to replace today's storage methods should offer long-term perspectives in order to give room for continued improvements within this new technology over at least one or more decades. With a fundamental change in storage technology, the information technology and computer industry would have to undertake remarkable investments in order to adapt existing production capacity or to replace existing machinery with new machinery for any technical purpose involved with said new technology.
BRIEF SUMMARY OF THE INVENTION
It is an object of the present invention to develop (1) an information mass storage device, that can be used as non-volatile mass memory, having a very fast response time of at least 1 GB/s, that can reach storage or memory densities of at least 10 TB/inch2, consuming minimal power and which is not subject to near-future limitations, such as mechanical wearing or fundamental physical limits of the foundation; (2) a 'unified mass storage method' to read, write and delete information and/or data; and (3) a mass storage/mass memory apparatus, to be used as information storage or non-volatile memory for (digital) electronic or electromagnetic devices using digital information mass storage/mass memory.
These objects are achieved by the features stated in the enclosed independent claims. In accordance with the invention, a revolutionary new nanoscale technology of mass storage is enclosed. Also revealed is a 'layered structure' storage system consisting of several layers to deal with high transfer rates and high volumes and high density of information, and the management of it. Also explained will be the specific Carbon Nanotubes to be used as semiconductors and transmitters for information mass storage and memory.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
Not applicable.
DETAILED DESCRIPTION OF THE INVENTION
When faced with the constraints imposed by the super paramagnetic effect, with the question of how to enhance the storage densities, performance, functionality and reliability of mass storage, and with the question of how to create storage devices that are better suited to new emerging applications and new customer requirements, the focus tends to shift to other promising technologies that can meet at least the same requirements without approaching these fundamental physical limits. One of these technologies looks into the possibilities of using Carbon Nanotubes as semiconductors, mechanical devices and/or transmitters.
We chose to embark on storage capacity based on these Carbon Nanotubes because of their performance and density characteristics. Another advantage of using Carbon Nanotubes is that they consume very little power, also solving the problem of overheating that traditional transistors at the nanoscale encounter. In addition, Carbon Nanotubes are extremely durable, both mechanically and electrically. So far, no one has been able to create storage out of Carbon Nanotubes. This invention will describe and prove the working of an information mass storage device, utilizing Carbon Nanotubes as semiconductor, transmitter and mechanical devices.
The scope of using Carbon Nanotubes, according to the specific specifications further described in this patent, is to produce an information storage and mass memory device that provides significantly increased low-cost storage density, fast access times and high transfer rates for devices that use industrial standard interfacing for input and/or output to (digital) electronic or electro-magnetic devices, such as computers, communicators, audio, video and other devices, with preferred embodiments not herein limited. The Carbon Nanotubes are directed into contact with the storage medium, enforcing bit-writing, bit-reading and positioning. The present invention will unveil an information storage and/or memory device that can be used with a computational system as a type of 'Carbon Nanotube-based' information storage and/or memory device. The dimensions and housing of the memory and/or storage device may vary broadly. Therefore, the same storage device can easily be adjusted for use for data and/or information mass storage in PCs, supercomputers, servers, minicomputers, (video) cameras, communicators, mobile and cellular phones, notebooks, PDAs, watches, audio equipment, as well as any other technology that requires digital electronic or electro-magnetic information mass storage and/or mass memory. It has always been the scope of the inventors to provide a unified, general information storage and/or memory device aimed at cost reduction, functionality and ease of use.
Other objects, features and advantages of the invention will be apparent from the following specifications. Further advantageous arrangements of the invention are set forth in the claims and respective subclaims. Hereafter, the term 'storage device' will refer to the combined description of the information mass storage and/or mass non-volatile memory device.
The mass storage device uses Carbon Nanotubes according to the hereafter-mentioned qualifications: an outer single wall Carbon Nanotube, with a molecular weight of preferably > 850, either left or right chiral, and a (twisted) Carbon Nanotube housed within the outer single wall Nanotube. Utilization advantages of these Carbon Nanotubes as semiconductors are that they are ideal transistors, very small compared to the currently used silicon transistors, and consume very little power. The last feature benefits low cost and is responsible for a minimal temperature rise. Three characteristics of these Carbon Nanotubes are important for identification determination: the location of these Carbon Nanotubes; the variable depths of the dents, caused by the Carbon Nanotube hitting the storage medium, dependent on the vertical position of the inner tube in relation to the outer tube; and the bottom depths of the dents in the storage medium, caused by the Carbon Nanotubes, which depend on the correspondence of the chirality between the inner and outer tubes.
Hereafter these Carbon Nanotubes will be called 'Smartey Tubes' to distinguish them from other sorts of Carbon Nanotubes.
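The three identification characteristics just listed (location, dent depth governed by the vertical position of the inner tube, and bottom depth governed by the chirality correspondence of the inner and outer tube) could, purely for illustration, be recorded per tube in a structure like the following sketch; the field names and units are assumptions and are not part of the disclosure.

    from dataclasses import dataclass

    @dataclass
    class SmarteyTubeId:
        """Illustrative identification record for one Smartey Tube (assumed names)."""
        location: tuple         # (x, y) position of the tube in the silicon wafer
        dent_depth_nm: float    # dent depth, set by the vertical position of the
                                # inner tube relative to the outer tube
        bottom_depth_nm: float  # bottom depth of the dent, set by the chirality
                                # correspondence between inner and outer tube

    # Hypothetical entry for one tube in the wafer's identification table.
    tube_07 = SmarteyTubeId(location=(12, 48), dent_depth_nm=1.4, bottom_depth_nm=0.6)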
The mass storage consists of a layered structure system. The 'layer structure' comprises six independent layers, each layer equipped with certain features and functions, of which a ' detailed description is enclosed. It consists of the following layers:
1. 'General Unified Storage' system software delivering a logical and manageable set of storage entities;
2. 'Management' firmware to guard the integrity of the whole system, driven by rules, dictated by the user-owner of the system. Second function is keeping up the communications to the user-owner. Function three is passing and controlling all storage data between the interface layer and the general unified layer. Function four is protecting the system by keeping track on internal and external Environment Monitoring Units (EMU's)
3. 'Interfacing' firmware consisting of routines to drive the hereafter mentioned firmware, to gather the data into buffers and to warn the other two layers in the system of upcoming events, and to gather and deliver data in the appropriate form with the connectors to the outside of the device;
4. Firmware, consisting of tables to direct the driving of the mechanical and electronic parts;
5. Mechanical and electronic parts comprising, in a preferred embodiment, stepper motors composed of an equal number of lateral and correction motors for fine-tuning, temperature sensors, voltage sensors, physical access sensors, laser diodes, connectors to the outside of the device, dedicated processors, memory, amplifiers, tri-state wiring to the Silicon wafer and wiring to the stepper motors, and boards containing memory, processors, amplifiers and parts of the internal EMU (all of these can exist multiple times in one embodiment to improve reliability (mirroring) as well as transfer, access, positioning and erasing speed, and the speed of keeping up the whole system while erasing and reading cycles are performed at the same time);
6. A layer consisting of a storage medium and several Smartey Tubes, integrated in a wafer of Silicon. The storage medium consists of a dual polymer foil, one film damped with a thin film of carbon and another polymer film, together molded in a frame of duraluminium. The wafer of Silicon is internally divided into a large number of isolatable electrical areas, also important for the identification determination of the Smartey Tubes. In the Silicon wafer a variable number of at least 300, preferably more, Smartey Tubes, all functioning as semiconductors, transmitters and mechanical devices, are fixed in the wafer.
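The six-layer structure listed above, together with the rule (explained later) that a layer may communicate only with the layer directly above or below it, can be pictured as a simple chain of objects. The following sketch is an illustrative abstraction; the layer names follow the list above, but the class and method names are assumptions.

    class Layer:
        """A layer may communicate only with the layer directly above or below it."""

        def __init__(self, name):
            self.name = name
            self.above = None
            self.below = None

        def pass_down(self, request):
            # Storage data and commands travel top-down, one layer at a time.
            return self.below.pass_down(request) if self.below else request

        def pass_up(self, event):
            # Events and read-back data travel bottom-up, one layer at a time.
            return self.above.pass_up(event) if self.above else event

    # Top-down order as defined in the list above (layer 6 down to layer 1).
    names = ["General Unified Storage", "Management", "Interfacing", "Firmware",
             "Mechanical and electronic parts", "Storage medium and Smartey Tubes"]
    layers = [Layer(n) for n in names]
    for upper, lower in zip(layers, layers[1:]):
        upper.below, lower.above = lower, upper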
During operation, the following essential functions of the Smartey Tubes are executed: Writing: while in the writing operation, the medium moves very fast in the x and y directions, while the firmware causes the tubes to hit the storage medium in numerous places. Writing is performed by increasing the kinetic energy of the Carbon Nanotubes to deform the storage medium. This causes a write with a variable imprint, which is registered at the proper place on the storage medium, using software corrections for movement aberrations. Reading: the Carbon Nanotubes detect the depth of the imprint made on the medium to perform reading. After the sensing of these imprints, further processing of the reading is performed by the firmware, using the depths of the indents and the corrected, determined place of the imprint on the medium. Deleting: the deleting process is established by logically moving the data to the deleted data queue by the 'General Unified Storage' software. Erasing: erasing is the real destruction of deleted information on the medium to make room for new data. The firmware keeps track of the existence of unused places, non-deleted data and deleted data on the storage medium and continuously arranges to sort it as geographically near as possible. The erasing process is done by temporarily increasing the foil temperature with laser diodes on sufficiently large local areas that do not contain non-deleted data. After heating that part of the storage medium with the laser diodes to a temperature of approximately 180 °C for a short time, it will pull back tight to its original proportions, making the area available for reuse. Storing is established and managed by the firmware.
As mentioned before, while the write/read operation is performed, the medium is continuously moving in x and y directions, driven by stepper motors in a frame, using software corrections for movement anomalies.
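A rough sketch of the four operations just described (writing by denting the moving foil, reading by sensing the dent depth, deleting by logically queueing data as deleted, and erasing by locally heating areas that hold no live data to approximately 180 °C) is given below. The data model is a deliberate simplification and every name in it is an assumption, not part of the disclosure.

    ERASE_TEMP_C = 180   # approximate foil temperature used for thermal erasing

    class StorageMedium:
        """Simplified model: the foil is a dict mapping (x, y) positions to dent
        depths; a missing position means the spot is unwritten or has been erased."""

        def __init__(self):
            self.dents = {}          # (x, y) -> dent depth (arbitrary units)
            self.deleted_queue = []  # positions logically deleted by the GUS layer

        def write(self, pos, depth):
            # Writing: the kinetic energy of the tube deforms the foil to a variable depth.
            self.dents[pos] = depth

        def read(self, pos):
            # Reading: the tube senses the depth of the imprint at the corrected position.
            return self.dents.get(pos, 0)

        def delete(self, pos):
            # Deleting: data is only moved logically to the deleted-data queue.
            self.deleted_queue.append(pos)

        def erase(self, area):
            # Erasing: an area containing no non-deleted data is heated to about 180 C;
            # the foil pulls back to its original proportions and becomes reusable.
            live = [p for p in area if p in self.dents and p not in self.deleted_queue]
            if live:
                raise ValueError("area still contains non-deleted data")
            for p in area:
                self.dents.pop(p, None)
                if p in self.deleted_queue:
                    self.deleted_queue.remove(p)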
As will be described hereafter, a problem foreseen with mass storage of information is achieving fast transfer times, which is also related to the mode of recovery of the information and/or data stored. Apart from the handling of movement aberrations of the storage medium, the speed of the storage medium itself, the erasing speed, the speed of keeping up the whole system while erasing and reading cycles are performed at the same time, and the high speed with which the Smartey Tubes hit the storage medium, the management firmware (responsible for access to certain parts of the written data as well as for controlling and keeping up communications to the user-owner) will allow for high-speed access and recovery of stored data and/or information. The limiting factor for the transfer rate right now is the input/output device. For example, with current interfacing technology, utilizing a SCSI interface allows for a transfer rate of 160 MB/s, while using a fiber optic connector allows for a transfer rate of 1 GB/s. As technology in this area develops, allowing even faster connector access technologies, testing will show that speeds increase significantly.
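To put the quoted interface limits in perspective, a short calculation using the figures above (160 MB/s for the SCSI interface, 1 GB/s for the fiber-optic connector; the 1 TB payload is an arbitrary example) shows how strongly the input/output device dominates transfer time:

    # Time to move 1 TB of data through the two interfaces quoted above.
    payload_bytes = 1e12     # 1 TB, an arbitrary example payload
    scsi_rate = 160e6        # 160 MB/s (SCSI interface)
    fiber_rate = 1e9         # 1 GB/s (fiber-optic connector)

    print(f"SCSI : {payload_bytes / scsi_rate / 3600:.1f} hours")    # about 1.7 hours
    print(f"Fiber: {payload_bytes / fiber_rate / 60:.1f} minutes")   # about 16.7 minutes

Even with the faster connector, roughly a quarter of an hour is needed per terabyte, which is why the text identifies the input/output device, not the medium, as the limiting factor.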
Before explaining the 'layered structure' of which the storage device has been put together, it should be made clear that this part of the invention is not necessarily limited to the "Carbon Nanotubes' technology. The described structure, including its firm- and software, and its way of dealing with reading, writing, erasing and storing of information and/or data herein is unified, and embedded in the higher positioned layers, therefore also applicable on other (past and future) mass storage and mass memory technologies. This in turn allows for improved fast access time and high transfer data rate for all mass storage devices in which this structure is used. It is to be understood that the 'layer structure' can be completely downsized to e.g. little memory sticks, chips, microprocessors or even smaller.
The 'layer structure' comprises six independent layers, each layer equipped with certain features and functions, of which a detailed description is given below. The independence follows from the fact that each layer is allowed to communicate only with the layer directly above or below it. The many features, functions and advantages of the 'layer structure' system will become apparent from the written description. Advantages of the layer structure include a clear overview, flexibility, the fact that it is easily adjusted and corrected, and ecological soundness, because each layer can be developed in a clean environment. The layers are defined in sequence from the top down:
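The rule that each layer communicates only with its direct neighbours can be illustrated with a minimal sketch (the layer numbering follows the definitions below; the check itself is an illustrative assumption, not part of the original disclosure):

    # Layer numbering as defined below: 1 = storage medium and Smartey Tubes,
    # 2 = mechanical and electronic parts, 3 = firmware tables,
    # 4 = interfacing, 5 = management firmware, 6 = General Unified Storage.
    def may_communicate(layer_a: int, layer_b: int) -> bool:
        """A layer may only exchange data with the layer directly above or below."""
        return abs(layer_a - layer_b) == 1

    assert may_communicate(6, 5)      # GUS talks to the management layer
    assert not may_communicate(6, 4)  # but never directly to 'interfacing'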
Description of layer 6: General Unified Storage (GUS).
Before running GUS in your production environment, you need to understand various definitions and concepts. For each concept, GUS provides a configuration option that you use to manage the GUS environment. We present here an explanation of the GUS concepts and configuration options, structured around the following managed entities in the system: Facility; Shelf; Archive class; Device; Volume; Cache; Policy; and Schedule. We also define the relationships among the managed entities, and provide guidelines for their definition to create an optimal GUS environment.
The GUS environment consists of the definitions you create and the relationships that exist among the definitions. The definitions described in the following sections are maintained in definition databases.
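As a minimal sketch of what such definition databases might hold (field names and defaults are assumptions chosen to mirror the managed entities listed above, not part of the original disclosure):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ArchiveClass:
        identifier: int            # e.g. 1..36 in Basic mode
        name: str                  # e.g. "GUS$ARCHIVE01"

    @dataclass
    class Shelf:
        name: str
        archive_list: List[int] = field(default_factory=list)          # shelf copies
        restore_archive_list: List[int] = field(default_factory=list)  # restore order
        catalog: Optional[str] = None

    @dataclass
    class Volume:
        label: str
        shelf: str = "DEFAULT"     # every online volume belongs to exactly one shelf

    @dataclass
    class Facility:
        mode: str = "BASIC"        # "BASIC" or "PLUS"
        shelving_enabled: bool = True
        unshelving_enabled: bool = True
        shelf_servers: List[str] = field(default_factory=list)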
The GUS facility entity allows you to control GUS functions across the entire storage fabric. You can control the following functions at the facility level: GUS mode; GUS operations; Shelf servers; Event logging.
GUS Mode: You can specify whether GUS operates in Basic or Plus mode. The Basic mode provides shelving, pre-shelving, and un-shelving functionality using simple devices. All interaction occurs through commands. The Plus mode provides shelving, pre-shelving, and un-shelving functionality using the full suite of devices, including robotically controlled devices. Considerations for choosing the GUS operating mode: When deciding whether to operate in Basic or Plus mode, consider the following. If you use GUS Plus mode, you have one interface for media and device management across the storage management products. If you require support for large automated tape libraries, use GUS Plus mode. If you do not require additional device support and are not using other product functionality, use GUS Basic mode. If you are using only magneto-optical-nano devices and no tape devices, use Basic mode.
GUS Operations: You can specify whether shelving or un-shelving operations are enabled across the storage fabric as a whole. This includes operations initiated as a result of policy triggers, cache flush operations, and manually initiated GUS commands. The shelving parameter controls shelving, pre-shelving and cache flush operations. The un-shelving parameter controls un-shelving and automatically generated file faults. Under normal circumstances, you should enable both shelving and un-shelving across your storage fabric. This allows GUS to maintain the desired storage usage through automatic policy operations and also allows users access to shelved data at all times. Considerations for disabling shelving and un-shelving: You may need to disable GUS operations at certain times if they conflict with other activities (such as backups) and there are limited offline devices available. For example, if backups are performed nightly at midnight, you could set up a policy to disable shelving at that time. When necessary, you can disable shelving without normally causing problems with storage usage exceeding the defined goals. However, if you disable un-shelving, your users and applications may experience errors accessing shelved files. You should disable un-shelving only if you do not anticipate needing access to shelved data.
Shelf Servers: A shelf server is a single GUS node in a fabric that performs all operations to near-line and offline devices on behalf of all nodes in the fabric. It also coordinates fabric-wide operations such as checkpointing archive classes and resetting event logs. If the facility Catalog Server option is enabled, all cache operations and catalog updates are also performed by the shelf server. By default, cache operations are performed by the requesting client node for performance reasons. Such operations are passed from other (client) nodes to the shelf server for processing. The shelf server consolidates requests from all nodes and optimizes operations to minimize loading and positioning, as well as to support dedicated device access. Eligible servers: Although many nodes can be authorized for shelf server operation, only one GUS node functions as the shelf server at any given time. This way, if the current shelf server node fails, operations are immediately transferred to and recovered by another authorized shelf server node. You can specify up to 10 specific nodes to be authorized for shelf server operation. By default, all nodes in the fabric are authorized. The current shelf server node can be displayed using a 'show facility' command.
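The shelf-server failover behaviour described above can be sketched as follows (a hypothetical illustration; the node names and the simple first-available selection rule are assumptions):

    def pick_shelf_server(authorized_nodes, failed_nodes=frozenset()):
        """Return the first authorized node that has not failed; at most one
        node acts as the shelf server at any given time."""
        if len(authorized_nodes) > 10:
            raise ValueError("at most 10 nodes may be explicitly authorized")
        for node in authorized_nodes:
            if node not in failed_nodes:
                return node
        return None  # no eligible shelf server remains

    # If NODE_A fails, operations transfer to the next authorized node:
    assert pick_shelf_server(["NODE_A", "NODE_B", "NODE_C"], {"NODE_A"}) == "NODE_B"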
When deciding whether to authorize a node as a shelf server, consider the following: In Basic mode, all specified near-line and offline devices must be accessible to all shelf server nodes. By contrast, they do not need to be accessible to client nodes.
The shelf server undertakes the bulk of shelving operations for the fabric. To support transparent operations when a node fails, multiple shelf servers should be authorized. Scheduled policy execution should be run on an authorized shelf server node for optimal performance (unless a cache is defined). Using the default authorization of all nodes is acceptable if the above conditions are met and all your nodes have similar capabilities. Catalog Server: GUS gives you the option of directing all GUS operations and all catalog updates through the shelf server by enabling the Catalog Server option. With this option, all cache operations and catalog updates are performed by the shelf server node, in a similar manner to tape operations. There are two main reasons you may want to enable this feature: if you choose to protect your catalogs using after-image journaling, enabling the catalog server requires it only on the eligible server nodes; otherwise, it would be required on all nodes in the fabric. If you are using magneto-optical-nano cache devices as a permanent shelf, the catalog server option allows you to mount the devices on only the eligible shelf server nodes. The downside of enabling the catalog server option is that caching speed is somewhat reduced due to extra intra-fabric communications and possible delays in shelf server response time. Event Logging: GUS provides four event log files that enable you to monitor and tune the GUS environment, as well as to detect errors in GUS operation:
SHP_AUDIT: The shelf handler audit log, containing information on the parameters and final status of all requests.
PEP_AUDIT: The policy audit log, containing information on the parameters, number of files processed, and final status of all policy executions.
SHP_ERROR: The shelf handler error log, containing detailed information about any serious errors encountered during request processing, including exception information. PEP_ERROR: The policy error log, containing detailed information about any serious errors encountered during policy execution, including exception information. Event logging can be enabled and disabled within the following categories: Audit log: records all GUS requests; Error log: provides information on important errors; Exception log: provides error information that is useful in the error logs. We recommend that you enable all logging at all times to keep track of all activity. This is especially important when you have to report a problem.
The Shelf: A shelf is a named entity that relates a set of online volumes, on which shelving is enabled, to a set of archive classes that contains the shelved file data for those storage volumes. For each shelf, you can control the following: Shelf copies; Shelving operations; Shelf catalog; Delete save time; Number of updates to retain. You can define any number of shelves, but any specific online storage volume can be associated with only one shelf. The Default Shelf: GUS provides a default shelf, to which all volumes are associated if no other associations are defined. If your data reliability requirements are the same across all storage volumes, you can simply use the default shelf and specify the desired number of copies to use on that shelf. All volumes acquire the data reliability specified by the default shelf. If your data reliability requirements differ from volume to volume, you can define multiple shelves, each of which can contain different numbers of copies for data reliability purposes. You can then relate each volume to the shelf that has the appropriate number of copies. We recommend that you specify at least two copies for each volume.
If you have a very large number of online storage volumes, we recommend that you define multiple shelves, each with a separate catalog. This prevents any particular catalog from becoming so large that catalog access performance degrades. We recommend that you associate between 10 and 50 online storage volumes with each shelf, depending on the amount of shelving you plan to do. The shelf entity does not define the volumes associated with the shelf. Instead, you associate individual volume entities with the shelf. You can associate a particular volume with exactly one shelf. If you do not define volumes explicitly, all volumes implicitly use the default shelf.
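A small sketch of the volume-to-shelf association and the 10-to-50-volumes guideline (the default shelf name and the dictionary layout are illustrative assumptions, not part of the original disclosure):

    from collections import Counter

    def shelf_for_volume(volume_label, volume_to_shelf, default_shelf="DEFAULT"):
        """Each online volume is associated with exactly one shelf; volumes
        without an explicit association implicitly use the default shelf."""
        return volume_to_shelf.get(volume_label, default_shelf)

    def shelves_outside_guideline(volume_to_shelf, low=10, high=50):
        """Flag shelves associated with fewer than ~10 or more than ~50 volumes,
        following the guideline given above."""
        counts = Counter(volume_to_shelf.values())
        return {shelf: n for shelf, n in counts.items() if not low <= n <= high}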
Using Multiple Shelf Copies: This section explains why you need multiple shelf copies and how to define them. One of the most important decisions that you need to make concerns the number of copies of shelved file data that you need for data safety purposes. Shelved data is not normally backed up in the normal backup regimen, because most backup utilities work in the following way: an image backup saves only the headers of shelved files; an incremental backup does save the entire file, but the files selected for backup are those that have been recently modified, which are not the files that usually are shelved. In other words, after a file is shelved, it is likely that its data will not be backed up again. A typical backup strategy recycles the backup tapes when a certain number of more recent copies have been made. This cycle may be anywhere from a few days to several years. However, there eventually will come a time when all of the backup tapes contain only the headers of shelved files. Unless the tapes are never recycled, the shelved file data on the backup media will eventually be lost. As such, the easy way to enhance the reliability of shelved file data is to make duplicate copies of the data by using multiple shelf copies.
Defining Shelf Copies: Shelf copies are defined using a concept called an archive class. An archive class is a named entity that represents a single copy of shelf data. Identical copies of the data are written to each archive class when a file is shelved. For each shelf, you can specify the archive classes to be used for shelf copies for all volumes associated with the shelf. The minimum recommended number of copies (archive classes) for each shelf is two. Archive classes are represented by both an archive name and an archive identifier. Archive identifiers are used in Shelf Management Utility commands for ease of use. GUS Basic mode supports 36 archive classes named GUS$ARCHIVE01 to GUS$ARCHIVE36, with associated archive identifiers of 1 to 36 respectively. GUS Plus mode supports up to 9999 archive classes, named GUS$ARCHIVE01 through GUS$ARCHIVE9999, with associated archive identifiers of 1 to 9999.
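The mapping between archive identifiers and archive class names can be sketched as follows (the zero padding for identifiers above 99 is inferred from the names quoted above and is therefore an assumption):

    def archive_class_name(identifier: int, plus_mode: bool = False) -> str:
        """Map an archive identifier to its archive class name: 1..36 in Basic
        mode, 1..9999 in Plus mode."""
        limit = 9999 if plus_mode else 36
        if not 1 <= identifier <= limit:
            raise ValueError(f"identifier must lie between 1 and {limit}")
        return f"GUS$ARCHIVE{identifier:02d}"

    assert archive_class_name(1) == "GUS$ARCHIVE01"
    assert archive_class_name(9999, plus_mode=True) == "GUS$ARCHIVE9999"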
Archive Lists and Restore Archive Lists: For each shelf, you must specify two lists of archive identifiers: the archive list, representing the desired number of shelf copies (up to 10 archive identifiers can be specified in this list), and the restore archive list, representing an ordered list of archive classes from which restore attempts are made (up to 36 archive identifiers can be specified in this list). The archive and restore archive lists are defined using the 'set shelf' command with the 'archive' and 'restore' qualifiers. Restore archive classes are used for un-shelving files in the order specified in the restore archive list. The first attempt to restore a file's data is made from the first archive class specified in the restore list. If this fails, an attempt is made from the next archive class, and so on. Although only 10 archive classes are supported for shelf copies, up to 36 are supported for restore, because the restore list must contain a complete list of all archive classes that have ever been used for shelving on the shelf. This enables files to be restored not only from the current list of shelf archive classes, but also from all previously defined shelf archive classes. In this way, you can add or change archive classes for a shelf by: changing the archive classes in the archive list, which affects subsequent shelving operations only; and adding new archive classes to the restore list, while keeping the existing definitions in place, so that files shelved under those definitions can still be restored. Archive classes are also related to media types and devices. When a shelf is first created, the archive classes specified in the archive list are copied to the restore list if the restore list is not specified. Thereafter, the two lists must be maintained separately. Primary and Secondary Archive Classes: When defining your restore archive list, it is useful to think of the first archive class in the restore list as a primary archive class and all the others as secondary archive classes. For shelving operations, all of the archive classes in the archive list receive the same number of operations, because GUS copies data to all archive classes at the time of shelving. For un-shelving, however, this is different: in most cases, GUS only needs to read from the primary archive class to restore the data. These concepts are useful when deciding how to relate your archive classes to media types and devices. Multiple Shelf Copies: You need to determine the appropriate number of shelf copies for your shelved file data, depending on the importance of the data being shelved. We recommend a minimum of at least two shelf copies of all data, because media can be lost or destroyed. If the data is especially critical, you can make additional copies, some of which might be taken offsite and stored in a vault. GUS provides a mechanism called checkpointing to synchronize your shelved data media and backup media so that they can be removed to an offline location together.
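The restore behaviour described above, trying each archive class in the restore archive list in order, can be sketched as follows (the read_from_archive callback and the use of IOError are illustrative assumptions, not part of the original disclosure):

    def unshelve(file_id, restore_archive_list, read_from_archive):
        """Attempt to restore a file's data from each archive class in the
        restore list, in order; raise only if every attempt fails."""
        failures = []
        for archive_id in restore_archive_list:
            try:
                return read_from_archive(archive_id, file_id)
            except IOError as exc:          # e.g. lost or unreadable media
                failures.append((archive_id, exc))
        raise IOError(f"unable to restore {file_id}: {failures}")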
Shelving Operations: You can control the same operations for a shelf as you can for the facility, except that the operations defined for the shelf affect only the volumes associated with the shelf. This gives you a finer level of shelving control, which might be useful if certain classes of volumes are not regularly accessed at certain times, and you want to disable shelving activity. However, as with the facility control, it is expected that shelving and unshelving operations usually are enabled.
Shelf Catalog: The shelf catalog contains information regarding the location of near-line and off-line data for all volumes associated with the shelf. We recommend that you define a separate catalog for each shelf, but it is possible for several shelves to share a catalog, or for all shelves to use the default catalog. Defining a separate catalog for each shelf has the following advantages: it restricts the impact of a temporary loss of a catalog to a known set of volumes associated with the shelf; it reduces the size of the catalog file, allowing more flexible placement in your storage sub-system; it increases catalog access performance, since the catalog is smaller and there are fewer records to scan; and it reduces the time for a restoration of a catalog from backup tapes.
As a guideline, we recommend that each shelf be associated with between 10 and 50 volumes, and that each shelf has its own catalog. A shelf catalog needs to be protected with a similar level of protection as the default catalog, namely: the catalog should be in a shadow-set or RAID-set, and the catalog should be backed up on a regular basis. It is also recommended that the catalog for a shelf be placed on a storage volume other than one associated with the shelf itself. In very large environments, it might be appropriate to dedicate one or more shadowed storage sets to GUS catalogs, and to disable shelving on those storage volumes. When defining a new catalog for a shelf, or a new shelf for a volume, GUS automatically splits all associated shelving data from the old catalog and merges it into the new catalog. Delete Save Time: You can specify a delete save option for shelved files that have been deleted. This option allows the specification of a delta time, which keeps a file's shelved data in the GUS subsystem for this period after the file is deleted. The actual purging of deleted files (after the specified delay) is performed by the 'repack' function.
Number of Updates for Retention: This option allows the specification of a number of updates to a shelved file that will be kept in the GUS subsystem. This option applies to files that have been updated in place, not new versions of files that have been created after an update. New versions are controlled by online maintenance outside the scope of GUS. The actual purging of obsolete shelf data is performed by the 'repack' function.
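A hypothetical sketch of what the 'repack' function described above might do with both retention settings (the catalog entry layout is an assumption made for illustration only):

    import time

    def repack(catalog_entries, delete_save_time, updates_to_retain, now=None):
        """Purge shelved data whose file was deleted longer ago than the delete
        save time, and keep only the most recent in-place updates of each file."""
        now = time.time() if now is None else now
        kept = []
        for entry in catalog_entries:
            # entry: {"deleted_at": float or None, "updates": [newest, ...]}
            if entry["deleted_at"] is not None and now - entry["deleted_at"] > delete_save_time:
                continue                                   # delete save time elapsed
            entry["updates"] = entry["updates"][:updates_to_retain]
            kept.append(entry)
        return kept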
Description of layer 5: Management Firmware
The "Management layer" (ML) is the layer between the "General Unified Storage" (GUS) layer and the "Interfacing layer". The management layer has a lot of responsibilities and functions: It passes all storage data between GUS and Interfacing; It communicates via "interfacing" with the "Environment Monitoring Unit" (EMU), the Human Command Interface (HCI) and the other storage devices in a fabric; It accepts and processes the user-
SUBSTITUTE SHEET-tRUtE-26)- management rules; It accepts and processes the technical rules; It governs the setup and passing of all "significant events"; It governs the handling of a "timer queue"
Environment Monitoring Unit (EMU): A standard Environment Monitoring Unit should be connected to the same hardware interface as the internal EMU. An EMU should, at a minimum, signal the following events: environment temperature out of bounds; environment electrical power out of bounds; physical (human) access to the storage device; an out-of-bounds condition of a power supply; and a failure of an internal device. The EMU should, at a minimum, accept and handle: an audio or visual alarm, logging of events to a connected printer or terminal, and a storage system shutdown and startup. The events for the EMU are handled by the event routines, set up by the ML and governed by the Technical Rules and possibly added User Management Rules.
Human Command Interface (HCI): The Human Command Interface consists of the following elements: an interface to a terminal, connected via the SCSI port to the storage unit, access to which is protected by an RSA-scrambled password and a timeout period; acceptance, interpreting and handling of Technical Rules, as defined in the TR language definition; acceptance, interpreting and handling of Human Command Interface (HCI) rules, as defined in the HCI language definition; a LALR, extendable, language parser and interpreter, accompanied by an inline compiler to compile and set up possible event handling routines; a database of accepted living and sleeping rules and language elements that can be investigated by the user; a definition of the HCI language; a definition of the TR language; and a debugging tool (protected by a special password).
Intra Fabric Communication: Intra-fabric communication conforms to the standard IEEE-ENSA rules for RAID storage devices, extended with the "General Unified Storage" communication between the management layers of other (Unified) Storage devices. Events: The ML is entirely "event driven", which means that every action or decision the management system takes stems from an event elsewhere in the system. Because of that event-driven nature, the ML is implemented as a large series of interrupt-driven, multi-threaded routines, the top of which is started by a significant event. Any event will put some data describing the event on a stack (LIFO), and determine whether an event trap should be taken or a generic interpretation of accepted rules should be applied. The number of wired-in event routines is kept to a minimum (14), and the majority of the event routines in the storage system are the ones that stem from the user-defined HCI rules. The number of wired-in generic interpretation rules (17) is also kept to the bare minimum, because wired-in routines contribute to inflexibility of the whole system.
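The event-driven behaviour of the ML can be illustrated with a toy dispatcher (a sketch only; the trap table and rule list stand in for the 14 wired-in event routines and the user-defined HCI rules, and all names are assumptions):

    class ManagementLayerSketch:
        """Events are pushed onto a LIFO stack; the dispatcher then either takes
        a wired-in event trap or falls back to generic interpretation of rules."""

        def __init__(self, wired_in_traps, generic_rules):
            self.stack = []                    # LIFO of (event, payload) descriptors
            self.traps = wired_in_traps        # dict: event name -> routine
            self.rules = generic_rules         # list of rule(event, payload) callables

        def signal(self, event_name, payload=None):
            self.stack.append((event_name, payload))
            self._dispatch()

        def _dispatch(self):
            event_name, payload = self.stack.pop()
            trap = self.traps.get(event_name)
            if trap is not None:
                trap(payload)                  # take the wired-in event trap
            else:
                for rule in self.rules:        # generic interpretation of accepted rules
                    rule(event_name, payload)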
The Timer Queue: There are many functions in storage systems that are time dependent. For example, there should be a back-up system that saves data on a regular, time-scheduled basis. The ML has, and governs, a double-ended "timer queue", in which clock-based functions are kept, sorted on time, for future time-based events. For the sake of simplicity (clock-based functions can, and will, be generated by HCI rules), the nature of clock-based requests is exactly the same as that of the "significant events" used elsewhere in the system. For the time-based events the storage system has a hardware clock, which ticks in intervals shorter than the time an atomic operation in the system takes. The current implementation of the storage system therefore has a clock that ticks every 10 nanoseconds. As storage systems develop, future implementations could require a shorter tick interval. When a queued timed event is started, its entry in the timer queue is deleted. It is the responsibility of the corresponding event routine to generate any new time-based entries that are needed and the events that will warn the ML that an event has finished.
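A minimal sketch of such a timer queue follows (a simple heap ordered on the due tick stands in for the double-ended queue; the use of tick values as keys and all names are assumptions, not part of the original disclosure):

    import heapq
    import itertools

    class TimerQueueSketch:
        """Entries are kept sorted on their due tick; a started entry is removed
        from the queue, and its routine may schedule the follow-up entries."""

        def __init__(self):
            self._entries = []
            self._order = itertools.count()    # tie-breaker for equal due ticks

        def schedule(self, due_tick, event_routine):
            heapq.heappush(self._entries, (due_tick, next(self._order), event_routine))

        def run_due(self, current_tick):
            while self._entries and self._entries[0][0] <= current_tick:
                _, _, routine = heapq.heappop(self._entries)   # entry deleted on start
                routine(self)                  # routine may schedule new timed events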
Description of layer 4: 'Interfacing'
The Interfacing Layer consists of a number of routines for communicating with the Management Layer and the Firmware Layer. The Interfacing layer has a number of communication functions: communicating with the interface to the host computer; communicating with the EMU hardware; communicating with the Smartey Tubes, via the driving voice coil and driving capacitor plates; communicating with the silicon wafer chip and switching the barriers between the areas on and off to simulate a "voice coil"; switching the heating laser beam on and off (for the erasing phase) on demand of "management"; getting the multiplexed info on the vertical position of all the tubes (a read); putting force on the "1" tubes when writing; moving the foil to the right position (x, y), with corrections for elasticity deformations in the foil, by switching the correction stepper motors; and continuously keeping the coordinates of the foil in the "topographic table". On startup, this layer reads out the system parameters (number of detected, usable tubes and deformation parameters of the foil from a previous 'life'). It also redetects, checks and calibrates this information.
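The use of the "topographic table" for positioning can be sketched as follows (a hypothetical illustration; in the actual device the corrections would come from the calibration performed at startup, and the table layout shown here is an assumption):

    def corrected_position(target_x, target_y, topographic_table):
        """Apply the stored elasticity correction for a target coordinate and
        return the (x, y) position the stepper motors are actually driven to."""
        dx, dy = topographic_table.get((target_x, target_y), (0.0, 0.0))
        return target_x + dx, target_y + dy

    # Example: near (120, 45) the foil is known to be stretched by 0.3 units in x.
    table = {(120, 45): (0.3, 0.0)}
    assert corrected_position(120, 45, table) == (120.3, 45.0)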
Description of layer 3: Firmware, consisting of tables to direct the driving of the mechanical and electronic parts.
Description of layer 2: Mechanical and electronic parts.
Mechanical and electronic parts comprising a preferred embodiment: stepper motors composed of an equal number of lateral and correction motors for fine-tuning, temperature sensors, voltage sensors, physical access sensors, laser diodes, connectors to the outside of the device, dedicated processors, memory, amplifiers, tri-state wiring to the Silicon wafer and wiring to the stepper motors, and boards carrying memory, processors, amplifiers and parts of the internal EMU (all of these can exist multiple times in one embodiment to improve reliability - mirroring - as well as transfer, access, positioning and erasing speed and the speed of keeping up the whole system during erasing and reading cycles performed at the same time).
Description of layer 1: The storage medium and the Smartey Tubes. A layer consisting of a storage medium and several Smartey Tubes, integrated in a wafer of Silicon. The storage medium consists of a dual polymer foil, the top film damped with a thin film of carbon, together molded in a frame of duraluminium. The Silicon wafer is internally divided into a large number of isolatable electrical areas, which is also important for determining the identity of the Smartey Tubes. In the Silicon wafer a variable number of Smartey Tubes, at least 150 and preferably more, is fixed, all functioning as semiconductors, transmitters, and mechanical devices.
As far as the low labor, manufacturing and equipment costs of producing these kinds of storage devices are concerned, the following aspects will be of significant influence in preferred embodiments. Carbon Nanotubes are cheap to buy and widely available. Their solid structure ensures that mechanical wear is minimal or eliminated. Used as transmitters and semiconductors they consume minimal energy. In addition, advances in nanotechnology are progressing rapidly, further reducing manufacturing and labor cost. Another advantage of Carbon Nanotubes is that the molecules can arrange themselves into patterns like snowflakes; the individual chip circuits that contain the Smartey Tubes needed within the storage device would no longer have to be drawn: a drastic change that would dramatically lower the labor, factory and equipment costs in the semiconductor industry. The stepper machines used, in conjunction with the firmware, to move the 'Carbon Nanotubes' relative to the storage medium, as well as the Carbon Nanotubes and the storage medium itself, can be mass-produced relatively inexpensively.
As technology progresses, the stepper motors and storage medium can be improved or replaced by even better performing items. One might also consider replacing the Silicon with a crystal of Carbon Nanotubes, because in the case of Carbon Nanotubes the molecules arrange themselves into patterns like snowflakes, making it easier and thus cheaper to integrate the Smartey Tubes therein. Because the specific Smartey Tubes can be manufactured in the same fabrication plants as silicon-based semiconductors with just a handful of modifications, the storage system could well be very competitively priced. In fact, it is very possible that the Carbon Nanotubes, the total packaging, the storage medium as well as the architectural layered structure, including the soft- and firmware, as well as the housing could all be microfabricated in the same process sequence, which would allow the device to be manufactured even more inexpensively.
Furthermore, one of the biggest advantages is the high-density capacity rate, which leads to considerable downsizing, depending on the size or capacity needed. Because the size of the storage device may vary broadly, this will inevitably result in multiple-purpose storage-intensive applications and systems for e.g. multimedia, mobile/cellular systems, video streaming, 3D graphics, PDAs, watches, digital photo and/or video albums, audio devices and storing large images or data downloaded from the Internet or intranets. Multi-purpose applicability also includes (micro)processors and other devices needing information storage and/or non-volatile memory, such as chips to be used in or on passports, driver's licenses, identification cards, credit cards and banking cards. Making this storage technology unified will logically lead to mass production of different sizes of storage devices.
The many features and advantages of the present invention are apparent from the written description, and it is thus intended by the appended claims to cover all such features and advantages of this invention. Further, since numerous modifications and changes will readily occur, it is not desired to limit the invention to the exact construction and operation as illustrated and described. Although the description above uses language that is specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not limited to the specific features or acts described. Rather, the specific features, acts and functions disclosed hereby are meant to act as exemplary forms of implementing the invention. Therefore, all suitable modifications and equivalents may be resorted to as falling within the scope of the invention.

Claims

What is claimed is:
(1) An information storage and or memory device characterized by:
• An information storage technology;
• A layered structure of the information storage technology;
• A digital electronic or electro-magnetic device (somehow) mounted on an information storage and/or memory technology with a layered structure (system).
(2) An information mass storage device, according to claim 1, characterized by being used in, on or otherwise connected to a single or multi-purpose digital electronic or electro-magnetic device as one or more storage device(s), intended to read, write, erase and delete information upon, comprising the following functions and features:
• Writing, being achieved by temporarily increasing the kinetic energy of the Carbon Nanotubes to deform the storage medium. This will deliver a write, having a variable imprint, which in turn will typically be registered in the storage medium, using software corrections for movement aberrations;
• Reading is executed by the firmware, determining the depth of the dents caused by the Smartey Tubes hitting the storage medium, corrected for movement aberrations of the storage medium;
• Erasing is the actual destroying of deleted info on the medium to make room for new data, after that info has been placed in the deleted info queue. The firmware keeps track of the existence of not-deleted data and deleted data in the topography of the storage medium;
• The deleting process itself is established by logically moving the data to the deleted data queue by the 'General Unified Storage' software. The erasing process is done by temporarily increasing the temperature, by means of laser diodes, over sufficiently large local areas that contain deleted data only;
• A storage medium to read, write, erase and store data upon;
• A variable amount of firmware means for driving, correcting, inhibiting, utilizing, and storing of said instructions;
• General firmware to unify storage devices;
• Several mechanics and electronic hardware;
• Preferred embodiment and/or housing.
(3) An information mass storage device according to claim 1 and 2, characterized by a 'layered structure' in the scope of provided patent and appended claims herein.
(4) Information mass storage devices, according to claims 1, 2 and 3, characterized in that the many features and advantages of the present invention are apparent from the written description, the appended claims thus being intended to cover all such features and advantages of the appended information mass storage device.
(5) An information mass storage device according to claims 1, 2, 3 and 4, characterized by being used in, on or otherwise connected to a single or multi-purpose digital electronic or electro-magnetic device as one or more storage device(s), intended to read, write, erase and store information and/or data upon; and since numerous modifications and changes will readily occur, the appended claims are not limited to the housing, construction and operation as illustrated and described, and although the features, functions, acts and descriptions mentioned above are specific, claims 1 and 2 are not limited to said items; rather, it should be clearly understood that the specific features, acts and descriptions disclosed hereby are meant to act as exemplary forms of implementing the invention in, on or otherwise connected to a digital electronic or electro-magnetic device.
(6) An information mass storage device according to claims 1, 2, 3, 4 and 5, characterized in that all suitable modifications, alterations and/or improvements of functions may be resorted to as falling within the scope of the invention, the appended claims and this whole patent description as such not being limited thereby.
(7) An information storage system in which the 'layered structure', as characterized within the scope of this patent and claims, is used by information mass storage devices.
(8) The use of Carbon Nanotubes characterized by: an outer single-wall Carbon Nanotube, and a twisted Carbon Nanotube housed within the outer single-wall Carbon Nanotube, as semiconductor, transmitter, mechanical device and through their transducer functionality, to create information storage and/or non-volatile memory, because of their ideal transistor and mechanical characteristics, being small and durable, causing minimal temperature rise and consuming very little power.
PCT/NL2003/000387 2003-05-26 2003-05-26 Information storage basec on carbon nanotubes WO2004105011A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU2003237713A AU2003237713A1 (en) 2003-05-26 2003-05-26 Information storage basec on carbon nanotubes
PCT/NL2003/000387 WO2004105011A1 (en) 2003-05-26 2003-05-26 Information storage basec on carbon nanotubes
PCT/NL2004/000377 WO2004105012A2 (en) 2003-05-26 2004-05-26 Information storage based on carbon nanotubes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/NL2003/000387 WO2004105011A1 (en) 2003-05-26 2003-05-26 Information storage basec on carbon nanotubes

Publications (1)

Publication Number Publication Date
WO2004105011A1 true WO2004105011A1 (en) 2004-12-02

Family

ID=33476092

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/NL2003/000387 WO2004105011A1 (en) 2003-05-26 2003-05-26 Information storage basec on carbon nanotubes
PCT/NL2004/000377 WO2004105012A2 (en) 2003-05-26 2004-05-26 Information storage based on carbon nanotubes

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/NL2004/000377 WO2004105012A2 (en) 2003-05-26 2004-05-26 Information storage based on carbon nanotubes

Country Status (2)

Country Link
AU (1) AU2003237713A1 (en)
WO (2) WO2004105011A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110095858B (en) * 2018-12-12 2021-06-08 中国科学院紫金山天文台 Self-adaptive optical deformable mirror elastic modal aberration characterization method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5835477A (en) * 1996-07-10 1998-11-10 International Business Machines Corporation Mass-storage applications of local probe arrays

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6542400B2 (en) * 2001-03-27 2003-04-01 Hewlett-Packard Development Company Lp Molecular memory systems and methods

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5835477A (en) * 1996-07-10 1998-11-10 International Business Machines Corporation Mass-storage applications of local probe arrays

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AKITA S ET AL: "Nanoindentation of polycarbonate using carbon nanotube tip", CONFERENCE PROCEEDINGS ARTICLE, 11 July 2000 (2000-07-11), pages 228 - 229, XP010513574 *
LOZOVIK Y E ET AL: "Nanomachines based on carbon nanotubes", PHYSICS LETTERS A, NORTH-HOLLAND PUBLISHING CO., AMSTERDAM, NL, vol. 313, no. 1-2, 23 June 2003 (2003-06-23), pages 112 - 121, XP004430533, ISSN: 0375-9601 *
SAITO R ET AL: "Anomalous potential barrier of double-wall carbon nanotube", CHEMICAL PHYSICS LETTERS, 9 NOV. 2001, ELSEVIER, NETHERLANDS, vol. 348, no. 3-4, pages 187 - 193, XP002269018, ISSN: 0009-2614 *

Also Published As

Publication number Publication date
AU2003237713A1 (en) 2004-12-13
WO2004105012A2 (en) 2004-12-02
WO2004105012A3 (en) 2005-06-02


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP