CN1866164A - Hard disk drive power reducing module - Google Patents

Hard disk drive power reducing module

Info

Publication number
CN1866164A
CN1866164A CNA2005100771950A CN200510077195A
Authority
CN
China
Prior art keywords
data
power
lpdd
hpdd
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2005100771950A
Other languages
Chinese (zh)
Other versions
CN100418039C (en)
Inventor
S. Sutardja
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Marvell Asia Pte Ltd
Original Assignee
Mawier International Trade Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mawier International Trade Co Ltd
Publication of CN1866164A
Application granted
Publication of CN100418039C
Legal status: Active
Anticipated expiration

Classifications

    • G11B 19/00: Driving, starting, stopping record carriers not specifically of filamentary or web form, or of supports therefor; control thereof; control of operating function; driving both disc and head
    • G06F 1/3221: Monitoring of peripheral devices of disk drive devices
    • G06F 1/3268: Power saving in hard disk drive
    • G06F 12/0866: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, for peripheral storage systems, e.g. disk cache
    • G06F 12/08: Addressing or allocation; relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/123: Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management
    • Y02D 30/50: Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Power Sources (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Digital Magnetic Recording (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

There is provided a hard disk drive power reduction module. A data storage system for a computer including low power and high power modes comprises low power (LP) nonvolatile memory, high power (HP) nonvolatile memory, and a drive power reduction module that communicates with said LP and HP nonvolatile memory, wherein when read data is read from said HP nonvolatile memory during said low power mode and said read data includes a sequential access data file, said drive power reduction module calculates a burst period for transfers of segments of said read data from said HP nonvolatile memory to said LP nonvolatile memory.

Description

Hard disk drive power reducing module
This application is a divisional of Chinese patent application 2005100709131, entitled "Adaptive Memory System", filed on May 17, 2005.
Cross-Reference to Related Applications
This application is related to U.S. Patent Application No. 10/779,544, entitled "Computer with Low-Power Secondary Processor and Secondary Display", filed on February 13, 2004, and U.S. Patent Application No. 10/856,368, entitled "Low Power Computer with Main and Auxiliary Processors", filed on June 10, 2004, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to data storage systems, and more particularly to low power data storage systems.
Background
Notebook computers run on line power and battery power. The processor, graphics processor, memory and display of a notebook computer consume a considerable amount of power during operation. An important limitation of notebook computers relates to the amount of time that the computer can be operated on battery power without recharging. The relatively high power consumption of a notebook computer typically corresponds to a relatively short battery life.
Referring now to FIG. 1A, an exemplary computer architecture 4 is shown that includes a processor 6 with memory 7 such as cache. The processor 6 communicates with an input/output (I/O) interface 8. Volatile memory 9, such as random access memory (RAM) 10 and/or other suitable electronic data storage, also communicates with the interface 8. A graphics processor 11 and memory 12 such as cache increase the speed and performance of graphics processing.
One or more I/O devices, such as a keyboard 13 and a pointing device 14 (such as a mouse and/or other suitable device), communicate with the interface 8. A high power disk drive (HPDD) 15, such as a hard disk drive having one or more platters with a diameter greater than 1.8 inches, provides nonvolatile memory, stores data and communicates with the interface 8. The HPDD 15 typically consumes a relatively large amount of power during operation. When the computer is operating on battery power, frequent use of the HPDD 15 will significantly decrease battery life. The computer architecture 4 also includes a display 16, an audio output device 17 such as audio speakers, and/or other input/output devices that are generally identified at 18.
Referring now to FIG. 1B, an exemplary computer architecture 20 includes a processing chipset 22 and an I/O chipset 24. For example, the computer architecture may be a Northbridge/Southbridge architecture (with the processing chipset corresponding to the Northbridge chipset and the I/O chipset corresponding to the Southbridge chipset) or another similar architecture. The processing chipset 22 communicates with a processor 25 and a graphics processor 26 via a system bus 27. The processing chipset 22 controls interaction with volatile memory 28 (such as external DRAM or other memory), a Peripheral Component Interconnect (PCI) bus 30 and/or a Level 2 cache 32. Level 1 caches 33 and 34 may be associated with the processor 25 and/or the graphics processor 26, respectively. In an alternate embodiment, an Accelerated Graphics Port (AGP) (not shown) communicates with the processing chipset 22 instead of and/or in addition to the graphics processor 26. The processing chipset 22 is typically, but not necessarily, implemented using multiple chips. PCI slots interface with the PCI bus 30.
The I/O chipset 24 manages the basic forms of input/output (I/O). The I/O chipset 24 communicates with a Universal Serial Bus (USB) 40, an audio device 41, a keyboard (KBD) and/or a pointing device 42, and a Basic Input/Output System (BIOS) 43 via an Industry Standard Architecture (ISA) bus 44. Unlike the processing chipset 22, the I/O chipset 24 is typically (but not necessarily) implemented using a single chip, which is connected to the PCI bus 30. An HPDD 50, such as a hard disk drive, also communicates with the I/O chipset 24. The HPDD 50 stores a full-featured operating system (OS), such as Windows XP, Windows 2000, Linux or a MAC-based OS, that is executed by the processor 25.
Summary of the invention
According to the present invention, a disk drive system for a computer having high power and low power modes includes a low power disk drive (LPDD) and a high power disk drive (HPDD). A control module includes a least used block (LUB) module that identifies a LUB in the LPDD. During the low power mode, the control module selectively transfers the LUB to the HPDD when at least one of a data storing request and a data retrieving request is received.
In other features, during a data storing request for write data, the control module transfers the write data to the LPDD if sufficient space is available on the LPDD for the write data. If there is insufficient space available on the LPDD for the write data, the control module powers the HPDD, transfers the LUB from the LPDD to the HPDD, and transfers the write data to the LPDD.
In still other features, the control module includes an adaptive storage module that determines whether the write data is likely to be used before the LUB when there is insufficient space available on the LPDD for the write data. If the write data is likely to be used after the LUB, the control module stores the write data on the HPDD. If the write data is likely to be used before the LUB, the control module powers the HPDD, transfers the LUB from the LPDD to the HPDD, and transfers the write data to the LPDD.
In still other features, during a data retrieving request for read data, the control module retrieves the read data from the LPDD if the read data is stored in the LPDD. The control module includes an adaptive storage module that determines whether the read data is likely to be used only once when the read data is not located on the LPDD. If the read data is likely to be used only once, the control module retrieves the read data from the HPDD. If the adaptive storage module determines that the read data is likely to be used more than once, the control module transfers the read data from the HPDD to the LPDD if there is sufficient space available on the LPDD for the read data. If the adaptive storage module determines that the read data is likely to be used more than once and there is insufficient space available on the LPDD for the read data, the control module transfers the LUB from the LPDD to the HPDD and transfers the read data from the HPDD to the LPDD.
In still other features, the control module transfers the read data from the HPDD to the LPDD if there is sufficient space available on the LPDD for the read data. If there is insufficient space available on the LPDD for the read data, the control module transfers the LUB from the LPDD to the HPDD and transfers the read data from the HPDD to the LPDD. If the read data is not located on the LPDD, the control module retrieves the read data from the HPDD.
In still other features, the HPDD includes one or more platters having a diameter that is greater than 1.8 inches. The LPDD includes one or more platters having a diameter that is less than or equal to 1.8 inches.
According to the present invention, a disk drive system for a computer having high power and low power modes includes a low power disk drive (LPDD) and a high power disk drive (HPDD). A control module communicates with the LPDD and the HPDD. During a data storing request for write data in the low power mode, the control module determines whether there is sufficient space available on the LPDD for the write data and, if there is, transfers the write data to the LPDD.
In other features, if there is insufficient space available, the control module stores the write data on the HPDD. The control module further includes a LPDD maintenance module that transfers data files from the LPDD to the HPDD during the high power mode to increase the free disk space on the LPDD. The LPDD maintenance module transfers the data files based on at least one of age, size and the likelihood of future use during the low power mode. The HPDD includes one or more platters having a diameter that is greater than 1.8 inches. The LPDD includes one or more platters having a diameter that is less than or equal to 1.8 inches.
According to the present invention, a data storage system for a computer including high power and low power modes includes low power (LP) nonvolatile memory and high power (HP) nonvolatile memory. A cache control module communicates with the LP and HP nonvolatile memory and includes an adaptive storage module. When write data is to be written to one of the LP and HP nonvolatile memory, the adaptive storage module generates an adaptive storage decision that selects one of the LP and HP nonvolatile memory.
In other features, the adaptive storage decision is based on at least one of: power modes associated with prior uses of the write data, a size of the write data, a date of last use of the write data, and a manual override status of the write data. The LP nonvolatile memory includes at least one of flash memory and a low power disk drive (LPDD). The LPDD includes one or more platters having a diameter that is less than or equal to 1.8 inches. The HP nonvolatile memory includes a hard disk drive having one or more platters with a diameter that is greater than 1.8 inches.
According to the present invention, a data storage system for a computer including high power and low power modes includes low power (LP) nonvolatile memory and high power (HP) nonvolatile memory. A cache control module communicates with the LP and HP nonvolatile memory and includes a drive power reduction module. When read data is read from the HP nonvolatile memory during the low power mode and the read data includes a sequential access data file, the drive power reduction module calculates a burst period for transferring segments of the read data from the HP nonvolatile memory to the LP nonvolatile memory.
In other features, the drive power reduction module selects the burst period to reduce power consumption during playback of the read data in the low power mode. The LP nonvolatile memory includes at least one of flash memory and a low power disk drive (LPDD). The LPDD includes one or more platters having a diameter that is less than or equal to 1.8 inches. The HP nonvolatile memory includes a high power disk drive (HPDD). The HPDD includes one or more platters having a diameter that is greater than 1.8 inches. The burst period is based on at least one of: a spin-up time of the LPDD, a spin-up time of the HPDD, power consumption of the LPDD, power consumption of the HPDD, a playback length of the read data, and a capacity of the LPDD.
A multi-disk drive system according to the present invention includes a high power disk drive (HPDD) that includes one or more platters having a diameter that is greater than 1.8 inches, and a low power disk drive (LPDD) that includes one or more platters having a diameter that is less than or equal to 1.8 inches. A drive control module collectively controls data access to the LPDD and the HPDD.
A Redundant Array of Independent Disks (RAID) system according to the present invention includes a first disk array that includes X high power disk drives (HPDD), where X is greater than or equal to 2, and a second disk array that includes Y low power disk drives (LPDD), where Y is greater than or equal to 1. An array management module communicates with the first and second disk arrays and uses the second disk array to cache data being written to and/or read from the first disk array.
Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
Description of drawings
The present invention will become more fully understood from the detailed description and the accompanying drawings, wherein:
FIGS. 1A and 1B illustrate exemplary computer architectures according to the prior art;
FIG. 2A illustrates a first exemplary computer architecture according to the present invention, with a primary processor, a primary graphics processor and primary volatile memory that operate during a high power mode, and with a secondary processor and a secondary graphics processor that communicate with the primary processor, operate during a low power mode and employ the primary volatile memory during the low power mode;
FIG. 2B illustrates a second exemplary computer architecture according to the present invention, which is similar to FIG. 2A and includes secondary volatile memory that is connected to the secondary processor and/or the secondary graphics processor;
FIG. 2C illustrates a third exemplary computer architecture according to the present invention, which is similar to FIG. 2A and includes embedded volatile memory that is associated with the secondary processor and/or the secondary graphics processor;
FIG. 3A illustrates a fourth exemplary computer architecture according to the present invention, in which the computer has a primary processor, a primary graphics processor and primary volatile memory that operate during the high power mode, and a secondary processor and a secondary graphics processor that communicate with a processing chipset, operate during the low power mode and employ the primary volatile memory during the low power mode;
FIG. 3B illustrates a fifth exemplary computer architecture according to the present invention, which is similar to FIG. 3A and includes secondary volatile memory connected to the secondary processor and/or the secondary graphics processor;
FIG. 3C illustrates a sixth exemplary computer architecture according to the present invention, which is similar to FIG. 3A and includes embedded volatile memory associated with the secondary processor and/or the secondary graphics processor;
FIG. 4A illustrates a seventh exemplary computer architecture according to the present invention, in which the computer has a secondary processor and a secondary graphics processor that communicate with an I/O chipset, operate during the low power mode and employ the primary volatile memory during the low power mode;
FIG. 4B illustrates an eighth exemplary computer architecture according to the present invention, which is similar to FIG. 4A and includes secondary volatile memory connected to the secondary processor and/or the secondary graphics processor;
FIG. 4C illustrates a ninth exemplary computer architecture according to the present invention, which is similar to FIG. 4A and includes embedded volatile memory associated with the secondary processor and/or the secondary graphics processor;
FIG. 5 illustrates a caching hierarchy according to the present invention for the computer architectures of FIGS. 2A-4C;
FIG. 6 is a functional block diagram of a drive control module that includes a least used block (LUB) module and that manages data storage and transfer between a low power disk drive (LPDD) and a high power disk drive (HPDD);
FIG. 7A is a flowchart illustrating steps performed by the drive control module of FIG. 6;
FIG. 7B is a flowchart illustrating alternative steps performed by the drive control module of FIG. 6;
FIGS. 7C and 7D are flowcharts illustrating alternative steps performed by the drive control module of FIG. 6;
FIG. 8A illustrates a cache control module that includes an adaptive storage control module and that controls data storage and transfer between the LPDD and the HPDD;
FIG. 8B illustrates an operating system that includes an adaptive storage control module and that controls data storage and transfer between the LPDD and the HPDD;
FIG. 8C illustrates a host control module that includes an adaptive storage control module and that controls data storage and transfer between the LPDD and the HPDD;
FIG. 9 illustrates steps performed by the adaptive storage control modules of FIGS. 8A-8C;
FIG. 10 is an exemplary table illustrating one method for determining the likelihood that a program or file will be used during the low power mode;
FIG. 11A illustrates a cache control module that includes a disk drive power reduction module;
FIG. 11B illustrates an operating system that includes a disk drive power reduction module;
FIG. 11C illustrates a host control module that includes a disk drive power reduction module;
FIG. 12 illustrates steps performed by the disk drive power reduction modules of FIGS. 11A-11C;
FIG. 13 illustrates a multi-disk drive system including a high power disk drive (HPDD) and a low power disk drive (LPDD);
FIGS. 14-17 illustrate other exemplary implementations of the multi-disk drive system of FIG. 13;
FIG. 18 illustrates the use of low power nonvolatile memory, such as flash memory or a low power disk drive (LPDD), to increase the virtual memory of a computer;
FIGS. 19 and 20 illustrate steps performed by the operating system to allocate and use the virtual memory of FIG. 18;
FIG. 21 is a functional block diagram of a Redundant Array of Independent Disks (RAID) system according to the prior art;
FIG. 22A is a functional block diagram of an exemplary RAID system according to the present invention with a disk array including X HPDD and a disk array including Y LPDD;
FIG. 22B is a functional block diagram of the RAID system of FIG. 22A where X and Y equal Z;
FIG. 23A is a functional block diagram of another exemplary RAID system according to the present invention with a disk array including Y LPDD that communicates with a disk array including X HPDD;
FIG. 23B is a functional block diagram of the RAID system of FIG. 23A where X and Y equal Z;
FIG. 24A is a functional block diagram of still another exemplary RAID system according to the present invention with a disk array including X HPDD that communicates with a disk array including Y LPDD;
FIG. 24B is a functional block diagram of the RAID system of FIG. 24A where X and Y equal Z;
FIG. 25 is a functional block diagram of a network attached storage (NAS) system according to the prior art; and
FIG. 26 is a functional block diagram of a network attached storage (NAS) system according to the present invention that includes the RAID systems of FIGS. 22A, 22B, 23A, 23B, 24A and/or 24B and/or the multi-drive systems of FIGS. 6-17.
Detailed Description of the Preferred Embodiments
The following description of the preferred embodiments is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses. For purposes of clarity, the same reference numbers will be used in the drawings to identify similar elements. As used herein, the term module and/or device refers to an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
As used herein, the term "high power mode" refers to active operation of the host processor and/or the primary graphics processor of the host device. The term "low power mode" refers to a low power hibernation mode, an off mode, and/or a non-responsive mode of the primary processor and/or primary graphics processor while the secondary processor and the secondary graphics processor are operable. An "off mode" refers to a situation in which both the primary and secondary processors are off.
The term "low power disk drive" or LPDD refers to disk drives and/or microdrives having one or more platters with a diameter that is less than or equal to 1.8 inches. The term "high power disk drive" or HPDD refers to hard disk drives having one or more platters with a diameter that is greater than 1.8 inches. The LPDD typically has lower storage capacity and consumes less power than the HPDD. The LPDD also rotates at a higher speed than the HPDD; for example, the LPDD can reach rotational speeds of 10,000-20,000 RPM or more.
Computer architectures according to the present invention include a primary processor, a primary graphics processor and primary memory (as described in conjunction with FIGS. 1A and 1B), which operate during the high power mode. A secondary processor and a secondary graphics processor operate during the low power mode. The secondary processor and the secondary graphics processor may be connected to various components of the computer, as described below. Primary volatile memory may be used by the secondary processor and the secondary graphics processor during the low power mode. Alternatively, secondary volatile memory, such as DRAM, and/or embedded secondary volatile memory, such as embedded DRAM, may be used, as described below.
The primary processor and the primary graphics processor consume a relatively large amount of power when operating in the high power mode. The primary processor and the primary graphics processor execute a full-featured operating system (OS) that requires a relatively large amount of external memory. The primary processor and the primary graphics processor support high performance operation including complex computations and advanced graphics. The full-featured OS can be a Windows-based OS such as Windows XP, a Linux-based OS, a MAC-based OS and the like. The full-featured OS is stored in the HPDD 15 and/or 50.
The secondary processor and the secondary graphics processor consume less power than the primary processor and the primary graphics processor during the low power mode. The secondary processor and the secondary graphics processor operate a restricted-feature operating system (OS) that requires a relatively small amount of external volatile memory. The secondary processor and the secondary graphics processor may also use the same OS as the primary processor; for example, a pared-down version of the full-featured OS may be used. The secondary processor and the secondary graphics processor support lower performance operation, a lower computation rate and less advanced graphics. For example, the restricted-feature OS can be Windows CE or any other suitable restricted-feature OS. The restricted-feature OS is preferably stored in nonvolatile memory such as flash memory and/or a LPDD. In a preferred embodiment, the full-featured and restricted-feature operating systems share a common data format to reduce complexity.
The primary processor and/or the primary graphics processor preferably include transistors that are implemented using a fabrication process with a relatively small feature size. In one implementation, these transistors are implemented using an advanced CMOS fabrication process. Transistors implemented in the primary processor and/or the primary graphics processor have relatively high standby leakage, relatively short channels and are sized for high speed. The primary processor and the primary graphics processor preferably employ predominantly dynamic logic; in other words, they are not shut down. The transistors are switched at a duty cycle that is less than approximately 20%, and preferably less than approximately 10%, although other duty cycles may be used.
In contrast, the secondary processor and/or the secondary graphics processor preferably include transistors that are implemented with a fabrication process having larger feature sizes than the process used for the primary processor and/or the primary graphics processor. In one implementation, these transistors are implemented using a regular CMOS fabrication process. The transistors implemented in the secondary processor and/or the secondary graphics processor have relatively low standby leakage, relatively long channels and are sized for low power dissipation. The secondary processor and the secondary graphics processor preferably employ predominantly static logic rather than dynamic logic. The transistors are switched at a duty cycle that is greater than 80%, and preferably greater than 90%, although other duty cycles may be used.
The primary processor and the primary graphics processor consume a relatively large amount of power when operating in the high power mode. The secondary processor and the secondary graphics processor consume less power when operating in the low power mode. In the low power mode, however, the computer architecture supports fewer features, reduced computation and less complex graphics than when operating in the high power mode. As can be appreciated by skilled artisans, there are many ways of implementing computer architectures according to the present invention. Therefore, skilled artisans will appreciate that the architectures described below in conjunction with FIGS. 2A-4C are merely exemplary in nature and are not limiting.
Referring now to FIG. 2A, a first exemplary computer architecture 60 is shown. During the high power mode, the primary processor 6, the volatile memory 9 and the primary graphics processor 11 communicate with the interface 8 and support complex data and graphics processing. During the low power mode, a secondary processor 62 and a secondary graphics processor 64 communicate with the interface 8 and support less complex data and graphics processing. Optional nonvolatile memory 65, such as a LPDD 66 and/or flash memory 68, communicates with the interface 8 and provides low power nonvolatile storage of data during the low power and/or high power modes. The HPDD 15 provides high power/capacity nonvolatile memory. The nonvolatile memory 65 and/or the HPDD 15 are used to store the restricted-feature operating system and/or other data and files during the low power mode.
In this embodiment, the secondary processor 62 and the secondary graphics processor 64 employ the volatile memory 9 (or primary memory) while operating in the low power mode. To that end, at least part of the interface 8 is powered during the low power mode to support communications with the primary memory and/or communications between components that are powered during the low power mode. For example, the keyboard 13, the pointing device 14 and the primary display 16 may be powered and used during the low power mode. In all of the embodiments described in conjunction with FIGS. 2A-4C, a secondary display with reduced capabilities (such as a monochrome display) and/or a secondary input/output device can also be provided and used during the low power mode.
Referring now to FIG. 2B, a second exemplary computer architecture 70 that is similar to the architecture of FIG. 2A is shown. In this embodiment, the secondary processor 62 and the secondary graphics processor 64 communicate with secondary volatile memory 74 and/or 76. The secondary volatile memory 74 and 76 can be DRAM or other suitable memory. During the low power mode, the secondary processor 62 and the secondary graphics processor 64 utilize the secondary volatile memory 74 and/or 76, respectively, in addition to and/or instead of the primary volatile memory 9 shown and described in FIG. 2A.
Referring now to FIG. 2C, a third exemplary computer architecture 80 that is similar to FIG. 2A is shown. The secondary processor 62 and/or the secondary graphics processor 64 include embedded volatile memory 84 and 86, respectively. During the low power mode, the secondary processor 62 and the secondary graphics processor 64 utilize the embedded volatile memory 84 and/or 86, respectively, in addition to and/or instead of the primary volatile memory. In one embodiment, the embedded volatile memory 84 and 86 is embedded DRAM (eDRAM), although other types of embedded volatile memory can be used.
Referring now to FIG. 3A, a fourth exemplary computer architecture 100 according to the present invention is shown. During the high power mode, the primary processor 25, the primary graphics processor 26 and the primary volatile memory 28 communicate with the processing chipset 22 and support complex data and graphics processing. A secondary processor 104 and a secondary graphics processor 108 support less complex data and graphics processing when the computer is in the low power mode. In this embodiment, the secondary processor 104 and the secondary graphics processor 108 employ the primary volatile memory 28 while operating in the low power mode. To that end, the processing chipset 22 may be fully and/or partially powered during the low power mode to facilitate communications therebetween. The HPDD 50 may be powered during the low power mode to provide high power nonvolatile memory. Low power nonvolatile memory 109 (a LPDD 110 and/or flash memory 112) is connected to the processing chipset 22, the I/O chipset 24 or at another location, and stores the restricted-feature operating system for the low power mode.
The processing chipset 22 may be fully and/or partially powered to support operation of the HPDD 50, the LPDD 110 and/or other components that are used during the low power mode. For example, the keyboard and/or pointing device 42 and the primary display may be used during the low power mode.
Referring now to FIG. 3B, a fifth exemplary computer architecture 150 that is similar to FIG. 3A is shown. Secondary volatile memory 154 and 158 is connected to the secondary processor 104 and/or the secondary graphics processor 108, respectively. During the low power mode, the secondary processor 104 and the secondary graphics processor 108 utilize the secondary volatile memory 154 and 158, respectively, instead of and/or in addition to the primary volatile memory 28. The processing chipset 22 and the primary volatile memory 28 can be shut down during the low power mode if desired. The secondary volatile memory 154 and 158 can be DRAM or other suitable memory.
Referring now to FIG. 3C, a sixth exemplary computer architecture 170 that is similar to FIG. 3A is shown. The secondary processor 104 and/or the secondary graphics processor 108 include embedded memory 174 and 176, respectively. During the low power mode, the secondary processor 104 and the secondary graphics processor 108 utilize the embedded memory 174 and 176, respectively, instead of and/or in addition to the primary volatile memory 28. In one embodiment, the embedded volatile memory 174 and 176 is embedded DRAM (eDRAM), although other types of embedded memory can be used.
Referring now to FIG. 4A, a seventh exemplary computer architecture 190 according to the present invention is shown. During the low power mode, the secondary processor 104 and the secondary graphics processor 108 communicate with the I/O chipset 24 and employ the primary volatile memory 28 as volatile memory. The processing chipset 22 remains fully and/or partially powered to allow access to the primary volatile memory 28 during the low power mode.
Referring now to FIG. 4B, an eighth exemplary computer architecture 200 that is similar to FIG. 4A is shown. Secondary volatile memory 154 and 158 is connected to the secondary processor 104 and the secondary graphics processor 108, respectively, and is used instead of and/or in addition to the primary volatile memory 28 during the low power mode. The processing chipset 22 and the primary volatile memory 28 can be shut down during the low power mode.
Referring now to FIG. 4C, a ninth exemplary computer architecture 210 that is similar to FIG. 4A is shown. Embedded volatile memory 174 and 176 is provided for the secondary processor 104 and/or the secondary graphics processor 108, respectively, in addition to and/or instead of the primary volatile memory 28. In this embodiment, the processing chipset 22 and the primary volatile memory 28 can be shut down during the low power mode.
Referring now to FIG. 5, a caching hierarchy 250 for the exemplary computer architectures of FIGS. 2A-4C is shown. The HP nonvolatile memory, such as the HPDD 50, is located at the lowest level 254 of the caching hierarchy 250. The level 254 may or may not be used during the low power mode, depending on whether the HPDD 50 is enabled during the low power mode. The LP nonvolatile memory, such as the LPDD 110 and/or the flash memory 112, is located at the next level 258 of the caching hierarchy 250. External volatile memory, such as the primary volatile memory, the secondary volatile memory and/or the embedded memory (depending upon the configuration), is the next level 262 of the caching hierarchy 250. The level 2 or secondary cache is the next level 266 of the caching hierarchy 250. The level 1 cache is the next level 268 of the caching hierarchy 250. The CPU (primary and/or secondary) is the last level 270 of the caching hierarchy. The primary and secondary graphics processors use a similar hierarchy.
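For illustration only (and not as part of the patent disclosure), the lookup order implied by the caching hierarchy 250 can be sketched in Python; the level names and the dictionary-based stores below are assumptions made for this sketch:

    # Hypothetical sketch of the FIG. 5 lookup order: levels closer to the CPU
    # are consulted first, and the HPDD (level 254) is consulted last and may
    # be skipped entirely when it is disabled during the low power mode.
    hierarchy = [
        ("L1 cache",          {}),  # level 268
        ("L2 cache",          {}),  # level 266
        ("external volatile", {}),  # level 262: primary/secondary/embedded RAM
        ("LP nonvolatile",    {}),  # level 258: flash and/or LPDD
        ("HP nonvolatile",    {}),  # level 254: HPDD
    ]

    def read_block(block_id, hpdd_enabled=True):
        """Return (level_name, data) from the first level that holds block_id."""
        for name, store in hierarchy:
            if name == "HP nonvolatile" and not hpdd_enabled:
                continue
            if block_id in store:
                return name, store[block_id]
        raise KeyError(block_id)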
The computer architectures according to the present invention provide a low power mode that supports less complex processing and graphics. As a result, the power consumption of the computer can be reduced significantly. Battery life is extended for laptop applications.
Referring now to FIG. 6, a drive control module 300 or host control module for a multi-disk drive system includes a least used block (LUB) module 304, an adaptive storage module 306 and/or a LPDD maintenance module 308. Based in part on LUB information, the drive control module 300 controls data storage and transfer between a high power disk drive (HPDD) 310, such as a hard disk drive, and a low power disk drive (LPDD) 312, such as a microdrive. The drive control module 300 reduces power consumption by managing data storage and transfer between the HPDD and the LPDD during the high power and low power modes.
The least used block module 304 keeps track of the least used blocks of data in the LPDD 312. During the low power mode, the least used block module 304 identifies the least used blocks of data (such as files and/or programs) in the LPDD 312 so that they can be replaced when needed. Certain data blocks or files may be exempted from least used block monitoring, such as files that relate only to the restricted-feature operating system, blocks that are manually set to be stored in the LPDD 312, and/or other files and programs that are operated only during the low power mode. Other criteria may be used to select the data blocks to be overwritten, as described below.
During a data storing request in the low power mode, the adaptive storage module 306 determines whether the write data is more likely to be used before the least used block. During a data retrieving request in the low power mode, the adaptive storage module 306 also determines whether the read data is likely to be used only once. During the high power mode and/or in other situations, the LPDD maintenance module 308 transfers aged data from the LPDD to the HPDD, as described below.
Referring now to FIG. 7A, steps performed by the drive control module 300 are shown. Control begins in step 320. In step 324, the drive control module 300 determines whether there is a data storing request. If step 324 is true, the drive control module 300 determines in step 328 whether there is sufficient space available on the LPDD 312. If not, the drive control module 300 powers the HPDD 310 in step 330. In step 334, the drive control module 300 transfers the least used data block to the HPDD 310. In step 336, the drive control module 300 determines whether there is sufficient space available on the LPDD 312. If not, control loops to step 334. Otherwise, the drive control module 300 continues with step 340 and powers down the HPDD 310. In step 344, the data to be stored (for example from the host) is transferred to the LPDD 312.
If step 324 is false, the drive control module 300 continues with step 350 and determines whether there is a data retrieving request. If not, control returns to step 324. Otherwise, control continues with step 354 and determines whether the data is located in the LPDD 312. If step 354 is true, the drive control module 300 retrieves the data from the LPDD 312 in step 356 and continues with step 324. Otherwise, the drive control module 300 powers the HPDD 310 in step 360. In step 364, the drive control module 300 determines whether there is sufficient space available on the LPDD 312 for the requested data. If not, the drive control module 300 transfers the least used data block to the HPDD 310 in step 366 and continues with step 364. When step 364 is true, the drive control module 300 transfers the data to the LPDD 312 and retrieves the data from the LPDD 312 in step 368. In step 370, control powers down the HPDD 310 when the transfer of the data to the LPDD 312 is complete.
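As a rough sketch of the FIG. 7A control flow (illustration only; the class, method names and dictionary-based drive models below are invented and do not appear in the patent), the store and retrieve paths with least used block eviction might look like:

    # Sketch of the FIG. 7A flow. The LPDD and HPDD are modelled as dicts of
    # block_id -> bytes; power_on/power_off stand in for steps 330/340/360/370.
    class DriveControl:
        def __init__(self, lpdd_capacity):
            self.lpdd, self.hpdd = {}, {}
            self.lpdd_capacity = lpdd_capacity
            self.lru = []  # least used blocks on the LPDD, oldest first (module 304)

        def _free_space(self):
            return self.lpdd_capacity - sum(len(v) for v in self.lpdd.values())

        def _evict_lub(self):
            lub = self.lru.pop(0)                     # steps 334/366
            self.hpdd[lub] = self.lpdd.pop(lub)

        def store(self, block_id, data):              # steps 324-344
            if self._free_space() < len(data):
                self.power_on_hpdd()                  # step 330
                while self._free_space() < len(data): # steps 334/336
                    self._evict_lub()
                self.power_off_hpdd()                 # step 340
            self.lpdd[block_id] = data                # step 344
            self.lru.append(block_id)

        def retrieve(self, block_id):                 # steps 350-370
            if block_id in self.lpdd:                 # step 354
                return self.lpdd[block_id]            # step 356
            self.power_on_hpdd()                      # step 360
            data = self.hpdd[block_id]
            while self._free_space() < len(data):     # steps 364/366
                self._evict_lub()
            self.lpdd[block_id] = data                # step 368
            self.lru.append(block_id)
            self.power_off_hpdd()                     # step 370
            return self.lpdd[block_id]

        def power_on_hpdd(self):                      # placeholder for spin-up
            pass

        def power_off_hpdd(self):                     # placeholder for spin-down
            pass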
Referring now to FIG. 7B, a modification of the approach shown in FIG. 7A is illustrated that includes one or more adaptive steps performed by the adaptive storage module 306. When there is insufficient space available on the LPDD in step 328, control determines in step 372 whether the data to be stored is likely to be used before the data in the least used block or blocks identified by the least used block module. If step 372 is false, the drive control module 300 stores the data on the HPDD in step 374 and control continues with step 324. By doing so, the power that would be consumed transferring the least used block from the LPDD to the HPDD and the write data to the LPDD is saved. If step 372 is true, control continues with step 330 as described above in conjunction with FIG. 7A.
During a data retrieving request, when step 354 is false, control continues with step 376 and determines whether the data is likely to be used only once. If step 376 is true, the drive control module 300 retrieves the data from the HPDD in step 378 and continues with step 324. By doing so, the power that would be consumed transferring the data to the LPDD is saved. If step 376 is false, control continues with step 360. As can be appreciated, if the data is likely to be used only once, there is no need to move it to the LPDD; the power consumption of the HPDD, however, cannot be avoided.
Referring now to FIG. 7C, a more simplified form of control can also be performed during low power operation. Maintenance steps can also be performed during the high power and/or low power modes (using the LPDD maintenance module 308). When there is sufficient space available on the LPDD in step 328, the data is transferred to the LPDD in step 344 and control returns to step 324. Otherwise, when step 328 is false, the data is stored on the HPDD in step 380 and control returns to step 324. As can be appreciated, the approach illustrated in FIG. 7C uses the LPDD when capacity is available and uses the HPDD when LPDD capacity is not available. Skilled artisans will appreciate that hybrid approaches may be employed using various combinations of the steps of FIGS. 7A-7D.
In FIG. 7D, the drive control module 300 performs maintenance steps upon returning to the high power mode and/or at other times to delete unused or rarely used files that are stored on the LPDD. This maintenance step can also be performed in the low power mode, periodically during use, upon the occurrence of an event such as a disk full event, and/or in other situations. Control begins in step 390. In step 392, control determines whether the high power mode is in use. If not, control loops back to step 392. If step 392 is true, control determines in step 394 whether the last mode was the low power mode. If not, control returns to step 392. If step 394 is true, control performs maintenance in step 396, such as moving aged or rarely used files from the LPDD to the HPDD. Adaptive decisions as to which files are likely to be used in the future may also be made, for example using the criteria described above and the criteria described below in conjunction with FIGS. 8A-10.
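A minimal sketch of the FIG. 7D maintenance step, assuming an invented per-file age record and an arbitrary 30-day threshold (the patent specifies neither):

    def lpdd_maintenance(lpdd, hpdd, age_days, max_age_days=30):
        """On return to the high power mode, move aged files from the LPDD to the HPDD (step 396)."""
        for name, age in list(age_days.items()):
            if age > max_age_days and name in lpdd:
                hpdd[name] = lpdd.pop(name)  # frees LPDD space for low power use

Likelihood-of-future-use criteria such as those of FIG. 10 could supplement or replace the simple age test.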
Referring now to FIGS. 8A and 8B, storage control systems 400-1, 400-2 and 400-3 are shown. In FIG. 8A, the storage control system 400-1 includes a cache control module 410 with an adaptive storage control module 414. The adaptive storage control module 414 monitors usage of files and/or programs to determine whether they are likely to be used in the low power mode or the high power mode. The cache control module 410 communicates with one or more data buses 416, which in turn communicate with volatile memory 422 such as L1 cache, L2 cache, volatile RAM such as DRAM and/or other volatile electronic data storage. The buses 416 also communicate with low power nonvolatile memory 424 (such as flash memory and/or a LPDD) and/or high power nonvolatile memory such as the HPDD 426. In FIG. 8B, a full-featured and/or restricted-feature operating system 430 is shown to include the adaptive storage control module 414. Suitable interfaces and/or controllers (not shown) are located between the data bus and the HPDD and/or between the data bus and the LPDD.
In FIG. 8C, a host control module 440 includes the adaptive storage control module 414. The host control module 440 communicates with the LPDD 426' and the hard disk drive 426'. The host control module 440 can be a drive control module, an Integrated Device Electronics (IDE), ATA, serial ATA (SATA) or other controller.
Referring now to FIG. 9, steps performed by the storage control systems of FIGS. 8A-8C are shown. In FIG. 9, control begins with step 460. In step 462, control determines whether there is a request for data storage to nonvolatile memory. If not, control loops back to step 462. Otherwise, the adaptive storage control module 414 determines in step 464 whether the data is likely to be used in the low power mode. If step 464 is false, the data is stored in the HPDD in step 468. If step 464 is true, the data is stored in the low power nonvolatile memory 444 in step 474.
Referring now to FIG. 10, one method for determining the likelihood that a data block will be used in the low power mode is shown. A table 490 includes a block descriptor field 492, a low power counter field 493, a high power counter field 494, a size field 495, a last use field 496 and/or a manual override field 497. When a particular program or file is used during the low power or high power modes, the counter field 493 and/or 494 is incremented. When storage of the program or file to nonvolatile memory is required, the table 490 is accessed. A threshold percentage and/or count value may be used for the evaluation. For example, if a file or program is used more than 80 percent of the time in the low power mode, the file may be stored in the low power nonvolatile memory, such as flash memory and/or the microdrive. If the threshold is not met, the file or program is stored in the high power nonvolatile memory.
As can be appreciated, the counters may be reset periodically, after a predetermined number of samples (in other words, to provide a rolling window), and/or using any other suitable criteria. Furthermore, the likelihood may be weighted, otherwise modified and/or replaced by the size field 495. In other words, as the file size grows, the required threshold may be increased because of the limited capacity of the LPDD.
The likelihood-of-use decision may be further modified based on the time since the file was last used, as recorded by the last use field 496. A threshold date may be used, and/or the time since last use may be used as one factor in the likelihood determination. While a table is shown in FIG. 10, one or more of the fields that are used may be stored in other locations and/or in other data structures. An algorithm and/or a weighted sampling of two or more fields may be used.
Using the manual override field 497 allows a user and/or the operating system to manually override the likelihood-of-use decision. For example, the manual override field may allow an L status for default storage in the LPDD, an H status for default storage in the HPDD, and/or an A status for automatic storage decisions (as described above). Other manual override classifications may be defined. In addition to the above criteria, the current power level of the computer operating the LPDD may be used to adjust the decision. Skilled artisans will appreciate that there are other methods of determining the likelihood that a file or program will be used in the high power or low power modes that fall within the scope of the present invention.
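As a concrete but purely illustrative reading of the FIG. 10 table logic, the sketch below applies the 80 percent example threshold, a hypothetical size weighting and the L/H/A manual override states; the field names, the weighting formula and the function name are assumptions, not part of the patent:

    def choose_storage(entry, base_threshold=0.80, lpdd_capacity_mb=4096):
        """Return 'LPDD' or 'HPDD' for a table-490-style entry.

        entry holds 'low_count' (field 493), 'high_count' (field 494),
        'size_mb' (field 495) and 'override' (field 497: 'L', 'H' or 'A').
        """
        if entry["override"] == "L":        # manual override: default to the LPDD
            return "LPDD"
        if entry["override"] == "H":        # manual override: default to the HPDD
            return "HPDD"
        total = entry["low_count"] + entry["high_count"]
        low_power_ratio = entry["low_count"] / total if total else 0.0
        # Illustrative size weighting: larger files must clear a higher threshold
        # because the LPDD capacity is limited (the exact formula is left open).
        threshold = base_threshold + 0.1 * (entry["size_mb"] / lpdd_capacity_mb)
        return "LPDD" if low_power_ratio > threshold else "HPDD"

    # Example: a 100 MB file used 9 of 10 times in the low power mode -> 'LPDD'.
    print(choose_storage({"low_count": 9, "high_count": 1,
                          "size_mb": 100, "override": "A"}))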
Referring now to FIGS. 11A and 11B, drive power reduction systems 500-1, 500-2 and 500-3 (collectively 500) are shown. The drive power reduction system 500 bursts segments of larger sequential access files, such as but not limited to audio and/or video files, to the low power nonvolatile memory on a periodic or other basis. In FIG. 11A, the drive power reduction system 500-1 includes a cache control module 520 with a drive power reduction control module 522. The cache control module 520 communicates with one or more data buses 526, which in turn communicate with volatile memory 530 such as L1 cache, L2 cache, volatile RAM such as DRAM and/or other volatile electronic data storage, with nonvolatile memory 534 such as flash memory and/or a LPDD, and with the HPDD 538. In FIG. 11B, the drive power reduction system 500-2 includes a full-featured and/or restricted-feature operating system 542 with a drive power reduction control module 522. Suitable interfaces and/or controllers (not shown) are located between the data bus and the HPDD and/or between the data bus and the LPDD.
In FIG. 11C, the drive power reduction system 500-3 includes a host control module 560 with a drive power reduction control module 522. The host control module 560 communicates with one or more data buses 564, which communicate with the LPDD 534' and the hard disk drive 538'. The host control module 560 can be a drive control module, an Integrated Device Electronics (IDE), ATA, serial ATA (SATA) and/or other controller or interface.
Referring now to FIG. 12, steps performed by the drive power reduction system 500 of FIGS. 11A-11C are shown. Control begins with step 582. In step 584, control determines whether the system is in the low power mode. If not, control loops back to step 584. If step 584 is true, control continues with step 586 and determines whether a large data block access is typically requested from the HPDD. If not, control loops back to step 584. If step 586 is true, control continues with step 590 and determines whether the data block is accessed sequentially. If not, control loops back to step 584. If step 590 is true, control continues with step 594 and determines the playback length. In step 598, control determines a burst period and frequency for transferring the data from the high power nonvolatile memory to the low power nonvolatile memory.
In one embodiment, the burst period and frequency are optimized to reduce power consumption. The burst period and frequency are preferably based upon the spin-up time of the HPDD and/or the LPDD, the capacity of the nonvolatile memory, the playback rate, the spin-up and steady state power consumption of the HPDD and/or the LPDD, and/or the playback length of the sequential data block.
For example, the high power nonvolatile memory is a HPDD that consumes 1-2 W during operation, has a spin-up time of 4-10 seconds and a capacity that is typically greater than 20 GB. The low power nonvolatile memory is a microdrive that consumes 0.3-0.5 W during operation, has a spin-up time of 1-3 seconds and a capacity of 1-6 GB. As can be appreciated, the foregoing power and/or capacity values will vary for other implementations. The HPDD may have a data transfer rate of 1 Gb/s to the microdrive, and the playback rate may be 10 Mb/s (for example for a video file). As can be appreciated, the burst period times the transfer rate should not exceed the capacity of the microdrive, and the period between bursts should be greater than the spin-up time plus the burst period. Within these parameters, the power consumption of the system can be optimized. In the low power mode, a significant amount of power is consumed if the HPDD is operated continuously to play an entire video such as a movie. Using the method described above, power consumption can be reduced significantly by selectively transferring the data from the HPDD to the LPDD in multiple burst segments at spaced intervals and at a very high rate (for example 100 times the playback rate), after which the HPDD can be shut down. Power savings of greater than 50% can easily be achieved.
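Using the example figures above (a roughly 1 Gb/s HPDD-to-microdrive transfer rate, a 10 Mb/s playback rate, a microdrive capacity of a few GB and spin-up times of a few seconds), the two constraints on the burst schedule can be checked with a short sketch; the function name and the specific numbers plugged in are illustrative assumptions only:

    def plan_bursts(transfer_rate_mbps, playback_rate_mbps,
                    lpdd_capacity_mb, hpdd_spinup_s):
        """Pick a burst period and inter-burst interval (FIG. 12, step 598)."""
        # Constraint 1: burst period x transfer rate must not exceed the
        # microdrive capacity; here each burst simply fills the LPDD.
        burst_period_s = lpdd_capacity_mb * 8 / transfer_rate_mbps
        burst_data_mb = burst_period_s * transfer_rate_mbps / 8
        # The buffered data sustains playback until the next burst is needed.
        interval_s = burst_data_mb * 8 / playback_rate_mbps
        # Constraint 2: the period between bursts must exceed the spin-up time
        # plus the burst period, otherwise the HPDD never gets to spin down.
        assert interval_s > hpdd_spinup_s + burst_period_s
        return burst_period_s, interval_s

    # Illustrative numbers: a 2 GB microdrive, 1 Gb/s transfer, 10 Mb/s playback
    # and a 10 s HPDD spin-up time. A burst of about 16 s of HPDD activity then
    # buys roughly 1640 s (about 27 minutes) of playback with the HPDD shut down.
    burst, interval = plan_bursts(transfer_rate_mbps=1000, playback_rate_mbps=10,
                                  lpdd_capacity_mb=2048, hpdd_spinup_s=10)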
With reference now to Figure 13,, shown that it comprises a drive control module 650 and one or more HPDD644 and one or more LPDD648 according to many disk drive system 640 of the present invention.Drive control module 650 is communicated by letter with main process equipment via host computer control module 651.For main frame, many disk drive system 640 are operated HPDD644 and LPDD648 disc driver as a whole effectively, to reduce complicacy, improve performance and to reduce power consumption, as will be described below.Host computer control module 651 can be IDE, ATA, SATA and/or other control module or interface.
Referring now to Figure 14, in one embodiment the drive control module 650 includes a hard disk controller (HDC) 653 that is used to control one or both of the HPDD and the LPDD. A buffer 656 stores data associated with the control of the HPDD and/or the LPDD, and/or actively buffers data to and/or from the HPDD and/or the LPDD to increase data throughput by optimizing data block sizes. A processor 657 performs processing related to the operation of the HPDD and/or the LPDD.
The HPDD 648 includes one or more platters 652 having a magnetic coating that stores data. The platters 652 are rotated by a spindle motor, shown schematically at 654. Generally, the spindle motor 654 rotates the platters 652 at a fixed speed during read/write operations. One or more read/write arms 658 move relative to the platters 652 to read data from and/or write data to the platters 652. Because the platters of the HPDD 648 are larger than those of the LPDD, the spindle motor 654 requires more power to spin up the HPDD and to maintain it at high speed. The spin-up time of the HPDD is usually also longer.
A read/write device 659 is located near a distal end of the read/write arm 658. The read/write device 659 includes a write element, such as an inductor, that generates a magnetic field. The read/write device 659 also includes a read element (such as a magnetoresistive (MR) element) that senses the magnetic field on the platters 652. A preamplifier circuit 660 amplifies the analog read/write signals.
When reading data, the preamplifier circuit 660 amplifies low-level signals from the read element and outputs the amplified signal to the read/write channel device. When writing data, a write current is generated that flows through the write element of the read/write device 659 and is switched to produce a magnetic field having positive or negative polarity. The positive and negative polarities are stored by the platters 652 and are used to represent data. The LPDD 644 also includes one or more platters 662, a spindle motor 664, one or more read/write arms 668, a read/write device 669 and a preamplifier circuit 670.
The HDC 653 communicates with the host control module 651, a first spindle/voice coil motor (VCM) driver 672, a first read/write channel circuit 674, a second spindle/VCM driver 676 and a second read/write channel circuit 678. The host control module 651 and the drive control module 650 can be implemented as a system on a chip (SOC) 684. As can be appreciated, the spindle/VCM drivers 672 and 676 and/or the read/write channel circuits 674 and 678 can be combined. The spindle/VCM drivers 672 and 676 control the spindle motors 654 and 664, which rotate the platters 652 and 662, respectively. The spindle/VCM drivers 672 and 676 also generate control signals that position the read/write arms 658 and 668, respectively, for example using a voice coil actuator, a stepper motor or any other suitable actuator.
Referring now to Figures 15-17, other variations of the multi-disk drive system are shown. In Figure 15, the drive control module 650 may include a direct interface 680 that provides an external connection to one or more LPDDs. In one embodiment, the direct interface is a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIX) bus and/or any other suitable bus or interface.
In Figure 16, the host control module 651 communicates with both the LPDD 644 and the HPDD 648. A low-power drive control module 650LP and a high-power disk drive control module 650HP communicate directly with the host control module. One or both of the LP and HP drive control modules can be implemented as an SOC.
In Figure 17, an exemplary LPDD 682 is shown and includes an interface 690 that supports communications with the direct interface 680. As set forth above, the interfaces 680 and 690 can be a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIX) bus and/or any other suitable bus or interface. The LPDD 682 includes an HDC 692, a buffer 694 and/or a processor 696. The LPDD 682 also includes the spindle/VCM driver 676, the read/write channel circuit 678, the platters 662, the spindle motor 665, the read/write arms 668, the read element 669 and the preamplifier 670, as described above. Alternatively, the HDC 653, the buffer 656 and the processor 657 can be shared by both drives. Likewise, the spindle/VCM drivers and read channel circuits can optionally be combined. In the embodiments of Figures 13-17, aggressive buffering of the LPDD is used to improve performance. For example, the buffer is used to optimize data block sizes for transfer over the host data bus at an optimum speed.
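One way to picture the block-size optimization performed by the buffer is a small coalescing layer that accumulates writes until an efficient burst size is reached. The sketch below is purely illustrative; the 1 MiB burst size and the flush callback are assumed names, not elements of the disclosed drives.

```python
class CoalescingBuffer:
    """Illustrative sketch: accumulate small transfers into larger, bus-efficient bursts."""

    def __init__(self, flush_fn, optimal_block=1 << 20):
        self.flush_fn = flush_fn             # callable that actually moves data to the drive
        self.optimal_block = optimal_block   # assumed 1 MiB burst size for the host bus
        self.pending = bytearray()

    def write(self, data: bytes):
        self.pending.extend(data)
        # Push data out only once a full, efficiently sized block is available.
        while len(self.pending) >= self.optimal_block:
            self.flush_fn(bytes(self.pending[:self.optimal_block]))
            del self.pending[:self.optimal_block]

    def flush(self):
        # Force out any remainder, for example before powering the drive down.
        if self.pending:
            self.flush_fn(bytes(self.pending))
            self.pending.clear()
```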
In a traditional computer system, the paging file is a hidden file on the HPDD or HP permanent memory that the operating system uses to hold portions of programs and/or data files that do not fit in the volatile memory of the computer. The paging file and the physical memory, or RAM, define the virtual memory of the computer. The operating system transfers data from the paging file to memory as needed and returns data from the volatile memory to the paging file to make room for new data. The paging file is also called a swap file.
Referring now to Figures 18-20, the present invention utilizes the LP permanent memory, such as the LPDD and/or flash memory, to increase the virtual memory of the computer system. In Figure 18, an operating system 700 allows a user to define virtual memory 702. During operation, the operating system 700 addresses the virtual memory 702 via one or more buses 704. The virtual memory 702 includes both volatile memory 708 and LP permanent memory 710, such as flash memory and/or an LPDD.
Referring now to Figure 19, the operating system allows the user to allocate some or all of the LP permanent memory 710 as paging memory to increase the virtual memory. Control begins in step 720. In step 724, the operating system determines whether additional paging memory has been requested. If not, control loops back to step 724. Otherwise, in step 728 the operating system allocates part of the LP permanent memory to the paging file to increase the virtual memory.
In Figure 20, the operating system uses the additional LP permanent memory as paging memory. Control begins in step 740. In step 744, control determines whether the operating system is requesting a data write operation. If so, control continues with step 748 and determines whether the capacity of the volatile memory has been exceeded. If not, the write operation is performed using the volatile memory in step 750. If step 748 is true, the data is stored in the paging file in the LP permanent memory in step 754. If step 744 is false, control continues with step 760 and determines whether a data read operation has been requested. If false, control loops back to step 744. Otherwise, in step 764 control determines whether the address corresponds to a RAM address. If step 764 is true, control reads the data from the volatile memory in step 766 and continues with step 744. If step 764 is false, control reads the data from the paging file in the LP permanent memory in step 770 and continues with step 744.
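The routing described for Figures 19 and 20 can be condensed into a short sketch. The class below is an invented illustration that treats RAM and the LP-permanent-memory paging file as two simple key/value stores; a real operating system's paging machinery is far more involved.

```python
class VirtualMemory:
    """Illustrative sketch of Figures 19-20: route accesses between volatile RAM
    and a paging file held in LP permanent memory (flash or LPDD)."""

    def __init__(self, ram_capacity):
        self.ram_capacity = ram_capacity
        self.ram = {}        # address -> data held in volatile memory
        self.page_file = {}  # address -> data held in the LP permanent memory

    def write(self, addr, data):
        # Steps 744-754: use RAM until its capacity is exceeded, then page out.
        if len(self.ram) < self.ram_capacity:
            self.ram[addr] = data            # step 750
        else:
            self.page_file[addr] = data      # step 754

    def read(self, addr):
        # Steps 760-770: RAM addresses are served from RAM, others from the paging file.
        if addr in self.ram:                 # steps 764/766
            return self.ram[addr]
        return self.page_file.get(addr)      # step 770
```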
As can be appreciated, using the LP permanent memory, such as flash memory and/or the LPDD, to increase the size of the virtual memory improves the performance of the computer as compared with systems that use the HPDD for the paging file. In addition, the power consumption of the paging file is lower than in systems that use the HPDD. Because of its larger size, the HPDD requires additional spin-up time, which increases data access times relative to flash memory, which has no spin-up latency, and relative to the LPDD, which has a shorter spin-up time and lower power consumption.
Referring now to Figure 21, a Redundant Array of Independent Disks (RAID) system 800 is shown and includes one or more servers and/or clients 804 that communicate with a disk array 808. The one or more servers and/or clients 804 include a disk array controller 812 and/or an array management module 814. The disk array controller 812 and/or the array management module 814 receive data and perform logical-to-physical address mapping of the data onto the disk array 808. The disk array typically includes a plurality of HPDDs 816.
The plurality of HPDDs 816 provide fault tolerance (redundancy) and/or improved data access rates. The RAID system 800 provides a method of accessing multiple independent HPDDs as if the disk array 808 were one large hard disk drive. Collectively, the disk array 808 may provide from hundreds of Gb up to tens or hundreds of Tb of data storage. Data is stored on the plurality of HPDDs 816 in various ways to reduce the risk of losing all of the data if one drive fails and to improve data access times.
The manner in which data is stored on the HPDDs 816 is typically referred to as a RAID level. There are various RAID levels, including RAID level 0 or disk striping. In RAID level 0 systems, data is written in blocks across multiple drives to allow one drive to write or read a data block while the next drive seeks the next block. The advantages of disk striping include higher access rates and full utilization of the array capacity. The disadvantage is the lack of fault tolerance: if one drive fails, the entire contents of the array become inaccessible.
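As a minimal illustration of striping, logical block n of a stripe set maps to drive n mod N and to row n div N on that drive. The helper below is an assumed sketch, not an actual RAID implementation.

```python
def stripe_location(block_index, num_drives):
    """Map a logical block to (drive, block-on-that-drive) under simple RAID-0 striping."""
    drive = block_index % num_drives
    row = block_index // num_drives
    return drive, row

# Example: with 4 drives, logical blocks 0..7 alternate across drives 0..3.
assert [stripe_location(i, 4)[0] for i in range(8)] == [0, 1, 2, 3, 0, 1, 2, 3]
```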
RAID level 1 or disk mirroring provides redundancy by writing data twice, once to each drive. If one drive fails, the other contains an exact copy of the data and the RAID system can switch to using the mirror drive with no lapse in user accessibility. The disadvantages include no improvement in data access speed and higher cost due to the increased number of drives (2N) that are required. However, RAID level 1 provides the best protection of data, since when one HPDD fails the array management software simply directs all application requests to the surviving HPDD.
RAID level 3 stripes data across multiple drives, with an additional drive dedicated to parity for error correction/recovery. RAID level 5 provides striping as well as parity for error recovery. In RAID level 5, the parity blocks are distributed among the drives of the array, which provides a more balanced access load across the drives. The parity information is used to recover data if one drive fails. The disadvantage is a relatively slow write cycle (two reads and two writes are required for each block written). The array capacity is N-1, with a minimum of 3 drives required.
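The parity used by RAID levels 3 and 5 is, at bottom, a bitwise XOR across the data blocks of a stripe, which is what allows a single lost block to be rebuilt. The following sketch is illustrative only; it ignores how parity blocks are distributed across drives and the two-read/two-write update path noted above.

```python
from functools import reduce

def parity_block(blocks):
    """Compute the parity of equally sized data blocks by bytewise XOR."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def rebuild_lost_block(surviving_blocks, parity):
    """Recover the single missing block: XOR of the parity with all surviving blocks."""
    return parity_block(list(surviving_blocks) + [parity])

# Tiny usage example with three data blocks on a 4-drive array.
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
p = parity_block([d0, d1, d2])
assert rebuild_lost_block([d0, d2], p) == d1   # d1 was "lost" and is reconstructed
```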
RAID level 0+1 includes both striping and mirroring without parity. The advantages are fast data access (as in RAID level 0) and single-drive fault tolerance (as in RAID level 1). RAID level 0+1 still requires twice the number of disks (as in RAID level 1). As can be appreciated, there can be other RAID levels and/or methods for storing data on the array 808.
Referring now to Figures 22A and 22B, a RAID system 834-1 according to the present invention includes a disk array 836 that includes X HPDDs and a disk array 838 that includes Y LPDDs. One or more clients and/or servers 840 include a disk array controller 842 and/or an array management module 844. While separate devices 842 and 844 are shown, these devices can be integrated if desired. As can be appreciated, X is greater than or equal to 2 and Y is greater than or equal to 1. X can be greater than Y, less than Y and/or equal to Y. For example, Figure 22B shows a RAID system 834-1' where X=Y=Z.
Referring now to Figures 23A, 23B, 24A and 24B, RAID systems 834-2 and 834-3 are shown. In Figure 23A, the LPDD disk array 838 communicates with the servers/clients 840, and the HPDD disk array 836 communicates with the LPDD disk array 838. The RAID system 834-2 may include a management bypass path that selectively circumvents the LPDD disk array 838. As can be appreciated, X is greater than or equal to 2 and Y is greater than or equal to 1. X can be greater than Y, less than Y and/or equal to Y. For example, Figure 23B shows a RAID system 834-2' where X=Y=Z. In Figure 24A, the HPDD disk array 836 communicates with the servers/clients 840, and the LPDD disk array 838 communicates with the HPDD disk array 836. The RAID system 834-3 may include a management bypass path, shown by dotted line 846, that selectively circumvents the LPDD disk array 838. As can be appreciated, X is greater than or equal to 2 and Y is greater than or equal to 1. X can be greater than Y, less than Y and/or equal to Y. For example, Figure 24B shows a RAID system 834-3' where X=Y=Z. The strategies employed in Figures 23A-24B may include write-through and/or write-back.
The array management module 844 and/or the disk array controller 842 utilize the LPDD disk array 838 to reduce the power consumption of the HPDD disk array 836. Typically, the HPDD disk array 808 in the conventional RAID system of Figure 21 is kept on at all times during operation to support the required data access times. As can be appreciated, the HPDD disk array 808 consumes a relatively high amount of power. Furthermore, since a large amount of data is stored in the HPDD disk array 808, the platters of the HPDDs are typically as large as possible, which requires higher capacity spindle motors and increases the access times, since the read/write arms travel farther on average.
According to the present invention, the techniques described above in conjunction with Figures 6-17 are selectively employed in the RAID system 834 shown in Figure 22B to reduce power consumption and access times. While not shown in Figures 22A and 23A-24B, these techniques may also be used in the other RAID systems according to the present invention. In other words, the LUB module 304, the adaptive storage module 306 and/or the LPDD maintenance module described in Figures 6 and 7A-7D may be selectively implemented by the disk array controller 842 and/or the array management controller 844 to selectively store data on the LPDD disk array 838 and thereby reduce power consumption and access times. The adaptive storage control module 414 described in Figures 8A-8C, 9 and 10 may also be selectively implemented by the disk array controller 842 and/or the array management controller 844 to reduce power consumption and access times. The drive power reduction module 522 described in Figures 11A-11C and 12 may likewise be implemented by the disk array controller 842 and/or the array management controller 844 to reduce power consumption and access times. Furthermore, the multi-drive systems and/or direct interfaces shown in Figures 13-17 may be implemented with one or more of the HPDDs in the HPDD disk array 836 to increase functionality and to reduce power consumption and access times.
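For arrangements such as Figure 24A, where the LPDD array sits between the host and the HPDD array, one simple way to realize the power saving is to run the LPDD array as a write-back stage: writes are absorbed by the low-power drives and flushed to the high-power drives in batches, so the HPDDs can remain spun down in between. The sketch below is an assumed illustration of that policy, not the controller logic disclosed here; the class, the method names and the 300-second flush interval are invented.

```python
import time

class LpddFrontEnd:
    """Illustrative write-back front end: buffer writes on the LPDD array and
    flush them to the HPDD array in batches so the HPDDs can stay spun down."""

    def __init__(self, lpdd_array, hpdd_array, flush_interval_s=300):
        self.lpdd = lpdd_array            # assumed objects with read/write/spin_up/spin_down
        self.hpdd = hpdd_array
        self.flush_interval_s = flush_interval_s
        self.dirty = set()
        self.last_flush = time.monotonic()

    def write(self, block_id, data):
        self.lpdd.write(block_id, data)   # absorb the write on the low-power drives
        self.dirty.add(block_id)
        self.maybe_flush()

    def read(self, block_id):
        data = self.lpdd.read(block_id)
        if data is not None:
            return data
        return self.hpdd.read(block_id)   # miss: fall back to the HPDD array

    def maybe_flush(self):
        if time.monotonic() - self.last_flush < self.flush_interval_s:
            return
        self.hpdd.spin_up()
        for block_id in sorted(self.dirty):
            self.hpdd.write(block_id, self.lpdd.read(block_id))
        self.dirty.clear()
        self.hpdd.spin_down()             # HPDDs return to low power between flushes
        self.last_flush = time.monotonic()
```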
Referring now to Figure 25, a network attached storage (NAS) system 850 according to the prior art is shown and includes storage devices 854, storage requesters 858, a file server 862 and a communications system 866. The storage devices 854 typically include disk drives, RAID systems, tape drives, tape libraries, optical drives, jukeboxes and any other storage devices to be shared. The storage devices 854 are preferably, but not necessarily, object-oriented devices. The storage devices 854 may include an I/O interface for data storage and retrieval by the requesters 858. The requesters 858 typically include servers and/or clients that share and/or directly access the storage devices 854.
The file server 862 performs management and security functions such as request verification and resource allocation. The storage devices 854 depend on the file server 862 for management direction, while the requesters 858 are relieved of storage management to the extent the file server 862 assumes that responsibility. In smaller systems, a dedicated file server may not be required. In that situation, a requester may take on the responsibility of overseeing the operation of the NAS system 850. Accordingly, both the file server 862 and the requester 858 are shown as including management modules 870 and 872, respectively, although one, the other or both of the management modules may be provided. The communications system 866 is the physical infrastructure through which the components of the NAS system 850 communicate. It preferably has properties of both networks and channels, is able to connect all of the components in the network, and has the low latency typically found in a channel.
When the NAS system 850 is powered up, the storage devices 854 identify themselves either to each other or to a common point of reference such as the file server 862, one or more of the requesters 858 and/or the communications system 866. The communications system 866 typically offers network management techniques for this purpose, which are readily available once devices are connected to the medium associated with the communications system. The storage devices 854 and the requesters 858 log onto the medium, and any component wishing to determine the operating configuration can use medium services to identify all other components. From the file server 862, the requesters 858 learn of the existence of the storage devices 854 they could have access to, while the storage devices 854 learn where to go when they need to locate another device or invoke a management service such as backup. Similarly, the file server 862 can learn of the existence of the storage devices 854 from the medium services. Depending on the security of a particular installation, a requester may be denied access to some devices. From the set of accessible storage devices, it can then identify the files, databases and available free space.
At the same time, each NAS component can identify to the file server 862 any special considerations it would like known. Any device-level service attributes could be communicated once to the file server 862, and all other components could then learn of them from the file server 862. For instance, a requester may wish to be informed of the introduction of additional storage subsequent to startup, which can be arranged by setting an attribute when the requester logs onto the file server 862. The file server 862 can then do this automatically whenever new storage devices are added to the configuration, including conveying important characteristics such as whether the storage is RAID 5, mirrored, and so on.
When a requester must open a file, it may be able to go directly to the storage devices 854, or it may have to go to the file server for permission and location information. The degree to which the file server 862 controls access to storage is a function of the security requirements of the installation.
Referring now to Figure 26, a network attached storage (NAS) system 900 according to the present invention is shown and includes storage devices 904, requesters 908, a file server 912 and a communications system 916. The storage devices 904 include the RAID systems 834 and/or the multi-disk drive systems 930 described above in conjunction with Figures 6-19. The storage devices 904 may also typically include disk drives, RAID systems, tape drives, tape libraries, optical drives, jukeboxes and/or any other storage devices to be shared, as described above. As can be appreciated, using the improved RAID systems and/or multi-disk drive systems 930 will reduce the power consumption and access times of the NAS system 900.
Those skilled in the art can now appreciate from the foregoing description that the broad teachings of the present invention can be implemented in a variety of forms. Therefore, while this invention has been described in connection with particular examples thereof, the true scope of the invention should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the specification and the appended claims; the scope of protection of the present invention is defined by the appended claims.

Claims (10)

1. A data storage system for a computer having high-power and low-power modes, the data storage system comprising:
a low-power (LP) permanent memory;
a high-power (HP) permanent memory; and
a drive power reduction module that communicates with the low-power and high-power permanent memories, wherein, when read data is read from the high-power permanent memory during the low-power mode and the read data comprises a sequentially accessed data file, the drive power reduction module calculates a burst period for transferring segments of the read data from the HP permanent memory to the LP permanent memory.
2. The data storage system of claim 1, wherein the drive power reduction module selects the burst period to reduce power consumption during playback of the read data during the low-power mode.
3. The data storage system of claim 1, wherein the LP permanent memory comprises at least one of flash memory and a low-power disk drive (LPDD).
4. The data storage system of claim 3, wherein the LPDD comprises one or more platters having a diameter that is less than or equal to 1.8 inches.
5. The data storage system of claim 3, wherein the HP permanent memory comprises a high-power disk drive (HPDD).
6. The data storage system of claim 5, wherein the HPDD comprises one or more platters having a diameter that is greater than 1.8 inches.
7. The data storage system of claim 5, wherein the burst period is based on at least one of: a spin-up time of the LPDD, a spin-up time of the HPDD, power consumption of the LPDD, power consumption of the HPDD, a read length of the read data, and a capacity of the LPDD.
8. The data storage system of claim 1, further comprising a cache control module that includes the drive power reduction module.
9. The data storage system of claim 1, further comprising a host control module that includes the drive power reduction module.
10. The data storage system of claim 1, further comprising an operating system that includes the drive power reduction module.
CNB2005100771950A 2004-06-10 2005-05-17 Hard disk drive power reducing module Active CN100418039C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/865,368 2004-06-10
US10/865,368 US7634615B2 (en) 2004-06-10 2004-06-10 Adaptive storage system

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
CNB2005100709131A Division CN100541410C (en) 2004-06-10 2005-05-17 Disk drive system
CN200510000709.1 Division 2005-05-17

Publications (2)

Publication Number Publication Date
CN1866164A true CN1866164A (en) 2006-11-22
CN100418039C CN100418039C (en) 2008-09-10

Family

ID=34936604

Family Applications (4)

Application Number Title Priority Date Filing Date
CNA2005100771946A Pending CN1866163A (en) 2004-06-10 2005-05-17 Multi-disk drive system with high power and low power disk drive
CNB2005100709131A Active CN100541410C (en) 2004-06-10 2005-05-17 Disk drive system
CNB2005100771950A Active CN100418039C (en) 2004-06-10 2005-05-17 Hard disk drive power reducing module
CNB2005100771965A Active CN100541411C (en) 2004-06-10 2005-05-17 Redundant Array of Independent Disks system with high power and low-power disc driver

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CNA2005100771946A Pending CN1866163A (en) 2004-06-10 2005-05-17 Multi-disk drive system with high power and low power disk drive
CNB2005100709131A Active CN100541410C (en) 2004-06-10 2005-05-17 Disk drive system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CNB2005100771965A Active CN100541411C (en) 2004-06-10 2005-05-17 Redundant Array of Independent Disks system with high power and low-power disc driver

Country Status (7)

Country Link
US (2) US7634615B2 (en)
EP (4) EP1605456B1 (en)
JP (4) JP5059298B2 (en)
CN (4) CN1866163A (en)
DE (2) DE602005005557T2 (en)
HK (1) HK1094259A1 (en)
TW (4) TWI363293B (en)

Families Citing this family (91)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003901454A0 (en) * 2003-03-28 2003-04-10 Secure Systems Limited Security system and method for computer operating systems
US7702848B2 (en) * 2004-06-10 2010-04-20 Marvell World Trade Ltd. Adaptive storage system including hard disk drive with flash interface
US7788427B1 (en) 2005-05-05 2010-08-31 Marvell International Ltd. Flash memory interface for disk drive
US20070083785A1 (en) * 2004-06-10 2007-04-12 Sehat Sutardja System with high power and low power processors and thread transfer
US7634615B2 (en) 2004-06-10 2009-12-15 Marvell World Trade Ltd. Adaptive storage system
US7617359B2 (en) * 2004-06-10 2009-11-10 Marvell World Trade Ltd. Adaptive storage system including hard disk drive with flash interface
US20070094444A1 (en) * 2004-06-10 2007-04-26 Sehat Sutardja System with high power and low power processors and thread transfer
US7730335B2 (en) 2004-06-10 2010-06-01 Marvell World Trade Ltd. Low power computer with main and auxiliary processors
US7469336B2 (en) * 2005-06-24 2008-12-23 Sony Corporation System and method for rapid boot of secondary operating system
CN101118460A (en) * 2006-05-10 2008-02-06 马维尔国际贸易有限公司 Adaptive storage system including hard disk drive with flash interface
TWI329811B (en) * 2006-08-03 2010-09-01 Via Tech Inc Core logic unit having raid control function and raidcontrol method
US8681159B2 (en) * 2006-08-04 2014-03-25 Apple Inc. Method and apparatus for switching between graphics sources
KR100767605B1 (en) 2006-08-09 2007-10-17 주식회사 휴맥스 Digital video recorder having hierarchical memories and method for implementing hierarchical memories
US20080263324A1 (en) 2006-08-10 2008-10-23 Sehat Sutardja Dynamic core switching
US7702853B2 (en) * 2007-05-04 2010-04-20 International Business Machines Corporation Data storage system with power management control and method
US7941682B2 (en) 2007-05-09 2011-05-10 Gainspan, Inc. Optimum power management of system on chip based on tiered states of operation
US8046597B2 (en) * 2007-08-14 2011-10-25 Dell Products L.P. System and method for managing storage device capacity use
TWI362612B (en) * 2007-09-05 2012-04-21 Htc Corp System and electronic device using multiple operating systems and operating method thereof
US20090079746A1 (en) * 2007-09-20 2009-03-26 Apple Inc. Switching between graphics sources to facilitate power management and/or security
US20090083483A1 (en) * 2007-09-24 2009-03-26 International Business Machines Corporation Power Conservation In A RAID Array
US8166326B2 (en) * 2007-11-08 2012-04-24 International Business Machines Corporation Managing power consumption in a computer
US20090132842A1 (en) * 2007-11-15 2009-05-21 International Business Machines Corporation Managing Computer Power Consumption In A Computer Equipment Rack
US8041521B2 (en) * 2007-11-28 2011-10-18 International Business Machines Corporation Estimating power consumption of computing components configured in a computing system
JP5180613B2 (en) * 2008-02-19 2013-04-10 キヤノン株式会社 Information processing apparatus and control method thereof
JP4819088B2 (en) * 2008-04-25 2011-11-16 富士通株式会社 Storage device and method for starting the storage device
US8103884B2 (en) 2008-06-25 2012-01-24 International Business Machines Corporation Managing power consumption of a computer
KR101465099B1 (en) * 2008-09-11 2014-11-25 시게이트 테크놀로지 엘엘씨 A hybrid hard disk drive for reading files having specified conditions rapidly, and a control method adapted to the same, a recording medium adapted to the same
US8041976B2 (en) * 2008-10-01 2011-10-18 International Business Machines Corporation Power management for clusters of computers
US8514215B2 (en) 2008-11-12 2013-08-20 International Business Machines Corporation Dynamically managing power consumption of a computer with graphics adapter configurations
TWI384365B (en) * 2009-01-19 2013-02-01 Asustek Comp Inc Control system and control method of virtual memory
US8285948B2 (en) * 2009-03-23 2012-10-09 International Business Machines Corporation Reducing storage system power consumption in a remote copy configuration
KR101525589B1 (en) * 2009-04-23 2015-06-03 삼성전자주식회사 Data storage device and data processing system having the same
US8665601B1 (en) 2009-09-04 2014-03-04 Bitmicro Networks, Inc. Solid state drive with improved enclosure assembly
US8447908B2 (en) 2009-09-07 2013-05-21 Bitmicro Networks, Inc. Multilevel memory bus system for solid-state mass storage
US8560804B2 (en) 2009-09-14 2013-10-15 Bitmicro Networks, Inc. Reducing erase cycles in an electronic storage device that uses at least one erase-limited memory device
WO2011037624A1 (en) * 2009-09-22 2011-03-31 Emc Corporation Snapshotting a performance storage system in a system for performance improvement of a capacity optimized storage system
US8732394B2 (en) * 2009-12-24 2014-05-20 International Business Machines Corporation Advanced disk drive power management based on maximum system throughput
US9070453B2 (en) 2010-04-15 2015-06-30 Ramot At Tel Aviv University Ltd. Multiple programming of flash memory without erase
JP2011238038A (en) * 2010-05-11 2011-11-24 Nec Corp Disk array device, disk array device control system, and disk array device control program
USRE49818E1 (en) * 2010-05-13 2024-01-30 Kioxia Corporation Information processing method in a multi-level hierarchical memory system
AU2011255847B2 (en) * 2010-05-20 2014-03-13 Bridgestone Corporation Heavy duty tire
US8730251B2 (en) 2010-06-07 2014-05-20 Apple Inc. Switching video streams for a display without a visible interruption
TWI417874B (en) * 2010-07-30 2013-12-01 Apacer Technology Inc A hybrid hard drive integrated with a CD player
US8447925B2 (en) 2010-11-01 2013-05-21 Taejin Info Tech Co., Ltd. Home storage device and software including management and monitoring modules
US8990494B2 (en) 2010-11-01 2015-03-24 Taejin Info Tech Co., Ltd. Home storage system and method with various controllers
US8677162B2 (en) * 2010-12-07 2014-03-18 International Business Machines Corporation Reliability-aware disk power management
JP5505329B2 (en) * 2011-02-22 2014-05-28 日本電気株式会社 Disk array device and control method thereof
US9594421B2 (en) * 2011-03-08 2017-03-14 Xyratex Technology Limited Power management in a multi-device storage array
US9477597B2 (en) 2011-03-25 2016-10-25 Nvidia Corporation Techniques for different memory depths on different partitions
US8701057B2 (en) 2011-04-11 2014-04-15 Nvidia Corporation Design, layout, and manufacturing techniques for multivariant integrated circuits
US9529712B2 (en) * 2011-07-26 2016-12-27 Nvidia Corporation Techniques for balancing accesses to memory having different memory types
KR20130024271A (en) * 2011-08-31 2013-03-08 삼성전자주식회사 Storage system including hdd and nvm
US9372755B1 (en) 2011-10-05 2016-06-21 Bitmicro Networks, Inc. Adaptive power cycle sequences for data recovery
WO2013108132A2 (en) 2012-01-20 2013-07-25 Marvell World Trade Ltd. Cache system using solid state drive
US20130290611A1 (en) * 2012-03-23 2013-10-31 Violin Memory Inc. Power management in a flash memory
US9043669B1 (en) 2012-05-18 2015-05-26 Bitmicro Networks, Inc. Distributed ECC engine for storage media
US9423457B2 (en) 2013-03-14 2016-08-23 Bitmicro Networks, Inc. Self-test solution for delay locked loops
US9798688B1 (en) 2013-03-15 2017-10-24 Bitmicro Networks, Inc. Bus arbitration with routing and failover mechanism
US9934045B1 (en) 2013-03-15 2018-04-03 Bitmicro Networks, Inc. Embedded system boot from a storage device
US9734067B1 (en) 2013-03-15 2017-08-15 Bitmicro Networks, Inc. Write buffering
US9875205B1 (en) 2013-03-15 2018-01-23 Bitmicro Networks, Inc. Network of memory systems
US10120694B2 (en) 2013-03-15 2018-11-06 Bitmicro Networks, Inc. Embedded system boot from a storage device
US9501436B1 (en) 2013-03-15 2016-11-22 Bitmicro Networks, Inc. Multi-level message passing descriptor
US9400617B2 (en) 2013-03-15 2016-07-26 Bitmicro Networks, Inc. Hardware-assisted DMA transfer with dependency table configured to permit-in parallel-data drain from cache without processor intervention when filled or drained
US9971524B1 (en) 2013-03-15 2018-05-15 Bitmicro Networks, Inc. Scatter-gather approach for parallel data transfer in a mass storage system
US9842024B1 (en) 2013-03-15 2017-12-12 Bitmicro Networks, Inc. Flash electronic disk with RAID controller
US9916213B1 (en) 2013-03-15 2018-03-13 Bitmicro Networks, Inc. Bus arbitration with routing and failover mechanism
US9672178B1 (en) 2013-03-15 2017-06-06 Bitmicro Networks, Inc. Bit-mapped DMA transfer with dependency table configured to monitor status so that a processor is not rendered as a bottleneck in a system
US10489318B1 (en) 2013-03-15 2019-11-26 Bitmicro Networks, Inc. Scatter-gather approach for parallel data transfer in a mass storage system
US9720603B1 (en) 2013-03-15 2017-08-01 Bitmicro Networks, Inc. IOC to IOC distributed caching architecture
US9430386B2 (en) 2013-03-15 2016-08-30 Bitmicro Networks, Inc. Multi-leveled cache management in a hybrid storage system
JP2014182855A (en) * 2013-03-19 2014-09-29 Toshiba Corp Disk storage unit and data storage method
JP6321325B2 (en) 2013-04-03 2018-05-09 ルネサスエレクトロニクス株式会社 Information processing apparatus and information processing method
US9292080B2 (en) 2013-06-19 2016-03-22 Microsoft Technology Licensing, Llc Selective blocking of background activity
US9213611B2 (en) 2013-07-24 2015-12-15 Western Digital Technologies, Inc. Automatic raid mirroring when adding a second boot drive
US9811461B1 (en) 2014-04-17 2017-11-07 Bitmicro Networks, Inc. Data storage system
US9952991B1 (en) 2014-04-17 2018-04-24 Bitmicro Networks, Inc. Systematic method on queuing of descriptors for multiple flash intelligent DMA engine operation
US10078604B1 (en) 2014-04-17 2018-09-18 Bitmicro Networks, Inc. Interrupt coalescing
US10042792B1 (en) 2014-04-17 2018-08-07 Bitmicro Networks, Inc. Method for transferring and receiving frames across PCI express bus for SSD device
US10025736B1 (en) 2014-04-17 2018-07-17 Bitmicro Networks, Inc. Exchange message protocol message transmission between two devices
US10055150B1 (en) 2014-04-17 2018-08-21 Bitmicro Networks, Inc. Writing volatile scattered memory metadata to flash device
US9541988B2 (en) 2014-09-22 2017-01-10 Western Digital Technologies, Inc. Data storage devices with performance-aware power capping
US10146293B2 (en) 2014-09-22 2018-12-04 Western Digital Technologies, Inc. Performance-aware power capping control of data storage devices
JP2016110305A (en) 2014-12-04 2016-06-20 富士通株式会社 Storage control apparatus, cache control method, cache control program, and computer system
US10026454B2 (en) 2015-04-28 2018-07-17 Seagate Technology Llc Storage system with cross flow cooling of power supply unit
US10097636B1 (en) 2015-06-15 2018-10-09 Western Digital Technologies, Inc. Data storage device docking station
US9965206B2 (en) 2015-10-23 2018-05-08 Western Digital Technologies, Inc. Enhanced queue management for power control of data storage device
TWI582582B (en) * 2015-12-28 2017-05-11 鴻海精密工業股份有限公司 A system and method to improve reading performance of raid
US10372364B2 (en) * 2016-04-18 2019-08-06 Super Micro Computer, Inc. Storage enclosure with daisy-chained sideband signal routing and distributed logic devices
US10552050B1 (en) 2017-04-07 2020-02-04 Bitmicro Llc Multi-dimensional computer storage system
CN109902035B (en) * 2019-02-03 2023-10-31 成都皮兆永存科技有限公司 composite memory

Family Cites Families (92)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US322447A (en) * 1885-07-21 Oil-can
US678249A (en) * 1901-03-12 1901-07-09 George C Hutchings Fire-extinguisher.
US779544A (en) * 1903-07-18 1905-01-10 Sven Hyden Apparatus for simultaneously corking a number of bottles.
US799151A (en) * 1904-10-20 1905-09-12 Roland H Elkins Lubricator.
US865732A (en) * 1905-03-06 1907-09-10 Charles Anthony Vandervell Dynamo or the like.
US820867A (en) * 1905-05-20 1906-05-15 Thomas C Henninger Combined separator and bagging device for grain.
US865368A (en) * 1905-12-30 1907-09-10 Justus B Entz System of electrical distribution.
US4425615A (en) * 1980-11-14 1984-01-10 Sperry Corporation Hierarchical memory system having cache/disk subsystem with command queues for plural disks
US5150465A (en) 1988-11-30 1992-09-22 Compaq Computer Corporation Mode-selectable integrated disk drive for computer
EP0617363B1 (en) 1989-04-13 2000-01-26 SanDisk Corporation Defective cell substitution in EEprom array
US5440749A (en) 1989-08-03 1995-08-08 Nanotronics Corporation High performance, low cost microprocessor architecture
JP2782913B2 (en) * 1990-04-23 1998-08-06 株式会社日立製作所 Disk control device with cache and data control method therefor
US5455913A (en) 1990-05-14 1995-10-03 At&T Global Information Solutions Company System and method for transferring data between independent busses
JP2669241B2 (en) * 1991-12-05 1997-10-27 日本電気株式会社 Migration processing method
JP2743730B2 (en) * 1992-08-28 1998-04-22 株式会社日立製作所 Array type storage system
US5485595A (en) * 1993-03-26 1996-01-16 Cirrus Logic, Inc. Flash memory mass storage architecture incorporating wear leveling technique without using cam cells
GB2286267A (en) * 1994-02-03 1995-08-09 Ibm Energy-saving cache control system
US5596708A (en) 1994-04-04 1997-01-21 At&T Global Information Solutions Company Method and apparatus for the protection of write data in a disk array
US5546558A (en) * 1994-06-07 1996-08-13 Hewlett-Packard Company Memory system with hierarchic disk array and memory map store for persistent storage of virtual mapping information
US5659718A (en) * 1994-08-19 1997-08-19 Xlnt Designs, Inc. Synchronous bus and bus interface device
JPH0883148A (en) 1994-09-13 1996-03-26 Nec Corp Magnetic disk device
GB9419246D0 (en) * 1994-09-23 1994-11-09 Cambridge Consultants Data processing circuits and interfaces
US5815726A (en) * 1994-11-04 1998-09-29 Altera Corporation Coarse-grained look-up table architecture
JP3834861B2 (en) * 1996-04-02 2006-10-18 株式会社日立製作所 Video recording device
US5768164A (en) 1996-04-15 1998-06-16 Hewlett-Packard Company Spontaneous use display for a computing system
JP3111912B2 (en) * 1996-11-29 2000-11-27 日本電気株式会社 Disk cache control method
US5937423A (en) 1996-12-26 1999-08-10 Intel Corporation Register interface for flash EEPROM memory arrays
US6035408A (en) 1998-01-06 2000-03-07 Magnex Corp. Portable computer with dual switchable processors for selectable power consumption
US6098119A (en) * 1998-01-21 2000-08-01 Mylex Corporation Apparatus and method that automatically scans for and configures previously non-configured disk drives in accordance with a particular raid level based on the needed raid level
EP1072153B1 (en) * 1998-04-17 2004-02-04 Matsushita Electric Industrial Co., Ltd. False contour correcting apparatus and method
CN1205477A (en) * 1998-07-16 1999-01-20 英业达股份有限公司 Memory substitution method and its device
US6578129B1 (en) * 1998-07-24 2003-06-10 Imec Vzw Optimized virtual memory management for dynamic data types
JP3819166B2 (en) * 1998-11-27 2006-09-06 ヒタチグローバルストレージテクノロジーズネザーランドビーブイ Energy consumption reduction method
JP4325817B2 (en) * 1999-04-05 2009-09-02 株式会社日立製作所 Disk array device
US6282614B1 (en) 1999-04-15 2001-08-28 National Semiconductor Corporation Apparatus and method for reducing the power consumption of a microprocessor with multiple levels of caches
JP4264777B2 (en) * 1999-05-31 2009-05-20 ソニー株式会社 Data reproduction method and data reproduction apparatus
JP2000357060A (en) * 1999-06-14 2000-12-26 Nec Corp Disk array device
JP2001043624A (en) * 1999-07-29 2001-02-16 Toshiba Corp Disk storage device and split data writing method
US6457135B1 (en) 1999-08-10 2002-09-24 Intel Corporation System and method for managing a plurality of processor performance states
WO2001015161A1 (en) 1999-08-25 2001-03-01 Seagate Technology Llc Intelligent power management of disc drives
JP3568110B2 (en) * 1999-10-15 2004-09-22 インターナショナル・ビジネス・マシーンズ・コーポレーション Cache memory control method, computer system, hard disk drive, and hard disk controller
JP2001126392A (en) * 1999-10-27 2001-05-11 Matsushita Electric Ind Co Ltd Recording and reproducing device
US6501999B1 (en) 1999-12-22 2002-12-31 Intel Corporation Multi-processor mobile computer system having one processor integrated with a chipset
US6631474B1 (en) 1999-12-31 2003-10-07 Intel Corporation System to coordinate switching between first and second processors and to coordinate cache coherency between first and second processors during switching
US6496915B1 (en) 1999-12-31 2002-12-17 Ilife Solutions, Inc. Apparatus and method for reducing power consumption in an electronic data storage system
US6594724B1 (en) 2000-03-30 2003-07-15 Hitachi Global Storage Technologies Netherlands B.V. Enhanced DASD with smaller supplementary DASD
US6628469B1 (en) * 2000-07-11 2003-09-30 International Business Machines Corporation Apparatus and method for low power HDD storage architecture
US6631469B1 (en) 2000-07-17 2003-10-07 Intel Corporation Method and apparatus for periodic low power data exchange
JP2002073497A (en) * 2000-09-04 2002-03-12 Sharp Corp Information processing apparatus and method
JP2002189539A (en) * 2000-10-02 2002-07-05 Fujitsu Ltd Software processor, program and recording medium
US6785767B2 (en) 2000-12-26 2004-08-31 Intel Corporation Hybrid mass storage system and method with two different types of storage medium
US6986066B2 (en) 2001-01-05 2006-01-10 International Business Machines Corporation Computer system having low energy consumption
US20020129288A1 (en) 2001-03-08 2002-09-12 Loh Weng Wah Computing device having a low power secondary processor coupled to a keyboard controller
US7231531B2 (en) 2001-03-16 2007-06-12 Dualcor Technologies, Inc. Personal electronics device with a dual core processor
US6976180B2 (en) 2001-03-16 2005-12-13 Dualcor Technologies, Inc. Personal electronics device
US20030153354A1 (en) 2001-03-16 2003-08-14 Cupps Bryan T. Novel personal electronics device with keypad application
US7184003B2 (en) 2001-03-16 2007-02-27 Dualcor Technologies, Inc. Personal electronics device with display switching
JP2002297320A (en) * 2001-03-30 2002-10-11 Toshiba Corp Disk array device
US6725336B2 (en) 2001-04-20 2004-04-20 Sun Microsystems, Inc. Dynamically allocated cache memory for a multi-processor unit
JP4339529B2 (en) * 2001-06-19 2009-10-07 富士通株式会社 Data storage device
US6925529B2 (en) * 2001-07-12 2005-08-02 International Business Machines Corporation Data storage on a multi-tiered disk system
US6859856B2 (en) 2001-10-23 2005-02-22 Flex P Industries Sdn. Bhd Method and system for a compact flash memory controller
US8181118B2 (en) 2001-11-28 2012-05-15 Intel Corporation Personal information device on a mobile computing platform
US6639827B2 (en) 2002-03-12 2003-10-28 Intel Corporation Low standby power using shadow storage
JP3898968B2 (en) * 2002-03-15 2007-03-28 インターナショナル・ビジネス・マシーンズ・コーポレーション Information recording method and information recording system
KR100441608B1 (en) 2002-05-31 2004-07-23 삼성전자주식회사 NAND flash memory interface device
US7082495B2 (en) * 2002-06-27 2006-07-25 Microsoft Corporation Method and apparatus to reduce power consumption and improve read/write performance of hard disk drives using non-volatile memory
JP2004087052A (en) * 2002-08-28 2004-03-18 Sony Corp Video and sound recording and reproducing device, and method for controlling the same
US7006318B2 (en) * 2002-08-29 2006-02-28 Freescale Semiconductor, Inc. Removable media storage system with memory for storing operational data
JP2004094478A (en) * 2002-08-30 2004-03-25 Toshiba Corp Disk drive, and data transfer method
ATE357689T1 (en) * 2002-09-09 2007-04-15 Koninkl Philips Electronics Nv METHOD AND DEVICE FOR MANAGING THE POWER CONSUMPTION OF A DISK DRIVE
JP2004165741A (en) * 2002-11-08 2004-06-10 Ricoh Co Ltd Image processor
JP2004192739A (en) * 2002-12-12 2004-07-08 Mitsumi Electric Co Ltd Disk drive system
AU2003303258A1 (en) * 2002-12-20 2004-07-14 Koninklijke Philips Electronics N.V. Power saving method for portable streaming devices
US6775180B2 (en) 2002-12-23 2004-08-10 Intel Corporation Low power state retention
US7254730B2 (en) 2003-02-14 2007-08-07 Intel Corporation Method and apparatus for a user to interface with a mobile computing device
AU2003900764A0 (en) * 2003-02-20 2003-03-06 Secure Systems Limited Bus bridge security system and method for computers
WO2004090889A1 (en) 2003-04-14 2004-10-21 Koninklijke Philips Electronics N.V. Format mapping scheme for universal drive device
US7221331B2 (en) 2003-05-05 2007-05-22 Microsoft Corporation Method and system for auxiliary display of information for a computing device
US7240228B2 (en) 2003-05-05 2007-07-03 Microsoft Corporation Method and system for standby auxiliary processing of information for a computing device
US7069388B1 (en) 2003-07-10 2006-06-27 Analog Devices, Inc. Cache memory data replacement strategy
US7925298B2 (en) 2003-09-18 2011-04-12 Vulcan Portals Inc. User interface for a secondary display module of a mobile electronic device
US20050066209A1 (en) 2003-09-18 2005-03-24 Kee Martin J. Portable electronic device having high and low power processors operable in a low power mode
US7017059B2 (en) * 2003-12-12 2006-03-21 Cray Canada Inc. Methods and apparatus for replacing cooling systems in operating computers
AU2003295260A1 (en) 2003-12-16 2005-07-05 Real Enterprise Solutions Development B.V. Memory management in a computer system using different swapping criteria
JP4518541B2 (en) * 2004-01-16 2010-08-04 株式会社日立製作所 Disk array device and disk array device control method
US7136973B2 (en) 2004-02-04 2006-11-14 Sandisk Corporation Dual media storage device
US7730335B2 (en) 2004-06-10 2010-06-01 Marvell World Trade Ltd. Low power computer with main and auxiliary processors
US7702848B2 (en) 2004-06-10 2010-04-20 Marvell World Trade Ltd. Adaptive storage system including hard disk drive with flash interface
US7634615B2 (en) 2004-06-10 2009-12-15 Marvell World Trade Ltd. Adaptive storage system
US20060069848A1 (en) 2004-09-30 2006-03-30 Nalawadi Rajeev K Flash emulation using hard disk
US20060075185A1 (en) 2004-10-06 2006-04-06 Dell Products L.P. Method for caching data and power conservation in an information handling system

Also Published As

Publication number Publication date
TWI363293B (en) 2012-05-01
CN1866163A (en) 2006-11-22
TW200622847A (en) 2006-07-01
EP1605454A3 (en) 2006-10-04
CN100541410C (en) 2009-09-16
EP1605454A2 (en) 2005-12-14
US7512734B2 (en) 2009-03-31
JP4969804B2 (en) 2012-07-04
US20060259802A1 (en) 2006-11-16
JP2005353080A (en) 2005-12-22
EP1605456A2 (en) 2005-12-14
JP2006024211A (en) 2006-01-26
EP1605455B1 (en) 2009-03-18
HK1094259A1 (en) 2007-03-23
DE602005013322D1 (en) 2009-04-30
TWI350973B (en) 2011-10-21
TWI388993B (en) 2013-03-11
EP1605455A3 (en) 2006-10-04
JP4969805B2 (en) 2012-07-04
EP1605456A3 (en) 2006-10-04
JP5059298B2 (en) 2012-10-24
EP1605453B1 (en) 2017-09-27
CN1866194A (en) 2006-11-22
EP1605455A2 (en) 2005-12-14
TW200622684A (en) 2006-07-01
TWI417743B (en) 2013-12-01
TW200625100A (en) 2006-07-16
CN100418039C (en) 2008-09-10
CN100541411C (en) 2009-09-16
US7634615B2 (en) 2009-12-15
EP1605453A2 (en) 2005-12-14
JP4969803B2 (en) 2012-07-04
DE602005005557D1 (en) 2008-05-08
EP1605456B1 (en) 2008-03-26
DE602005005557T2 (en) 2009-04-30
CN1707417A (en) 2005-12-14
EP1605453A3 (en) 2006-10-04
JP2006059323A (en) 2006-03-02
TW200619973A (en) 2006-06-16
US20050289361A1 (en) 2005-12-29
JP2006012126A (en) 2006-01-12

Similar Documents

Publication Publication Date Title
CN1866164A (en) Hard disk drive power reducing module
CN1707400A (en) High-power and low power computer processors
TWI426444B (en) Adaptive storage system including hard disk drive with flash interface
TWI390520B (en) Adaptive storage system including hard disk drive with flash interface
CN101443726B (en) Comprise the adaptive memory system of the hard disk drive with flash interface
US20080172519A1 (en) Methods For Supporting Readydrive And Readyboost Accelerators In A Single Flash-Memory Storage Device
JP5807942B2 (en) Disk array device and control method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1094259

Country of ref document: HK

C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1094259

Country of ref document: HK

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20201216

Address after: Hamilton, Bermuda

Patentee after: Marvell International Ltd.

Address before: Babado J San Michaele

Patentee before: MARVELL WORLD TRADE Ltd.

Effective date of registration: 20201216

Address after: Shin ha Po

Patentee after: Marvell Asia Pte. Ltd.

Address before: Grand Cayman Islands

Patentee before: Kavim International Inc.

Effective date of registration: 20201216

Address after: Grand Cayman Islands

Patentee after: Kavim International Inc.

Address before: Hamilton, Bermuda

Patentee before: Marvell International Ltd.