CN1211008A - Cache enabling architecture - Google Patents

Cache enabling architecture

Info

Publication number
CN1211008A
Authority
CN
China
Prior art keywords
read
write
data bus
optical memory
cache
Prior art date
Legal status
Granted
Application number
CN98118871A
Other languages
Chinese (zh)
Other versions
CN1119749C (en)
Inventor
夏威尔·莱贝格
雷纳·施维尔
Current Assignee
Deutsche Thomson Brandt GmbH
Original Assignee
Deutsche Thomson Brandt GmbH
Priority date
Filing date
Publication date
Priority claimed from EP97115527A (EP0901077A1)
Application filed by Deutsche Thomson Brandt GmbH
Publication of CN1211008A
Application granted
Publication of CN1119749C
Anticipated expiration
Status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 Information transfer, e.g. on bus
    • G06F13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4282 Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
    • G06F13/4286 Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus using a handshaking protocol, e.g. RS232C link
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0873 Mapping of cache memory to specific storage devices or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0674 Disk device
    • G06F3/0676 Magnetic disk device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2213/00 Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F2213/0012 High speed serial bus, e.g. IEEE P1394

Abstract

A cache enabling architecture in which an optical memory reading and/or writing device, a cache processor and a mass writing and reading device are each connected to a data bus. The optical memory reading and/or writing device exchanges information directly with the cache processor over the data bus. The cache processor uses the mass writing and reading device as cache memory for the optical memory reading and/or writing device.

Description

Cache enabling architecture
The present invention relates to a cache enabling architecture in which information output by and/or input to a memory reading and/or writing device can be cached. Such a cache enabling architecture can be realized, for example, in a computer system to which the memory reading and/or writing device is connected. In general, the connection is made via a data bus.
Caching information from a storage device is a known technique. In particular, many solutions are known for caching random access memory (RAM), hard disk drives and other mass storage devices. These storage devices are commonly used in, or together with, computers. Caching a storage device essentially requires providing a faster memory in which information can be accessed more efficiently than in the storage device itself, and copying certain information from the storage device to this faster memory, or vice versa. The certain information may be, for example, the information that is most likely to be needed or that is needed most frequently. Identifying which information contained in the storage device is to be copied into the faster memory is performed by a cache processor. The cache processor may be, for example, a software program running on a computer. Caching thus improves the overall performance of an information processing system, for example a microprocessor processing information stored in RAM, or a computer processing information stored in a mass storage peripheral.
Computers are generally used with storage peripherals such as magnetic and/or optical storage devices. These storage devices are connected directly or indirectly to a data bus. A microprocessor exchanges information over the data bus with the devices connected to it. The performance of accessing the information stored in each storage device varies with the characteristics of the device. The performance of a magnetic hard disk drive, for example, is significantly better than that of an optical disc device. It is known to use a disk drive as the faster memory for caching an optical disc device.
In one known implementation of caching, the cache processor performs the caching over a direct connection used for exchanging information between the optical disc device and the hard disk drive. This direct connection is necessary because, without involving the microprocessor, there is no other way to exchange information between the optical disc device and the magnetic hard disk device, while involving the microprocessor significantly reduces the speed of the computer. On the other hand, the direct connection is a piece of hardware that does not belong to standard computer equipment and may therefore increase the production cost of a computer equipped with such storage peripherals.
Recent computer hardware includes a data bus on which two peripherals can exchange data without significantly disturbing the other peripherals connected to the same bus. This means that the microprocessor, also called the central processing unit, can perform other tasks while the two peripherals exchange information; for example, the microprocessor can process data stored in RAM. Such a data bus may be based, for example, on the IEEE 1394 bus.
An object of the present invention is to find a solution that allows an optical storage peripheral to be cached by another storage peripheral without requiring a dedicated direct connection between the two peripherals. The solution should make use of existing computer hardware as far as possible.
According to the present invention, one solution to the above problem is a cache enabling architecture for caching information output by and/or input to an optical memory reading and/or writing device. The cache enabling architecture comprises at least one mass writing and reading device based on a magnetic hard disk drive, a data bus and a cache processor. The mass writing and reading device is connected directly or indirectly to the data bus, and instructions from devices other than the optical memory reading and/or writing device reach the mass writing and reading device via the data bus. The cache processor caches the information by using the mass writing and reading device and is directly connected to the mass writing and reading device. The output and/or input of the optical memory reading and/or writing device and the cache processor are connected by the data bus, so that information is exchanged directly between said output and/or input and the cache processor.
According to the present invention, another solution to the above problem is a magnetic hard disk drive for use in a computer system. The computer system comprises at least one central processing unit, an optical memory reading and/or writing device and a data bus, the central processing unit and the optical memory reading and/or writing device being connected directly or indirectly to the data bus. The magnetic hard disk drive further comprises a connecting circuit and a cache processor. The connecting circuit connects the magnetic hard disk drive to the data bus. The cache processor receives from the data bus requests to read and/or write information intended for the optical memory reading and/or writing device, and exchanges information between the magnetic hard disk drive and the optical memory reading and/or writing device via the data bus, thereby caching the optical memory reading and/or writing device.
Other objects and features of the invention will become apparent from the following description of embodiments with reference to Fig. 1.
Fig. 1 is a schematic diagram of the cache enabling architecture.
The described embodiments are not restrictive; those skilled in the art will be able to devise other embodiments within the scope of the present invention.
Fig. 1 shows a data bus 1 that may be part of a computer (not shown). The data bus 1 may be, for example, a bus based on IEEE 1394. The IEEE 1394 bus is a high-speed serial bus allowing the transmission of digital data. In addition, IEEE 1394 allows the devices connected to the bus to communicate directly and to exchange data with one another.
An optical memory reading and/or writing device 2 is connected to the data bus 1 by an output and/or input connecting circuit 22. The optical memory reading and/or writing device 2 may be, for example, a CD-ROM, DVD-ROM/RAM or CD-RW (rewritable) drive, i.e. data are read/written optically or magneto-optically. Optical disc drives provide a relatively inexpensive way to access and store large amounts of information.
A mass writing and reading device 3 is connected to the data bus 1 by a connection 4. The mass writing and reading device 3 may be, for example, a magnetic hard disk drive. Magnetic hard disk drives offer a favourable P/C ratio and are therefore used in most computers.
A cache processor 5 is connected to the mass writing and reading device 3 by a connection 6 and to the data bus 1 by the connection 4.
The performance of the mass writing and reading device 3, generally expressed in terms of access times and information transfer rate, is better than the performance of the optical memory reading and/or writing device 2. The cache processor 5 exchanges information with the optical memory reading and/or writing device 2 directly over the data bus 1. The cache processor 5 can, for example, send a request for information to the optical memory reading and/or writing device 2; upon receiving the request, the optical memory reading and/or writing device 2 sends the requested information to the cache processor 5. The cache processor 5 then sends the received information to the mass writing and reading device 3, which stores it.
Therefore, no special direct connection between the optical memory reading and/or writing device and the mass writing and reading device is needed. The cache enabling architecture exploits the possibility for the two devices to exchange information with each other over the data bus.
Usually, another device 7 is connected to the data bus 1. This other device 7 may be, for example, a microprocessor. The other device 7 sends requests for information to the mass writing and reading device 3, or to the cache processor 5 acting on behalf of the optical memory reading and/or writing device 2. The cache processor 5 handles these requests: if the requested information is already stored in the mass writing and reading device 3, it is retrieved from there; otherwise the requested information is obtained from the optical memory reading and/or writing device 2. Finally, the information is delivered to the other device 7.
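The request handling just described corresponds to what is commonly called a read-through cache. Purely by way of illustration, and not as part of the described architecture, the following Python sketch models this behaviour; the class and method names (OpticalDrive, HardDisk, CacheProcessor) and the block-oriented interface are assumptions introduced for the example.

```python
# Illustrative read-through cache sketch only; names and interfaces are assumed.
# "optical" stands for the optical memory reading and/or writing device 2,
# "disk" for the mass writing and reading device 3 used as cache memory.

class OpticalDrive:                       # slow device being cached (device 2)
    def __init__(self, blocks):
        self.blocks = blocks
    def read(self, block_id):
        return self.blocks[block_id]

class HardDisk:                           # fast device used as cache memory (device 3)
    def __init__(self):
        self.store = []
    def write(self, data):
        self.store.append(data)
        return len(self.store) - 1        # location of the cached copy
    def read(self, location):
        return self.store[location]

class CacheProcessor:                     # role of device 5: serves requests from device 7
    def __init__(self, optical, disk):
        self.optical, self.disk = optical, disk
        self.index = {}                   # block id -> location in the disk cache
    def read(self, block_id):
        if block_id in self.index:                        # cache hit: answer from the hard disk
            return self.disk.read(self.index[block_id])
        data = self.optical.read(block_id)                # cache miss: fetch over the data bus
        self.index[block_id] = self.disk.write(data)      # keep a copy for later requests
        return data

if __name__ == "__main__":
    cache = CacheProcessor(OpticalDrive({0: b"sector-0"}), HardDisk())
    assert cache.read(0) == b"sector-0"   # miss, served by the optical drive
    assert cache.read(0) == b"sector-0"   # hit, served by the hard disk
```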
The cache processor 5 can also analyse the requests for information over a period of time according to a caching policy. Caching policies are well known to those skilled in the art. As a result of the analysis, the cache processor 5 can determine which information the other device 7 requests more frequently than other information. As long as this information is frequently requested, the cache processor 5 keeps it stored in the mass writing and reading device. The cache processor 5 can also implement the caching policy known as read-ahead, thereby anticipating requests for information from the other device 7.
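As an illustration of such policies only, the sketch below combines a request-frequency criterion with a simple sequential read-ahead. The counter, the threshold and the read-ahead window size are assumptions made for the example and are not prescribed by the architecture.

```python
from collections import Counter

# Illustrative policy sketch only; the patent does not prescribe a particular policy.
class CachingPolicy:
    def __init__(self, read_ahead=4):
        self.hits = Counter()             # how often each block was requested
        self.read_ahead = read_ahead      # how many following blocks to prefetch

    def record_request(self, block_id):
        self.hits[block_id] += 1

    def should_keep(self, block_id, threshold=2):
        # Keep a block in the disk cache while it is requested often enough.
        return self.hits[block_id] >= threshold

    def prefetch_candidates(self, block_id):
        # Read-ahead: anticipate requests for the blocks that usually follow.
        return [block_id + i for i in range(1, self.read_ahead + 1)]

if __name__ == "__main__":
    policy = CachingPolicy()
    for block in (5, 5, 9):
        policy.record_request(block)
    print(policy.should_keep(5), policy.should_keep(9))   # True False
    print(policy.prefetch_candidates(5))                  # [6, 7, 8, 9]
```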
In another embodiment, the cache processor 5 can also be used to receive, on the data bus 1, information sent by the other device 7 that is intended to be stored in the optical memory reading and/or writing device 2. The cache processor 5 first sends the received information to the mass writing and reading device 3, which stores it; the information is then copied from the mass writing and reading device 3 to the optical memory reading and/or writing device 2. By exploiting the write performance of the mass writing and reading device 3, the effective write performance of the optical memory reading and/or writing device 2 is substantially improved.
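This write path behaves like what is commonly called a write-behind cache: the write completes on the hard disk first and is copied to the optical device later. The minimal sketch below is again only an illustration under assumed interfaces; the explicit flush call and the stub device classes are not part of the described architecture.

```python
# Illustrative write-behind sketch; the flush trigger and the device stubs are assumptions.

class OpticalTarget:                      # stand-in for device 2 with write capability
    def __init__(self):
        self.blocks = {}
    def write(self, block_id, data):
        self.blocks[block_id] = data      # slow write to the optical medium

class DiskBuffer:                         # stand-in for device 3 used as write buffer
    def __init__(self):
        self.store = []
    def write(self, data):
        self.store.append(data)
        return len(self.store) - 1
    def read(self, location):
        return self.store[location]

class WriteBehindCache:                   # role of the cache processor 5 for writes
    def __init__(self, optical, disk):
        self.optical, self.disk = optical, disk
        self.pending = []                 # (block_id, disk location) awaiting copy

    def write(self, block_id, data):
        location = self.disk.write(data)  # fast: store on the hard disk first
        self.pending.append((block_id, location))

    def flush(self):
        # Later, e.g. when the bus and the optical drive are idle, copy the data
        # from the hard disk to the optical memory reading and/or writing device.
        for block_id, location in self.pending:
            self.optical.write(block_id, self.disk.read(location))
        self.pending.clear()

if __name__ == "__main__":
    cache = WriteBehindCache(OpticalTarget(), DiskBuffer())
    cache.write(7, b"new-data")           # acknowledged as soon as the disk has it
    cache.flush()                         # copied to the optical device afterwards
```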
The devices connected to the data bus 1 use a communication protocol to exchange information. In a preferred embodiment, the communication protocol between the optical memory reading and/or writing device 2 and the cache processor 5 can be an optimised version of the communication protocol between the other device 7 and the cache processor 5, in order to enhance simplicity and performance.
In general, the mass writing and reading device 3 may comprise its own dedicated cache processor for caching itself. In a preferred embodiment, the functionality of the cache processor 5 includes the functionality of this dedicated cache processor, which removes the need for two physically separate cache processors and further reduces cost.

Claims (4)

1. A cache enabling architecture for caching information output by and/or input to an optical memory reading and/or writing device (2), comprising:
at least one mass writing and reading device (3) based on a magnetic hard disk drive,
a data bus (1) to which said mass writing and reading device is connected directly or indirectly, instructions from devices (7) other than said optical memory reading and/or writing device reaching said mass writing and reading device via said data bus,
a cache processor (5) which caches said information by using said mass writing and reading device, said cache processor being directly connected to said mass writing and reading device,
wherein said output and/or input of said optical memory reading and/or writing device and said cache processor are connected by said data bus, so that said information is exchanged directly between said output and/or input and said cache processor.
2. The cache enabling architecture according to claim 1, characterised in that said cache processor is an integral part of said mass writing and reading device.
3. The cache enabling architecture according to claim 1 or 2, characterised in that said data bus is based on the IEEE 1394 bus.
4. A magnetic hard disk drive for use in a computer system, the computer system comprising at least one central processing unit, an optical memory reading and/or writing device and a data bus, said central processing unit and said optical memory reading and/or writing device being connected directly or indirectly to said data bus, said magnetic hard disk drive further comprising:
a connecting circuit for connecting said magnetic hard disk drive to said data bus,
a cache processor which receives from said data bus requests to read and/or write information intended for said optical memory reading and/or writing device, said cache processor also exchanging information between said magnetic hard disk drive and said optical memory reading and/or writing device via said data bus, so as to cache said optical memory reading and/or writing device.
CN98118871A 1997-09-08 1998-09-04 Cache enabling architecture Expired - Fee Related CN1119749C (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US058,452 1987-06-05
US5845297P 1997-09-08 1997-09-08
US058452 1997-09-08
EP97115527A EP0901077A1 (en) 1997-09-08 1997-09-08 Cache enabling architecture
EP97115527.0 1997-09-08

Publications (2)

Publication Number Publication Date
CN1211008A true CN1211008A (en) 1999-03-17
CN1119749C CN1119749C (en) 2003-08-27

Family

ID=26145768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN98118871A Expired - Fee Related CN1119749C (en) 1997-09-08 1998-09-04 Cache enabling architecture

Country Status (7)

Country Link
JP (1) JPH11167469A (en)
KR (1) KR100580933B1 (en)
CN (1) CN1119749C (en)
HK (1) HK1017115A1 (en)
ID (1) ID20659A (en)
MY (1) MY118599A (en)
SG (1) SG70114A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101403982B (en) * 2008-11-03 2011-07-20 华为技术有限公司 Task distribution method, system for multi-core processor

Also Published As

Publication number Publication date
HK1017115A1 (en) 1999-11-12
CN1119749C (en) 2003-08-27
KR19990029463A (en) 1999-04-26
MY118599A (en) 2004-12-31
ID20659A (en) 1999-02-11
JPH11167469A (en) 1999-06-22
SG70114A1 (en) 2000-01-25
KR100580933B1 (en) 2006-10-24

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20030827

Termination date: 20160904