CN103593306A - Design method for Cache control unit of protocol processor - Google Patents


Info

Publication number: CN103593306A
Application number: CN201310569312.XA
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventors: 周恒钊, 陈继承
Assignee: Inspur Electronic Information Industry Co Ltd

Abstract

The invention provides a design method for the Cache control unit of a protocol processor. The Cache control unit governs the protocol processor's access to the cache in a CC-NUMA system. By scheduling and suspending the different Cache access instructions, the Cache read/write timing is fully synchronized with the protocol processor's pipeline, so that when the Cache is idle and hits, the cache is accessed seamlessly and protocol messages are processed in a fully pipelined manner. The system is divided by function into several submodules: a pipeline-instruction suspend queue, a scheduling module, a tag array, a data array, a fill-back (backfill) module, an interface communication module, and a miss buffer. Synchronous Cache operation replaces asynchronous operation, removing the Cache access latency that asynchronous operation introduces. On a hit, the protocol processing pipeline can operate on the Cache data synchronously, improving the efficiency of protocol processing.

Description

A design method for a protocol processor Cache control unit
Technical field
The present invention relates to the fields of computing and integrated circuit (IC) design, and in particular to a design method for a protocol processor Cache control unit.
Background technology
A Cache (high-speed cache) is a small, fast memory placed between the processor and main memory; its access speed is much higher than that of main memory and is matched to the processor's access speed. A Cache is usually implemented with static RAM (SRAM); compared with dynamic RAM (DRAM), SRAM is faster but costs more and occupies more area. The Cache maps a small fraction (a few percent of its capacity relative to main memory) of the main-memory address space. When the address of a processor access falls within that mapping, the processor can operate on the Cache directly, skipping the main-memory access and greatly improving the computer's processing speed.
Consider a 16-core CC-NUMA system built from dual processors directly connected by HyperTransport (HT) buses. Data are partitioned by address and distributed evenly across the main memory of each node. Nodes communicate with one another according to a Cache coherence protocol, completing each exchange by sending and receiving packets; the packets exchanged between nodes take the form of protocol messages. After a node receives a protocol message at a port, a protocol processor must parse and process it. The protocol processor contains a Cache of some capacity for storing the most recently used protocol information. If the Cache hits, the protocol processor operates on the Cache directly; if it misses, an access is issued downward to the next memory level. The Cache control unit in the protocol processor receives the Cache access instructions the processor sends, decodes them, and issues the corresponding operations to the Cache.
In conventional protocol processor implementations, the Cache control unit and the protocol processing pipeline are designed asynchronously; their data operations have no timing relationship. After the protocol processing pipeline sends an access instruction to the Cache, it suspends the protocol message associated with that instruction, waits for the Cache control unit's response to return, then retrieves the suspended message and re-enters the pipeline to redo protocol parsing and processing. The drawback of this implementation is that the protocol pipeline cannot operate synchronously on the Cache data regardless of whether the Cache hits: the processing of every protocol message must wait several clock cycles of Cache access time, which reduces protocol processing efficiency and increases system latency.
Summary of the invention
The object of the present invention is to provide a design method for a protocol processor Cache control unit.
The object of the invention is achieved as follows. The protocol processor Cache control unit controls the protocol processor's access to the cache in a CC-NUMA system. By scheduling and suspending the different Cache access instructions, the Cache read/write timing is fully synchronized with the protocol processor's pipeline, and, provided the Cache is idle and hits, the cache is accessed seamlessly and protocol messages are processed in a fully pipelined manner. The system is divided by function into several submodules, comprising: a pipeline-instruction suspend queue, a scheduling module, a tag array, a data array, a fill-back module, an interface communication module, and a miss buffer, where:
The pipeline-instruction suspend queue stores the pipeline's suspended Cache access instructions and is implemented as a synchronous FIFO. The FIFO data width equals the Cache instruction length, and the FIFO depth equals twice the number of pipeline stages. When an access instruction sent by the protocol processing pipeline to the Cache control unit fails to obtain processing rights, the FIFO write enable is asserted and the instruction is written into the suspend queue; when the instruction regains processing rights, the FIFO read enable is asserted, the instruction is read out of the queue, and the corresponding Cache operation is performed. When the number of instructions stored in the queue reaches or exceeds the number of pipeline stages, the source is blocked: the queue stops accepting Cache access instructions from the protocol processing pipeline, and the source is reopened only after the suspended instructions in the queue have been processed;
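The suspend-queue behavior above — a synchronous FIFO whose depth is twice the pipeline stage count, with the source blocked once occupancy reaches the stage count — can be sketched as a minimal behavioral model. This is not the hardware itself; the class and method names are illustrative.

```python
from collections import deque

class SuspendQueue:
    """Behavioral sketch of the pipeline-instruction suspend queue.

    Depth is twice the number of pipeline stages; the source is
    blocked once occupancy reaches the stage count, leaving
    headroom for instructions already in flight.
    """
    def __init__(self, pipeline_stages):
        self.depth = 2 * pipeline_stages
        self.block_threshold = pipeline_stages
        self.fifo = deque()

    def suspend(self, instr):
        # Write enable: instruction failed to win scheduling, park it.
        if len(self.fifo) >= self.depth:
            raise OverflowError("suspend queue full")
        self.fifo.append(instr)

    def resume(self):
        # Read enable: an instruction regains processing rights, FIFO order.
        return self.fifo.popleft() if self.fifo else None

    @property
    def source_blocked(self):
        # Stop accepting new Cache access instructions from the pipeline.
        return len(self.fifo) >= self.block_threshold
```

A queue built for a 3-stage pipeline, for instance, holds 6 entries and blocks its source at 3.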
The scheduling module schedules and arbitrates among the access instructions from the different sources received by the Cache control unit. When several access instructions are valid simultaneously, the instruction with the highest predefined priority is selected to perform the Cache operation, and the lower-priority instructions are written into the corresponding suspend queue or buffer;
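The scheduling module's arbitration can be modeled as a fixed-priority grant. The patent does not specify the relative priority of the sources, so the ordering passed to the function below is purely an assumption for illustration.

```python
def arbitrate(requests, priority_order):
    """Fixed-priority arbitration sketch: among simultaneously valid
    requests, grant the source highest in priority_order; the rest
    are suspended (written to a queue or buffer elsewhere).
    Source names and the ordering itself are illustrative."""
    for source in priority_order:
        if requests.get(source):
            suspended = [s for s, valid in requests.items()
                         if valid and s != source]
            return source, suspended
    return None, []
```

With fill-back assumed to outrank pipeline requests, a cycle where both are valid grants the fill-back and suspends the pipeline instruction.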
The tag array performs tag reads and comparisons. The access instruction selected by the scheduling module is first decoded; the decoded address then launches a read of all 8 tag ways simultaneously. The tag content read from each way is compared with the decoded tag data, and the way whose tag matches is taken as the hit way. The tag array outputs the per-way hit information as an array, which serves as the Cache data select;
The data array performs data reads and way selection. The access instruction selected by the scheduling module is first decoded; the decoded address then launches a read of all 8 data ways simultaneously. After each way's data has been read, the unit waits for the tag array's hit information to become valid, uses the tag hit information to select among the 8 ways of data, and obtains the hit way's data. After way selection, the Cache hit result and the hit data are output. The whole Cache read is performed between the send stage and the execute stage of the protocol pipeline, synchronously with the protocol processor's pipeline, introducing no extra gaps or system latency; the execute stage takes the Cache hit result and hit data as inputs for the ALU computation;
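The 8-way tag compare and data way select described in the two paragraphs above can be sketched behaviorally: all 8 tags at the decoded index are compared in parallel, and the resulting one-hot hit vector selects the matching data way. A minimal model with illustrative data structures:

```python
def cache_lookup(tags, data, index, tag_query, ways=8):
    """Sketch of the 8-way set-associative lookup: read all tags at
    the decoded index 'in parallel', compare each against the
    request tag, and use the one-hot hit vector to select the
    matching data way. `tags`/`data` map an index to per-way lists."""
    hit_vector = [tags[index][w] == tag_query for w in range(ways)]
    if not any(hit_vector):
        return False, None            # miss: no way matched
    way = hit_vector.index(True)      # one-hot select of the hit way
    return True, data[index][way]
```

In hardware the hit vector drives the data multiplexer directly; here the `index(True)` call plays that role.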
The fill-back module performs the Cache data fill-back. When the access to the next memory level finishes and the data response returns, the fill-back module sends a Cache fill-back instruction to the scheduling module. If the fill-back instruction obtains processing rights, the corresponding Cache data fill-back is performed; otherwise the instruction is written into the fill-back suspend queue, a synchronous FIFO whose principle is identical to that of the pipeline-instruction suspend queue;
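A behavioral sketch of the fill-back flow: when a memory response arrives, a fill instruction is offered to the scheduler; if it loses arbitration it is parked in the fill-back suspend queue (a FIFO) and retried later. The `scheduler_grants` callback and the retry loop are illustrative assumptions, not part of the source.

```python
from collections import deque

class FillBackUnit:
    """Sketch of the fill-back module plus its suspend queue.
    `scheduler_grants` stands in for winning arbitration."""
    def __init__(self):
        self.pending = deque()   # fill-back suspend queue (FIFO)

    def on_response(self, fill_instr, scheduler_grants):
        # Data response returned from the next memory level.
        if scheduler_grants(fill_instr):
            return "filled"      # fill-back performed immediately
        self.pending.append(fill_instr)
        return "queued"          # park until arbitration succeeds

    def retry(self, scheduler_grants):
        # Re-offer parked fill instructions in FIFO order.
        done = []
        while self.pending and scheduler_grants(self.pending[0]):
            done.append(self.pending.popleft())
        return done
```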
The interface communication module handles the instruction and data communication between this level's Cache and the next-level memory. When this level's Cache misses, the interface communication module issues an asynchronous access instruction to the next memory level; several clock cycles later the next-level memory access finishes, and when the data response returns, the interface communication module receives the returned data and notifies the fill-back module;
The miss buffer buffers the instructions whose Cache access missed. The Cache control unit returns the Cache miss indication to the execute stage and writes the missed access instruction into the miss buffer, where it waits until the interface communication module has received the returned data and the fill-back module has completed the fill-back; the access instructions for the corresponding address in the miss buffer are then reactivated and sent to the scheduling module.
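The miss buffer's record-and-reactivate behavior can be sketched as a map from address to waiting instructions; keying by address is an assumption consistent with the description's "access instructions for the corresponding address".

```python
class MissBuffer:
    """Sketch of the miss buffer: missed access instructions are
    held keyed by address; when the fill-back for that address
    completes, the matching instructions are reactivated toward
    the scheduling module."""
    def __init__(self):
        self.entries = {}

    def record_miss(self, addr, instr):
        # Control unit has already reported the miss to the execute stage.
        self.entries.setdefault(addr, []).append(instr)

    def on_fill_complete(self, addr):
        # Reactivate every instruction waiting on this address.
        return self.entries.pop(addr, [])
```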
The functional-module partitioning of the protocol-pipeline Cache control unit proceeds as follows:
The scheduling module receives Cache access instructions from the different sources — the pipeline access instruction, the pipeline-instruction suspend queue, the fill-back instruction, the fill-back suspend queue, and the miss buffer — and arbitrates and schedules among them. The miss buffer module holds the missed instructions and reactivates them after the fill-back completes. The fill-back suspend queue stores the fill-back instructions that failed to obtain processing rights until they win arbitration and are processed. The tag array performs the index lookup of the multi-way set-associative Cache and, from the decoded address and data, computes the Cache hit information and the way-select signal. The data array performs the Cache data reads and writes; using the way-select signal output by the tag array, it selects among the data read from each way and outputs the hit way's data. The interface communication module receives the hit information and hit data output by the tag array and the data array, formats them into the standard form, and returns them to the protocol pipeline; on a miss, this module sends an asynchronous access instruction to the next memory level, and when the response returns on the interface, it notifies the fill-back module to issue a fill-back instruction to the scheduling module.
The instruction processing flow of the Cache control unit is as follows:
The send stage of the protocol pipeline issues a Cache access instruction. After decoding the instruction, the control unit arbitrates and schedules among the multiple instructions, and the instruction that wins arbitration is processed. The address and data obtained from decoding launch the reads of the data array and the tag array; the tag read results are compared against the data to determine whether the Cache hits. On a hit, way selection is performed and the hit way's data is obtained; on a miss, the Cache miss signal is output and an asynchronous read instruction is sent down to the next memory level. Once the hit result is available, if the instruction being processed is a pipeline instruction, the Cache hit indication and hit data are output, and the Cache write is initiated after the pipeline's execute stage completes; otherwise the Cache miss is output and the pipeline instruction is written into the suspend queue. The protocol pipeline initiates the Cache write at the write-back stage: the control unit drives the write ports of the tag array and the data array to write the data to the corresponding address, and one complete Cache-related protocol message processing flow ends.
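The hit/miss decision in the flow above can be condensed into a small behavioral model: a hit returns the data in-pipeline, while a miss parks the instruction in the miss buffer and flags an asynchronous read toward the next memory level. Data structures and names are illustrative.

```python
def lookup(cache, index, tag):
    # cache maps a set index to a list of (tag, data) ways.
    for way_tag, way_data in cache.get(index, []):
        if way_tag == tag:
            return True, way_data
    return False, None

def process_access(cache, index, tag, miss_buffer, instr):
    """Sketch of the hit/miss flow: a hit returns data synchronously
    to the pipeline; a miss records the instruction in the miss
    buffer (keyed by its address here, an assumption) and signals
    an asynchronous read to the next memory level."""
    hit, value = lookup(cache, index, tag)
    if hit:
        return {"hit": True, "data": value, "async_read": False}
    miss_buffer.setdefault((index, tag), []).append(instr)
    return {"hit": False, "data": None, "async_read": True}
```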
In the Cache control unit's scheduling of the different Cache access instructions, the scheduling module schedules the Cache access instructions from multiple sources according to priority: the source that wins scheduling obtains processing rights, and the sources that lose are suspended.
In the Cache control unit's suspension of the different Cache access instructions, two suspend queues separately manage the protocol processor Cache access instructions and the Cache fill-back instructions that failed to obtain processing rights.
The protocol processor Cache access suspend queue follows first-in-first-out order. Its output also serves as one source of the Cache control unit's pending instructions; when it obtains processing rights, i.e. when it wins scheduling, the suspended protocol processor Cache access instruction is read from the queue and the corresponding Cache access operation is performed.
The Cache fill-back suspend queue follows first-in-first-out order. Its output also serves as one source of the Cache control unit's pending instructions; when it obtains processing rights, i.e. when it wins scheduling, the suspended Cache fill-back instruction is read from the queue and the corresponding Cache fill-back operation is performed.
The seamless cache access and fully pipelined protocol message processing rest on a read-first, write-after-hit Cache operating principle for message processing. The protocol processor's pipeline issues the Cache read instruction at the send stage and performs protocol processing at the execute stage; in the time between those two pipeline stages the Cache control unit completes four operations — instruction reception, instruction scheduling, the Cache read, and result output — so that, provided the Cache hits, the data reaches the execute stage without delay.
Under the write-after-hit Cache operating principle for protocol message processing, the protocol processor's pipeline issues the Cache write instruction at the write-back stage; the Cache control unit receives this instruction and performs the Cache write without delay, guaranteeing that the data is written into the Cache synchronously with the write-back stage's instruction.
For seamless cache access, the miss buffer module buffers the Cache access instructions that missed and reactivates the corresponding instruction from the miss buffer when the Cache fill-back is performed. The miss buffer's output also serves as one source of the Cache control unit's pending instructions; when it obtains processing rights, i.e. when it wins scheduling, the suspended missed access instruction is read from the buffer and the corresponding Cache access is retried.
In the Cache-miss case, the interface communication module issues an asynchronous access instruction to the next-level Cache when this level's Cache misses; several clock cycles later the next-level Cache access finishes, and when the data response returns, the fill-back module sends a fill-back instruction to the scheduling module.
The beneficial effects of the invention are as follows. Synchronous Cache operation replaces asynchronous operation, eliminating the Cache access latency that asynchronous operation introduces. On a Cache hit, the protocol processing pipeline can operate synchronously on the Cache data, improving protocol processing efficiency. The various instructions are scheduled with an optimal strategy, reducing blocking in the system. The synchronous Cache operating mode realized by this Cache control unit fully synchronizes the Cache read/write timing with the protocol processor's pipeline, achieves seamless cache access and fully pipelined protocol message processing, reduces system latency, and improves throughput.
Description of the drawings
Fig. 1 is a schematic diagram of the functional-module logical structure of the Cache control unit;
Fig. 2 is the Cache access flow chart;
Fig. 3 is the protocol processing pipeline diagram;
Fig. 4 is the Cache instruction decode structure diagram.
Embodiment
The method of the present invention is described in detail below with reference to the drawings.
Figure 1 shows the functional-module partitioning of the protocol-pipeline Cache control unit. The scheduling module (Command Scheduler) receives Cache access instructions from 5 sources — the pipeline access instruction, the pipeline-instruction suspend queue, the fill-back instruction, the fill-back suspend queue, and the miss buffer — and arbitrates and schedules among them. The miss buffer module (Miss Buffer) holds the missed (Miss) instructions and reactivates them after the fill-back completes. The fill-back suspend queue (Fill-back Queue) stores the fill-back instructions that failed to obtain processing rights until they win arbitration and are processed. The tag array (Tag Array) performs the index lookup of the multi-way set-associative Cache and, from the decoded address and data, computes the Cache hit information and the way-select signal. The data array (Data Array) performs the Cache data reads and writes; using the way-select signal output by the tag array, it selects among the data read from each way and outputs the hit way's data. The interface communication module (Interface Unit) receives the hit information and hit data output by the tag array and the data array, formats them into the standard form, and returns them to the protocol pipeline. On a miss (Miss), this module sends an asynchronous access instruction to the next memory level, and when a response returns on the interface, it notifies the fill-back module to issue a fill-back (Fill-back) instruction to the scheduling module.
Figure 2 shows the instruction processing flow of the Cache control unit. The send stage (SD) of the protocol pipeline issues a Cache access instruction; after decoding it, the control unit arbitrates and schedules among the multiple instructions, and the instruction that wins arbitration is processed. The address and data obtained from decoding launch the reads of the data array and the tag array; the tag read results are compared against the data to determine whether the Cache hits (Hit). On a hit, way selection is performed and the hit way's data is obtained. On a miss (Miss), the Cache miss signal is output and an asynchronous read instruction is sent down to the next memory level. Once the hit result is available, if the instruction being processed is a pipeline instruction, the Cache hit indication and hit data are output and the Cache write is initiated after the pipeline's execute stage (EX) completes; otherwise the Cache miss is output and the pipeline instruction is written into the suspend queue. The protocol pipeline initiates the Cache write at the write-back stage (WB): the control unit drives the write ports of the tag array and the data array to write the data to the corresponding address, and one complete Cache-related protocol message processing flow ends.
Figure 3 shows the stages of the protocol processing pipeline. Between the send stage (SD) and the execute stage (EX), the Cache control unit arbitrates among the instructions from the multiple sources and performs the Cache read for the selected instruction; the corresponding Cache write is then performed at the write-back stage (WB). The whole process runs synchronously with the protocol pipeline and occupies no extra delay. Protocol messages can therefore be processed through the system as a data stream (Packets Flow), with no bubbles inserted between messages.
Figure 4 shows the Cache instruction decode format. A Cache access instruction is divided into 5 fields. The first is the protocol-information index field (PacketInfo Index), used as the select signal for the protocol information within a Cache line. The second is the tag data field (Tag Data), used for the tag array comparison and for the tag write during fill-back. The third is the tag address field (Tag Addr), used as the read address of the tag array and the data array. The fourth is a hole (Hole), a reserved field. The fifth is the instruction type field (Cmd), which encodes the different instructions; the control unit performs the corresponding operation according to this encoding.
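A decode/encode sketch for the 5-field instruction word. The patent gives the field order but not the field widths or bit direction, so the `layout` used below (least-significant field first, with assumed widths) is entirely hypothetical.

```python
def decode_cache_instr(word, layout):
    """Unpack the fields of a Cache instruction word. `layout` lists
    (name, width) pairs from the least-significant field upward;
    widths and bit order are NOT specified in the source and are
    caller-supplied assumptions."""
    fields, shift = {}, 0
    for name, width in layout:
        fields[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return fields

def encode_cache_instr(fields, layout):
    # Inverse of decode: pack the named fields back into one word.
    word, shift = 0, 0
    for name, width in layout:
        word |= (fields[name] & ((1 << width) - 1)) << shift
        shift += width
    return word
```

Round-tripping a word through encode and decode recovers the original field values, which is a quick sanity check on any chosen layout.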
Technical features not described in this specification are known to those skilled in the art.

Claims (9)

1. A design method for a protocol processor Cache control unit, characterized in that the protocol processor Cache control unit controls the protocol processor's access to the cache in a CC-NUMA system; by scheduling and suspending the different Cache access instructions, the Cache read/write timing is fully synchronized with the protocol processor's pipeline, and, provided the Cache is idle and hits, the cache is accessed seamlessly and protocol messages are processed in a fully pipelined manner; the system is divided by function into several submodules, comprising: a pipeline-instruction suspend queue, a scheduling module, a tag array, a data array, a fill-back module, an interface communication module, and a miss buffer, wherein:

The pipeline-instruction suspend queue stores the pipeline's suspended Cache access instructions and is implemented as a synchronous FIFO; the FIFO data width equals the Cache instruction length, and the FIFO depth equals twice the number of pipeline stages; when an access instruction sent by the protocol processing pipeline to the Cache control unit fails to obtain processing rights, the FIFO write enable is asserted and the instruction is written into the suspend queue; when the instruction regains processing rights, the FIFO read enable is asserted, the instruction is read out of the queue, and the corresponding Cache operation is performed; when the number of instructions stored in the queue reaches or exceeds the number of pipeline stages, the source is blocked: the queue stops accepting Cache access instructions from the protocol processing pipeline, and the source is reopened only after the suspended instructions in the queue have been processed;

The scheduling module schedules and arbitrates among the access instructions from the different sources received by the Cache control unit; when several access instructions are valid simultaneously, the instruction with the highest predefined priority is selected to perform the Cache operation, and the lower-priority instructions are written into the corresponding suspend queue or buffer;

The tag array performs tag reads and comparisons; the access instruction selected by the scheduling module is first decoded, the decoded address launches a read of all 8 tag ways simultaneously, the tag content read from each way is compared with the decoded tag data, the way whose tag matches is taken as the hit way, and the tag array outputs the per-way hit information as an array, which serves as the Cache data select;

The data array performs data reads and way selection; the access instruction selected by the scheduling module is first decoded, and the decoded address launches a read of all 8 data ways simultaneously; after each way's data has been read, the unit waits for the tag array's hit information to become valid, uses the tag hit information to select among the 8 ways of data, and obtains the hit way's data; after way selection, the Cache hit result and the hit data are output; the whole Cache read is performed between the send stage and the execute stage of the protocol pipeline, synchronously with the protocol processor's pipeline, introducing no extra gaps or system latency, and the execute stage takes the Cache hit result and hit data as inputs for the ALU computation;

The fill-back module performs the Cache data fill-back; when the access to the next memory level finishes and the data response returns, the fill-back module sends a Cache fill-back instruction to the scheduling module; if the fill-back instruction obtains processing rights, the corresponding Cache data fill-back is performed, otherwise the instruction is written into the fill-back suspend queue, a synchronous FIFO whose principle is identical to that of the pipeline-instruction suspend queue;

The interface communication module handles the instruction and data communication between this level's Cache and the next-level memory; when this level's Cache misses, the interface communication module issues an asynchronous access instruction to the next memory level; several clock cycles later the next-level memory access finishes, and when the data response returns, the interface communication module receives the returned data and notifies the fill-back module;

The miss buffer buffers the instructions whose Cache access missed; the Cache control unit returns the Cache miss indication to the execute stage and writes the missed access instruction into the miss buffer, where it waits until the interface communication module has received the returned data and the fill-back module has completed the fill-back; the access instructions for the corresponding address in the miss buffer are then reactivated and sent to the scheduling module, wherein:

The functional-module partitioning of the protocol-pipeline Cache control unit proceeds as follows:

The scheduling module receives Cache access instructions from the different sources — the pipeline access instruction, the pipeline-instruction suspend queue, the fill-back instruction, the fill-back suspend queue, and the miss buffer — and arbitrates and schedules among them; the miss buffer module holds the missed instructions and reactivates them after the fill-back completes; the fill-back suspend queue stores the fill-back instructions that failed to obtain processing rights until they win arbitration and are processed; the tag array performs the index lookup of the multi-way set-associative Cache and, from the decoded address and data, computes the Cache hit information and the way-select signal; the data array performs the Cache data reads and writes, and, using the way-select signal output by the tag array, selects among the data read from each way and outputs the hit way's data; the interface communication module receives the hit information and hit data output by the tag array and the data array, formats them into the standard form, and returns them to the protocol pipeline; on a miss, this module sends an asynchronous access instruction to the next memory level, and when the response returns on the interface, it notifies the fill-back module to issue a fill-back instruction to the scheduling module;

The instruction processing flow of the Cache control unit is as follows:

The send stage of the protocol pipeline issues a Cache access instruction; after decoding the instruction, the control unit arbitrates and schedules among the multiple instructions, and the instruction that wins arbitration is processed; the address and data obtained from decoding launch the reads of the data array and the tag array, and the tag read results are compared against the data to determine whether the Cache hits; on a hit, way selection is performed and the hit way's data is obtained; on a miss, the Cache miss signal is output and an asynchronous read instruction is sent down to the next memory level; once the hit result is available, if the instruction being processed is a pipeline instruction, the Cache hit indication and hit data are output and the Cache write is initiated after the pipeline's execute stage completes, otherwise the Cache miss is output and the pipeline instruction is written into the suspend queue; the protocol pipeline initiates the Cache write at the write-back stage, the control unit drives the write ports of the tag array and the data array to write the data to the corresponding address, and one complete Cache-related protocol message processing flow ends.
2. method according to claim 1, it is characterized in that the scheduling of described Cache control module to Different Ca che access instruction, realizing scheduler module dispatches according to priority the Cache access instruction of a plurality of separate sources, scheduling successfully source obtains processing authority, dispatches failed source and is suspended.
3. method according to claim 1, it is characterized in that the hang-up of described Cache control module to Different Ca che access instruction, realize two and hang up queues and respectively the Cache backfill instruction that does not obtain the protocol processor Cache access instruction of processing authority and do not obtain processing authority is managed.
4. method according to claim 3, it is characterized in that described protocol processor Cache access instruction hang-up queue, follow first in first out, the output of this queue is also as one of source of the pending instruction of Cache control module, when it obtains processing authority, while dispatching successfully, the protocol processor Cache access instruction being suspended is read from this queue, carries out corresponding Cache accessing operation.
5. The method according to claim 3, characterized in that the Cache backfill instruction suspend queue follows first-in-first-out order, and its output also serves as one of the sources of instructions pending processing by the Cache control unit; when it obtains processing authority, that is, when it is scheduled successfully, the suspended Cache backfill instruction is read from this queue and the corresponding Cache backfill operation is carried out.
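Claims 4 and 5 describe two structurally identical FIFO suspend queues, one for access instructions and one for backfill instructions. A minimal sketch of such a queue (hypothetical names; Python's `deque` stands in for the hardware FIFO):

```python
from collections import deque

class SuspendQueue:
    """FIFO suspend queue: instructions that fail arbitration are
    appended; when the queue itself wins processing authority, the
    oldest suspended instruction is replayed first."""

    def __init__(self):
        self._fifo = deque()

    def suspend(self, instruction):
        self._fifo.append(instruction)   # enqueue at the tail

    def has_pending(self):
        return len(self._fifo) > 0       # queue output is a scheduler source

    def replay(self):
        return self._fifo.popleft()      # first in, first out
```

The same class would be instantiated twice, once per queue, with `has_pending` feeding the arbiter as one of its instruction sources.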
6. The method according to claim 1, characterized in that the seamless cache access and fully pipelined protocol message processing are based on the read-first, write-after-hit Cache operating principle of message processing: the pipeline of the protocol processor sends a Cache read instruction at the issue station and performs protocol processing at the execution station, while the Cache control unit completes four operations (instruction reception, instruction scheduling, Cache read, and result output) in the time between these two pipeline stations, so that, provided the Cache hits, the data reaches the execution station without delay.
7. The method according to claim 6, characterized in that, under the write-after-hit Cache operating principle of protocol message processing, the pipeline of the protocol processor sends a Cache write instruction at the write-back station; the Cache control unit receives this instruction and completes the Cache write operation without delay, guaranteeing that the data is written into the Cache synchronously with the instruction of the write-back station.
8. The method according to claim 1, characterized in that, for seamless cache access, a failure cache module is implemented that buffers Cache access instructions that have missed; when a Cache backfill is carried out, the corresponding instruction is activated from the failure cache. The output of the failure cache also serves as one of the sources of instructions pending processing by the Cache control unit; when it obtains processing authority, that is, when it is scheduled successfully, the suspended missed Cache access instruction is read from this cache and the corresponding Cache access operation is carried out again.
9. The method according to claim 8, characterized in that, in the case of a Cache access miss, an interface communication module is implemented that initiates an asynchronous access instruction to the next-level Cache when the current-level Cache misses; after several clock cycles the next-level Cache access operation finishes, and when the data response returns, the backfill module sends a backfill instruction to the scheduler module.
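Claims 8 and 9 together describe the miss path: park the missed access in the failure cache, fetch from the next level, then backfill and replay. A self-contained sketch under assumed names (a dict stands in for the Cache arrays, a list for the failure cache):

```python
def handle_access(cache, failure_cache, addr):
    """On a hit, return the data; on a miss, buffer the instruction in
    the failure cache so it can be re-activated after the backfill."""
    if addr in cache:
        return True, cache[addr]
    failure_cache.append(addr)   # park the missed instruction
    return False, None           # next-level asynchronous access starts

def backfill(cache, failure_cache, addr, value):
    """Data response returned from the next-level Cache: write it into
    the Cache, re-activate the parked instruction, retry the access."""
    cache[addr] = value
    if addr in failure_cache:
        failure_cache.remove(addr)
        return handle_access(cache, failure_cache, addr)
    return None
```

The retried access now hits, which is the point of the failure cache: the original instruction is replayed rather than re-issued by the protocol pipeline.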
CN201310569312.XA 2013-11-15 2013-11-15 Design method for Cache control unit of protocol processor Pending CN103593306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310569312.XA CN103593306A (en) 2013-11-15 2013-11-15 Design method for Cache control unit of protocol processor


Publications (1)

Publication Number Publication Date
CN103593306A true CN103593306A (en) 2014-02-19

Family

ID=50083457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310569312.XA Pending CN103593306A (en) 2013-11-15 2013-11-15 Design method for Cache control unit of protocol processor

Country Status (1)

Country Link
CN (1) CN103593306A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102929800A (en) * 2012-10-17 2013-02-13 无锡江南计算技术研究所 Cache consistency protocol derivation processing method
CN103077132A (en) * 2013-01-07 2013-05-01 浪潮(北京)电子信息产业有限公司 Cache processing method and protocol processor cache control unit


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927277B (en) * 2014-04-14 2017-01-04 中国人民解放军国防科学技术大学 CPU and GPU shares the method and device of on chip cache
CN104506362A (en) * 2014-12-29 2015-04-08 浪潮电子信息产业股份有限公司 Method for system state switching and monitoring on CC-NUMA (cache coherent-non uniform memory access architecture) multi-node server
CN105072048A (en) * 2015-09-24 2015-11-18 浪潮(北京)电子信息产业有限公司 Message storage scheduling method and apparatus
CN105072048B (en) * 2015-09-24 2018-04-10 浪潮(北京)电子信息产业有限公司 Packet storage scheduling method and device
CN105182221A (en) * 2015-10-09 2015-12-23 天津国芯科技有限公司 JTAG multipath selector and connection method in SoC
CN105182221B (en) * 2015-10-09 2017-12-22 天津国芯科技有限公司 JTAG multiplexer in a system-on-chip and connection method thereof
CN110036375B (en) * 2016-12-13 2023-11-03 超威半导体公司 Out-of-order cache return
CN110036375A (en) * 2016-12-13 2019-07-19 超威半导体公司 Out-of-order cache return
CN110023916B (en) * 2016-12-16 2023-07-28 阿里巴巴集团控股有限公司 Method and device for reducing read/write competition in cache
CN110023916A (en) * 2016-12-16 2019-07-16 阿里巴巴集团控股有限公司 A kind of method and apparatus reducing read/write competition in caching
CN108733582A (en) * 2017-04-18 2018-11-02 腾讯科技(深圳)有限公司 Data processing method and device
CN108733582B (en) * 2017-04-18 2021-10-29 腾讯科技(深圳)有限公司 Data processing method and device
CN109918131B (en) * 2019-03-11 2021-04-30 中电海康无锡科技有限公司 Instruction reading method based on non-blocking instruction cache
CN109918131A (en) * 2019-03-11 2019-06-21 中电海康无锡科技有限公司 Instruction reading method based on non-blocking instruction cache
CN113934653A (en) * 2021-09-15 2022-01-14 合肥大唐存储科技有限公司 Cache implementation method and device of embedded system
CN113934653B (en) * 2021-09-15 2023-08-18 合肥大唐存储科技有限公司 Cache implementation method and device of embedded system
CN113835673A (en) * 2021-09-24 2021-12-24 苏州睿芯集成电路科技有限公司 Method, system and device for reducing loading delay of multi-core processor
CN113835673B (en) * 2021-09-24 2023-08-11 苏州睿芯集成电路科技有限公司 Method, system and device for reducing loading delay of multi-core processor
CN113778526A (en) * 2021-11-12 2021-12-10 北京微核芯科技有限公司 Cache-based pipeline execution method and device
CN116257191A (en) * 2023-05-16 2023-06-13 北京象帝先计算技术有限公司 Memory controller, memory component, electronic device and command scheduling method
CN116257191B (en) * 2023-05-16 2023-10-20 北京象帝先计算技术有限公司 Memory controller, memory component, electronic device and command scheduling method
CN116383102A (en) * 2023-05-30 2023-07-04 北京微核芯科技有限公司 Translation look-aside buffer access method, device, equipment and storage medium
CN116383102B (en) * 2023-05-30 2023-08-29 北京微核芯科技有限公司 Translation look-aside buffer access method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN103593306A (en) Design method for Cache control unit of protocol processor
CN104407997B Single-channel synchronous NAND flash controller with dynamic instruction scheduling
CN103077132B Cache processing method and protocol processor cache control unit
EP3729281B1 (en) Scheduling memory requests with non-uniform latencies
JP4742116B2 (en) Out-of-order DRAM sequencer
CN106790599B Symbiotic virtual machine communication method based on a multi-core lock-free ring buffer
CN101609438B (en) Memory system, access control method therefor, and computer program
EP2943956B1 (en) Memory device having an adaptable number of open rows
KR20210021302A (en) Refresh method in memory controller
KR20110089321A (en) Method and system for improving serial port memory communication latency and reliability
CN103714026B Memory access method and device supporting data exchange at the original address
KR101511972B1 (en) Methods and apparatus for efficient communication between caches in hierarchical caching design
CN102207916A (en) Instruction prefetch-based multi-core shared memory control equipment
EP3732578B1 (en) Supporting responses for memory types with non-uniform latencies on same channel
CN104298628A (en) Data storage device arbitration circuit and method for concurrent access
CN107391392A Garbage collection optimization method based on the concurrency features of flash memory devices
US20130151795A1 (en) Apparatus and method for controlling memory
CN105183662A (en) Cache consistency protocol-free distributed sharing on-chip storage framework
US10162522B1 (en) Architecture of single channel memory controller to support high bandwidth memory of pseudo channel mode or legacy mode
CN108139994B (en) Memory access method and memory controller
CN106372029A Interrupt-based point-to-point on-chip communication module
CN104681082B Read/write conflict avoidance method in a single-port memory device and semiconductor chip thereof
Jeong et al. A technique to improve garbage collection performance for NAND flash-based storage systems
Kim et al. Networked SSD: Flash Memory Interconnection Network for High-Bandwidth SSD
CN112948322A (en) Virtual channel based on elastic cache and implementation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140219