CN103647807B - Information caching method, apparatus, and communication device - Google Patents

Information caching method, apparatus, and communication device

Info

Publication number
CN103647807B
CN103647807B (grant) · CN201310617002.0A (application) · CN103647807A (publication)
Authority
CN
China
Prior art keywords
associated data
buffer unit
queue
queue pair
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310617002.0A
Other languages
Chinese (zh)
Other versions
CN103647807A (en)
Inventor
彭胜勇
程子明
石仔良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201310617002.0A
Publication of CN103647807A
Priority to PCT/CN2014/086497 (WO2015078219A1)
Application granted
Publication of CN103647807B
Legal status: Active
Anticipated expiration

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/50 — Network services
    • H04L67/56 — Provisioning of proxy services
    • H04L67/568 — Storing data temporarily at an intermediate stage, e.g. caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Communication Control (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the invention disclose an information caching method, apparatus, and communication device in the field of communication technology. When a communication device containing an RDMA module transmits data in the form of queue pairs, the first time a given piece of associated data of a queue pair is needed, the RDMA module fetches that associated data and stores it, together with its priority information, in a buffer unit of the RDMA module. When the associated data is needed again, the RDMA module no longer has to fetch it from the memory module through the processor interface; it reads it directly from the buffer unit, avoiding frequent read/write operations between the RDMA module and memory. At the same time, because the RDMA module caches the associated data of queue pairs according to priority, it preferentially caches the associated data of higher-priority queue pairs when its cache space is limited.

Description

Information caching method, apparatus, and communication device
Technical field
The present invention relates to the field of communication technology, and in particular to an information caching method, apparatus, and communication device.
Background technology
Remote Direct Memory Access (RDMA) technology can reduce the latency of data processing between servers and lighten the data-transmission load on a server's Central Processing Unit (CPU). Specifically, a server in an RDMA system includes a CPU, a memory module such as Dual In-line Memory Modules (DIMM), and a Host Channel Adapter (HCA), and servers are interconnected through cables between their HCAs to communicate with one another.
The HCA in one server obtains the data to be sent from the memory module through the CPU and transmits it to the HCA of another server, and the HCA of the receiving server stores the received data into its memory module through the CPU. During data transmission the CPU is therefore only responsible for tasks such as writing data into the memory module and writing the transmission task into a send queue, while the control processing of the data-transmission protocol, such as parsing data messages, encapsulating data messages, and acknowledging data messages, is carried out by the HCA. Because no CPU involvement is required, little CPU processing capacity is consumed and the CPU load is reduced.
However, in the data transmission described above, when the HCA in a server sends data it must obtain the information associated with that data, such as the Memory Translate Protect Table (MTPT), from the memory module through the CPU, so frequent reads and writes occur between the HCA, the CPU, and the memory module.
Summary of the invention
Embodiments of the present invention provide an information caching method, apparatus, and communication device, which reduce frequent operations between the processor and the RDMA-capable module in a communication device.
A first aspect of the embodiments of the present invention provides an information caching method, applied to a Remote Direct Memory Access (RDMA) module included in a communication device, the method comprising:
obtaining associated data of a queue pair used by the communication device to transmit data;
determining priority information of the associated data of the queue pair; and
storing the associated data of the queue pair, together with its priority information, into a buffer unit of the RDMA module.
In a first possible implementation of the first aspect of the embodiments of the present invention, determining the priority information of the associated data of the queue pair specifically includes:
determining the priority information of the associated data from a service level field or a custom field in the queue pair context of the queue pair.
In a second possible implementation of the first aspect, storing the associated data of the queue pair together with its priority information into a buffer unit of the RDMA module specifically includes:
selecting an idle buffer unit in the RDMA module as a first buffer unit, and storing the associated data of the queue pair and its priority information into the first buffer unit;
if there is no idle buffer unit in the RDMA module, selecting among the busy buffer units a buffer unit whose priority is lower than that of the associated data of the queue pair as a second buffer unit, and replacing the information in the second buffer unit with the associated data of the queue pair and its priority information;
if there are busy buffer units whose priority is the same as that of the associated data of the queue pair, selecting a buffer unit among the busy buffer units according to a preset policy as a third buffer unit, and replacing the information in the third buffer unit with the associated data of the queue pair and its priority information;
wherein the priority of a buffer unit is the priority of the associated data stored in that buffer unit.
With reference to the first aspect of the embodiments of the present invention, or the first or second possible implementation of the first aspect, in a third possible implementation the method further includes:
when the queue pair is deregistered, setting its buffer unit in the RDMA module to an idle buffer unit.
With reference to the first aspect, or any one of the first through third possible implementations of the first aspect, in a fourth possible implementation of the first aspect:
the buffer unit includes a tag field and a content field; the tag field stores the identifier of the queue pair and the priority information of the associated data, and the content field stores the associated data of the queue pair.
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect:
the associated data includes any one or more of the following: the queue pair context of the queue pair, the memory translate protect table of the transmitted data, and the complete queue context of the queue pair;
in the content field, the queue pair context, the memory translate protect table, and the complete queue context are stored in a preset order.
With reference to the first aspect, or any one of the first through fifth possible implementations of the first aspect, in a sixth possible implementation the buffer unit holds one or more pieces of associated data, and the method further includes:
updating any one or more pieces of associated data in the buffer unit.
A second aspect of the embodiments of the present invention provides an information caching apparatus, including:
an associated data obtaining unit, configured to obtain associated data of a queue pair used by the communication device to transmit data;
a priority determining unit, configured to determine priority information of the associated data of the queue pair; and
a storage unit, configured to store the associated data obtained by the associated data obtaining unit, together with the priority information determined by the priority determining unit, into a buffer unit of the information caching apparatus.
In a first possible implementation of the second aspect, the priority determining unit is specifically configured to determine the priority information of the associated data from a service level field or a custom field in the queue pair context of the queue pair.
In a second possible implementation of the second aspect, the storage unit includes:
a first storage unit, configured to select an idle buffer unit in the information caching apparatus as a first buffer unit, and to store the associated data of the queue pair and its priority information into the selected first buffer unit;
a second storage unit, configured to, when there is no idle buffer unit in the information caching apparatus, select among the busy buffer units a buffer unit whose priority is lower than that of the associated data of the queue pair as a second buffer unit, and replace the information in the second buffer unit with the associated data of the queue pair and its priority information;
a third storage unit, configured to, when there are busy buffer units whose priority is the same as that of the associated data of the queue pair, select a buffer unit among the busy buffer units according to a preset policy as a third buffer unit, and replace the information in the third buffer unit with the associated data of the queue pair and its priority information;
wherein the priority of a buffer unit is the priority of the associated data stored in that buffer unit.
With reference to the second aspect of the embodiments of the present invention, or the first or second possible implementation of the second aspect, in a third possible implementation the apparatus further includes:
a deregistration unit, configured to set the buffer unit in the information caching apparatus to an idle buffer unit when the queue pair is deregistered.
With reference to the second aspect, or any one of the first through third possible implementations of the second aspect, in a fourth possible implementation of the second aspect:
the buffer unit includes a tag field and a content field; the tag field stores the identifier of the queue pair and the priority information of the associated data, and the content field stores the associated data of the queue pair.
With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect:
the associated data includes at least one of the following: the queue pair context of the queue pair, the memory translate protect table of the transmitted data, and the complete queue context of the queue pair;
in the content field, the queue pair context, the memory translate protect table, and the complete queue context are stored in a preset order.
With reference to the second aspect, or any one of the first through fifth possible implementations of the second aspect, in a sixth possible implementation the buffer unit holds one or more pieces of associated data, and the apparatus further includes:
an updating unit, configured to update any one or more pieces of associated data in the buffer unit.
A third aspect of the embodiments of the present invention further provides a communication device, including a processor, a Remote Direct Memory Access (RDMA) module, and a memory module;
the RDMA module is connected to the processor and is the information caching apparatus of the second aspect of the embodiments of the present invention or of any one of the first through sixth possible implementations of the second aspect.
It can be seen that in this embodiment, when a communication device containing an RDMA module transmits data in the form of queue pairs, the first time a given piece of associated data of a queue pair is needed, the RDMA module fetches that associated data and stores it, together with its priority information, in a buffer unit of the RDMA module. When any of the associated data is needed again, the RDMA module no longer has to fetch it from the memory module through the processor interface; it reads it directly from the buffer unit, which avoids frequent read/write operations between the RDMA module and the memory module. Moreover, since the associated data of a queue pair corresponds to a priority level, the RDMA module caches associated data according to priority and, when its cache space is limited, preferentially caches the associated data of higher-priority queue pairs.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a communication device in an embodiment of the present invention;
Fig. 2 is a flowchart of an information caching method provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a buffer unit in the RDMA module of a communication device in an embodiment of the present invention;
Fig. 4 is a flowchart of another information caching method provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an HCA card in a communication device in an application embodiment of the present invention;
Fig. 6 is a flowchart of the buffer-unit application operation performed by the cache management module of the HCA card in a communication device of an embodiment of the present invention;
Fig. 7 is a flowchart of the read operation performed by the cache management module of the HCA card in a communication device of an embodiment of the present invention;
Fig. 8 is a flowchart of the write operation performed by the cache management module of the HCA card in a communication device of an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an information caching apparatus provided by an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of another information caching apparatus provided by an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a communication device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
An embodiment of the present invention provides an information caching method, mainly directed at information caching during data transmission by the communication device shown in Fig. 1. The communication device includes a processor (such as a CPU), a memory module (such as DIMM), and an RDMA module (such as an HCA card). The processor is mainly responsible for writing data into the memory module and writing transmission tasks into a send queue, while the RDMA module may be connected to the processor through Peripheral Component Interconnect Express (PCIE) and is mainly responsible for the control processing of the data-transmission protocol, such as parsing data messages, encapsulating data messages, and acknowledging data messages. Communication devices are interconnected through cables between their RDMA modules; a cable may be an Ethernet cable or an Infiniband cable, depending on the port type of the RDMA module.
The method of this embodiment is performed by the RDMA module in the communication device; its flowchart, shown in Fig. 2, includes the following steps:
Step 101: Obtain the associated data of a queue pair (QP) used by the communication device to transmit data. A queue pair may include a send queue (SQ) and a receive queue (RQ). The associated data of a queue pair is its configuration information and may include the information needed during data transmission, such as the Queue Pair Context (QPC), the Complete Queue Context (CQC), and the MTPT table of the data. In some embodiments the associated data may also include information such as a shared receive queue (SRQ).
It can be understood that when data needs to be transmitted, the processor in the communication device first creates a queue pair and sets its associated data, for example the queue pair context, the MTPT table of the data, and the complete queue context; the MTPT table stores the mapping between the physical addresses and the logical addresses at which the data is stored in the memory module of the communication device. The processor then writes the data to be transmitted into the memory module, and finally writes information such as the type of the transmission (for example a write operation), the start address of the data, and the length of the data into the send queue of the queue pair. In this embodiment, when setting the associated data of the queue pair, the processor also sets, in the queue pair context, priority information indicating the priority of the queue pair; specifically, the service level field of the queue pair context or another custom field may be used to indicate the priority. The priority information can be configured by the user as needed, for example by giving a higher priority to the queue pairs of tasks the user cares about. It should be noted that the associated data the processor creates varies with the data being transmitted: once a queue pair has been created and its associated data initialized, that associated data does not change during the data transmission, but the associated data of different queue pairs does differ.
After creating the queue pair, the processor notifies the RDMA module that a task needs to be executed and tells it the identifier, for example the sequence number, of the queue pair corresponding to the task. Using this identifier, the RDMA module can obtain the associated data of the queue pair from the memory module through the processor over PCIE, and then obtain from the associated data the physical address of the data to be transmitted. According to that physical address it fetches the data to be transmitted from the memory module through the processor, encapsulates it into RDMA messages, and sends the RDMA messages to other communication devices through its interface.
The RDMA module may obtain all of the associated data of the queue pair at once, or only part of it, and may store the associated data into a buffer unit of the RDMA module according to steps 102 and 103 below.
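The fetch-then-cache flow described so far can be sketched as follows. This is an illustrative model only: the `QPCache` class, its field names, and the `fetch_from_memory` callback (standing in for the PCIE/processor path to the memory module) are hypothetical and not part of the patent.

```python
class QPCache:
    """Illustrative model of the RDMA module's buffer units (hypothetical names)."""

    def __init__(self, fetch_from_memory):
        # fetch_from_memory(qp_id) stands in for the PCIE/processor path.
        self.fetch_from_memory = fetch_from_memory
        self.units = {}        # qp_id -> (priority, associated_data)
        self.memory_reads = 0  # counts fetches over the processor interface

    def get(self, qp_id):
        # First use: fetch via the processor interface and cache locally;
        # later uses are served directly from the buffer unit.
        if qp_id not in self.units:
            priority, data = self.fetch_from_memory(qp_id)
            self.memory_reads += 1
            self.units[qp_id] = (priority, data)
        return self.units[qp_id][1]
```

Repeated `get()` calls for the same queue pair trigger only one memory fetch, mirroring the claim that frequent reads between the RDMA module and the memory module are avoided.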
Step 102: Determine the priority information of the associated data of the queue pair. The priority information can be determined from the service level field or a custom field in the queue pair context contained in the obtained associated data.
Step 103: Store the associated data of the queue pair, together with its priority information, into a buffer unit of the RDMA module. Specifically, the RDMA module may first allocate, as a buffer unit, a cache space large enough to store all of the associated data of the queue pair, and the buffer unit may be set to include a tag field and a content field.
As shown in Fig. 3, the tag field stores the identifier of the queue pair and the priority information of the associated data, and may also store the identifier of the tag field, a valid bit, and so on. The identifier of the tag field uniquely determines a buffer unit, and the valid bit indicates whether the buffer unit is idle: if the buffer unit stores no data, or the stored data is invalid, the buffer unit is idle, that is, it is unused or has been deactivated; otherwise the buffer unit is busy. The content field stores the associated data of the queue pair; the RDMA module may number these pieces of associated data so that a specific one can be referred to, and the size of the content field is determined by the size of the associated data. A piece of associated data of one type may be called a member of the buffer unit. Further, the RDMA module may prescribe the storage order of each piece of associated data in the content field, so that the associated data of the queue pair, such as the queue pair context, the memory translate protect table, and the complete queue context, is stored in a preset order; in Fig. 3, for example, the order is queue pair context, complete queue context, memory translate protect table.
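As a rough sketch, the tag/content split of the buffer unit described above might be modeled like this. The field names (`valid`, `unit_id`, the member keys) and the particular member order are assumptions made for illustration, not mandated by the patent.

```python
from dataclasses import dataclass, field
from typing import Optional

# Preset storage order of the members in the content field
# (illustrative; the text describes QPC, CQC, and MTPT members).
MEMBER_ORDER = ("qpc", "cqc", "mtpt")

@dataclass
class BufferUnit:
    unit_id: int                       # tag: uniquely identifies this buffer unit
    qp_id: Optional[int] = None        # tag: identifier of the cached queue pair
    priority: Optional[int] = None     # tag: priority of the stored associated data
    valid: bool = False                # tag: valid bit; False means the unit is idle
    content: dict = field(default_factory=dict)  # members, kept in MEMBER_ORDER

    def is_idle(self) -> bool:
        return not self.valid

    def store(self, qp_id, priority, associated_data):
        # Keep the members in the preset order of the content field.
        self.content = {k: associated_data[k] for k in MEMBER_ORDER if k in associated_data}
        self.qp_id, self.priority, self.valid = qp_id, priority, True
```

The `store` method rebuilds the content field in the preset order regardless of the order in which the members were supplied, matching the "preset order" requirement of the description.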
It can be seen that in this embodiment, when a communication device containing an RDMA module transmits data in the form of queue pairs, the first time a given piece of associated data of a queue pair is needed, the RDMA module fetches that associated data and stores it, together with its priority information, in a buffer unit of the RDMA module. When any of the associated data is needed again, the RDMA module no longer has to fetch it from the memory module through the processor interface; it reads it directly from the buffer unit, which avoids frequent read/write operations between the RDMA module and the memory module. Moreover, since the associated data of a queue pair corresponds to a priority level, the RDMA module caches associated data according to priority and, when its cache space is limited, preferentially caches the associated data of higher-priority queue pairs.
As shown in Fig. 4, in a specific embodiment the RDMA module may implement the above step 103 through the following steps:
Step 201: Determine whether there is an idle buffer unit in the RDMA module; if so, perform step 202; if not, continue with the determination in step 203. When determining whether an idle buffer unit exists, the RDMA module may examine the valid bit in the tag field of each buffer unit, which indicates whether the buffer unit is idle.
Step 202: Select an idle buffer unit in the RDMA module as the first buffer unit, and store the associated data of the queue pair and its priority information into the first buffer unit.
Step 203: Determine whether any busy buffer unit has a priority lower than that of the associated data of the queue pair obtained in step 101; if so, perform step 204. If not, then when there are busy buffer units whose priority equals that of the associated data of the queue pair, the RDMA module may perform step 205; whereas if the priorities of all busy buffer units are higher than that of the associated data of the queue pair, the information in those buffer units cannot be replaced, and the RDMA module does not cache the obtained associated data. The priority of a buffer unit is the priority of the associated data stored in it.
Step 204: Select a buffer unit whose priority is lower than that of the associated data of the queue pair as the second buffer unit, and replace the information in the second buffer unit with the associated data of the queue pair and its priority information.
Step 205: Select a buffer unit among the busy buffer units according to a preset policy as the third buffer unit, and replace the information in the third buffer unit with the associated data of the queue pair and its priority information. The preset policy may include algorithms such as Least Recently Used (LRU).
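Steps 201 through 205 amount to the victim-selection policy sketched below. The list-of-dicts bookkeeping, the choice of the lowest-priority unit in step 204, and the use of a `last_used` timestamp as the LRU criterion in step 205 are illustrative assumptions; the patent only requires some preset policy such as LRU.

```python
def select_victim(units, new_priority):
    """Return a buffer unit to (re)use for associated data of new_priority,
    or None if every busy unit has strictly higher priority (do not cache).

    Each unit is a dict with keys: 'valid', 'priority', 'last_used'.
    """
    # Steps 201/202: prefer an idle unit if one exists.
    idle = [u for u in units if not u["valid"]]
    if idle:
        return idle[0]
    busy = [u for u in units if u["valid"]]
    # Steps 203/204: replace a strictly lower-priority unit if one exists.
    lower = [u for u in busy if u["priority"] < new_priority]
    if lower:
        return min(lower, key=lambda u: u["priority"])
    # Step 205: among equal-priority units, apply the preset policy (LRU here).
    equal = [u for u in busy if u["priority"] == new_priority]
    if equal:
        return min(equal, key=lambda u: u["last_used"])
    # All busy units have higher priority: the new data is not cached.
    return None
```

Returning `None` corresponds to the case in step 203 where all busy buffer units outrank the new associated data, so the RDMA module declines to cache it.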
It can be seen that through steps 201 to 205, the RDMA module can ensure that the associated data of higher-priority queue pairs is cached.
It should be noted that since the RDMA module allocates the resources of a buffer unit according to the size of the associated data of the queue pair that first needs to be stored, the granularity of a busy buffer unit in the RDMA module matches the size of the associated data it currently stores. Therefore, in steps 204 and 205 the RDMA module must select, for the information obtained in steps 101 and 102, a buffer unit whose storage space is large enough yet not wasteful, that is, a buffer unit whose size equals the total size of the associated data and priority information to be stored, so that when the information is replaced, the associated data of the queue pair and its priority information can be fully stored in the selected buffer unit.
It should further be noted that after the associated data of the queue pair and its priority information have first been stored into the buffer unit, some of the associated data may change during subsequent data transmission, and the RDMA module may then update the associated data stored in the buffer unit; one or more pieces of associated data may be modified. When the queue pair is deregistered, in order to improve the utilization of buffer units in the RDMA module, the RDMA module may set the above buffer unit to an idle buffer unit, whereupon the data stored in it becomes invalid; that buffer unit can then store the associated data of a queue pair of any priority, that is, the data in the buffer unit can be replaced by any associated data. The deregistration of a queue pair may be initiated as a deregistration command issued by the driver of the RDMA module included in the communication device; when the RDMA module receives the deregistration command it deregisters the corresponding queue pair. The deregistration command may be triggered by the user calling a queue pair deregistration function of the RDMA module's driver.
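Deregistration, as described, just clears the valid bit so the unit counts as idle and can later be refilled by a queue pair of any priority. A minimal sketch, with the function and field names assumed for illustration:

```python
def deregister_queue_pair(units, qp_id):
    """Invalidate every buffer unit holding the deregistered queue pair's data.

    An invalidated unit counts as idle and may later be refilled with
    associated data of any priority. Returns the number of units freed.
    """
    freed = 0
    for u in units:
        if u.get("valid") and u.get("qp_id") == qp_id:
            u["valid"] = False  # clear the valid bit; stored data becomes invalid
            freed += 1
    return freed
```

Note that the stored bytes are not erased; only the valid bit changes, which is why the text says the unit can subsequently be replaced by any associated data.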
The information caching method provided by the embodiments of the present invention is illustrated below with a specific embodiment, mainly applied in the communication device shown in Fig. 1. The processor in the communication device is a CPU, and the RDMA module is an HCA card; the HCA card may be a Field Programmable Gate Array (FPGA) or a hardened Application Specific Integrated Circuit (ASIC), and the HCA card and the CPU are connected through a PCIE interface. The structure of the HCA card in this embodiment, shown in Fig. 5, includes a PCIE interface, a protocol engine, a transceiver interface, a queue management module, and a cache management module, wherein:
(1) The PCIE interface is the interface between the HCA card and the CPU; through it the associated data of queue pairs can be read from the memory module via the CPU, and the data to be transmitted can be read and written.
(2) The protocol engine handles the tasks issued through the PCIE interface and the RDMA messages received from the cable, and terminates the RDMA protocol.
Specifically, when the protocol engine receives a task issued through the PCIE interface, it analyzes the task and reads the associated data of the corresponding queue pair from the queue management module, requests the data to be transmitted from the PCIE interface according to that associated data, constructs a header according to the RDMA protocol, encapsulates the data to be transmitted into complete RDMA messages, and passes them to the transceiver interface to be sent onto the cable.
On the other hand, when the protocol engine receives an RDMA message from the transceiver interface, it analyzes the message header, obtains the corresponding associated data from the queue management module according to the sequence number of the destination queue pair identified in the header, and writes the data into the memory module of the communication device through the PCIE interface. At the same time, the protocol engine may return a response message to the transceiver interface, or read the target data from the memory module through the PCIE interface, construct a read-response message, and pass it to the transceiver interface to be sent onto the cable.
(3)Transceiver interface is the connecting interface with HCA cards in other communication equipments, realizes that agreement is drawn by the transceiver interface Conversion between the logical message and the physical signalling of coincidence circuit electric rule held up, and then can realize and be set with other communicate Communication between standby.
(4) The queue management module obtains the associated data of the needed queue pair from the cache management module; if the associated data does not exist on the HCA card, it obtains the associated data from the storage module through the protocol engine and the PCIE interface, and applies to the cache management module for a buffer unit to store the obtained associated data.

(5) The cache management module is used to respond to the commands sent by the queue management module to apply for, search, release, read and write buffer units.
Specifically, when the queue management module fails to read the associated data of the needed queue pair from the cache management module, after the associated data is obtained from the storage module of the communication device through the PCIE interface, it can send a command to the cache management module to apply for a buffer unit. The cache management module then determines, according to its internal algorithm, whether to allow the associated data to be cached; if allowed, it can return an identifier such as the sequence number of a buffer unit to the queue management module; otherwise, it returns an illegal value indicating that the associated data is not allowed to be cached.

When the queue management module needs to provide associated data to the protocol engine, it first sends a search command to the cache management module to confirm whether the corresponding associated data exists. In another case, when the queue management module receives a deregister-queue-pair operation issued through the PCIE interface, it can send a search command to the cache management module to confirm whether a buffer unit corresponding to the queue pair exists. If it exists, the cache management module returns the sequence number of the buffer unit storing the associated data; if it does not exist, it returns an invalid sequence number.

When the queue management module receives a deregister-queue-pair operation and has found that a buffer unit corresponding to the queue pair exists, the queue management module can issue a command to the cache management module to release the buffer unit. When a buffer unit holds the associated data of a queue pair that the queue management module needs to use, the queue management module can issue a command to the cache management module to read information out of the buffer unit, so as to obtain the corresponding associated data. When the cache management module allows the associated data corresponding to a certain queue pair to be stored, after the sequence number of a buffer unit has been provided, the queue management module can write the associated data into the buffer unit with the given sequence number through the cache management module.
When two such communication devices (e.g., communication devices A and B) transmit data, this can specifically be realized by the following method:

(1) Communication device A sends the data to be transmitted

A1: The CPU in communication device A writes the data to be transmitted into the storage module, writes the transmission task into the send queue of the storage module, and notifies the HCA card through a doorbell to perform the corresponding task. The doorbell can be a notification message that can include the sequence number of the queue pair.

B1: The PCIE interface in the HCA card passes the doorbell to the protocol engine; after the protocol engine parses the doorbell and obtains the sequence number of the queue pair included in it, the associated data of the queue pair is obtained from the storage module of communication device A through the PCIE interface, and can include at least one of the following: the queue context of the queue pair, the MTPT table of the data to be transmitted, and the completion queue context of the queue pair.
C1: While step B1 is performed, the protocol engine in the HCA card can trigger the queue management module to issue a command to the cache management module to apply for a buffer unit, with the sequence number and priority information of the queue pair carried in the command. If the buffer units in the HCA card store associated data according to the structure of Fig. 3 above, then after the cache management module receives the command, it can perform the buffer-unit application operation according to the following steps; the flowchart is shown in Fig. 6 and includes:

C11: The cache management module reconfirms, according to the sequence number of the queue pair in the received command, whether the associated data corresponding to the queue pair is already stored in a buffer unit; if it is, it returns already-present information to the queue management module and ends the flow; if not, step C12 is performed.

C12: Judge whether there is an idle buffer unit, mainly by traversing the valid bits in the buffer units, where the valid bit is used to indicate whether a buffer unit is idle. If there is, return the sequence number of one of the idle buffer units to the queue management module; if not, perform step C13.

C13: Judge whether there is a buffer unit whose priority is lower than the priority indicated by the priority information in the command received in step C11; specifically, the priority bits in the buffer units can be traversed and compared with the priority information in the received command. If such a unit exists, perform step C14; if not, return information to the queue management module indicating that caching is not allowed.

C14: Select a busy buffer unit with lower priority, return the sequence number of that buffer unit to the queue management module, and allow the associated data of the queue pair to be stored into that buffer unit.
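The application flow C11–C14 above can be sketched as follows. This is a minimal illustration under the assumptions that a larger number means a higher priority and that each buffer unit carries a valid bit, a queue pair sequence number and a priority bit; the names and return values are invented for the example, not taken from an actual HCA implementation.

```python
ALREADY_PRESENT = "already_present"
NOT_ALLOWED = "not_allowed"

class BufferUnit:
    def __init__(self, seq):
        self.seq = seq          # buffer unit sequence number
        self.valid = False      # valid bit: False means the unit is idle
        self.qp_seq = None      # sequence number of the cached queue pair
        self.priority = None    # priority of the cached associated data

def apply_for_buffer_unit(units, qp_seq, priority):
    # C11: if this queue pair is already cached, report it
    for u in units:
        if u.valid and u.qp_seq == qp_seq:
            return ALREADY_PRESENT
    # C12: traverse the valid bits looking for an idle unit
    for u in units:
        if not u.valid:
            return u.seq
    # C13: look for a busy unit whose priority is lower than the request's
    lower = [u for u in units if u.valid and u.priority < priority]
    if not lower:
        return NOT_ALLOWED
    # C14: pick one lower-priority victim and hand its sequence number back
    victim = min(lower, key=lambda u: u.priority)
    return victim.seq
```

Here the lowest-priority busy unit is chosen as the victim in C14; the embodiment only requires *some* unit of lower priority, so other selection policies are equally consistent with the flow.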
D1: When the queue management module receives the sequence number of the buffer unit and the information allowing the associated data of the queue pair to be stored into that cache, it can initiate a write-buffer-unit request to the cache management module; the request can also carry the sequence number and priority information of the queue pair, the sequence number of the buffer unit, and the associated data to be stored, so that after the cache management module receives the request, it stores the priority information, the sequence number of the queue pair and the associated data into the corresponding buffer unit according to the structure shown in Fig. 3 above.

E1: The protocol engine in the HCA card analyzes the queue context in the associated data to obtain the physical address of the send queue of the queue pair; the protocol engine can then read the work queue element (Work Queue Element, WQE) of the send queue from the storage module of communication device A through the PCIE interface according to that physical address, analyze the WQE, and determine the source virtual address and transmission length of the data to be transmitted.
F1: The protocol engine triggers the queue management module to initiate a read request to the cache management module, carrying in the read request the sequence number of the queue pair and information on which member of the associated data needs to be read, such as the MTPT table. After the cache management module receives the read request, it can perform the read operation according to the following steps; the flowchart is shown in Fig. 7 and includes:

F11: After the cache management module receives the read request, it determines whether a buffer unit corresponding to the queue pair exists according to the sequence number of the queue pair in the request; if it does not exist, it returns not-present information to the queue management module; if it exists, step F12 is performed.

F12: The cache management module finds the corresponding buffer unit according to the sequence number of the queue pair and, according to the member information that needs to be read in the read request, calculates the offset and read length of the member within the buffer unit, so as to read out the member.

F13: The cache management module returns the member read from the buffer unit, i.e. the MTPT table, to the protocol engine through the queue management module.
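Step F12 — locating a single member inside a buffer unit — amounts to an (offset, length) lookup in the preset layout of the content field. The sketch below is illustrative only: the member sizes and the layout table are invented, since the embodiment leaves the concrete sizes to Fig. 3.

```python
MEMBER_LAYOUT = {                         # preset storage order (cf. Fig. 3)
    "queue_context":            (0, 64),  # (offset, length) in bytes
    "mtpt":                     (64, 32),
    "completion_queue_context": (96, 64),
}

def read_member(content, member):
    """Return the bytes of one member of the cached associated data."""
    offset, length = MEMBER_LAYOUT[member]
    return content[offset:offset + length]
```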
G1: After the protocol engine obtains the MTPT table, since the MTPT table describes the correspondence between the physical addresses and virtual addresses of the data to be transmitted, the protocol engine can determine the physical address of the data to be transmitted according to the source virtual address obtained in step E1 and the MTPT table; then, according to that physical address and the length of the data to be transmitted obtained in step E1 above, it obtains the data to be transmitted from the storage module of communication device A through the PCIE interface.

H1: The protocol engine can package the data to be transmitted into an RDMA message and transmit the RDMA message from the transceiver interface onto the cable connected to communication device B. The RDMA message can include the following information: the transmitted data, the operation type (a write operation in this embodiment), the sequence number of the destination queue pair to be operated on, and the destination virtual address.
(2) Communication device B receives the transmitted data

A2: After the transceiver interface included in the HCA card in communication device B receives the RDMA message, it is passed to the protocol engine, which analyzes the RDMA message and obtains the transmitted data, the operation type (a write operation in this embodiment), the sequence number of the destination queue pair to be operated on, and the destination virtual address. The destination queue pair to be operated on corresponds to the queue pair used when communication device A transmitted the data: when communication device A set up the queue pair for transmitting data, it could set the send queue with which communication device A transmits data and the receive queue with which communication device B receives data, so the information of the destination queue pair to be operated on is the information of the receive queue.

B2: The protocol engine triggers the queue management module, according to the sequence number of the destination queue pair, to initiate reading of a certain member, such as the MTPT table, or of multiple members of the associated data of the queue pair from the cache management module; the cache management module can then return the read MTPT table to the protocol engine according to the method shown in Fig. 7 above. If the associated data of the destination queue pair is not stored in any buffer unit, the protocol engine can obtain the associated data corresponding to the destination queue pair, including the MTPT table, from the storage module of communication device B through the PCIE interface.

It should be noted that the associated data of the queue pair in communication device B can be set before data is transmitted with communication device A.

C2: The protocol engine obtains the corresponding physical address according to the obtained MTPT table and the destination virtual address, and then writes the data transmitted in the RDMA message into the storage module at that physical address through the PCIE interface.

D2: The protocol engine prepares a response message and passes it through the transceiver interface onto the cable connected to communication device A; the response message can include the following information: the sequence number of the queue pair.
(3) Communication device A receives the response message

A3: After the transceiver interface included in the HCA card in communication device A receives the response message, it is passed to the protocol engine, which analyzes the response message and obtains the sequence number of the queue pair.

B3: The protocol engine can trigger the queue management module to initiate to the cache management module the reading of the associated data corresponding to the queue pair, including information such as the completion queue context; after the cache management module receives the request to read the associated data of the queue pair, it can return the read associated data to the protocol engine according to the method shown in Fig. 7 above.

C3: The protocol engine generates a completion queue element (CQE) according to the completion queue context, and at the same time optionally generates a completion event queue element (Complete Event Queue Element, CEQE) and reports an interrupt, etc. The CPU, by polling the CQE or responding to the interrupt (when a CEQE is generated and an interrupt raised), learns that a completion queue entry has been generated, and the CPU ends this data-transmission task.

D3: Meanwhile, after multiple data transfers between communication devices A and B, the protocol engine can trigger the queue management module to initiate a release request to the cache management module, with the sequence number of the queue pair carried in the release request; the cache management module then finds the buffer unit corresponding to the queue pair and carries out the deregistration of the queue pair, i.e. sets the valid bit in the buffer unit to invalid, thereby setting the buffer unit to an idle buffer unit.
It should be noted that, during the transmission of data between the above two communication devices, if some associated data of a queue pair in a buffer unit changes, the queue management module included in the HCA card of the communication device can initiate a write request to the cache management module, with the sequence number of the queue pair and the information of the associated data to be written included in the write request. After the cache management module receives the write request, it can perform the write operation according to the following steps; the flowchart is shown in Fig. 8 and includes:

A4: The cache management module searches, according to the sequence number of the queue pair in the received request, whether a buffer unit corresponding to that sequence number exists; if it does not exist, it returns not-present information to the queue management module; if it exists, step B4 is performed.

B4: Find the corresponding buffer unit according to the sequence number of the queue pair and, according to the information of the associated data to be written in the write request, calculate the offset and length of the associated data within the buffer unit.

C4: According to the offset calculated in step B4, write the associated data to be written into the corresponding position in the buffer unit.
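Steps B4 and C4 mirror the read path: the write request names a member, the cache management module computes its offset and length in the preset layout, and the new value is written in place. As before, the layout table and member sizes below are assumptions for illustration only.

```python
MEMBER_LAYOUT = {                         # preset storage order (cf. Fig. 3)
    "queue_context":            (0, 64),  # (offset, length) in bytes
    "mtpt":                     (64, 32),
    "completion_queue_context": (96, 64),
}

def write_member(content, member, data):
    """B4: compute offset/length; C4: write the member in place."""
    offset, length = MEMBER_LAYOUT[member]
    if len(data) != length:
        raise ValueError("update must match the member's size in the layout")
    content[offset:offset + length] = data
```

Because members sit at fixed offsets, an update to e.g. the MTPT table never disturbs the queue context or completion queue context stored in the same unit.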
The embodiment of the present invention also provides an information caching apparatus, which supports RDMA operations; its structural schematic diagram is shown in Fig. 9 and includes:

An associated data acquiring unit 10, used to obtain the associated data of the queue pair with which the communication device transmits data.

A priority determining unit 11, used to determine the priority information of the associated data of the queue pair; the priority determining unit 11 can specifically determine the priority information from the service class field or a custom field of the queue context of the queue pair.

A storage unit 12, used to store the associated data of the queue pair obtained by the associated data acquiring unit 10 and the priority information of the associated data determined by the priority determining unit 11 correspondingly into the buffer unit 13 of the information caching apparatus.

In this embodiment, the above buffer unit can include a tag field and a content field. The tag field is used to store the identifier of the queue pair and the priority information of the associated data, and can also store a buffer unit identifier and a valid bit; the content field is used to store the associated data of the queue pair. The associated data includes information such as the queue context of the queue pair, the memory translation and protection table of the transmitted data, and the completion queue context of the queue pair, and in the content field the queue context, memory translation and protection table and completion queue context are stored in a preset order, for example the order shown in Fig. 3.
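The tag-field/content-field split described above can be pictured with the following minimal sketch; all names are illustrative and the member types are simplified to raw bytes.

```python
from dataclasses import dataclass, field

@dataclass
class TagField:
    unit_id: int            # buffer unit identifier
    qp_id: int = 0          # identifier of the cached queue pair
    priority: int = 0       # priority information of the associated data
    valid: bool = False     # valid bit: False marks the unit as idle

@dataclass
class ContentField:
    # the three members, stored in the preset order of Fig. 3
    queue_context: bytes = b""
    mtpt_table: bytes = b""
    completion_queue_context: bytes = b""

@dataclass
class CacheUnit:
    tag: TagField
    content: ContentField = field(default_factory=ContentField)
```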
It can be seen that, in the information caching apparatus of this embodiment, during the transmission of data, if some associated data of a queue pair needs to be used for the first time, the associated data acquiring unit 10 can obtain the associated data of the queue pair with which the communication device transmits data, and the storage unit 12 stores it, together with its priority information, correspondingly into the buffer unit 13 of the information caching apparatus. Thus, when the associated data subsequently needs to be used, the information caching apparatus no longer needs to obtain it from the storage module of the communication device through the interface with the processor, but obtains it directly from the buffer unit, avoiding frequent read/write operations between the information caching apparatus and the storage module. At the same time, since the associated data of one queue pair can correspond to one priority level, the information caching apparatus caches the associated data of queue pairs according to priority level, so that, when the cache space in the information caching apparatus is limited, the associated data of high-priority queue pairs is cached preferentially.
As shown in Fig. 10, in a specific embodiment, the information caching apparatus, in addition to the structure shown in Fig. 9, can also include a deregistration unit 14 and an updating unit 15, and the storage unit 12 can be realized by a first storage unit 120, a second storage unit 121 and a third storage unit 122; specifically:

The first storage unit 120 is used to select an idle buffer unit among the buffer units 13 of the information caching apparatus as the first buffer unit, and to store the associated data of the queue pair obtained by the associated data acquiring unit 10 and the priority information of the associated data determined by the priority determining unit 11 into the first buffer unit, where the priority of a buffer unit is consistent with the priority of the associated data stored in that buffer unit.

The second storage unit 121 is used, if there is no idle buffer unit among the buffer units 13 of the information caching apparatus, to select among the busy buffer units a buffer unit whose priority is lower than the priority of the associated data of the queue pair as the second buffer unit, and to replace the information in the second buffer unit with the associated data of the queue pair obtained by the associated data acquiring unit 10 and the priority information of the associated data determined by the priority determining unit 11.

The third storage unit 122 is used, if there is a busy buffer unit whose priority is the same as the priority of the associated data of the queue pair, to select a buffer unit among the busy buffer units according to a preset policy as the third buffer unit, and to replace the information in the third buffer unit with the associated data of the queue pair obtained by the associated data acquiring unit 10 and the priority information of the associated data determined by the priority determining unit 11.

The deregistration unit 14 is used, when a queue pair of the communication device is deregistered, to set the buffer unit in the information caching apparatus storing the associated data of the corresponding queue pair to an idle buffer unit, which can then store the associated data of a queue pair of any priority. Specifically, the deregistration unit 14 can modify the valid bit in the buffer unit to indicate that the buffer unit is an idle buffer unit.

The updating unit 15 is used to update any one or more items of associated data in the buffer unit 13.

It should be noted that, since the information caching apparatus, when first allocating the resources of a buffer unit, allocates according to the size of the associated data of the queue pair to be stored, the granularity of a busy buffer unit is consistent with the size of the associated data it currently stores. Therefore, when the second storage unit 121 and the third storage unit 122 replace the information of a buffer unit, they need to select for the associated data obtained by the associated data acquiring unit 10 a buffer unit with sufficiently large storage space, i.e. the size of the selected buffer unit is greater than or equal to the total size of the associated data to be stored and its priority information, so that, when the information is replaced, the associated data of the queue pair and the priority information of the associated data can be completely stored into the selected buffer unit.
The embodiment of the present invention also provides a communication device, whose structural schematic diagram is shown in Fig. 11, including a memory 20, an input device 23, an output device 24 and an RDMA module 22 each connected to a processor 21, where:

The memory 20 is used to store the data input from the input device 23, and can also store information such as the files necessary for the processor 21 to process data. The input device 23 and the output device 24 are ports through which the communication device communicates with other devices, and can also include external equipment of the communication device such as a display, keyboard, mouse and printer.

The processor 21 can be used to write the data to be transmitted into the memory 20 and to write the transmission task into the send queue.

The RDMA module 22 can be connected with the output device 24 and thereby connected to other communication devices, and is used to obtain the associated data of the queue pair with which the communication device transmits data; to determine the priority information of the associated data of the queue pair; and to store the associated data of the queue pair and the priority information of the associated data correspondingly into the buffer unit of the RDMA module 22, where the RDMA module 22 can determine the priority information of the associated data from the service class field or a custom field of the queue context of the queue pair. Thus, when any associated data subsequently needs to be used, the RDMA module 22 no longer needs to obtain it from the memory 20 through the interface with the processor 21 of the communication device, but obtains it directly from the buffer unit, avoiding frequent read/write operations between the RDMA module 22 and the memory 20. At the same time, since the associated data of one queue pair can correspond to one priority level, the RDMA module 22 caches the associated data of queue pairs according to priority level, so that, when the cache space in the RDMA module 22 is limited, the associated data of high-priority queue pairs is cached preferentially.

In this embodiment, the buffer unit in the RDMA module 22 can include a tag field and a content field. The tag field is used to store the identifier of the queue pair and the priority information of the associated data, and can also store a buffer unit identifier and a valid bit; the content field is used to store the associated data of the queue pair. The associated data includes information such as the queue context of the queue pair, the memory translation and protection table of the transmitted data, and the completion queue context of the queue pair, and in the content field the queue context, memory translation and protection table and completion queue context are stored in a preset order, for example the order shown in Fig. 3.

Further, when storing the associated data and its priority information, the RDMA module 22 can select an idle buffer unit among the buffer units of the RDMA module 22 as the first buffer unit, and store the associated data of the queue pair and the priority information of the associated data into the first buffer unit, where the priority of a buffer unit is consistent with the priority of the associated data stored in that buffer unit. If there is no idle buffer unit among the buffer units of the RDMA module 22, a buffer unit whose priority is lower than the priority of the associated data of the queue pair is selected among the busy buffer units as the second buffer unit, and the information in the second buffer unit is replaced with the associated data of the queue pair and the priority information of the associated data. If the priority of a busy buffer unit is the same as the priority of the associated data of the queue pair, a buffer unit is selected among the busy buffer units according to a preset policy as the third buffer unit, and the information in the third buffer unit is replaced with the associated data of the queue pair and the priority information of the associated data.

It should be noted that, since the RDMA module 22, when first allocating the resources of a buffer unit, allocates according to the size of the associated data of the queue pair to be stored, the size of a busy buffer unit is consistent with the size of the associated data it currently stores. Therefore, when replacing the information of a buffer unit, the RDMA module 22 needs to select for the associated data a buffer unit with sufficiently large storage space, i.e. the size of the selected buffer unit is greater than or equal to the total size of the associated data to be stored and the priority information of the associated data, so that, when the information is replaced, the associated data of the queue pair and the priority information of the associated data can be completely stored into the selected buffer unit.
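The size constraint above, combined with the lower-priority replacement rule, can be sketched as a single victim-selection check; the dictionary fields are invented for the example, and a larger number again stands for a higher priority.

```python
def pick_victim(busy_units, needed_size, new_priority):
    """Return a lower-priority busy unit large enough for the new data, or None."""
    eligible = [u for u in busy_units
                if u["capacity"] >= needed_size and u["priority"] < new_priority]
    if not eligible:
        return None
    # prefer the lowest-priority unit among the eligible ones
    return min(eligible, key=lambda u: u["priority"])
```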
It should further be noted that, after the associated data of the above queue pair and the priority information of the associated data are first stored into a buffer unit, some of the associated data may change during subsequent data transmission, and the RDMA module 22 can also update the associated data stored in the buffer unit. When a queue pair is deregistered, in order to improve the utilization of buffer units in the RDMA module 22, the RDMA module 22 can set the buffer unit storing the associated data of the queue pair to an idle buffer unit, so that the buffer unit can store the associated data of a queue pair of any priority. Specifically, the RDMA module 22 can modify the valid bit in the buffer unit to indicate that the buffer unit is an idle buffer unit.

In a particular embodiment, the structure of the RDMA module 22 can be as shown for the above information caching apparatus or as in the structure of the HCA card shown in Fig. 5, and is not repeated here.
In summary, the above information caching method can bring the following beneficial effects:

1. The associated data of high-priority queue pairs is preferentially stored in the buffer units of the RDMA module, so that high-priority tasks can always hit in the buffer units, improving the performance of high-priority tasks. This provides users with a mechanism whereby the priority of the queue pairs of the tasks they care about can be set higher, so that they are replaced out as rarely as possible, improving the performance of the tasks (or processes) the user cares about.

2. The hit rate of information in the buffer units is relatively stable, and will not differ greatly because the transmission scenario differs.

3. When the associated data in a buffer unit is no longer needed, the buffer unit can be set to idle and can then store the associated data of any other priority level, so that the resources of the cache space can be utilized to the greatest extent.

4. The size of a buffer unit is the size of all the associated data of the corresponding queue pair, so only the information necessary during data transmission is stored, without storing other information, which suits the characteristics of RDMA transmission and achieves higher efficiency.

5. The associated data stored in a buffer unit can be read and written individually by member, for example reading one or more members of the associated data, which suits the characteristics of RDMA transmission.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant RDMA hardware; the program can be stored in a computer-readable storage medium, and the storage medium can include: read-only memory (ROM), random access memory (RAM), magnetic disk or optical disc, etc.; and the relevant RDMA hardware can be realized by a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC), etc.

The information caching method, apparatus and communication device provided by the embodiments of the present invention have been described above in detail. Specific examples are used herein to set forth the principles and embodiments of the present invention, and the explanation of the above embodiments is only intended to help understand the method of the present invention and its core concept. Meanwhile, for those of ordinary skill in the art, there will be changes in specific embodiments and application scope according to the ideas of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (9)

1. An information caching method, characterized in that it is applied in a remote direct memory access (RDMA) module included in a communication device, the method comprising:

obtaining the associated data of the queue pair with which the communication device transmits data;

determining the priority information of the associated data of the queue pair;

storing the associated data of the queue pair and the priority information of the associated data correspondingly into a buffer unit of the RDMA module;

wherein the storing of the associated data of the queue pair and the priority information of the associated data correspondingly into a buffer unit of the RDMA module specifically comprises:

selecting an idle buffer unit in the RDMA module as a first buffer unit, and storing the associated data of the queue pair and the priority information of the associated data into the first buffer unit;

if there is no idle buffer unit in the RDMA module, selecting among the busy buffer units a buffer unit whose priority is lower than the priority of the associated data of the queue pair as a second buffer unit, and replacing the information in the second buffer unit with the associated data of the queue pair and the priority information of the associated data;

if there is a busy buffer unit whose priority is the same as the priority of the associated data of the queue pair, selecting a buffer unit among the busy buffer units according to a preset policy as a third buffer unit, and replacing the information in the third buffer unit with the associated data of the queue pair and the priority information of the associated data;

wherein the priority of a buffer unit is consistent with the priority of the associated data stored in the buffer unit;

wherein the buffer unit comprises a tag field and a content field, the tag field being used to store the identifier of the queue pair and the priority information of the associated data, and the content field being used to store the associated data of the queue pair;

the associated data comprises any one or more of the following information: the queue context of the queue pair, the memory translation and protection table of the transmitted data, and the completion queue context of the queue pair;

in the content field, the queue context, the memory translation and protection table and the completion queue context are stored in a preset order.
2. The method according to claim 1, wherein the determining priority information of the associated data of the queue pair specifically comprises:
determining the priority information of the associated data from a service class field or a custom field of the queue pair context of the queue pair.
3. The method according to claim 1 or 2, further comprising:
when the queue pair is deregistered, setting the buffer unit in the RDMA module to an idle buffer unit.
4. The method according to claim 1 or 2, wherein the buffer unit comprises one or more types of associated data, and the method further comprises:
updating any one or more types of the associated data in the buffer unit.
5. An information caching apparatus, comprising:
an associated data obtaining unit, configured to obtain associated data of a queue pair used by a communication device to transmit data;
a priority determining unit, configured to determine priority information of the associated data of the queue pair; and
a storage unit, configured to correspondingly store the associated data of the queue pair obtained by the associated data obtaining unit and the priority information of the associated data determined by the priority determining unit into a buffer unit of the information caching apparatus;
wherein the storage unit comprises:
a first storage unit, configured to select an idle buffer unit in the information caching apparatus as a first buffer unit, and store the associated data of the queue pair and the priority information of the associated data into the first buffer unit;
a second storage unit, configured to: if no idle buffer unit exists in the information caching apparatus, select, from busy buffer units, a buffer unit whose priority is lower than the priority of the associated data of the queue pair as a second buffer unit, and replace information in the second buffer unit with the associated data of the queue pair and the priority information of the associated data; and
a third storage unit, configured to: if a busy buffer unit whose priority is the same as the priority of the associated data of the queue pair exists, select a buffer unit from the busy buffer units according to a preset policy as a third buffer unit, and replace information in the third buffer unit with the associated data of the queue pair and the priority information of the associated data;
wherein the priority of a buffer unit is consistent with the priority of the associated data stored in the buffer unit;
wherein the buffer unit comprises a tag field and a content field, the tag field being configured to store an identifier of the queue pair and the priority information of the associated data, and the content field being configured to store the associated data of the queue pair;
the associated data comprises at least one of the following: a queue pair context of the queue pair, a memory translation and protection table for transmitting data, and a completion queue context of the queue pair; and
in the content field, the queue pair context, the memory translation and protection table, and the completion queue context are stored in a preset order.
6. The apparatus according to claim 5, wherein the priority determining unit is specifically configured to determine the priority information of the associated data from a service class field or a custom field of the queue pair context of the queue pair.
7. The apparatus according to claim 5 or 6, further comprising:
a deregistration unit, configured to set the buffer unit in the information caching apparatus to an idle buffer unit when the queue pair is deregistered.
8. The apparatus according to claim 5 or 6, wherein the buffer unit comprises one or more types of associated data, and the apparatus further comprises:
an updating unit, configured to update any one or more types of the associated data in the buffer unit.
9. A communication device, comprising a processor, a remote direct memory access (RDMA) module, and a storage module;
wherein the RDMA module is connected to the processor and is the information caching apparatus according to any one of claims 5 to 8.
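The three-step placement procedure recited in claims 1 and 5 (prefer an idle buffer unit; otherwise evict lower-priority data; otherwise pick among equal-priority units by a preset policy) can be sketched as follows. This is a minimal illustration only: the class and method names are hypothetical, and least-recently-used is assumed as the "preset policy", which the claims deliberately leave open.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BufferUnit:
    # Tag field: queue-pair identifier and priority of the associated data.
    qp_id: Optional[int] = None
    priority: Optional[int] = None
    # Content field: associated data in the preset order
    # (QP context, memory translation and protection table, CQ context).
    content: tuple = (None, None, None)
    busy: bool = False
    last_used: int = 0  # bookkeeping for the illustrative LRU tie-break

class RdmaCache:
    """Hypothetical sketch of the claimed buffer-unit selection."""

    def __init__(self, n_units: int):
        self.units = [BufferUnit() for _ in range(n_units)]
        self.clock = 0

    def store(self, qp_id, priority, qp_ctx, mtpt, cq_ctx):
        """Place a queue pair's associated data; returns the unit used, or None."""
        self.clock += 1
        # Step 1: prefer an idle buffer unit (the "first buffer unit").
        unit = next((u for u in self.units if not u.busy), None)
        if unit is None:
            # Step 2: replace a busy unit holding lower-priority data.
            lower = [u for u in self.units if u.priority < priority]
            if lower:
                unit = min(lower, key=lambda u: u.priority)
            else:
                # Step 3: among equal-priority busy units, apply the preset
                # policy (LRU assumed here).
                equal = [u for u in self.units if u.priority == priority]
                if not equal:
                    return None  # every occupant outranks the new data
                unit = min(equal, key=lambda u: u.last_used)
        unit.qp_id, unit.priority = qp_id, priority
        unit.content = (qp_ctx, mtpt, cq_ctx)  # preset storage order
        unit.busy, unit.last_used = True, self.clock
        return unit

    def deregister(self, qp_id):
        """On queue-pair deregistration, mark its unit idle (claims 3 and 7)."""
        for u in self.units:
            if u.busy and u.qp_id == qp_id:
                u.busy = False
```

Note the intent of the priority scheme: associated data of latency-sensitive queue pairs is never displaced by lower-priority traffic, so a full cache rejects a new entry outright when all occupants outrank it.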
CN201310617002.0A 2013-11-27 2013-11-27 A kind of method for caching information, device and communication equipment Active CN103647807B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201310617002.0A CN103647807B (en) 2013-11-27 2013-11-27 A kind of method for caching information, device and communication equipment
PCT/CN2014/086497 WO2015078219A1 (en) 2013-11-27 2014-09-15 Information caching method and apparatus, and communication device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310617002.0A CN103647807B (en) 2013-11-27 2013-11-27 A kind of method for caching information, device and communication equipment

Publications (2)

Publication Number Publication Date
CN103647807A CN103647807A (en) 2014-03-19
CN103647807B true CN103647807B (en) 2017-12-15

Family

ID=50252960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310617002.0A Active CN103647807B (en) 2013-11-27 2013-11-27 A kind of method for caching information, device and communication equipment

Country Status (2)

Country Link
CN (1) CN103647807B (en)
WO (1) WO2015078219A1 (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103647807B (en) * 2013-11-27 2017-12-15 华为技术有限公司 A kind of method for caching information, device and communication equipment
CN104077198B (en) * 2014-06-19 2017-06-06 华为技术有限公司 Doorbell DB restoration methods and device, the input/output I/O equipment with the device
CN105808477A (en) * 2014-12-29 2016-07-27 杭州华为数字技术有限公司 Data access method and related device
CN106484314B (en) * 2015-09-01 2020-01-03 阿里巴巴集团控股有限公司 Cache data control method and device
US9880772B2 (en) * 2015-09-21 2018-01-30 Micron Technology, Inc. Systems and methods for providing file information in a memory system protocol
CN105446936B (en) * 2015-11-16 2018-07-03 上海交通大学 Distributed hashtable method based on HTM and unidirectional RDMA operation
CN106940682B (en) * 2017-03-07 2020-06-09 武汉科技大学 Embedded system optimization method based on-chip programmable memory
CN113709057B (en) * 2017-08-11 2023-05-05 华为技术有限公司 Network congestion notification method, proxy node, network node and computer equipment
CN108304214B (en) * 2017-12-13 2022-05-13 超聚变数字技术有限公司 Method and device for verifying integrity of immediate data
CN108366111B (en) * 2018-02-06 2020-04-07 西安电子科技大学 Data packet low-delay buffer device and method for switching equipment
CN108663971A (en) * 2018-06-01 2018-10-16 北京汉能光伏投资有限公司 Order retransmission method and device, solar energy system and central controller
CN109359069A (en) * 2018-09-25 2019-02-19 济南浪潮高新科技投资发展有限公司 A kind of data transmission method and device
CN111277616B (en) * 2018-12-04 2023-11-03 中兴通讯股份有限公司 RDMA-based data transmission method and distributed shared memory system
CN112463654A (en) * 2019-09-06 2021-03-09 华为技术有限公司 Cache implementation method with prediction mechanism
CN113055131B (en) * 2019-12-26 2023-06-30 阿里巴巴集团控股有限公司 Data processing method, data segmentation method, computing device and medium
CN111262917A (en) 2020-01-13 2020-06-09 苏州浪潮智能科技有限公司 Remote data moving device and method based on FPGA cloud platform
CN113381939B (en) * 2020-03-10 2022-04-29 阿里巴巴集团控股有限公司 Data transmission method and device, electronic equipment and computer readable storage medium
CN111737176B (en) * 2020-05-11 2022-07-15 瑞芯微电子股份有限公司 PCIE data-based synchronization device and driving method
CN113900972A (en) * 2020-07-06 2022-01-07 华为技术有限公司 Data transmission method, chip and equipment
CN113300979A (en) * 2021-02-05 2021-08-24 阿里巴巴集团控股有限公司 Network card queue creating method and device under RDMA (remote direct memory Access) network
CN113301285A (en) * 2021-05-11 2021-08-24 深圳市度信科技有限公司 Multi-channel data transmission method, device and system
CN115633104B (en) * 2022-09-13 2024-02-13 江苏为是科技有限公司 Data transmission method, data receiving method, device and data receiving and transmitting system
CN115858160B (en) * 2022-12-07 2023-12-05 江苏为是科技有限公司 Remote direct memory access virtualized resource allocation method and device and storage medium
CN116303173B (en) * 2023-05-19 2023-08-08 深圳云豹智能有限公司 Method, device and system for reducing RDMA engine on-chip cache and chip

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609378A (en) * 2012-01-18 2012-07-25 中国科学院计算技术研究所 Message type internal memory accessing device and accessing method thereof
CN103019962A (en) * 2012-12-21 2013-04-03 华为技术有限公司 Data cache processing method, device and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7941799B2 (en) * 2004-05-27 2011-05-10 International Business Machines Corporation Interpreting I/O operation requests from pageable guests without host intervention
US7466716B2 (en) * 2004-07-13 2008-12-16 International Business Machines Corporation Reducing latency in a channel adapter by accelerated I/O control block processing
US7506084B2 (en) * 2006-10-17 2009-03-17 International Business Machines Corporation Method for communicating with an I/O adapter using cached address translations
CN103647807B (en) * 2013-11-27 2017-12-15 华为技术有限公司 A kind of method for caching information, device and communication equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609378A (en) * 2012-01-18 2012-07-25 中国科学院计算技术研究所 Message type internal memory accessing device and accessing method thereof
CN103019962A (en) * 2012-12-21 2013-04-03 华为技术有限公司 Data cache processing method, device and system

Also Published As

Publication number Publication date
CN103647807A (en) 2014-03-19
WO2015078219A1 (en) 2015-06-04

Similar Documents

Publication Publication Date Title
CN103647807B (en) A kind of method for caching information, device and communication equipment
CN105556496B (en) Pass through the method and apparatus for the expansible direct inter-node communication that quick peripheral component interconnection high speed (Peripheral Component Interconnect-Express, PCIe) carries out
CN106415515B (en) Grouping is sent using the PIO of the optimization without SFENCE write-in sequence
CN107992436A (en) A kind of NVMe data read-write methods and NVMe equipment
CN117971715A (en) Relay coherent memory management in multiprocessor systems
US20160132541A1 (en) Efficient implementations for mapreduce systems
JP6763984B2 (en) Systems and methods for managing and supporting virtual host bus adapters (vHBAs) on InfiniBand (IB), and systems and methods for supporting efficient use of buffers with a single external memory interface.
CN106537858B (en) A kind of method and apparatus of queue management
US11935600B2 (en) Programmable atomic operator resource locking
EP3777059B1 (en) Queue in a network switch
CN105589664A (en) Virtual storage high-speed transmission method
KR20160102445A (en) Cache coherent not with flexible number of cores, i/o devices, directory structure and coherency points
CA3173088A1 (en) Utilizing coherently attached interfaces in a network stack framework
CN106662895A (en) Computer device and data read-write method for computer device
CN104252416B (en) A kind of accelerator and data processing method
CN106372013B (en) Long-distance inner access method, device and system
EP3926482A1 (en) System and method for performing transaction aggregation in a network-on-chip (noc)
WO2019095942A1 (en) Data transmission method and communication device
US11520718B2 (en) Managing hazards in a memory controller
US20240069795A1 (en) Access request reordering across a multiple-channel interface for memory-based communication queues
US12008243B2 (en) Reducing index update messages for memory-based communication queues
US20240069805A1 (en) Access request reordering for memory-based communication queues
US20240143184A1 (en) Reducing index update messages for memory-based communication queues
US11455262B2 (en) Reducing latency for memory operations in a memory controller
CN105391610B (en) GPGPU network request message Lothrus apterus sending methods

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant