CN117234998B - Multi-host data access method and system - Google Patents


Info

Publication number
CN117234998B
CN117234998B (Application CN202311174123.2A)
Authority
CN
China
Prior art keywords: data, host, request, calculation unit, queue
Legal status: Active
Application number
CN202311174123.2A
Other languages
Chinese (zh)
Other versions
CN117234998A (en
Inventor
秦保力
孟繁毅
Current Assignee
Yusur Technology Co ltd
Original Assignee
Yusur Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Yusur Technology Co ltd filed Critical Yusur Technology Co ltd
Priority to CN202311174123.2A priority Critical patent/CN117234998B/en
Publication of CN117234998A publication Critical patent/CN117234998A/en
Application granted granted Critical
Publication of CN117234998B publication Critical patent/CN117234998B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The invention provides a multi-host data access method and system. The method is applied to a network card connected to a plurality of hosts and comprises the following steps: allocating a number to each connected host, the number comprising a host number corresponding to the host, a calculation unit number corresponding to a calculation unit in the host, and a sub-calculation unit number corresponding to a sub-calculation unit in that calculation unit; constructing a joint number from the host number, calculation unit number and sub-calculation unit number of a host connected to the network card, and determining a transmission path from the joint number; and, upon acquiring a data request from a host, determining the originating position of the request from its transmission path and sending the request to the destination. By determining the originating position of the data through the joint number, the scheme both reduces the demand on network card resources and avoids the confusion of data paths that arises when multiple hosts share one network card.

Description

Multi-host data access method and system
Technical Field
The present invention relates to the field of data access technologies, and in particular, to a multi-host data access method and system.
Background
In the field of supercomputing, the separation of storage nodes from computing nodes means that clusters of computing nodes are necessarily high-density and highly concentrated. In the prior art, a network card is usually attached to each computing node, and the separation of storage and computing nodes makes interfacing the computing nodes with the network cards difficult. Multi-host technology was developed to address this, and is used to design and build extended heterogeneous computing and storage architectures.
In existing multi-host deployments, each computing node is connected to storage through its own network card; for a system with a very large number of hosts, dedicating a network card to every computing node consumes considerable resources.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a multi-host data access method that obviates or mitigates one or more of the disadvantages of the prior art.
One aspect of the present invention provides a multi-host data access method applied to a network card connected to a plurality of hosts, the method comprising the steps of:
allocating a number to each connected host, wherein the number comprises a host number corresponding to the host, a calculation unit number corresponding to a calculation unit in the host and a sub-calculation unit number corresponding to a sub-calculation unit in the calculation unit;
Constructing a joint number based on a host number, a calculation unit number and a sub calculation unit number of a host connected with the network card, and determining a transmission path based on the joint number;
The network card acquires a data request of a host, determines an initiating position of the data request based on a transmission path of the data request, and sends the data request to a destination.
By adopting this scheme, the prior-art practice of connecting one network card per host, which greatly wastes resources, is avoided: data access for multiple hosts can be realized with a single network card.
In some embodiments of the present invention, in the step of constructing the joint number based on the host number, the calculation unit number, and the sub calculation unit number of the host connected to the network card, the joint number is a joint number formed by sequentially connecting the host number, the calculation unit number, and the sub calculation unit number.
In some embodiments of the present invention, if the destination is a storage, and the data request is a read request, in the step of sending the data request to the destination, data in the storage is read based on a read location of the read request, and the data read from the storage is fed back to a sub-computing unit of a corresponding host based on a joint number corresponding to the data request.
In some embodiments of the present invention, the network card is provided with a channel allocation module, the network card is provided with a preset number of data channels, and the steps of the method further include: the channel allocation module acquires all the joint numbers and allocates a preset number of data channels to transmission paths corresponding to the joint numbers.
In some embodiments of the present invention, in the step of allocating a preset number of data channels to transmission paths corresponding to a joint number, the channel allocation module allocates the preset number of data channels to each transmission path in an average allocation manner, or the channel allocation module allocates the data channels to the transmission paths based on computing resources of a sub-computation unit in the joint number corresponding to the transmission path.
In some embodiments of the present invention, in the step of sending the data request to the destination, it is determined whether an idle data channel exists among the data channels corresponding to the transmission path of the data request; if one exists, the data request is sent to the destination through the idle data channel; if none exists, the data request is added to a transmission queue, and the data requests in the queue are sent sequentially according to the sequence of the queue.
In some embodiments of the present invention, the data request comprises a first-level request and a second-level request. If the data request is a first-level request and no idle data channel exists among the data channels corresponding to its transmission path, it is determined whether the number of data requests in the current transmission queue is less than a first preset number; if so, the data request is added to the end of the transmission queue, and the data requests in the queue are sent sequentially according to the sequence of the queue; if the number of data requests in the current transmission queue is not less than the first preset number, the data requests in the current queue are traversed, the current data request is inserted into the position immediately after the last first-level request in the current queue, and the data requests in the queue are sent sequentially according to the sequence of the queue.
In some embodiments of the present invention, if the data request is a second-level request, the data request is added to the end of the transmission queue, and the data requests in the queue are then sent sequentially according to the sequence of the queue.
In some embodiments of the present invention, the total number of the sub-computing units is an integer multiple of 4, and in the step of constructing a joint number based on the host number, the computing unit number, and the sub-computing unit number of the host connected to the network card, the total number of the joint numbers is less than or equal to the preset number of the data channels.
The second aspect of the present invention also provides a multi-host data access system comprising a computer device, the computer device comprising a processor and a memory, the memory having computer instructions stored therein, the processor being configured to execute the computer instructions stored in the memory; the system implements the steps of the method described above when the computer instructions are executed by the processor.
The third aspect of the present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps performed by the multi-host data access method described above.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate the application and, together with the description, serve to explain it.
FIG. 1 is a diagram illustrating a multi-host data access method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a data interaction architecture of a multi-host data access method according to the present invention;
FIG. 3 is a schematic diagram of the correspondence between joint numbers and data channels.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. The exemplary embodiments of the present invention and the descriptions thereof are used herein to explain the present invention, but are not intended to limit the invention.
It should be noted here that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
The method specifically comprises the following steps:
As shown in fig. 1 and 2, the present invention proposes a multi-host data access method, the method is applied to a network card connected with a plurality of hosts, and the steps of the method include:
In a specific implementation, the network card may be a data processing unit (DPU) connected to a system-on-chip (SoC); the host may be a computer or a server, etc., which is not limited here. The network card is connected to multiple hosts simultaneously, the hosts connect to the network card over a Peripheral Component Interconnect Express (PCIe) bus, and the network card is also connected to a destination, which uses the NVM Express (NVMe) communication protocol and may be a storage device.
Step S100, assigning a number to each connected host, wherein the number comprises a host number corresponding to the host, a calculation unit number corresponding to a calculation unit in the host, and a sub-calculation unit number corresponding to a sub-calculation unit in the calculation unit;
In some embodiments of the present invention, each host includes a plurality of computing units and each computing unit includes a plurality of sub-computing units; the host number corresponds to each host, the computing unit number to each computing unit, and the sub-computing unit number to each sub-computing unit. The host number, the computing unit number and the sub-computing unit number may be numeric or alphabetic, which is not limited here.
Step S200, constructing a joint number based on a host number, a calculation unit number and a sub calculation unit number of a host connected with the network card, and determining a transmission path based on the joint number;
In some embodiments of the present invention, an arbitration module is disposed in the network card, and the arbitration module constructs a joint number based on a host number, a calculation unit number, and a sub-calculation unit number of a host connected to the network card, and determines a transmission path based on the joint number.
In some embodiments of the present invention, each joint number is used as a transmission path, and the starting position of the transmission path is the corresponding sub-calculation unit in the joint number.
Step S300, the network card acquires a data request of a host, determines an initiating position of the data request based on a transmission path of the data request, and sends the data request to a destination.
In some embodiments of the present invention, the network card may on one hand send a data request of the host to the destination, and on the other hand forward the destination's feedback to the host.
By adopting this scheme, the prior-art practice of connecting one network card per host, which greatly wastes resources, is avoided: data access for multiple hosts can be realized with a single network card.
In some embodiments of the present invention, in the step of constructing the joint number based on the host number, the calculation unit number, and the sub calculation unit number of the host connected to the network card, the joint number is a joint number formed by sequentially connecting the host number, the calculation unit number, and the sub calculation unit number.
In a specific implementation process, the joint number may also be obtained by adding corresponding positions of the host number, the calculation unit number and the sub calculation unit number.
By adopting the scheme, the joint number is constructed, so that the problem of data transmission confusion caused by repeated numbers can be effectively avoided.
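The sequential concatenation described above can be sketched as bit-field packing. The field widths below (4 bits for the host number, 4 for the calculation unit number, 8 for the sub-calculation unit number) are illustrative assumptions, not values fixed by this disclosure:

```python
# Assumed field widths; the patent does not specify them.
HOST_BITS, UNIT_BITS, SUB_BITS = 4, 4, 8

def make_joint_number(host_no: int, unit_no: int, sub_no: int) -> int:
    """Form a joint number by concatenating the host number, calculation
    unit number and sub-calculation unit number in sequence."""
    assert host_no < (1 << HOST_BITS)
    assert unit_no < (1 << UNIT_BITS)
    assert sub_no < (1 << SUB_BITS)
    return (host_no << (UNIT_BITS + SUB_BITS)) | (unit_no << SUB_BITS) | sub_no

def split_joint_number(joint: int) -> tuple[int, int, int]:
    """Recover the originating position (host, unit, sub-unit) of a request
    from its joint number, as the arbitration module would."""
    sub = joint & ((1 << SUB_BITS) - 1)
    unit = (joint >> SUB_BITS) & ((1 << UNIT_BITS) - 1)
    host = joint >> (UNIT_BITS + SUB_BITS)
    return host, unit, sub
```

Because the fields occupy disjoint bit ranges, two distinct (host, unit, sub-unit) triples can never collide, which is the property that prevents data-path confusion from repeated numbers.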
In some embodiments of the present invention, if the destination is a storage, and the data request is a read request, in the step of sending the data request to the destination, data in the storage is read based on a read location of the read request, and the data read from the storage is fed back to a sub-computing unit of a corresponding host based on a joint number corresponding to the data request.
In some embodiments of the present invention, if the destination end is a storage end, the data request is a read request, where the read request includes a read location, reads data from a corresponding location of the storage end based on the read location, and feeds the data back to a sub-computing unit of a corresponding host, where the sub-computing unit further processes the data based on a preset processing program.
In some embodiments of the present invention, if the destination is a storage, the data request may also be a storage request, and data is stored in a corresponding location of the destination, the data request may also be a delete request, and the data of the corresponding location of the destination is deleted, and the same data request may also be a modify request, and the data of the corresponding location of the destination is modified.
In some embodiments of the present invention, the network card is provided with a channel allocation module, the network card is provided with a preset number of data channels, and the steps of the method further include: the channel allocation module acquires all the joint numbers and allocates a preset number of data channels to transmission paths corresponding to the joint numbers.
In a specific implementation, the invention includes the design of an NVMe data path: NVMe data-stream processing uses the joint number as the criterion for selecting the DMA target device of a data stream, classifies data streams according to their joint numbers, binds each stream to the corresponding functional module of a particular host, and notifies the other hosts through an NVMe shared register once the transmission is complete.
In a specific implementation, the invention further includes an NVMe configuration-channel design: the NVMe configuration-channel processing collects and distinguishes configuration requests from the hosts and the system-on-chip through an arbitration module, using the joint number as the criterion, and finally aggregates the configuration requests of the multiple hosts toward NVMe and other targets. The number of sub-computing units corresponding to each host is an integer multiple of 4 and can be configured according to the host's application scenario.
The invention also includes an NVMe interrupt-handling design: the NVMe interrupt only needs to complete interrupt reporting, and after the host's ack response is received, the interface number is distinguished through the arbitration module.
As shown in fig. 3, in some embodiments of the present invention, in the step of allocating a preset number of data channels to transmission paths corresponding to a joint number, the channel allocation module allocates the preset number of data channels to each transmission path in an average allocation manner, or the channel allocation module allocates the data channels to the transmission paths based on computing resources of a sub-computation unit in the joint number corresponding to the transmission path.
In a specific implementation, the processing steps of the channel allocation module are executed by the system-on-chip, and the network card is provided with 256 data channels.
In a specific implementation process, the number of the transmission paths changes in real time, the total number of the transmission paths is not greater than the total number of the data channels, and at least one data channel is allocated to each transmission path.
In some embodiments of the present invention, if the channel allocation module allocates data channels to transmission paths based on the computing resources of the sub-computing units in the corresponding joint numbers, the computing-resource weight of the sub-computing unit on each transmission path is counted, the resource proportion of each sub-computing unit is calculated, and data channels are allocated to the transmission paths according to these proportions, so that transmission paths whose sub-computing units occupy more resources receive more data channels.
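A minimal sketch of the resource-proportional allocation, assuming the 256 channels mentioned above and taking the per-path weights as given. The rounding strategy (largest fractional share first) and the one-channel minimum per path are illustrative choices consistent with, but not mandated by, the description:

```python
def allocate_channels(weights: dict[int, float], total_channels: int = 256) -> dict[int, int]:
    """Allocate data channels to transmission paths (keyed by joint number)
    in proportion to each path's computing-resource weight, guaranteeing
    at least one channel per path."""
    assert 0 < len(weights) <= total_channels  # total joint numbers <= channel count
    # Guaranteed minimum: one channel per transmission path.
    alloc = {joint: 1 for joint in weights}
    remaining = total_channels - len(weights)
    total_w = sum(weights.values())
    # Proportional shares of the remaining channels.
    shares = {j: remaining * w / total_w for j, w in weights.items()}
    for j in weights:
        alloc[j] += int(shares[j])
    # Distribute the rounding leftover by largest fractional share first.
    leftover = total_channels - sum(alloc.values())
    for j in sorted(weights, key=lambda j: shares[j] - int(shares[j]), reverse=True)[:leftover]:
        alloc[j] += 1
    return alloc
```

With equal weights this degenerates to the average-allocation mode also described above (e.g. four paths each receive 64 of 256 channels).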
In some embodiments of the present invention, in the step of sending the data request to the destination, it is determined whether an idle data channel exists among the data channels corresponding to the transmission path of the data request; if one exists, the data request is sent to the destination through the idle data channel; if none exists, the data request is added to a transmission queue, and the data requests in the queue are sent sequentially according to the sequence of the queue.
In a specific implementation, an idle data channel is a data channel among the current data channels that has no data request being sent and is not in a working state.
In some embodiments of the invention, the transmit queue is a queue that is built up of data requests to be transmitted.
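The dispatch decision — send on an idle channel of the request's path if one exists, otherwise enqueue — can be sketched as follows. Representing a channel as a dict with a `busy` flag is purely illustrative:

```python
from collections import deque

def dispatch(request, channels: list, send_queue: deque) -> bool:
    """Send `request` on the first idle channel allocated to its transmission
    path; if every channel is busy, append the request to the send queue.
    Returns True if the request was sent immediately."""
    for ch in channels:
        if ch.get("busy") is False:   # idle: no request in flight, not working
            ch["busy"] = True
            ch["request"] = request   # hand the request to the channel
            return True
    send_queue.append(request)        # no idle channel: wait in FIFO order
    return False
```

Requests parked in `send_queue` would be drained in queue order as channels of the path become idle again.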
In some embodiments of the present invention, the data request comprises a first-level request and a second-level request. If the data request is a first-level request and no idle data channel exists among the data channels corresponding to its transmission path, it is determined whether the number of data requests in the current transmission queue is less than a first preset number; if so, the data request is added to the end of the transmission queue, and the data requests in the queue are sent sequentially according to the sequence of the queue; if the number of data requests in the current transmission queue is not less than the first preset number, the data requests in the current queue are traversed, the current data request is inserted into the position immediately after the last first-level request in the current queue, and the data requests in the queue are sent sequentially according to the sequence of the queue.
In a specific implementation process, the primary request is an emergency request, and the secondary request is a common request.
In a specific implementation, suppose the first preset number is 5 and the current queue is 1, C, 2, B, 3, A, where 1, 2 and 3 are first-level requests, C, B and A are second-level requests, and the data request being added is first-level request 4. Since the number of data requests in the current queue is not less than 5, the requests in the current queue are traversed and the current request is inserted immediately after the last first-level request: first-level request 4 is placed after first-level request 3, and the queue becomes 1, C, 2, B, 3, 4, A.
By adopting this scheme, data requests are divided into first-level requests and second-level requests, the first-level request serving as an urgent request and the second-level request as an ordinary request. First, if the number of data requests in the current transmission queue is smaller than the first preset number, the request is added to the tail of the queue; the first preset number is chosen so that, within this range, queued requests are still sent promptly, so an urgent request appended to the tail is not delayed. If the number of data requests in the current transmission queue is not smaller than the first preset number, the current request is inserted immediately after the last first-level request in the queue; the waiting time of a first-level request therefore never exceeds the time needed to send the first preset number of requests, further guaranteeing the timeliness of first-level requests.
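The two-level insertion rule, including the worked example from the description, can be sketched as follows. The handling of a long queue that contains no first-level request at all is an assumption, since the disclosure does not specify that case:

```python
def enqueue_request(queue: list, request, is_primary, threshold: int = 5) -> None:
    """Insert a request into the send queue under the two-level rule:
    second-level (ordinary) requests always go to the tail; a first-level
    (urgent) request goes to the tail while the queue is shorter than the
    first preset number, otherwise it is inserted immediately after the
    last first-level request already in the queue."""
    if not is_primary(request) or len(queue) < threshold:
        queue.append(request)
        return
    # Queue has reached the threshold: scan backwards for the last first-level request.
    for i in range(len(queue) - 1, -1, -1):
        if is_primary(queue[i]):
            queue.insert(i + 1, request)
            return
    queue.insert(0, request)  # assumption: no first-level request queued, urgent goes first

# Worked example from the description: queue 1, C, 2, B, 3, A (digits are
# first-level requests); adding first-level request 4 yields 1, C, 2, B, 3, 4, A.
q = ["1", "C", "2", "B", "3", "A"]
enqueue_request(q, "4", is_primary=str.isdigit)
```

Because insertion happens after the last first-level request, urgent requests keep their mutual order while never waiting behind more than `threshold` sends.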
In some embodiments of the present invention, if the data request is a second-level request, the data request is added to the end of the transmission queue, and the data requests in the queue are then sent sequentially according to the sequence of the queue.
In some embodiments of the present invention, the total number of the sub-computing units is an integer multiple of 4, and in the step of constructing a joint number based on the host number, the computing unit number, and the sub-computing unit number of the host connected to the network card, the total number of the joint numbers is less than or equal to the preset number of the data channels.
By adopting this scheme, when NVMe transmits data upstream over PCIe, the original PCIe interface number is recovered from the position of the sub-calculation unit number, so the joint number does not affect communication between the host and the network card. Mapping of multiple DMA channels is realized, completing the upload of NVMe service data to the host; multi-path peripheral-bus configuration-interface arbitration, NVMe address and configuration-space partitioning, and register configuration are likewise implemented.
The embodiment of the present invention also provides a multi-host data access system comprising a computer device, the computer device comprising a processor and a memory, the memory having computer instructions stored therein, the processor being configured to execute the computer instructions stored in the memory; the system implements the steps of the method described above when the computer instructions are executed by the processor.
The embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps implemented by the aforementioned multi-host data access method. The computer readable storage medium may be a tangible storage medium such as Random Access Memory (RAM), Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, floppy disks, a hard disk, a removable memory disk, a CD-ROM, or any other form of storage medium known in the art.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software, or a combination of both. Whether a particular implementation uses hardware or software depends on the specific application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. The method processes of the present invention are not limited to the specific steps described and shown, but various changes, modifications and additions, or the order between steps may be made by those skilled in the art after appreciating the spirit of the present invention.
In this disclosure, features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A multi-host data access method, wherein the method is applied to a network card connected with a plurality of hosts, the network card is provided with a preset number of data channels, and the method comprises the steps of:
allocating a number to each connected host, wherein the number comprises a host number corresponding to the host, a calculation unit number corresponding to a calculation unit in the host and a sub-calculation unit number corresponding to a sub-calculation unit in the calculation unit;
Constructing a joint number based on a host number, a calculation unit number and a sub calculation unit number of a host connected with the network card, and determining a transmission path based on the joint number;
The network card acquires a data request of a host, determines an initiating position of the data request based on a transmission path of the data request, sends the data request to a destination, and determines whether an idle data channel exists among the data channels corresponding to the transmission path of the data request; if one exists, the data request is sent to the destination through the idle data channel; if none exists, the data request is added to a transmission queue; specifically, the data request comprises a first-level request and a second-level request; if the data request is a first-level request, no idle data channel exists among the data channels corresponding to its transmission path, and the number of data requests in the current transmission queue is smaller than a first preset number, the data request is added to the end of the transmission queue, and the data requests in the queue are sent sequentially according to the sequence of the queue; if the number of data requests in the current transmission queue is not smaller than the first preset number, the data requests in the current queue are traversed, the current data request is inserted into the position immediately after the last first-level request in the current queue, and the data requests in the queue are sent sequentially according to the sequence of the queue.
2. The multi-host data access method according to claim 1, wherein in the step of constructing a joint number based on a host number, a calculation unit number, and a sub-calculation unit number of a host connected to the network card, the joint number is a joint number formed by sequentially connecting the host number, the calculation unit number, and the sub-calculation unit number.
3. The multi-host data access method according to claim 1, wherein if the destination is a storage, the data request is a read request, in the step of sending the data request to the destination, the data in the storage is read based on a read location of the read request, and the data read from the storage is fed back to a sub-calculation unit of a corresponding host based on a joint number corresponding to the data request.
4. The multi-host data access method of claim 1, wherein the network card is provided with a channel allocation module, the method further comprising: the channel allocation module acquires all the joint numbers and allocates a preset number of data channels to the transmission paths corresponding to the joint numbers.
5. The multi-host data access method according to claim 4, wherein in the step of allocating a preset number of data channels to the transmission paths corresponding to the joint numbers, the channel allocation module either allocates the preset number of data channels evenly among the transmission paths, or allocates the data channels to the transmission paths based on the computing resources of the sub-calculation unit identified by the joint number corresponding to each transmission path.
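Claim 5's two allocation strategies can be sketched as below; this is an illustrative reading, and the `resources` weight map and remainder-distribution rule are assumptions not specified in the patent.

```python
def allocate_channels(joint_numbers, total_channels, resources=None):
    """Distribute data channels across transmission paths.

    With no resource information each path receives an equal share
    (any remainder goes to the earliest paths).  Otherwise shares are
    proportional to each sub-calculation unit's computing resources,
    given as a map from joint number to a resource weight.
    """
    n = len(joint_numbers)
    if resources is None:
        base, extra = divmod(total_channels, n)
        return {j: base + (1 if i < extra else 0)
                for i, j in enumerate(joint_numbers)}
    total = sum(resources[j] for j in joint_numbers)
    alloc = {j: total_channels * resources[j] // total for j in joint_numbers}
    # Hand out channels lost to integer division, one per path.
    leftover = total_channels - sum(alloc.values())
    for j in joint_numbers[:leftover]:
        alloc[j] += 1
    return alloc
```

Either way, every channel is assigned to exactly one path, which is what lets the network card later test a request's own channels for an idle one before queueing it.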
6. The multi-host data access method of claim 1, wherein if the data request is a second-level request, the data request is added to the tail of the send queue, and the data requests in the queue are sent sequentially in queue order.
7. The multi-host data access method according to claim 5 or 6, wherein the total number of sub-calculation units is an integer multiple of 4, and in the step of constructing a joint number based on a host number, a calculation unit number, and a sub-calculation unit number of a host connected to the network card, the total number of joint numbers is less than or equal to the preset number of data channels.
8. A multi-host data access system comprising a computer device, said computer device comprising a processor and a memory, said memory having computer instructions stored therein, the processor being configured to execute the computer instructions stored in said memory, wherein the system implements the steps of the method according to any of claims 1-7 when the computer instructions are executed by the processor.
CN202311174123.2A 2023-09-12 2023-09-12 Multi-host data access method and system Active CN117234998B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311174123.2A CN117234998B (en) 2023-09-12 2023-09-12 Multi-host data access method and system

Publications (2)

Publication Number Publication Date
CN117234998A CN117234998A (en) 2023-12-15
CN117234998B true CN117234998B (en) 2024-06-07

Family

ID=89087280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311174123.2A Active CN117234998B (en) 2023-09-12 2023-09-12 Multi-host data access method and system

Country Status (1)

Country Link
CN (1) CN117234998B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9547598B1 (en) * 2013-09-21 2017-01-17 Avego Technologies General Ip (Singapore) Pte. Ltd. Cache prefill of cache memory for rapid start up of computer servers in computer networks
CN110290217A (en) * 2019-07-01 2019-09-27 腾讯科技(深圳)有限公司 Processing method and processing device, storage medium and the electronic device of request of data
CN110808908A (en) * 2019-09-27 2020-02-18 华东计算技术研究所(中国电子科技集团公司第三十二研究所) System and method for switching redundant network in real time across platforms
US11030104B1 (en) * 2020-01-21 2021-06-08 International Business Machines Corporation Picket fence staging in a multi-tier cache
CN115622954A (en) * 2022-09-29 2023-01-17 中科驭数(北京)科技有限公司 Data transmission method and device, electronic equipment and storage medium
CN115643321A (en) * 2022-09-29 2023-01-24 中科驭数(北京)科技有限公司 Data processing method, device, equipment and computer readable storage medium
CN115774620A (en) * 2022-12-23 2023-03-10 摩尔线程智能科技(北京)有限责任公司 Method and device for realizing mutual access of storage spaces in GPU (graphics processing Unit) interconnection architecture and computing equipment
CN116132369A (en) * 2023-02-13 2023-05-16 武汉绿色网络信息服务有限责任公司 Flow distribution method of multiple network ports in cloud gateway server and related equipment
WO2023143504A1 (en) * 2022-01-29 2023-08-03 阿里云计算有限公司 Computing system, pci device manager, and initialization method therefor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856440B (en) * 2012-11-29 2015-11-18 腾讯科技(深圳)有限公司 A kind of message treatment method based on distributed bus, server and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Modelling, analysis and performance improvement of an SRU's access request queue in multi-channel V2I communications; Maurice Khabbaz et al.; Pervasive and Mobile Computing; 2015-07-21; Vol. 21; pp. 92-102 *
Research and Implementation of Data Communication Technology Based on FCS; Chen Dongfang; China Master's Theses Full-text Database (Information Science and Technology); 2003-02-15 (No. 2); I136-128 *

Also Published As

Publication number Publication date
CN117234998A (en) 2023-12-15

Similar Documents

Publication Publication Date Title
CN114780458B (en) Data processing method and storage system
US10642777B2 (en) System and method for maximizing bandwidth of PCI express peer-to-peer (P2P) connection
US10044796B2 (en) Method and system for transmitting an application message between nodes of a clustered data processing system
CN112214166B (en) Method and apparatus for transmitting data processing requests
US7395392B2 (en) Storage system and storage control method
US6892259B2 (en) Method and apparatus for allocating computer bus device resources to a priority requester and retrying requests from non-priority requesters
CN109408243B (en) RDMA-based data processing method, device and medium
CN101311915A (en) Method and system for dynamically reassigning virtual lane resources
US11231964B2 (en) Computing device shared resource lock allocation
CN112769905B (en) NUMA (non uniform memory access) architecture based high-performance network card performance optimization method under Feiteng platform
US11966585B2 (en) Storage device and storage system
CN105141603A (en) Communication data transmission method and system
CN115643318A (en) Command execution method, device, equipment and computer readable storage medium
CN115481048A (en) Memory system and chip
US8090893B2 (en) Input output control apparatus with a plurality of ports and single protocol processing circuit
CN109783002B (en) Data reading and writing method, management equipment, client and storage system
KR102303424B1 (en) Direct memory access control device for at least one processing unit having a random access memory
US8478877B2 (en) Architecture-aware allocation of network buffers
CN117234998B (en) Multi-host data access method and system
CN110515564B (en) Method and device for determining input/output (I/O) path
CN111404842A (en) Data transmission method, device and computer storage medium
CN115189977A (en) Broadcast transmission method, system and medium based on AXI protocol
JP2007221522A (en) Polling device, terminal device, polling method and program
CN117971135B (en) Storage device access method and device, storage medium and electronic device
CN115017072B (en) Burst length splitting method and device, chip system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant