TW201432566A - Expansion card of graphic processing unit and expanding method - Google Patents


Publication number: TW201432566A
Authority: TW (Taiwan)
Prior art keywords: gpu, slave, expansion card, address, graphics processor
Application number: TW102104178A
Other languages: Chinese (zh)
Inventor: Chih-Huang Wu
Original Assignee: Hon Hai Prec Ind Co Ltd
Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Hon Hai Prec Ind Co Ltd
Priority to TW102104178A priority Critical patent/TW201432566A/en
Publication of TW201432566A publication Critical patent/TW201432566A/en

Landscapes

  • Multi Processors (AREA)

Abstract

The present invention provides an expansion card for a graphics processing unit (GPU). The expansion card includes two interfaces, a communication chip, a first control unit, and a second control unit. The interfaces connect either to a server motherboard or to a new GPU. The communication chip handles communication and data transfer. The first control unit includes a request module, which asks the master GPU to assign a slave address, and a receiving module, which receives that slave address from the master GPU. The second control unit includes an address-assignment module, which assigns slave addresses; a detection module, which detects the number of all connected GPUs; and a load-distribution module, which balances the computation load percentage across all GPUs.

Description

Graphics processor expansion card and expansion method

The invention relates to a GPU expansion card and an expansion method.

With the rise of cloud computing and of GPU-based computing designs (a single GPU contains more than 256 stream processors), many enterprise servers and data centers have adopted GPU computing architectures that span multiple hosts and use GPUs for highly complex computations. As business volume and computing demand grow, however, the GPUs must be expandable dynamically and in real time, without being limited by the number of PCI Express (PCI-E) slots.

In view of the above, it is necessary to provide a graphics processor expansion card (hereinafter, GPU expansion card) and an expansion method that can expand GPUs dynamically and in real time.

A GPU expansion card includes: interface one and interface two, used to connect to a server motherboard or to chain a new slave GPU; a communication chip, used to communicate and transfer data among the chained GPUs; control unit one, triggered only when the expansion card is used in a slave GPU, which includes a request module, through which the slave GPU asks the master GPU (the GPU connected to the motherboard) via the communication chip to assign a slave address identifying that slave GPU, and a receiving module, which receives the slave address passed from the master GPU through the communication chip; and control unit two, triggered only when the expansion card is used in the master GPU, which includes an address-assignment module, which assigns a slave address and passes it to the newly chained slave GPU, a detection module, which detects the number of all chained GPUs, and a load-distribution module, which balances the computation load percentage across all GPUs and passes it through the communication chip to every chained slave GPU.

A GPU expansion method includes: a request step, in which, when a new slave GPU is chained, that slave GPU asks the master GPU via the communication chip to assign a slave address identifying it (the GPU connected to the motherboard is the master GPU); an address-assignment step, in which the master GPU assigns a slave address and passes it to the newly chained slave GPU; a receiving step, in which the newly chained slave GPU receives the slave address passed from the master GPU through the communication chip; a detection step, in which the master GPU detects the number of all chained GPUs; and a load-distribution step, in which the master GPU balances the computation load percentage across all GPUs and passes it through the communication chip to every chained slave GPU.
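The five steps above can be sketched as a small simulation. All names here (`MasterGpu`, `assign_slave_address`, `balance_load`) are illustrative assumptions, not identifiers from the patent:

```python
# Minimal sketch of the claimed expansion method: a master GPU assigns
# slave addresses to newly chained slave GPUs and balances the load.

class MasterGpu:
    def __init__(self):
        self.slaves = []  # slave addresses handed out so far

    def assign_slave_address(self):
        # Request step + address-assignment step: hand out the next
        # free slave address to the newly chained slave GPU.
        address = len(self.slaves) + 1
        self.slaves.append(address)
        return address

    def balance_load(self):
        # Detection step + load-distribution step: count every GPU
        # (master plus slaves) and split the total load evenly.
        total = len(self.slaves) + 1
        return {gpu: 100.0 / total for gpu in ["master"] + self.slaves}

master = MasterGpu()
addr1 = master.assign_slave_address()  # first slave GPU chained
addr2 = master.assign_slave_address()  # second slave GPU chained
loads = master.balance_load()          # three GPUs share the load
```

With two slaves chained, each of the three GPUs is assigned an equal share of the total load.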

Compared with the prior art, the GPU expansion card and expansion method allow multiple GPUs to be chained in real time through GPU expansion cards to share the computation load, with the master GPU balancing the computation load percentage across all GPUs, free of the limit imposed by the number of PCI Express (PCI-E) slots.

6 ... server

7 ... motherboard

30 ... master GPU

40 ... GPU expansion card

42 ... communication chip

43 ... memory chip

44 ... interface one

45 ... interface two

46 ... microprocessor

48 ... control unit one

400 ... request module

401 ... receiving module

41 ... control unit two

410 ... address-assignment module

411 ... detection module

412 ... load-distribution module

FIG. 1 is a diagram of the application environment of the GPU expansion card of the present invention.

FIG. 2 is an architecture diagram of the GPU expansion card of the present invention.

FIG. 3 is a flowchart of a preferred embodiment of the GPU expansion method of the present invention.

FIG. 1 shows the application environment of the GPU expansion card of the present invention. In this embodiment, the graphics processor expansion card is referred to as the GPU expansion card. A GPU expansion card 40 is installed in each GPU; when the master GPU 30 connected to the server 6 carries a heavy computation load, slave GPUs can be chained in real time through the GPU expansion card 40 to share the computation pressure. The server 6 includes a motherboard 7. Each GPU expansion card 40 is also connected to an external power source.

FIG. 2 shows the architecture of the GPU expansion card of the present invention. The GPU expansion card 40 includes control unit one 48, control unit two 41, a communication chip 42, a memory chip 43, interface one 44, interface two 45, and a microprocessor 46. Control unit one 48 includes a request module 400 and a receiving module 401. Control unit two 41 includes an address-assignment module 410, a detection module 411, and a load-distribution module 412.

Interface one 44 and interface two 45 serve the same function: they connect to the motherboard, or chain a new slave GPU when expanding downward. When interface one 44 or interface two 45 of a GPU expansion card 40 is connected directly to a PCI Express (PCI-E) slot of the motherboard 7, the GPU holding that expansion card becomes the master GPU 30, responsible for communication and data transfer with the server 6. There is exactly one master GPU 30; the other, slave GPUs are not connected to the bus of the motherboard 7 but are chained to one another through interface one 44 or interface two 45 of their GPU expansion cards 40. The chaining may use ribbon cables or other connections.
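The role rule described here (the one card plugged into the motherboard becomes the master; every other chained card becomes a slave) can be expressed as a small helper. The function name and string labels are assumptions for illustration:

```python
def determine_role(connected_to_motherboard: bool) -> str:
    # Per the description: the card whose interface one or interface two
    # plugs directly into a PCI-E slot of the motherboard acts as the
    # master GPU; every card reached only via chaining acts as a slave.
    return "master" if connected_to_motherboard else "slave"

roles = [determine_role(i == 0) for i in range(4)]  # one master, three slaves
```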

The communication chip 42 is used for communication and data transfer between each chained slave GPU and the master GPU 30.

The request module 400 in control unit one 48 is used, when a new slave GPUn is chained, to pass a preset request signal one level up to slave GPUn-1 over the I2C serial bus, requesting the assignment of a slave address; the signal is relayed upward until it reaches the master GPU. The slave address identifies the slave GPU, allowing the master GPU to distinguish the slave GPUs and manage each slave GPU's computation load. As shown in FIG. 1, each slave GPU chained to the master GPU is assigned a slave address: slave GPU1 corresponds to slave address 1, and so on, up to slave GPUn-1, which corresponds to slave address n-1.

Control unit two 41 is triggered only when its GPU is the master GPU. After the preset request signal relayed from the newly chained slave GPUn is received, the address-assignment module 410 assigns a slave address to slave GPUn and passes it through the communication chip to the next level down, slave GPU1; slave GPU1 relays the slave address through its communication chip to slave GPU2, and so on until it reaches slave GPUn. The receiving module 401 of control unit one 48 receives the slave address relayed from the master GPU. Once the address has been received, slave GPUn waits for the master GPU to assign it a computation load.
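The hop-by-hop handshake described here (request relayed up the chain to the master, assigned address relayed back down) can be sketched as follows. The `Node`/`Master` classes and their methods are assumptions for illustration, not names from the patent:

```python
# Sketch of the relayed handshake: a newly chained slave GPU's request
# travels up the chain one hop at a time until it reaches the master,
# which assigns the next free slave address.

class Node:
    def __init__(self, upstream=None):
        self.upstream = upstream  # next GPU toward the master
        self.address = None

    def request_address(self):
        # Forward the preset request signal hop by hop to the master.
        node, hops = self, 0
        while node.upstream is not None:
            node = node.upstream
            hops += 1
        return node, hops  # node is now the master GPU

class Master(Node):
    def __init__(self):
        super().__init__(upstream=None)
        self.next_address = 1

    def assign(self):
        # Address-assignment module: hand out the next slave address.
        addr = self.next_address
        self.next_address += 1
        return addr

master = Master()
slave1 = Node(upstream=master)
slave2 = Node(upstream=slave1)       # newly chained slave GPUn

root, hops = slave2.request_address()  # relayed through slave1
slave2.address = root.assign()         # address relayed back down
```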

The detection module 411 in control unit two 41 detects the number of all GPUs chained together.

The load-distribution module 412 in control unit two 41 assigns a computation load percentage to every chained GPU according to a calculation rule. The computation load percentage is the share of the total computation that one GPU carries. The rule divides 100% by the number of GPUs; the result is each GPU's computation load percentage.
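The calculation rule above is a simple even split. A minimal sketch (the function name is an assumption):

```python
def load_percentages(gpu_count: int) -> list[float]:
    # The described rule: divide 100% by the number of chained GPUs,
    # so each GPU carries an equal share of the total load.
    if gpu_count < 1:
        raise ValueError("at least one GPU is required")
    share = 100.0 / gpu_count
    return [share] * gpu_count

quarters = load_percentages(4)  # four chained GPUs
```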

The memory chip 43 stores computation data and the program segments of the GPU expansion method.

The microprocessor 46 processes computation data and executes the program segments of the GPU expansion method.

Referring to FIG. 3, a flowchart of a preferred embodiment of the GPU expansion method of the present invention is shown. Depending on requirements, the order of the steps in the flowchart may be changed, and some steps may be omitted.

Step S10: when the computation load of the GPU chained to the server 6 is heavy, a new slave GPUn is chained to interface one 44 or interface two 45 of that GPU's expansion card.

Step S11: the request module 400 in control unit one 48 of slave GPUn sends a preset request signal through the communication chip 42 to slave GPUn-1, asking the master GPU to assign a slave address; slave GPUn-1 relays the signal through its communication chip to slave GPUn-2, and so on until it reaches the master GPU.

Step S12: after the master GPU receives the request signal, the address-assignment module 410 in control unit two 41 assigns a slave address to the newly chained slave GPUn and passes it through the communication chip to the next level down, slave GPU1; slave GPU1 relays the slave address to slave GPU2, and so on until it reaches slave GPUn.

Step S13: the receiving module 401 of control unit one 48 of slave GPUn receives the slave address relayed down from the master GPU.

Step S14: the detection module 411 in control unit two 41 of the master GPU detects the number of GPUs chained together.

Step S15: the load-distribution module 412 in control unit two 41 of the master GPU 30 balances the computation load percentages across all chained GPUs according to the calculation rule and passes them to each GPU.

In this embodiment, for example, when four GPUs are chained together, each GPU's computation load percentage is 25%. If any GPU's computation load rises above that level, it communicates with the master GPU to request that load be shifted to less loaded GPUs, until every GPU's computation load percentage is again 25%.
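The rebalancing behavior described above can be sketched as follows: when one GPU's load rises above its fair share, the excess shifts to underloaded GPUs until every GPU sits at 100/N percent. The function name is an assumption:

```python
def rebalance(loads: list[float]) -> list[float]:
    # Shift load from overloaded to underloaded GPUs until each GPU
    # carries the fair share (total load divided by GPU count).
    target = sum(loads) / len(loads)  # e.g. 25% for four GPUs
    return [target] * len(loads)

# One of four GPUs is overloaded at 40%; after rebalancing via the
# master GPU, each GPU carries 25% again.
balanced = rebalance([40.0, 20.0, 20.0, 20.0])
```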

Through steps S10 to S15, when a GPU reaches full computational load, multiple GPUs can be chained in real time through interface one 44 or interface two 45 of the GPU expansion card 40 to share the computation load, with the master GPU balancing the computation load percentage across all GPUs.

Finally, it should be noted that the above embodiments merely illustrate, and do not limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solution of the present invention may be modified or equivalently substituted without departing from its spirit and scope.

40 ... GPU expansion card

42 ... communication chip

43 ... memory chip

44 ... interface one

45 ... interface two

48 ... control unit one

46 ... microprocessor

400 ... request module

401 ... receiving module

41 ... control unit two

410 ... address-assignment module

411 ... detection module

412 ... load-distribution module

Claims (8)

1. A graphics processor expansion card, comprising:
interface one and interface two, used to connect to a server motherboard or to chain a new slave GPU;
a communication chip, used to communicate and transfer data among the chained GPUs;
control unit one, triggered when the graphics processor expansion card is used in a slave GPU, comprising:
a request module, through which the slave GPU asks the master GPU via the communication chip to assign a slave address, the slave address identifying the slave GPU, the master GPU being the GPU connected to the motherboard;
a receiving module, used to receive the slave address passed from the master GPU through the communication chip;
control unit two, triggered when the graphics processor expansion card is used in the master GPU, comprising:
an address-assignment module, used to assign a slave address and pass it to the newly chained slave GPU;
a detection module, used to detect the number of all chained GPUs;
a load-distribution module, used to balance the computation load percentage across all GPUs and pass it through the communication chip to every chained slave GPU.

2. The graphics processor expansion card of claim 1, wherein the graphics processor expansion card is connected to an external power source.

3. The graphics processor expansion card of claim 1, wherein, when the graphics processor expansion card is used in a slave GPU to pass a request to the master GPU, the request is first passed to the slave GPU one level up, which relays it to the GPU one level above it, and so on until it reaches the master GPU.

4. The graphics processor expansion card of claim 1, wherein, when the graphics processor expansion card is used in a slave GPU and the master GPU passes a signal to that slave GPU, the signal is first passed to the slave GPU connected to the master GPU, which relays it to the slave GPU one level down, and so on until it reaches that slave GPU.

5. An expansion method for the graphics processor expansion card of claim 1, the method comprising:
a request step: when a new slave GPU is chained, the slave GPU asks the master GPU via the communication chip to assign a slave address, the slave address identifying the slave GPU, the master GPU being the GPU connected to the motherboard;
an address-assignment step: the master GPU assigns a slave address and passes it to the newly chained slave GPU;
a receiving step: the newly chained slave GPU receives the slave address passed from the master GPU through the communication chip;
a detection step: the master GPU detects the number of all chained GPUs;
a load-distribution step: the master GPU balances the computation load percentage across all GPUs and passes it through the communication chip to every chained slave GPU.

6. The expansion method of claim 5, wherein the graphics processor expansion card is connected to an external power source.

7. The expansion method of claim 5, wherein, when the graphics processor expansion card is used in a slave GPU to pass a request to the master GPU, the request is first passed to the slave GPU one level up, which relays it to the GPU one level above it, and so on until it reaches the master GPU.

8. The expansion method of claim 5, wherein, when the graphics processor expansion card is used in a slave GPU and the master GPU passes a signal to that slave GPU, the signal is first passed to the slave GPU connected to the master GPU, which relays it to the slave GPU one level down, and so on until it reaches that slave GPU.
TW102104178A 2013-02-04 2013-02-04 Expansion card of graphic processing unit and expanding method TW201432566A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW102104178A TW201432566A (en) 2013-02-04 2013-02-04 Expansion card of graphic processing unit and expanding method


Publications (1)

Publication Number Publication Date
TW201432566A true TW201432566A (en) 2014-08-16

Family

ID=51797431

Family Applications (1)

Application Number Title Priority Date Filing Date
TW102104178A TW201432566A (en) 2013-02-04 2013-02-04 Expansion card of graphic processing unit and expanding method

Country Status (1)

Country Link
TW (1) TW201432566A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753134A (en) * 2018-12-24 2019-05-14 四川大学 A kind of GPU inside energy consumption control system and method based on overall situation decoupling
CN109753134B (en) * 2018-12-24 2022-04-15 四川大学 Global decoupling-based GPU internal energy consumption control system and method
CN112328532A (en) * 2020-11-02 2021-02-05 长沙景嘉微电子股份有限公司 Multi-GPU communication method and device, storage medium and electronic device
CN112328532B (en) * 2020-11-02 2024-02-09 长沙景嘉微电子股份有限公司 Method and device for multi-GPU communication, storage medium and electronic device

Similar Documents

Publication Publication Date Title
EP3629186B1 (en) Method and apparatus for extending pcie domain
US8126993B2 (en) System, method, and computer program product for communicating sub-device state information
US20130151750A1 (en) Multi-root input output virtualization aware switch
TW201423422A (en) System and method for sharing device having PCIe interface
US20140250239A1 (en) System and Method for Routing Data to Devices within an Information Handling System
US20130110960A1 (en) Method and system for accessing storage device
US10592285B2 (en) System and method for information handling system input/output resource management
WO2018107751A1 (en) Resource scheduling device, system, and method
JP2008287718A (en) System and method for dynamically reassigning virtual lane resource
US20170124018A1 (en) Method and Device for Sharing PCIE I/O Device, and Interconnection System
TW201805830A (en) Apparatus allocating controller and apparatus allocating method
TWI616759B (en) Apparatus assigning controller and apparatus assigning method
TW201432566A (en) Expansion card of graphic processing unit and expanding method
US9411763B2 (en) Allocation of flow control credits for high performance devices
US10628366B2 (en) Method and system for a flexible interconnect media in a point-to-point topography
US20170344511A1 (en) Apparatus assigning controller and data sharing method
CN103970686A (en) GPU (graphic processing unit) expansion card and expansion method
US9842074B2 (en) Tag allocation for non-posted commands in a PCIe application layer
US10042792B1 (en) Method for transferring and receiving frames across PCI express bus for SSD device
CN104516852A (en) Lane division multiplexing of an I/O link
US10025736B1 (en) Exchange message protocol message transmission between two devices
TWI615720B (en) Resource allocation system, apparatus allocation controller, and apparatus recognizing method
CN216286653U (en) Multiprocessor system, mainboard and computer equipment
TW201710915A (en) Data channel allocation
US20140320424A1 (en) Data transmission method, touch data processing method and electronic device