CN111427822A - Edge computing system - Google Patents
Edge computing system
- Publication number
- CN111427822A CN111427822A CN202010230748.6A CN202010230748A CN111427822A CN 111427822 A CN111427822 A CN 111427822A CN 202010230748 A CN202010230748 A CN 202010230748A CN 111427822 A CN111427822 A CN 111427822A
- Authority
- CN
- China
- Prior art keywords
- pcie
- switch
- edge
- computing system
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4022—Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/04—Network management architectures or arrangements
- H04L41/042—Network management architectures or arrangements comprising distributed management centres cooperatively managing the network
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0823—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/10—Packet switching elements characterised by the switching fabric construction
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2213/00—Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F2213/0026—PCI express
Abstract
The application discloses an edge computing system comprising a data center and an edge server. The edge server comprises a Switch, a PCIe Switch, a CPU node and a PCIe acceleration device; the Switch is connected to the PCIe Switch, and the PCIe Switch is connected to the CPU node and the PCIe acceleration device respectively. The nodes of the edge server are therefore not limited to CPUs: some nodes can be PCIe acceleration devices. Installing a PCIe Switch for PCIe communication inside the edge server strengthens high-speed signal exchange, transmission and resource sharing among the nodes, thereby improving the computing capability.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to an edge computing system.
Background
Edge computing is a distributed computing architecture that moves the operations of applications, data and services from the central node of a network to the logical edge nodes of the network for processing. Edge computing breaks down the large services that were originally handled entirely by the central node into smaller, more manageable parts and distributes them to edge nodes for processing. Edge nodes are closer to user terminal devices, which speeds up the processing and transmission of data and reduces delay.
KubeEdge (Kube + Edge), as the name suggests, relies on the container orchestration and scheduling capability of K8s to realize cloud-edge coordination, computation offloading, and smooth access for massive numbers of devices. The KubeEdge architecture includes two parts: a cloud side and an edge side. The cloud side is responsible for issuing applications and configuration, and the edge side is responsible for running edge applications and managing access devices.
Currently, in the OpenEdge architecture, KubeEdge 2.0 supports deploying corresponding system resources on multiple CPU nodes, as shown in fig. 1, so that an edge server can install an operating system and the corresponding software. OpenEdge can therefore only develop its service applications through CPU nodes at present. Beyond this design limitation, the board of a current server cannot be changed directly: the structures and design on the backplane cannot be modified, operations can only be performed through the front space, and it is difficult to achieve maximum computing capability and efficiency under such limited space conditions.
Therefore, at present the nodes of OpenEdge can only use CPUs, and the computing capability cannot be maximized.
Disclosure of Invention
The purpose of the present application is to provide an edge computing system, so as to solve the problem that the nodes of the OpenEdge can only use a CPU and cannot maximize the computing capability.
To solve the above technical problem, the present application provides an edge computing system, including a data center and an edge server. The edge server comprises a Switch, a PCIe Switch, a CPU (Central Processing Unit) node and a PCIe acceleration device, wherein the Switch is connected to the PCIe Switch, and the PCIe Switch is connected to the CPU node and the PCIe acceleration device respectively.
Preferably, the PCIe Switch is configured to perform distribution management on the CPU node and the PCIe acceleration device.
Preferably, the PCIe acceleration device includes any one or more of: GPU, FPGA, NVMe, ASIC, HPC.
Preferably, the data center is connected to the edge server sequentially through a PSME switch and a Multi-host chip.
Preferably, the edge server is provided with an Intel RSD tool.
Preferably, the PSME switch is configured to invoke the Intel RSD tool on the edge server to perform resource integration.
Preferably, the CPU node is configured to deploy an application on the edge server according to a KubeEdge technique.
Preferably, the edge server provides a plurality of interface templates, and the interface templates represent interface positions of the CPU nodes and the PCIe acceleration devices on the PCIe Switch.
An edge computing system provided by the present application includes a data center and an edge server. The edge server comprises a Switch, a PCIe Switch, a CPU node and a PCIe acceleration device; the Switch is connected to the PCIe Switch, and the PCIe Switch is connected to the CPU node and the PCIe acceleration device respectively. The nodes of the edge server in this system are therefore not limited to CPUs: some nodes can be PCIe acceleration devices. Installing a PCIe Switch for PCIe communication inside the edge server strengthens high-speed signal exchange, transmission and resource sharing among the nodes, thereby improving the computing capability.
Drawings
For a clearer explanation of the embodiments of the present application or the technical solutions of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an OpenEdge architecture in the prior art;
FIG. 2 is a block diagram of an edge computing system according to a first embodiment of the present disclosure;
fig. 3 is a schematic diagram of an OpenEdge architecture provided in the present application according to a first embodiment of the present application;
FIG. 4 is a diagram illustrating an internal architecture of an edge server in the prior art;
fig. 5 is a schematic diagram of an internal architecture of an edge server according to a first embodiment of the present application;
FIG. 6 is a block diagram of a second embodiment of an edge computing system according to the present application;
fig. 7 is a schematic diagram illustrating a connection relationship between a data center and an edge server in a second embodiment of an edge computing system provided in the present application;
FIG. 8 is a diagram illustrating a first interface template in a second embodiment of an edge computing system according to the present application;
FIG. 9 is a diagram illustrating a second interface template in a second embodiment of an edge computing system according to the present application;
fig. 10 is a diagram illustrating a third interface template in an embodiment of an edge computing system according to the present application.
Detailed Description
The core of the application is to provide an edge computing system, wherein partial nodes of an edge server are changed into PCIe acceleration devices, PCIe switches are arranged to carry out PCIe communication in the edge server, high-speed signal exchange transmission resource sharing of all nodes is strengthened, and the computing capacity is improved.
In order that those skilled in the art will better understand the disclosure, the following detailed description will be given with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 2, a first embodiment of an edge computing system provided in the present application is described as follows. The embodiment includes an edge server 12, which comprises a Switch 121, a PCIe Switch 122, a CPU node 123 and a PCIe acceleration device 124; the Switch 121 is connected to the PCIe Switch 122, and the PCIe Switch 122 is connected to the CPU node 123 and the PCIe acceleration device 124 respectively.
Nokia defined the OpenEdge chassis architecture through the OCP (Open Compute Project) specification, and many communication manufacturers currently purchase the system. The specification defines the server rack, the server mechanics, and the associated high-speed signaling and management interfaces. In terms of design architecture, the OpenEdge platform splits its main functions in a modular way: each function is an independent module, and the modules do not affect one another. A containerized design is used to build image files; because Docker is cross-platform, consistency of the OpenEdge platform across environments can be ensured, and CPU, memory and execution-instance resources can be allocated precisely, improving the efficiency of resource usage.
In terms of device deployment, as shown in fig. 1, currently all nodes of OpenEdge can only use CPUs and cannot be replaced by individual PCIe acceleration devices; it is difficult to use OpenEdge at maximum computing power and efficiency under limited space conditions, and meeting the computation requirements of the application scenario then requires increased cost. To address this problem, the present embodiment turns part of the nodes into PCIe acceleration devices, as shown in fig. 3, where a PCIe acceleration device may specifically be a GPU, an FPGA, an NVMe device, an ASIC, an HPC accelerator, and the like. Specifically, the internal architecture of the original edge server is shown in fig. 4: each node must be configured with a network card in order to access the resources of the data center through the network. In this embodiment, a PCIe Switch is installed to perform PCIe communication internally, enhancing high-speed handshake communication and resource sharing among the nodes, as shown in fig. 5.
That is, in the present embodiment, the network card shown in fig. 4 is replaced with the PCIe Switch shown in fig. 5 for device communication. On the basis of OpenEdge, resources are normally allocated and managed through cascaded network cards, so that the hardware computing resources of the compute nodes can be managed effectively.
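As a rough illustration of the allocation described above, the following sketch models a PCIe Switch whose downstream ports are bound to either a CPU node or an acceleration device. This is a minimal sketch: the class, method and port names are hypothetical, since the patent does not specify any API.

```python
# Hypothetical model of PCIe Switch port allocation; names are illustrative
# and not taken from the patent.

ACCELERATOR_TYPES = {"GPU", "FPGA", "NVMe", "ASIC", "HPC"}

class PCIeSwitch:
    """Tracks which device type occupies each downstream port."""

    def __init__(self, num_ports):
        self.ports = {p: None for p in range(num_ports)}

    def attach(self, port, device_type):
        # A node is either a CPU or one of the acceleration devices.
        if device_type != "CPU" and device_type not in ACCELERATOR_TYPES:
            raise ValueError(f"unknown device type: {device_type}")
        self.ports[port] = device_type

    def accelerators(self):
        # Ports currently holding PCIe acceleration devices.
        return {p: d for p, d in self.ports.items() if d in ACCELERATOR_TYPES}

sw = PCIeSwitch(num_ports=4)
sw.attach(0, "CPU")   # control node
sw.attach(1, "GPU")
sw.attach(2, "FPGA")
sw.attach(3, "NVMe")
```

The point of the sketch is only that, unlike the network-card design of fig. 4, a single switch object holds both CPU and accelerator nodes behind one fabric.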
Software resource control and management has already been introduced into edge computing, but there is currently no counterpart for hardware resource management; with a Multi-Host network card, hardware resources can be allocated to the nodes more flexibly. In the past, PCIe Multi-Host allocated pooled resources through a PCIe Switch; now the same can be accomplished over a high-speed network using part of a smart network card.
The embodiment provides an edge computing system including an edge server that comprises a Switch, a PCIe Switch, a CPU node and a PCIe acceleration device; the Switch is connected to the PCIe Switch, and the PCIe Switch is connected to the CPU node and the PCIe acceleration device respectively. The nodes of the edge server in this system are therefore not limited to CPUs: some nodes can be PCIe acceleration devices. Installing a PCIe Switch for PCIe communication inside the edge server strengthens high-speed signal exchange, transmission and resource sharing among the nodes, thereby improving the computing capability.
A second embodiment of the edge computing system provided by the present application is described in detail below. The second embodiment is implemented on the basis of the foregoing first embodiment and extends it to a certain degree.
Referring to fig. 6, the second embodiment specifically includes a data center 21 and an edge server 22. The edge server includes a Switch, a PCIe Switch, a CPU node and a PCIe acceleration device, wherein the Switch is connected to the PCIe Switch, and the PCIe Switch is connected to the CPU node and the PCIe acceleration device respectively. The PCIe acceleration device includes any one or more of: GPU, FPGA, NVMe, ASIC, HPC. As shown in fig. 7, the data center 21 is connected to the edge server 22 through a PSME switch 23 and a Multi-host chip 24 in this order.
In addition, the PCIe Switch is configured to perform allocation management for the CPU node and the PCIe acceleration device. An Intel RSD tool is provided on the edge server, and the PSME switch is used to invoke the Intel RSD tool on the edge server to carry out resource integration. The CPU node is used to deploy applications on the edge server according to the KubeEdge technology. The edge server provides a plurality of interface templates, and the interface templates represent the interface positions of the CPU nodes and the PCIe acceleration devices on the PCIe Switch.
Intel RSD can complete the configuration and integration of software and hardware using the Redfish technology. Its aim is to integrate all hardware resources efficiently, turn them into pooled resources to be allocated and used by software, and turn each hardware resource into a hardware Pod for the user.
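Since RSD exposes pooled hardware through the Redfish REST API, the pooling flow above can be sketched as request construction against the standard Redfish service root. Only the `/redfish/v1` root and the `Systems` collection follow the Redfish specification; the host name and the composition-body fields below are assumptions for illustration, not the real RSD composition schema.

```python
import json

# Hypothetical PSME endpoint; "/redfish/v1" and "Systems" follow the Redfish
# specification, everything else here is illustrative.
REDFISH_ROOT = "https://psme.example.local/redfish/v1"

def system_url(system_id):
    """URL of one pooled compute system in the Redfish Systems collection."""
    return f"{REDFISH_ROOT}/Systems/{system_id}"

def compose_body(cpu_cores, accelerators):
    """JSON body for a hypothetical 'compose a node from pooled resources' call."""
    return json.dumps({
        "Processors": [{"TotalCores": cpu_cores}],
        "Accelerators": list(accelerators),  # e.g. ["GPU", "FPGA"]; illustrative field
    })

print(system_url(1))  # https://psme.example.local/redfish/v1/Systems/1
```

In a real deployment the body would be POSTed to the PSME over HTTP; the sketch stops at building the request so the pooling idea stays visible.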
The OpenEdge architecture has different templates; figs. 8, 9 and 10 are three interface templates provided in this embodiment, and each interface template has its corresponding bits for RMC control. Since the pooling function of the RSD must still be controlled by a CPU node, the transformed 2U windows are located at Addr2, Addr3, Addr4 and Addr5 of the RMC bits respectively; it is therefore assumed that Addr1 is the CPU control node, while the other nodes can be CPUs or other heterogeneous devices.
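The RMC address layout above can be written out as a small table. The slot contents below are illustrative; the only rule taken from the text is that Addr1 must hold the CPU control node that drives RSD pooling.

```python
# Illustrative RMC slot layout: Addr1 is the CPU control node required for
# RSD pooling; Addr2-Addr5 are the transformed 2U windows, which may hold
# CPUs or heterogeneous devices.

rmc_slots = {
    "Addr1": "CPU",    # control node (required)
    "Addr2": "GPU",
    "Addr3": "FPGA",
    "Addr4": "CPU",
    "Addr5": "NVMe",
}

def layout_is_valid(slots):
    """RSD pooling must still be driven by a CPU node at Addr1."""
    return slots.get("Addr1") == "CPU"

assert layout_is_valid(rmc_slots)
```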
It is understood that the PCIe signals from the network card carry different device numbers according to the number of PCIe lanes used, and the hardware configuration functions provided by the RSD can be implemented between different nodes through the PCIe of the network card.
Machines connected through the network card can directly install the corresponding hardware-allocated pooling tasks: they can be storage nodes, HPC-related computation, or application-specific ASIC computation; if CPU nodes are used, these problems do not arise.
The Ethernet network switch is an important management element of RSD. Its traditional management architecture is based on a CLI and configuration files, whereas the PSME-Network of RSD uses Redfish, a Restful/HTTP web-based technology, and is implemented following the Software-Defined Networking (SDN) concept.
The RSD architecture used by a general server typically has various nodes such as management node, switch, storage and computation; as long as the API provided by the PSME can expose them as resources, they can be managed through the API provided by the RSD.
Kubernetes, also called K8s, aims to accomplish pooled hardware resource allocation through software; it can allocate hardware resources to user workloads and adjust load balancing through JSON over HTTP, and software environments can be established quickly through Docker containers. The natural light weight and portability of containers are very suitable for edge computing scenarios, but native K8s components cannot run directly on edge nodes because of the instability of edge-scenario communication and strict limits on resource consumption. KubeEdge chooses a "light edge" architecture: the container engine and the device-management agent on the edge side are kept as light as possible, while the management plane runs in the cloud, is built on the scheduling capability of K8s, and remains 100% compatible with the native K8s API; the problems K8s needs to solve in the edge computing field are thus also clear.
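The "JSON over HTTP" resource allocation mentioned above can be illustrated with a minimal Kubernetes Pod manifest requesting CPU plus an extended accelerator resource. This is a sketch: the `nvidia.com/gpu` resource name is a common K8s extended-resource convention, and whether it applies to a given PCIe accelerator is deployment-specific.

```python
import json

def pod_manifest(name, image, cpu, gpu=0):
    """Build the JSON body that K8s/KubeEdge would receive over HTTP."""
    limits = {"cpu": str(cpu)}
    if gpu:
        # Extended resource for an accelerator; the vendor prefix is an example.
        limits["nvidia.com/gpu"] = gpu
    return json.dumps({
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {"limits": limits},
            }],
        },
    })
```

A scheduler that knows which PCIe Switch ports hold accelerators could then place such a Pod onto a node whose pooled resources satisfy the limits.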
The same can be achieved on OpenEdge through the Multi-host supported by a smart network card, with RSD resource management performed through the PSME switch. The PodM/RSDM management server, the storage controller, or the K8s resource allocator can each be regarded as a node. The PodM/RSDM is the most important: the control messages for resource allocation required by the switch in the PSME must be completed through this node, and one CPU node can be used as the manager to perform resource allocation.
Although KubeEdge can already deploy the corresponding application content through CPU nodes, for operating-system deployment the RSD has the RSDM control node, on which the operating system and the corresponding PCIe driver software can be stored. Deploying these contents directly on KubeEdge allows operating-system installation to be added to the rapid-deployment part.
KubeEdge can deploy its applications directly on the operating system through the above architecture (the container suite manager), so the RSD can be effectively integrated with the software resource management part of Kubernetes.
Finally, through the RSD resources deployed in the data center, the same architecture can be migrated to the servers deployed by OpenEdge, so that a pooled architecture can be built on the edge server, and the RSD of edge computing and the RSD of the data center can share one resource framework.
Therefore, the edge computing system provided by this embodiment introduces an RSD design into edge computing and adds a hardware resource allocation function; combines the RSD with the rapid deployment speed of KubeEdge to bring pooled resource allocation to OpenEdge; and improves the usage efficiency of each internal OpenEdge node, which can independently use a CPU or an acceleration device.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The solutions provided in the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method of the present application and its core ideas. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
Claims (8)
1. An edge computing system, comprising a data center and an edge server, wherein the edge server comprises a Switch, a PCIe Switch, a CPU (Central Processing Unit) node and a PCIe acceleration device; the Switch is connected to the PCIe Switch, and the PCIe Switch is connected to the CPU node and the PCIe acceleration device respectively.
2. The edge computing system of claim 1, wherein the PCIe Switch is to perform allocation management for the CPU node and the PCIe acceleration device.
3. The edge computing system of claim 2, wherein the PCIe acceleration device comprises any one or more of: GPU, FPGA, NVMe, ASIC, HPC.
4. The edge computing system of claim 1, wherein the data center is connected to the edge server sequentially through a PSME switch and a Multi-host chip.
5. The edge computing system of claim 4, wherein the edge server has an Intel RSD tool disposed thereon.
6. The edge computing system of claim 5, wherein the PSME switch is to invoke the Intel RSD tool on the edge server for resource consolidation.
7. The edge computing system of claim 1 wherein the CPU node is to deploy an application on the edge server in accordance with KubeEdge technology.
8. The edge computing system of any of claims 1-7, wherein the edge server provides a plurality of interface templates that represent interface locations of the CPU node and the PCIe acceleration device on the PCIe Switch.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010230748.6A CN111427822A (en) | 2020-03-27 | 2020-03-27 | Edge computing system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111427822A | 2020-07-17 |
Family
ID=71551692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010230748.6A | CN111427822A (pending) | 2020-03-27 | 2020-03-27 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111427822A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090150518A1 (en) * | 2000-08-22 | 2009-06-11 | Lewin Daniel M | Dynamic content assembly on edge-of-network servers in a content delivery network |
CN104636089A (en) * | 2015-02-25 | 2015-05-20 | 浪潮集团有限公司 | Method for accelerating performance of servers of domestic central processing units on basis of NVME (nonvolatile memory express) technology |
CN107450987A (en) * | 2017-07-28 | 2017-12-08 | 山东超越数控电子有限公司 | A kind of heterogeneous server of High Availabitity |
CN109104464A (en) * | 2018-07-23 | 2018-12-28 | 东南大学 | A kind of distributed data update method towards collaboration storage under edge calculations environment |
CN110162496A (en) * | 2019-04-15 | 2019-08-23 | 深圳市同泰怡信息技术有限公司 | A method of realizing that the port PCIE automatically configures based on central processing unit |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111930521A (en) * | 2020-09-23 | 2020-11-13 | 北京百度网讯科技有限公司 | Method and device for deploying application, electronic equipment and readable storage medium |
CN112328361A (en) * | 2020-11-02 | 2021-02-05 | 杭州当虹科技股份有限公司 | Method for rapidly deploying kubenetes cluster |
US11343315B1 (en) | 2020-11-23 | 2022-05-24 | International Business Machines Corporation | Spatio-temporal social network based mobile kube-edge auto-configuration |
WO2022105527A1 (en) * | 2020-11-23 | 2022-05-27 | International Business Machines Corporation | Mobile kube-edge auto-configuration |
GB2616752A (en) * | 2020-11-23 | 2023-09-20 | Ibm | Mobile kube-edge auto-configuration |
GB2616752B (en) * | 2020-11-23 | 2024-03-20 | Ibm | Mobile kube-edge auto-configuration |
CN114124852A (en) * | 2021-11-29 | 2022-03-01 | 苏州汇云鼎技术服务有限公司 | Data switch based on industrial manufacturing |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200717 |