WO2021253817A1 - Method, apparatus, system, device, and medium for adjusting interconnection channels - Google Patents
Method, apparatus, system, device, and medium for adjusting interconnection channels
- Publication number
- WO2021253817A1 (PCT/CN2021/071202)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- node
- nodes
- performance parameters
- interconnection
- competition value
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/17—Interprocessor communication using an input/output type connection, e.g. channel, I/O port
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
Definitions
- the present invention relates to the field of cloud processing technology, and in particular to a method, device, system, equipment and computer-readable storage medium for adjusting an interconnection channel.
- Cloud has become a hot word in the era of emerging information technology. Cloud computing and cloud processing have become important guarantees for major companies in market competition and technological competition. The emergence of the recent epidemic has also further promoted the birth of new work models such as cloud office, and has also put forward higher requirements for data processing, network maintenance, and maximum utilization of server resources. In the face of the pressure and challenges of servers dealing with larger information storms, the reasonable utilization of server resources and the reasonable allocation of resource competition have become a focus of server research and development.
- a single central processing unit (Central Processing Unit, CPU) has limited processing capabilities, and cloud platforms generally use multiple CPUs to provide services.
- Ultra Path Interconnect (UPI) enables interconnection among multiple CPUs.
- Figure 1 shows a UPI high-speed interconnection topology of a 2-way server platform in the prior art, in which two CPUs are interconnected through four UPI links.
- this interconnection scheme is relatively complicated: resource scheduling is often unreasonable for simple computations, and resource allocation across the multi-UPI topology is challenging for complex, high-density computations.
- the purpose of the embodiments of the present invention is to provide a method, an apparatus, a system, a device, and a computer-readable storage medium for adjusting interconnection channels, which can achieve balanced resource allocation and improve resource utilization.
- an embodiment of the present invention provides a method for adjusting an interconnection channel, including:
- the selection of callable nodes whose performance parameters meet a preset condition includes:
- obtaining the performance parameters of each node, where the performance parameters include CPU utilization, memory occupancy, and I/O performance indicators; determining whether the performance parameters of each node exceed preset thresholds, where each type of performance parameter has its own corresponding preset threshold;
- the nodes none of whose performance parameters exceed the preset thresholds are taken as callable nodes.
- the method further includes:
- when there is a target node whose performance parameters exceed the preset thresholds, a pause flag is set for the target node to stop assigning tasks to it.
- the calculation of the competition value of each node according to the resource utilization and type parameters of each node includes: calculating the competition value I_CPU of the current node according to the following formula, where c_0 and c_1 are type parameters, obtained through machine-learning training, that characterize the performance of the current node, X_i is the resource utilization of the i-th task on the current node obtained through machine-learning training, and n is the total number of tasks on the current node.
- the creation of a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node includes:
- creating interconnection channels between the node to be migrated and the node to be received according to the determined number of channels.
- the method further includes:
- when it is detected that the competition value of the node to be migrated is less than the preset lower limit, the interconnection channel between the node to be migrated and the node to be received is closed.
- the embodiment of the present invention also provides a device for adjusting an interconnection channel, which includes a calculation unit, a selection unit, and a creation unit;
- the calculation unit is used to calculate the competition value of each node according to the resource utilization rate and type parameter of each node;
- the selection unit is used to select callable nodes whose performance parameters meet preset conditions
- the creation unit is configured to create a corresponding number of interconnection channels between the current node and the callable node according to the competition value and performance parameters of each node.
- the selection unit includes an acquisition subunit, a judgment subunit, and a designating subunit;
- the obtaining subunit is used to obtain performance parameters of each node; wherein, the performance parameters include CPU utilization, memory occupancy, and I/O performance indicators;
- the judgment subunit is used to judge whether the performance parameter of each node exceeds a preset threshold; wherein, each type of performance parameter has its own corresponding preset threshold;
- the designating subunit is used to take the nodes none of whose performance parameters exceed the preset thresholds as callable nodes.
- it further includes a setting unit
- the setting unit is configured to set a pause flag for the target node when there is a target node whose performance parameter exceeds a preset threshold, so as to stop assigning tasks to the target node.
- the calculation unit is specifically configured to calculate the competition value I_CPU of the current node according to the following formula, where c_0 and c_1 are type parameters, obtained through machine-learning training, that characterize the performance of the current node, X_i is the resource utilization of the i-th task on the current node obtained through machine-learning training, and n is the total number of tasks on the current node.
- the creation unit includes a selection subunit and a creation subunit
- the selection subunit is configured to select, from all the nodes, nodes to be migrated whose competition value exceeds a preset upper limit, and to select, from all the callable nodes, nodes to be received whose competition value is less than the preset lower limit;
- the creation subunit is configured to determine, according to the performance parameters of the node to be migrated and the performance parameters of the node to be received, the number of channels that need to be opened between the node to be migrated and the node to be received, and to create interconnection channels between the node to be migrated and the node to be received according to the number of channels.
- it further includes a closing unit
- the closing unit is configured to close the interconnection channel between the node to be migrated and the node to be received when it is detected that the competition value of the node to be migrated is less than a preset lower limit.
- the embodiment of the present invention also provides an interconnection channel adjustment system, which includes multiple nodes and a channel controller; a fixed channel is set between every two adjacent nodes according to a structure in which the multiple nodes are serially connected into a ring; the channel controller is communicatively connected to each of the multiple nodes;
- the channel controller is used to calculate the competition value of each node according to the resource utilization and type parameters of each node; select the callable nodes whose performance parameters meet the preset conditions; according to the competition value and performance parameters of each node, A corresponding number of interconnection channels are created between the current node and the callable node.
- the embodiment of the present invention also provides an adjustment device for an interconnection channel, including:
- a memory, used to store a computer program;
- the processor is configured to execute the computer program to implement the steps of the method for adjusting the interconnection channel as described above.
- the embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for adjusting the interconnection channel described in any one of the above.
- the competition value of each node is calculated according to the resource utilization and type parameters of each node; the competition value reflects the node's degree of demand for resources, and the larger a node's competition value, the more resources the node needs to perform its current tasks. For a node with a larger competition value, relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values are not fully utilized. Therefore, in this technical solution, callable nodes whose performance parameters meet preset conditions can be selected, and a corresponding number of interconnection channels are created between the current node and the callable nodes according to the competition value and performance parameters of each node.
- the tasks of nodes with higher resource demand can be migrated to nodes with lower resource demand, so that the resources of each node are fully used,
- achieving the purpose of balanced resource allocation and improving resource utilization.
- Figure 1 is a UPI high-speed interconnection topology diagram of a 2-way server platform provided by the prior art
- FIG. 2 is a flowchart of a method for adjusting an interconnection channel according to an embodiment of the present invention
- FIG. 3 is a schematic structural diagram of a device for adjusting interconnection channels according to an embodiment of the present invention.
- FIG. 4 is a schematic structural diagram of a system for adjusting interconnection channels provided by an embodiment of the present invention.
- FIG. 5 is a schematic structural diagram of a device for adjusting an interconnection channel provided by an embodiment of the present invention.
- FIG. 2 is a flowchart of a method for adjusting an interconnection channel according to an embodiment of the present invention, and the method includes:
- S201 Calculate the competition value of each node according to the resource utilization rate and type parameter of each node.
- the channel controller can be used to establish and close channels between multiple nodes.
- a fixed channel can be set between every two adjacent nodes according to a structure in which multiple nodes are connected in series in a circle.
- the channel controller respectively communicates and connects with multiple nodes, and can establish a new interconnection channel between the nodes according to resource requirements.
- Each node calculates the competition value in the same manner.
- a node is used as an example for introduction.
- each task has its corresponding resource utilization.
- the resource utilization rate of the i-th task of the current node can be obtained by means of machine learning.
- each node has its own corresponding type parameter.
- the type parameters used to characterize the performance of the current node can be obtained based on machine learning training.
- the type parameter of the node and the resource utilization rate of each task on the node are important factors that affect the performance of the node.
- the competition value I_CPU of the current node can be calculated according to the following formula, where c_0 and c_1 are type parameters, obtained through machine-learning training, that characterize the performance of the current node, X_i is the resource utilization of the i-th task on the current node obtained through machine-learning training, and n is the total number of tasks on the current node (see the note below).
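- Note: the formula itself is not reproduced in this text extraction. A linear form consistent with the surrounding definitions would be $I_{CPU} = c_0 + c_1\sum_{i=1}^{n} X_i$; this is an assumption for readability, not the confirmed formula from the original publication.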
- the performance parameters of a node refer to the parameters that affect the processing capacity of the node, which can include CPU utilization, memory utilization, and I/O performance indicators.
- the I/O performance index may include I/O usage rate and I/O saturation.
- the preset condition may be a condition met when the node can receive a new task.
- performance parameters such as CPU utilization, memory occupancy, and I/O performance indicators can be evaluated comprehensively, or each of these different types of performance parameters can be evaluated individually.
- a pause flag can be set for the target node so that the server system can stop assigning tasks to the target node.
- in practical applications, key hardware I/O performance indicators can be monitored through ICs such as a Complex Programmable Logic Device (CPLD), a Baseboard Management Controller (BMC), or a PCA9555.
- the memory occupancy rate can be monitored through the Basic Input Output System (BIOS) or the memory vendor's dedicated software.
- Each type of performance parameter has its own corresponding preset threshold.
- the value of the preset threshold can be set according to the actual needs of the user and the accumulation of actual measurement results.
- S203 Create a corresponding number of interconnection channels between the current node and the callable node according to the competition value and performance parameters of each node.
- the competition value reflects the node's demand for resources. The greater the node's competition value, the more resources the node needs to perform the current task.
- the current node can be any one of all nodes.
- tasks on nodes with higher competition values can be migrated to nodes with lower competition values, thereby achieving balanced use of node resources and improving node resource utilization.
- a node to be migrated whose competition value exceeds the preset upper limit can be selected from all nodes.
- when the competition value of a node exceeds the preset upper limit, the node currently has many tasks to process and needs to occupy more resources; relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values cannot be fully utilized. Therefore, in the embodiment of the present invention, a node with a competition value less than a preset lower limit can be selected as a node to be received.
- the values of the preset upper limit value and the preset lower limit value can be set according to actual requirements, and are not limited here.
- the number of channels that need to be opened between the node to be migrated and the node to be received can be determined according to the performance parameters of the node to be migrated and the performance parameters of the node to be received; interconnection channels are then created between the node to be migrated and the node to be received according to that number.
- the performance parameters of a node reflect its current processing capability and include CPU utilization, memory occupancy, and I/O performance indicators. Taking a node to be received as an example, when its CPU utilization is low, its memory occupancy is low, or its I/O usage and I/O saturation are low, the resources of the node to be received are not being fully used and the node has relatively high processing capability; in this case, multiple interconnection channels can be created between the node to be migrated and the node to be received.
- each CPU can support at most six interconnection channels in parallel; therefore, when creating interconnection channels between nodes, full consideration should be given to the number of channels each node has already opened.
- the competition value of each node is calculated according to the resource utilization and type parameters of each node; the competition value reflects the node's degree of demand for resources, and the larger a node's competition value, the more resources the node needs to perform its current tasks. For a node with a larger competition value, relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values are not fully utilized. Therefore, in this technical solution, callable nodes whose performance parameters meet preset conditions can be selected, and a corresponding number of interconnection channels are created between the current node and the callable nodes according to the competition value and performance parameters of each node.
- the tasks of nodes with higher resource demand can be migrated to nodes with lower resource demand, so that the resources of each node are fully used,
- achieving the purpose of balanced resource allocation and improving resource utilization.
- an interconnection channel can be created between a node with a larger competition value and a node with a smaller competition value to migrate node tasks, thereby improving node resource utilization.
- the competition value of each node can be detected in real time or periodically; when it is detected that the competition value of the node to be migrated is less than the preset lower limit, the interconnection channel between the node to be migrated and the node to be received is closed.
- the number of interconnection channels between nodes thus better matches the actual needs of each node, and balanced allocation of the resources of the entire system is achieved.
- FIG. 3 is a schematic structural diagram of a device for adjusting an interconnection channel provided by an embodiment of the present invention, which includes a calculation unit 31, a selection unit 32, and a creation unit 33;
- the calculation unit 31 is configured to calculate the competition value of each node according to the resource utilization rate and type parameters of each node;
- the selecting unit 32 is used to select callable nodes whose performance parameters meet preset conditions;
- the creation unit 33 is configured to create a corresponding number of interconnection channels between the current node and the callable node according to the competition value and performance parameters of each node.
- the selection unit includes an acquisition subunit, a judgment subunit, and a designating subunit;
- the acquisition subunit is used to acquire the performance parameters of each node; among them, the performance parameters include CPU utilization, memory occupancy, and I/O performance indicators;
- the judging subunit is used to judge whether the performance parameter of each node exceeds a preset threshold; wherein, each type of performance parameter has its own corresponding preset threshold;
- it further includes a setting unit
- the setting unit is used to set a pause flag for the target node when there is a target node whose performance parameter exceeds the preset threshold, so as to stop assigning tasks to the target node.
- the calculation unit is specifically configured to calculate the competition value I_CPU of the current node according to the following formula, where c_0 and c_1 are type parameters, obtained through machine-learning training, that characterize the performance of the current node, X_i is the resource utilization of the i-th task on the current node obtained through machine-learning training, and n is the total number of tasks on the current node.
- the creation unit includes a selection subunit and a creation subunit
- the selection subunit is used to select from all nodes the nodes to be migrated whose competition value exceeds the preset upper limit; from all callable nodes, select the nodes to be received whose competition value is less than the preset lower limit;
- it further includes a closing unit
- the closing unit is configured to close the interconnection channel between the node to be migrated and the node to be received when it is detected that the competition value of the node to be migrated is less than the preset lower limit.
- the competition value of each node is calculated according to the resource utilization and type parameters of each node; the competition value reflects the node's degree of demand for resources, and the larger a node's competition value, the more resources the node needs to perform its current tasks. For a node with a larger competition value, relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values are not fully utilized. Therefore, in this technical solution, callable nodes whose performance parameters meet preset conditions can be selected, and a corresponding number of interconnection channels are created between the current node and the callable nodes according to the competition value and performance parameters of each node.
- the tasks of nodes with higher resource demand can be migrated to nodes with lower resource demand, so that the resources of each node are fully used,
- achieving the purpose of balanced resource allocation and improving resource utilization.
- FIG. 4 is a schematic structural diagram of a system for adjusting interconnection channels provided by an embodiment of the present invention, including multiple nodes 41 and a channel controller 42; a fixed channel is set between every two adjacent nodes 41 according to a structure in which the multiple nodes 41 are serially connected into a ring; the channel controller 42 is communicatively connected to each of the multiple nodes 41;
- the channel controller 42 is used to calculate the competition value of each node 41 according to the resource utilization and type parameters of each node 41, select callable nodes 41 whose performance parameters meet a preset condition, and dynamically adjust the interconnection channels between the current node 41 and each callable node 41 according to the competition value of each node 41.
- the interconnection channel between nodes refers to the connection between the CPUs in the nodes.
- Figure 4 is a schematic diagram of the connection relationship with 4 CPUs as an example.
- One CPU belongs to one node.
- in Figure 4, CPU0, CPU1, CPU2, and CPU3 have fixed channels set up according to their serial ring connection.
- CPU0 has established two interconnection channels (UPI) with CPU1 and CPU3, respectively.
- P0-P5 in each CPU refers to the port of the CPU.
- the dotted lines between the CPUs in Figure 4 refer to interconnection channels that can be dynamically created or closed.
- the competition value of each node is calculated according to the resource utilization and type parameters of each node; the competition value reflects the node's degree of demand for resources, and the larger a node's competition value, the more resources the node needs to perform its current tasks. For a node with a larger competition value, relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values are not fully utilized. Therefore, in this technical solution, callable nodes whose performance parameters meet preset conditions can be selected, and a corresponding number of interconnection channels are created between the current node and the callable nodes according to the competition value and performance parameters of each node.
- the tasks of nodes with higher resource demand can be migrated to nodes with lower resource demand, so that the resources of each node are fully used,
- achieving the purpose of balanced resource allocation and improving resource utilization.
- FIG. 5 is a schematic structural diagram of a device 50 for adjusting an interconnection channel according to an embodiment of the present invention, including:
- the memory 51 is used to store computer programs
- the processor 52 is configured to execute a computer program to implement the steps of the method for adjusting the interconnection channel as described in any of the foregoing embodiments.
- the embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for adjusting the interconnection channel described in any of the above embodiments.
- the steps of the method or algorithm described in connection with the embodiments disclosed herein can be implemented directly by hardware, by a software module executed by a processor, or by a combination of the two.
- the software module can reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Computer And Data Communications (AREA)
- Stored Programmes (AREA)
Abstract
A method, apparatus, system, device, and medium for adjusting interconnection channels. The competition value of each node is calculated according to the resource utilization and type parameters of each node; the competition value reflects the node's degree of demand for resources. For a node with a larger competition value, relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values are not fully utilized. Therefore, callable nodes whose performance parameters meet a preset condition can be selected, and a corresponding number of interconnection channels can be created between the current node and the callable nodes according to the competition value and performance parameters of each node. By adjusting the interconnection channels between nodes according to the competition values and performance parameters of the nodes, tasks of nodes with higher resource demand can be migrated to nodes with lower resource demand, so that the resources of all nodes are fully used, balanced resource allocation is achieved, and resource utilization is improved.
Description
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on June 19, 2020, with application number 202010568700.6 and the invention title "Method, apparatus, system, device, and medium for adjusting interconnection channels", the entire contents of which are incorporated herein by reference.
The present invention relates to the field of cloud processing technology, and in particular to a method, apparatus, system, and device for adjusting interconnection channels, and a computer-readable storage medium.
Cloud has become a buzzword of the emerging information technology era, and cloud computing and cloud processing have become important guarantees for major companies in market and technological competition. The recent epidemic has further promoted the emergence of new work models such as cloud office and has placed higher demands on data processing, network maintenance, and the maximum utilization of server resources. Facing the pressure and challenge of servers handling ever larger information storms, the reasonable use of server resources and the reasonable allocation of resource competition have become a focus of server research and development.
A single central processing unit (CPU) has limited processing capability, so cloud platforms generally use multiple CPUs to provide services. Ultra Path Interconnect (UPI) enables interconnection among multiple CPUs.
Figure 1 shows a UPI high-speed interconnection topology of a 2-way server platform in the prior art, in which two CPUs are interconnected through four UPI links. However, this interconnection scheme is relatively complicated: resource scheduling is often unreasonable for simple computations, while resource allocation and reasonable resource competition across the multi-UPI topology are challenging for complex, high-density computations, leading to unreasonable resource allocation.
It can be seen that how to achieve balanced resource allocation and improve resource utilization is a problem to be solved by those skilled in the art.
Summary of the Invention
The purpose of the embodiments of the present invention is to provide a method, apparatus, system, and device for adjusting interconnection channels, and a computer-readable storage medium, which can achieve balanced resource allocation and improve resource utilization.
To solve the above technical problem, an embodiment of the present invention provides a method for adjusting interconnection channels, including:
calculating the competition value of each node according to the resource utilization and type parameters of each node;
selecting callable nodes whose performance parameters meet a preset condition;
creating a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node.
Optionally, selecting the callable nodes whose performance parameters meet the preset condition includes:
obtaining the performance parameters of each node, where the performance parameters include CPU utilization, memory occupancy, and I/O performance indicators;
determining whether the performance parameters of each node exceed preset thresholds, where each type of performance parameter has its own corresponding preset threshold;
taking the nodes none of whose performance parameters exceed the preset thresholds as callable nodes.
Optionally, after obtaining the performance parameters of each node, the method further includes:
when there is a target node whose performance parameters exceed the preset thresholds, setting a pause flag for the target node so as to stop assigning tasks to the target node.
Optionally, calculating the competition value of each node according to the resource utilization and type parameters of each node includes:
calculating the competition value I_CPU of the current node according to the following formula,
where c_0 and c_1 are type parameters, obtained through machine-learning training, that characterize the performance of the current node, X_i is the resource utilization of the i-th task on the current node obtained through machine-learning training, and n is the total number of tasks on the current node.
Optionally, creating a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node includes:
selecting, from all the nodes, nodes to be migrated whose competition value exceeds a preset upper limit; selecting, from all the callable nodes, nodes to be received whose competition value is less than a preset lower limit;
determining, according to the performance parameters of the node to be migrated and the performance parameters of the node to be received, the number of channels that need to be opened between the node to be migrated and the node to be received;
creating interconnection channels between the node to be migrated and the node to be received according to the number of channels.
Optionally, after creating the interconnection channels between the node to be migrated and the node to be received, the method further includes:
when it is detected that the competition value of the node to be migrated is less than the preset lower limit, closing the interconnection channels between the node to be migrated and the node to be received.
An embodiment of the present invention further provides an apparatus for adjusting interconnection channels, including a calculation unit, a selection unit, and a creation unit;
the calculation unit is used to calculate the competition value of each node according to the resource utilization and type parameters of each node;
the selection unit is used to select callable nodes whose performance parameters meet a preset condition;
the creation unit is used to create a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node.
Optionally, the selection unit includes an acquisition subunit, a judgment subunit, and a designating subunit;
the acquisition subunit is used to obtain the performance parameters of each node, where the performance parameters include CPU utilization, memory occupancy, and I/O performance indicators;
the judgment subunit is used to determine whether the performance parameters of each node exceed preset thresholds, where each type of performance parameter has its own corresponding preset threshold;
the designating subunit is used to take the nodes none of whose performance parameters exceed the preset thresholds as callable nodes.
Optionally, the apparatus further includes a setting unit;
the setting unit is used to set a pause flag for a target node when there is a target node whose performance parameters exceed the preset thresholds, so as to stop assigning tasks to the target node.
Optionally, the calculation unit is specifically used to calculate the competition value I_CPU of the current node according to the following formula,
where c_0 and c_1 are type parameters, obtained through machine-learning training, that characterize the performance of the current node, X_i is the resource utilization of the i-th task on the current node obtained through machine-learning training, and n is the total number of tasks on the current node.
Optionally, the creation unit includes a selection subunit and a creation subunit;
the selection subunit is used to select, from all the nodes, nodes to be migrated whose competition value exceeds a preset upper limit, and to select, from all the callable nodes, nodes to be received whose competition value is less than a preset lower limit;
the creation subunit is used to determine, according to the performance parameters of the node to be migrated and the performance parameters of the node to be received, the number of channels that need to be opened between the node to be migrated and the node to be received, and to create interconnection channels between the node to be migrated and the node to be received according to the number of channels.
Optionally, the apparatus further includes a closing unit;
the closing unit is used to close the interconnection channels between the node to be migrated and the node to be received when it is detected that the competition value of the node to be migrated is less than the preset lower limit.
An embodiment of the present invention further provides a system for adjusting interconnection channels, including multiple nodes and a channel controller; a fixed channel is set between every two adjacent nodes according to a structure in which the multiple nodes are serially connected into a ring; the channel controller is communicatively connected to each of the multiple nodes;
the channel controller is used to calculate the competition value of each node according to the resource utilization and type parameters of each node, select callable nodes whose performance parameters meet a preset condition, and create a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node.
An embodiment of the present invention further provides a device for adjusting interconnection channels, including:
a memory, used to store a computer program;
a processor, used to execute the computer program to implement the steps of the method for adjusting interconnection channels described in any one of the above.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for adjusting interconnection channels described in any one of the above.
It can be seen from the above technical solution that the competition value of each node is calculated according to the resource utilization and type parameters of each node. The competition value reflects the node's degree of demand for resources: the larger a node's competition value, the more resources the node needs to perform its current tasks. For a node with a larger competition value, relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values are not fully utilized. Therefore, in this technical solution, callable nodes whose performance parameters meet a preset condition can be selected, and a corresponding number of interconnection channels can be created between the current node and the callable nodes according to the competition value and performance parameters of each node. By adjusting the interconnection channels between nodes according to the competition values and performance parameters of the nodes, tasks of nodes with higher resource demand can be migrated to nodes with lower resource demand, so that the resources of each node are fully used, balanced resource allocation is achieved, and resource utilization is improved.
In order to explain the embodiments of the present invention more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a UPI high-speed interconnection topology diagram of a 2-way server platform in the prior art;
Figure 2 is a flowchart of a method for adjusting interconnection channels according to an embodiment of the present invention;
Figure 3 is a schematic structural diagram of an apparatus for adjusting interconnection channels according to an embodiment of the present invention;
Figure 4 is a schematic structural diagram of a system for adjusting interconnection channels according to an embodiment of the present invention;
Figure 5 is a schematic structural diagram of a device for adjusting interconnection channels according to an embodiment of the present invention.
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In order to enable those skilled in the art to better understand the solution of the present invention, the present invention is further described in detail below with reference to the drawings and specific implementations.
Next, the method for adjusting interconnection channels provided by an embodiment of the present invention is described in detail. Figure 2 is a flowchart of a method for adjusting interconnection channels according to an embodiment of the present invention, and the method includes:
S201: Calculate the competition value of each node according to the resource utilization and type parameters of each node.
In the embodiment of the present invention, a channel controller can be used to establish and close channels between multiple nodes. To satisfy the basic communication function between the nodes, a fixed channel can be set between every two adjacent nodes according to a structure in which the multiple nodes are serially connected into a ring. The channel controller is communicatively connected to each of the multiple nodes and can establish new interconnection channels between nodes according to resource requirements.
Each node calculates its competition value in the same manner; in the embodiment of the present invention, a single node is taken as an example.
A node may have multiple tasks to process, and each task has its corresponding resource utilization. In the embodiment of the present invention, the resource utilization of the i-th task of the current node can be obtained by means of machine learning.
Considering that the configurations of core server hardware such as the CPU, memory, and I/O differ across nodes, each node has its own corresponding type parameters. In practical applications, the type parameters characterizing the performance of the current node can be obtained through machine-learning training.
The type parameters of a node and the resource utilization of each task on the node are important factors affecting node performance. In a specific implementation, the competition value I_CPU of the current node can be calculated according to the following formula,
where c_0 and c_1 are type parameters, obtained through machine-learning training, that characterize the performance of the current node, X_i is the resource utilization of the i-th task on the current node obtained through machine-learning training, and n is the total number of tasks on the current node.
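As an illustration of S201, the sketch below computes a competition value for one node. The formula image from the original filing is not reproduced in this text, so a simple linear combination of the trained type parameters c_0 and c_1 with the per-task utilizations X_i is assumed here; the function and parameter names are illustrative, not taken from the patent.

```python
# Illustrative sketch only: the patent's formula is not reproduced in this text,
# so a linear combination of the trained type parameters (c0, c1) and the
# per-task resource utilizations X_i is assumed.
from typing import Sequence

def competition_value(c0: float, c1: float, task_utilization: Sequence[float]) -> float:
    """Estimate the competition value I_CPU of one node.

    c0, c1            -- type parameters characterizing node performance,
                         obtained from machine-learning training (assumed inputs)
    task_utilization  -- X_1..X_n, per-task resource utilization of the node
    """
    n = len(task_utilization)
    if n == 0:
        return c0  # an idle node contributes only its baseline term
    return c0 + c1 * sum(task_utilization)

# Example: a node with three tasks
print(competition_value(0.1, 0.8, [0.35, 0.50, 0.20]))
```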
S202: Select callable nodes whose performance parameters meet a preset condition.
The performance parameters of a node are the parameters that affect the node's processing capability, and can include CPU utilization, memory occupancy, I/O performance indicators, and so on. The I/O performance indicators can include I/O usage and I/O saturation.
The preset condition can be a condition that is satisfied when the node is able to receive new tasks. In a specific implementation, performance parameters such as CPU utilization, memory occupancy, and I/O performance indicators can be evaluated comprehensively, or the different types of performance parameters can each be evaluated individually.
Taking the individual evaluation of different types of performance parameters as an example, in a specific implementation it can be determined whether the performance parameters of each node exceed preset thresholds, and the nodes none of whose performance parameters exceed the preset thresholds are taken as callable nodes. When there is a target node whose performance parameters exceed the preset thresholds, that target node can no longer handle new tasks; at this point a pause flag can be set for the target node so that the server system stops assigning tasks to it.
In practical applications, key hardware I/O performance indicators can be monitored through ICs such as a Complex Programmable Logic Device (CPLD), a Baseboard Management Controller (BMC), or a PCA9555, and memory occupancy can be monitored through the Basic Input Output System (BIOS) or the memory vendor's dedicated software.
Each type of performance parameter has its own corresponding preset threshold. The threshold values can be set according to the actual needs of the user and the accumulation of measured results.
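A minimal sketch of the callable-node selection in S202, assuming the performance samples have already been collected (for example via BMC/CPLD for I/O indicators and BIOS or vendor tools for memory). The threshold values and field names below are illustrative assumptions, not values from the patent.

```python
# Per-type thresholds (illustrative values; each parameter type has its own threshold).
THRESHOLDS = {"cpu_util": 0.85, "mem_util": 0.90, "io_util": 0.80, "io_saturation": 0.75}

def select_callable_nodes(nodes: dict):
    """Split nodes into callable nodes and paused (overloaded) nodes."""
    callable_nodes, paused_nodes = [], []
    for name, perf in nodes.items():
        if all(perf[k] <= THRESHOLDS[k] for k in THRESHOLDS):
            callable_nodes.append(name)   # every parameter is under its threshold
        else:
            paused_nodes.append(name)     # set pause flag: stop assigning tasks here
    return callable_nodes, paused_nodes

nodes = {
    "node0": {"cpu_util": 0.95, "mem_util": 0.70, "io_util": 0.40, "io_saturation": 0.30},
    "node1": {"cpu_util": 0.30, "mem_util": 0.40, "io_util": 0.20, "io_saturation": 0.10},
}
print(select_callable_nodes(nodes))   # node1 is callable, node0 is paused
```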
S203: Create a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node.
The competition value reflects the node's degree of demand for resources: the larger a node's competition value, the more resources the node needs to perform its current tasks.
The current node can be any one of all the nodes. In the embodiment of the present invention, by establishing interconnection channels between nodes, tasks on nodes with larger competition values can be migrated to nodes with smaller competition values, thereby achieving balanced use of node resources and improving node resource utilization.
In a specific implementation, nodes to be migrated whose competition value exceeds a preset upper limit can be selected from all the nodes. When a node's competition value exceeds the preset upper limit, the node currently has many tasks to process and needs to occupy more resources; relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values are not fully utilized. Therefore, in the embodiment of the present invention, nodes whose competition value is less than a preset lower limit can be selected as nodes to be received.
The values of the preset upper limit and the preset lower limit can be set according to actual requirements and are not limited here.
In practical applications, the number of channels that need to be opened between the node to be migrated and the node to be received can be determined according to the performance parameters of the node to be migrated and the performance parameters of the node to be received, and interconnection channels are then created between the node to be migrated and the node to be received according to that number of channels.
The performance parameters of a node reflect its current processing capability; they include CPU utilization, memory occupancy, I/O performance indicators, and so on. Taking a node to be received as an example, when its CPU utilization is low, its memory occupancy is low, or its I/O usage and I/O saturation are low, the resources of the node to be received are not being fully used and the node has relatively high processing capability; in this case, multiple interconnection channels can be created between the node to be migrated and the node to be received.
It should be noted that, limited by the performance requirements of the CPUs in the nodes, each CPU can support at most six interconnection channels running in parallel; therefore, when creating interconnection channels between nodes, full consideration should be given to the number of channels each node has already opened.
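The sketch below illustrates the channel-count decision in S203 under stated assumptions: the patent only specifies that the count is derived from the performance parameters of both nodes and that each CPU supports at most six parallel interconnection channels, so the mapping from spare capacity to channel count used here is a hypothetical heuristic.

```python
MAX_CHANNELS_PER_CPU = 6  # per-CPU limit stated in the description

def channels_to_open(migrating: dict, receiving: dict) -> int:
    """Decide how many interconnection channels to create between two nodes."""
    # Spare capacity of the receiving node: the lower its utilization, the more
    # channels it can reasonably serve (purely illustrative heuristic).
    spare = 1.0 - max(receiving["cpu_util"], receiving["mem_util"], receiving["io_util"])
    wanted = max(1, round(spare * MAX_CHANNELS_PER_CPU))
    # Respect the channels each CPU already has open (fixed ring channels included).
    free_on_migrating = MAX_CHANNELS_PER_CPU - migrating["open_channels"]
    free_on_receiving = MAX_CHANNELS_PER_CPU - receiving["open_channels"]
    return max(0, min(wanted, free_on_migrating, free_on_receiving))

src = {"cpu_util": 0.95, "mem_util": 0.80, "io_util": 0.70, "open_channels": 2}
dst = {"cpu_util": 0.20, "mem_util": 0.30, "io_util": 0.10, "open_channels": 2}
print(channels_to_open(src, dst))   # 4 additional channels in this example
```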
It can be seen from the above technical solution that the competition value of each node is calculated according to the resource utilization and type parameters of each node. The competition value reflects the node's degree of demand for resources: the larger a node's competition value, the more resources the node needs to perform its current tasks. For a node with a larger competition value, relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values are not fully utilized. Therefore, in this technical solution, callable nodes whose performance parameters meet a preset condition can be selected, and a corresponding number of interconnection channels can be created between the current node and the callable nodes according to the competition value and performance parameters of each node. By adjusting the interconnection channels between nodes according to the competition values and performance parameters of the nodes, tasks of nodes with higher resource demand can be migrated to nodes with lower resource demand, so that the resources of each node are fully used, balanced resource allocation is achieved, and resource utilization is improved.
In the embodiment of the present invention, interconnection channels can be created between nodes with larger competition values and nodes with smaller competition values to migrate node tasks, thereby improving node resource utilization. However, as time passes, the amount of work to be executed on each node changes, so the interconnection channels created between nodes may no longer suit the nodes' current task requirements. Therefore, in the embodiment of the present invention, the competition value of each node can be detected in real time or periodically, and when it is detected that the competition value of a node to be migrated is less than the preset lower limit, the interconnection channels between that node to be migrated and the corresponding node to be received are closed.
By dynamically adjusting the interconnection channels between nodes according to the competition values of the nodes, the number of interconnection channels between nodes better matches the actual needs of each node, and balanced allocation of the resources of the entire system is achieved.
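A sketch of the dynamic adjustment just described: competition values are re-evaluated periodically and previously created channels are closed once the migrating node's competition value falls below the preset lower limit. The controller interface (dynamic_channels, close_channels), the limit, and the polling interval are hypothetical, not taken from the patent.

```python
import time

LOWER_LIMIT = 0.3   # preset lower limit (illustrative value)
POLL_SECONDS = 5    # polling period for periodic detection (illustrative value)

def adjustment_loop(controller, get_competition_values):
    """controller             -- object exposing dynamic_channels and close_channels() (assumed API)
       get_competition_values -- callable returning {node_name: competition_value}"""
    while True:  # runs for the lifetime of the system; shown unbounded for illustration
        values = get_competition_values()
        for (src, dst) in list(controller.dynamic_channels):   # channels created earlier
            if values[src] < LOWER_LIMIT:
                controller.close_channels(src, dst)            # demand has subsided
        time.sleep(POLL_SECONDS)
```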
Figure 3 is a schematic structural diagram of an apparatus for adjusting interconnection channels according to an embodiment of the present invention, including a calculation unit 31, a selection unit 32, and a creation unit 33;
the calculation unit 31 is used to calculate the competition value of each node according to the resource utilization and type parameters of each node;
the selection unit 32 is used to select callable nodes whose performance parameters meet a preset condition;
the creation unit 33 is used to create a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node.
Optionally, the selection unit includes an acquisition subunit, a judgment subunit, and a designating subunit;
the acquisition subunit is used to obtain the performance parameters of each node, where the performance parameters include CPU utilization, memory occupancy, and I/O performance indicators;
the judgment subunit is used to determine whether the performance parameters of each node exceed preset thresholds, where each type of performance parameter has its own corresponding preset threshold;
the designating subunit is used to take the nodes none of whose performance parameters exceed the preset thresholds as callable nodes.
Optionally, the apparatus further includes a setting unit;
the setting unit is used to set a pause flag for a target node when there is a target node whose performance parameters exceed the preset thresholds, so as to stop assigning tasks to the target node.
Optionally, the calculation unit is specifically used to calculate the competition value I_CPU of the current node according to the following formula,
where c_0 and c_1 are type parameters, obtained through machine-learning training, that characterize the performance of the current node, X_i is the resource utilization of the i-th task on the current node obtained through machine-learning training, and n is the total number of tasks on the current node.
Optionally, the creation unit includes a selection subunit and a creation subunit;
the selection subunit is used to select, from all the nodes, nodes to be migrated whose competition value exceeds a preset upper limit, and to select, from all the callable nodes, nodes to be received whose competition value is less than a preset lower limit;
the creation subunit is used to determine, according to the performance parameters of the node to be migrated and the performance parameters of the node to be received, the number of channels that need to be opened between the node to be migrated and the node to be received, and to create interconnection channels between the node to be migrated and the node to be received according to the number of channels.
Optionally, the apparatus further includes a closing unit;
the closing unit is used to close the interconnection channels between the node to be migrated and the node to be received when it is detected that the competition value of the node to be migrated is less than the preset lower limit.
For the description of the features in the embodiment corresponding to Figure 3, reference may be made to the related description of the embodiment corresponding to Figure 2, which is not repeated here.
It can be seen from the above technical solution that the competition value of each node is calculated according to the resource utilization and type parameters of each node. The competition value reflects the node's degree of demand for resources: the larger a node's competition value, the more resources the node needs to perform its current tasks. For a node with a larger competition value, relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values are not fully utilized. Therefore, in this technical solution, callable nodes whose performance parameters meet a preset condition can be selected, and a corresponding number of interconnection channels can be created between the current node and the callable nodes according to the competition value and performance parameters of each node. By adjusting the interconnection channels between nodes according to the competition values and performance parameters of the nodes, tasks of nodes with higher resource demand can be migrated to nodes with lower resource demand, so that the resources of each node are fully used, balanced resource allocation is achieved, and resource utilization is improved.
Figure 4 is a schematic structural diagram of a system for adjusting interconnection channels according to an embodiment of the present invention, including multiple nodes 41 and a channel controller 42; a fixed channel is set between every two adjacent nodes 41 according to a structure in which the multiple nodes 41 are serially connected into a ring; the channel controller 42 is communicatively connected to each of the multiple nodes 41;
the channel controller 42 is used to calculate the competition value of each node 41 according to the resource utilization and type parameters of each node 41, select callable nodes 41 whose performance parameters meet a preset condition, and dynamically adjust the interconnection channels between the current node 41 and each callable node 41 according to the competition value of each node 41.
An interconnection channel between nodes refers to a connection between the CPUs in the nodes. Figure 4 is a schematic diagram of the connection relationship taking four CPUs as an example, where one CPU belongs to one node. In Figure 4, CPU0, CPU1, CPU2, and CPU3 have fixed channels set up according to their serial ring connection; taking CPU0 as an example, CPU0 has two interconnection channels (UPI) established with CPU1 and CPU3, respectively. P0-P5 in each CPU refer to the ports of the CPU. The dotted lines between the CPUs in Figure 4 refer to interconnection channels that can be dynamically created or closed.
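A sketch of the Figure 4 topology as a data structure: four CPUs joined in a ring by fixed channels, plus dynamic channels that the channel controller creates or closes on demand (the dotted lines in Figure 4). Class and method names are illustrative, not part of the patent.

```python
class ChannelController:
    def __init__(self, cpus):
        self.cpus = cpus
        # Fixed channels between every pair of adjacent CPUs in the serial ring.
        self.fixed = {(cpus[i], cpus[(i + 1) % len(cpus)]) for i in range(len(cpus))}
        self.dynamic = set()   # channels created on demand (dotted lines in Fig. 4)

    def create_channel(self, a, b):
        # Only create a dynamic channel if no fixed channel already joins the pair.
        if (a, b) not in self.fixed and (b, a) not in self.fixed:
            self.dynamic.add((a, b))

    def close_channel(self, a, b):
        self.dynamic.discard((a, b))
        self.dynamic.discard((b, a))

ctrl = ChannelController(["CPU0", "CPU1", "CPU2", "CPU3"])
ctrl.create_channel("CPU0", "CPU2")   # diagonal channel created when CPU0 is overloaded
print(ctrl.fixed, ctrl.dynamic)
```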
For the description of the features in the embodiment corresponding to Figure 4, reference may be made to the related description of the embodiment corresponding to Figure 2, which is not repeated here.
It can be seen from the above technical solution that the competition value of each node is calculated according to the resource utilization and type parameters of each node. The competition value reflects the node's degree of demand for resources: the larger a node's competition value, the more resources the node needs to perform its current tasks. For a node with a larger competition value, relying on that single node alone for processing limits its processing performance, while the resources of other nodes with smaller competition values are not fully utilized. Therefore, in this technical solution, callable nodes whose performance parameters meet a preset condition can be selected, and a corresponding number of interconnection channels can be created between the current node and the callable nodes according to the competition value and performance parameters of each node. By adjusting the interconnection channels between nodes according to the competition values and performance parameters of the nodes, tasks of nodes with higher resource demand can be migrated to nodes with lower resource demand, so that the resources of each node are fully used, balanced resource allocation is achieved, and resource utilization is improved.
Figure 5 is a schematic structural diagram of a device 50 for adjusting interconnection channels according to an embodiment of the present invention, including:
a memory 51, used to store a computer program;
a processor 52, used to execute the computer program to implement the steps of the method for adjusting interconnection channels described in any of the above embodiments.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for adjusting interconnection channels described in any of the above embodiments.
The method, apparatus, system, and device for adjusting interconnection channels and the computer-readable storage medium provided by the embodiments of the present invention have been described in detail above. The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and for relevant details reference may be made to the description of the method. It should be noted that those of ordinary skill in the art can make several improvements and modifications to the present invention without departing from the principles of the present invention, and these improvements and modifications also fall within the protection scope of the claims of the present invention.
Those skilled in the art may further realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein can be implemented directly by hardware, by a software module executed by a processor, or by a combination of the two. The software module can reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
Claims (10)
- A method for adjusting interconnection channels, characterized by including: calculating the competition value of each node according to the resource utilization and type parameters of each node; selecting callable nodes whose performance parameters meet a preset condition; and creating a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node.
- The method according to claim 1, characterized in that selecting the callable nodes whose performance parameters meet the preset condition includes: obtaining the performance parameters of each node, where the performance parameters include CPU utilization, memory occupancy, and I/O performance indicators; determining whether the performance parameters of each node exceed preset thresholds, where each type of performance parameter has its own corresponding preset threshold; and taking the nodes none of whose performance parameters exceed the preset thresholds as callable nodes.
- The method according to claim 2, characterized in that after obtaining the performance parameters of each node, the method further includes: when there is a target node whose performance parameters exceed the preset thresholds, setting a pause flag for the target node so as to stop assigning tasks to the target node.
- The method according to claim 4, characterized in that creating a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node includes: selecting, from all the nodes, nodes to be migrated whose competition value exceeds a preset upper limit; selecting, from all the callable nodes, nodes to be received whose competition value is less than a preset lower limit; determining, according to the performance parameters of the node to be migrated and the performance parameters of the node to be received, the number of channels that need to be opened between the node to be migrated and the node to be received; and creating interconnection channels between the node to be migrated and the node to be received according to the number of channels.
- The method according to claim 5, characterized in that after creating the interconnection channels between the node to be migrated and the node to be received, the method further includes: when it is detected that the competition value of the node to be migrated is less than the preset lower limit, closing the interconnection channels between the node to be migrated and the node to be received.
- An apparatus for adjusting interconnection channels, characterized by including a calculation unit, a selection unit, and a creation unit; the calculation unit is used to calculate the competition value of each node according to the resource utilization and type parameters of each node; the selection unit is used to select callable nodes whose performance parameters meet a preset condition; and the creation unit is used to create a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node.
- A system for adjusting interconnection channels, characterized by including multiple nodes and a channel controller; a fixed channel is set between every two adjacent nodes according to a structure in which the multiple nodes are serially connected into a ring; the channel controller is communicatively connected to each of the multiple nodes; the channel controller is used to calculate the competition value of each node according to the resource utilization and type parameters of each node, select callable nodes whose performance parameters meet a preset condition, and create a corresponding number of interconnection channels between the current node and the callable nodes according to the competition value and performance parameters of each node.
- A device for adjusting interconnection channels, characterized by including: a memory, used to store a computer program; and a processor, used to execute the computer program to implement the steps of the method for adjusting interconnection channels according to any one of claims 1 to 6.
- A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method for adjusting interconnection channels according to any one of claims 1 to 6 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010568700.6 | 2020-06-19 | ||
CN202010568700.6A CN111858458B (zh) | 2020-06-19 | 2020-06-19 | Method, apparatus, system, device, and medium for adjusting interconnection channels
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021253817A1 true WO2021253817A1 (zh) | 2021-12-23 |
Family
ID=72987746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/071202 WO2021253817A1 (zh) | 2020-06-19 | 2021-01-12 | 一种互联通道的调整方法、装置、系统、设备和介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111858458B (zh) |
WO (1) | WO2021253817A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111858458B (zh) * | 2020-06-19 | 2022-05-24 | Suzhou Inspur Intelligent Technology Co., Ltd. | Method, apparatus, system, device, and medium for adjusting interconnection channels |
CN113722265B (zh) * | 2021-08-19 | 2024-07-05 | Phytium Technology Co., Ltd. | Debugging and optimization method and apparatus for interconnection channels in a multi-CPU system |
CN117519951B (zh) * | 2024-01-04 | 2024-05-03 | Shenzhen Borui Tianxia Technology Co., Ltd. | Real-time data processing method and system based on a message middle platform |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101916239A (zh) * | 2010-08-27 | 2010-12-15 | Shanghai Jiao Tong University | Method for improving communication speed of an on-chip multiprocessor |
CN105335229A (zh) * | 2014-07-25 | 2016-02-17 | Hangzhou H3C Technologies Co., Ltd. | Service resource scheduling method and apparatus |
CN105528199A (zh) * | 2014-09-30 | 2016-04-27 | Huawei Technologies Co., Ltd. | Node processing method and apparatus |
CN106255228A (zh) * | 2016-04-27 | 2016-12-21 | Beijing Zhigu Rui Tuo Tech Co., Ltd. | Method for establishing a network connection, terminal device, and node device |
US20180234486A1 (en) * | 2017-02-16 | 2018-08-16 | Intel Corporation | Device, system and method for adaptive payload compression in a network fabric |
CN109154924A (zh) * | 2016-07-01 | 2019-01-04 | Intel Corporation | Multiple uplink port devices |
US20200042608A1 (en) * | 2018-08-01 | 2020-02-06 | EMC IP Holding Company LLC | Distributed file system load balancing based on available node capacity |
CN111858458A (zh) * | 2020-06-19 | 2020-10-30 | Suzhou Inspur Intelligent Technology Co., Ltd. | Method, apparatus, system, device, and medium for adjusting interconnection channels |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101945308B (zh) * | 2009-07-07 | 2013-05-08 | ZTE Corporation | Method and apparatus for service migration in an automatically switched optical network |
KR101555266B1 (ko) * | 2011-09-01 | 2015-09-23 | Huawei Technologies Co., Ltd. | Method, apparatus, and system for resource migration |
CN107220125A (zh) * | 2017-05-27 | 2017-09-29 | Zhengzhou Yunhai Information Technology Co., Ltd. | Cloud resource scheduling method and apparatus |
-
2020
- 2020-06-19 CN CN202010568700.6A patent/CN111858458B/zh active Active
-
2021
- 2021-01-12 WO PCT/CN2021/071202 patent/WO2021253817A1/zh active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101916239A (zh) * | 2010-08-27 | 2010-12-15 | Shanghai Jiao Tong University | Method for improving communication speed of an on-chip multiprocessor |
CN105335229A (zh) * | 2014-07-25 | 2016-02-17 | Hangzhou H3C Technologies Co., Ltd. | Service resource scheduling method and apparatus |
CN105528199A (zh) * | 2014-09-30 | 2016-04-27 | Huawei Technologies Co., Ltd. | Node processing method and apparatus |
CN106255228A (zh) * | 2016-04-27 | 2016-12-21 | Beijing Zhigu Rui Tuo Tech Co., Ltd. | Method for establishing a network connection, terminal device, and node device |
CN109154924A (zh) * | 2016-07-01 | 2019-01-04 | Intel Corporation | Multiple uplink port devices |
US20180234486A1 (en) * | 2017-02-16 | 2018-08-16 | Intel Corporation | Device, system and method for adaptive payload compression in a network fabric |
US20200042608A1 (en) * | 2018-08-01 | 2020-02-06 | EMC IP Holding Company LLC | Distributed file system load balancing based on available node capacity |
CN111858458A (zh) * | 2020-06-19 | 2020-10-30 | Suzhou Inspur Intelligent Technology Co., Ltd. | Method, apparatus, system, device, and medium for adjusting interconnection channels |
Also Published As
Publication number | Publication date |
---|---|
CN111858458A (zh) | 2020-10-30 |
CN111858458B (zh) | 2022-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021253817A1 (zh) | Method, apparatus, system, device, and medium for adjusting interconnection channels | |
CN109714395B (zh) | Cloud platform resource usage prediction method and terminal device | |
US7444459B2 (en) | Methods and systems for load balancing of virtual machines in clustered processors using storage related load information | |
US7467291B1 (en) | System and method for calibrating headroom margin | |
CN108845874B (zh) | Dynamic resource allocation method and server | |
Pandey et al. | Neural network-based approach for ATC estimation using distributed computing | |
US20130326028A1 (en) | Server migration | |
CN104092756A (zh) | Dynamic resource allocation method for a DHT-based cloud storage system | |
CN111756760B (zh) | User abnormal behavior detection method based on an ensemble classifier and related device | |
CN108762686A (zh) | Data consistency check flow control method and apparatus, electronic device, and storage medium | |
WO2023131121A1 (zh) | Automated parallel simulation method and simulation apparatus for integrated circuits | |
Guleria et al. | Quadd: Quantifying accelerator disaggregated datacenter efficiency | |
WO2021057023A1 (zh) | Power consumption reduction method and system for automatically allocating computing resources based on component temperature | |
Gupta et al. | Long range dependence in cloud servers: a statistical analysis based on google workload trace | |
CN113568759B (zh) | Big data processing method based on cloud computing and system thereof | |
WO2022052479A1 (zh) | Power consumption regulation method, apparatus, device, and readable storage medium | |
CN112306628B (zh) | Virtual network function resource management system based on a multi-core server | |
CN113890842A (zh) | Method, system, device, and storage medium for calculating an upper bound on information transmission delay | |
CN202261410U (zh) | High-performance cluster system for a render farm | |
US11138086B2 (en) | Collecting hardware performance data | |
WO2019104844A1 (zh) | Automatic performance testing method, apparatus, device, and storage medium for a money market fund system | |
US11374869B2 (en) | Managing bandwidth based on user behavior | |
CN112363609A (zh) | Method and apparatus for reducing network-on-chip power consumption, CPU chip, and server | |
CN111652652A (zh) | Cost calculation method and apparatus for a computing platform, computer device, and storage medium | |
CN116361703A (zh) | Energy-saving control method and apparatus for a data center, electronic device, and readable medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21825759 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21825759 Country of ref document: EP Kind code of ref document: A1 |