CN116088758A - Optimization method, apparatus, computer device, storage medium, and program product - Google Patents

Optimization method, apparatus, computer device, storage medium, and program product

Info

Publication number
CN116088758A
CN116088758A (application number CN202211731340.2A)
Authority
CN
China
Prior art keywords
target
numa node
network card
optimization
binding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211731340.2A
Other languages
Chinese (zh)
Inventor
匙沛华
焦岩
潘潘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dawning Information Industry Beijing Co Ltd
Original Assignee
Dawning Information Industry Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dawning Information Industry Beijing Co Ltd filed Critical Dawning Information Industry Beijing Co Ltd
Priority to CN202211731340.2A priority Critical patent/CN116088758A/en
Publication of CN116088758A publication Critical patent/CN116088758A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061: Improving I/O performance
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638: Organizing or formatting or addressing of data
    • G06F 3/064: Management of blocks
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659: Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present application relates to an optimization method, apparatus, computer device, storage medium, and computer program product. The method comprises: obtaining a target optimization strategy and executing an optimization process on a server based on the target optimization strategy, wherein the optimization process comprises: determining a target NUMA node corresponding to a target network card in the server; binding the target network card with a target CPU core based on the correspondence between the target network card and the target NUMA node; and binding the target NUMA node with a target storage device according to the distance between the target NUMA node and each storage device in the server, so that the target CPU core responds to interrupt requests of the target network card by accessing the target storage device. With this method, the performance of a Ceph distributed block storage system can be improved to the maximum extent.

Description

Optimization method, apparatus, computer device, storage medium, and program product
Technical Field
The present application relates to the field of data processing technology, and in particular, to an optimization method, an apparatus, a computer device, a storage medium, and a computer program product.
Background
Ceph is one of the most popular distributed storage technologies on the market. It provides storage services through three interfaces (block, file, and object), offers high scalability and reliability, and is adopted by OpenStack as a primary storage back end; an estimated 70%-80% of cloud platforms on the market use Ceph as their underlying storage. Ceph's block storage module provides a block storage interface to upper-layer physical hosts or virtual machines via librbd; the back end connects to the librbd interface, and the bottom layer uses the RADOS object storage system.
At present, limited by integrated circuit manufacturing processes, CPU architectures are developing toward multi-NUMA designs, and multi-NUMA CPUs, typified by AMD's, are becoming increasingly widespread on the market. To maximize the performance of a Ceph block storage server built on such a CPU platform, optimization is often pursued by upgrading hardware specifications, that is, by adopting high-performance NVMe SSDs as the underlying storage hardware.
However, this approach optimizes only at the hardware level, so users cannot obtain the highest possible performance improvement despite the expensive hardware: high cost, low return.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an optimization method, apparatus, computer device, computer-readable storage medium, and computer program product that can maximize the performance of Ceph distributed block storage.
In a first aspect, the present application provides an optimization method. The method comprises the following steps:
obtaining a target optimization strategy, and executing an optimization process on a server based on the target optimization strategy, wherein the optimization process comprises: determining a target NUMA node corresponding to a target network card in the server; binding the target network card with a target CPU core based on the correspondence between the target network card and the target NUMA node; and binding the target NUMA node with a target storage device according to the distance between the target NUMA node and each storage device in the server, so that the target CPU core responds to interrupt requests of the target network card by accessing the target storage device.
In one embodiment, the method further comprises: configuring a target number of request queues for interrupt requests of the target network card based on the number of target CPU cores, so that the target CPU cores respond to interrupt requests of the target network card in parallel through the target number of request queues.
In one embodiment, configuring the target number of request queues of the interrupt request of the target network card based on the number of target CPU cores includes: configuring the target number to be the same as the number of target CPU cores; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of logical CPU cores of the target NUMA node; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of physical CPU cores of the target NUMA node.
In one embodiment, binding the target NUMA node with the target storage device according to the distance between the target NUMA node and each storage device in the server includes: binding the target NUMA node with a storage device management process corresponding to the target storage device according to the distance between the target NUMA node and each storage device.
In one embodiment, binding the target NUMA node with a storage management process corresponding to the target storage according to a distance between the target NUMA node and each storage includes: according to the distance between the target NUMA node and each storage device, determining the target storage device closest to the target NUMA node from each storage device, and determining a storage device management process corresponding to the target storage device; binding the target NUMA node with a storage device management process corresponding to the target storage device.
In one embodiment, binding the target NUMA node with a storage device management process corresponding to the target storage device includes: directly binding the target NUMA node with the storage device management process; or binding a CPU core in the target NUMA node with the storage device management process; or binding the CPU cores sharing a level-3 (L3) cache in the target NUMA node with the storage device management process.
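As a hedged sketch of such process binding (the process command, node id, and the use of numactl are illustrative assumptions, not the patent's prescribed tooling), a storage device management process such as a ceph-osd daemon could be pinned to a NUMA node on Linux as follows:

```shell
# Print the numactl command that would pin both CPU scheduling and memory
# allocation of a process launch to a given NUMA node; on a real server the
# printed command would be executed directly.
numa_bind_cmd() {
  # $1: NUMA node id, $2: command to launch under the binding
  printf 'numactl --cpunodebind=%s --membind=%s %s\n' "$1" "$1" "$2"
}
# e.g. numa_bind_cmd 0 'ceph-osd -i 0'
```

Node distances used to choose the nearest storage device can be inspected with numactl --hardware; recent Ceph releases also expose an osd_numa_node configuration option for a similar effect.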
In one embodiment, binding the target network card with the target CPU core based on the correspondence between the target network card and the target NUMA node includes: binding the target network card with the target CPU core of the target NUMA node based on the corresponding relation between the target network card and the target NUMA node; or binding the target network card with the target CPU core contained in the whole CPU where the target NUMA node is located based on the corresponding relation between the target network card and the target NUMA node.
In one embodiment, binding the target network card with the target CPU core based on the correspondence between the target network card and the target NUMA node includes: determining a binding script according to the type of the target network card; and binding the target network card with the target CPU core by using the binding script.
In one embodiment, before the obtaining the target optimization strategy, the method further comprises: acquiring a plurality of candidate optimization strategies, executing the optimization process based on each candidate optimization strategy, and determining performance improvement parameters after executing the optimization process, wherein the performance improvement parameters are used for representing improvement conditions of read-write performance of a storage server cluster where the server is located before and after executing the optimization process; and determining the target optimization strategy from the plurality of candidate optimization strategies according to the performance improvement parameters respectively corresponding to the candidate optimization strategies.
In a second aspect, the present application also provides an optimizing apparatus. The device comprises:
an acquisition module, configured to acquire a target optimization strategy and execute an optimization process on a server based on the target optimization strategy, wherein the optimization process comprises: determining a target NUMA node corresponding to a target network card in the server; binding the target network card with a target CPU core based on the correspondence between the target network card and the target NUMA node; and binding the target NUMA node with a target storage device according to the distance between the target NUMA node and each storage device in the server, so that the target CPU core responds to interrupt requests of the target network card by accessing the target storage device.
In one embodiment, the optimizing device further comprises a configuration module for: and configuring the target number of the request queues of the interrupt request of the target network card based on the number of the target CPU cores, so that the target CPU cores respond to the interrupt request of the target network card in parallel based on the target number of the request queues.
In one embodiment, the configuration module is specifically configured to: configuring the target number to be the same as the number of target CPU cores; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of logical CPU cores of the target NUMA node; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of physical CPU cores of the target NUMA node.
In one embodiment, the obtaining module is further specifically configured to: and binding the target NUMA node with a storage device management process corresponding to the target storage device according to the distance between the target NUMA node and each storage device.
In one embodiment, the obtaining module is further specifically configured to: according to the distance between the target NUMA node and each storage device, determining the target storage device closest to the target NUMA node from each storage device, and determining a storage device management process corresponding to the target storage device; binding the target NUMA node with a storage device management process corresponding to the target storage device.
In one embodiment, the obtaining module is further specifically configured to: directly bind the target NUMA node with the storage device management process corresponding to the target storage device; or bind a CPU core in the target NUMA node with the storage device management process; or bind the CPU cores sharing a level-3 (L3) cache in the target NUMA node with the storage device management process.
In one embodiment, the obtaining module is further specifically configured to: binding the target network card with the target CPU core of the target NUMA node based on the corresponding relation between the target network card and the target NUMA node; or binding the target network card with the target CPU core contained in the whole CPU where the target NUMA node is located based on the corresponding relation between the target network card and the target NUMA node.
In one embodiment, the obtaining module is further specifically configured to: determining a binding script according to the type of the target network card; and binding the target network card with the target CPU core by using the binding script.
In one embodiment, the optimizing device further comprises an execution module for: acquiring a plurality of candidate optimization strategies, executing the optimization process based on each candidate optimization strategy, and determining performance improvement parameters after executing the optimization process, wherein the performance improvement parameters are used for representing improvement conditions of read-write performance of a storage server cluster where the server is located before and after executing the optimization process; and determining the target optimization strategy from the plurality of candidate optimization strategies according to the performance improvement parameters respectively corresponding to the candidate optimization strategies.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor that implements the steps of any implementation of the first aspect when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any implementation of the first aspect.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the first aspects described above.
The above optimization method, apparatus, computer device, storage medium, and computer program product acquire a target optimization strategy and execute an optimization process on a server based on it, the process comprising: determining a target NUMA node corresponding to a target network card in the server; binding the target network card with a target CPU core based on the correspondence between the target network card and the target NUMA node; and binding the target NUMA node with a target storage device according to the distance between the target NUMA node and each storage device in the server, so that the target CPU core responds to interrupt requests of the target network card by accessing the target storage device. By binding the target network card with the target CPU core, the target CPU core can respond to the network card's interrupt requests locally; by additionally binding the target NUMA node with the target storage device, both interrupt handling and storage access stay within one NUMA node, avoiding cross-node memory access and thereby improving the performance of the Ceph distributed block storage system.
Drawings
FIG. 1 is a flow diagram of an optimization method in one embodiment;
FIG. 2 is a flow chart of an optimization method in another embodiment;
FIG. 3 is a flow chart of an optimization method in another embodiment;
FIG. 4 is a flow chart of an optimization method in another embodiment;
FIG. 5 is a flow chart of an optimization method in another embodiment;
FIG. 6 is a block diagram of an optimization device in one embodiment;
FIG. 7 is a block diagram of an optimizing apparatus in another embodiment;
FIG. 8 is a block diagram of an optimizing apparatus in another embodiment;
fig. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Ceph is one of the most popular distributed storage technologies on the market. It provides storage services through three interfaces (block, file, and object), offers high scalability and reliability, and is adopted by OpenStack as a primary storage back end; an estimated 70%-80% of cloud platforms on the market use Ceph as their underlying storage. Ceph's block storage module provides a block storage interface to upper-layer physical hosts or virtual machines via librbd; the back end connects to the librbd interface, and the bottom layer uses the RADOS object storage system.
Currently, limited by integrated circuit manufacturing processes, CPU (Central Processing Unit) architectures are developing toward multi-NUMA (Non-Uniform Memory Access) designs, and multi-NUMA CPUs, typified by AMD's, are becoming increasingly widespread on the market. To maximize the performance of a Ceph block storage server built on such a CPU platform, optimization is often pursued by upgrading hardware specifications, that is, by using high-performance NVMe SSDs as the underlying storage hardware.
However, this approach optimizes only at the hardware level, so users cannot obtain the highest possible performance improvement despite the expensive hardware cost: high cost, low return, and the performance of the Ceph distributed block storage remains suboptimal.
In view of this, the present application provides an optimization method that can maximize the performance of a Ceph distributed block storage system.
The execution subject of the optimization method provided by the embodiment of the application may be a computer device, and the computer device may be a server.
In one embodiment, as shown in FIG. 1, an optimization method is provided, comprising the steps of:
Step 101, acquiring a target optimization strategy, and executing an optimization process for the server based on the target optimization strategy.
The optimization process includes: determining a target NUMA node corresponding to a target network card in the server; binding the target network card with a target CPU core based on the correspondence between the target network card and the target NUMA node; and binding the target NUMA node with a target storage device according to the distance between the target NUMA node and each storage device in the server, so that the target CPU core responds to interrupt requests of the target network card by accessing the target storage device.
Alternatively, the target optimization strategy may be a method formulated by a skilled artisan that may be used to optimize the performance of the Ceph distributed block storage system.
Optionally, the target network card refers to the network card that sends interrupt requests to the CPU; it may be the Ceph public network card or the Ceph cluster network card.
The target NUMA node refers to a NUMA node where the target network card is located, and the target CPU core may be a core in the target NUMA node.
The target storage device refers to the storage device closest to the target NUMA node.
In one possible implementation, the target NUMA node corresponding to the target network card in the server may be determined from the network card's interface parameter. For example, if the target network card is the Ceph public network card and its interface parameter is enp33s0f0, the target NUMA node can be determined by processing that parameter.
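As a concrete sketch of this lookup (the sysfs path is standard Linux behavior; the interface name enp33s0f0 comes from the example above):

```shell
# The kernel exposes a PCI device's NUMA node at
# /sys/class/net/<iface>/device/numa_node (-1 if the platform reports none).
nic_numa_node() {
  # $1: path to a numa_node file; parameterized so the logic can be
  # exercised without real hardware
  cat "$1"
}
# On a real server: nic_numa_node /sys/class/net/enp33s0f0/device/numa_node
```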
In an alternative embodiment, the preparation of the underlying hardware and software environment is performed before the optimization process is performed for the server based on the target optimization strategy.
In one possible implementation, preparing the basic software and hardware environment may be understood as preparing the server hardware to be tested with at least 3 nodes, configuring an NVMe SSD on each server as a cache disk or storage hardware, and installing an appropriate operating system on the servers under test as the basic software environment for the Ceph deployment.
In an alternative embodiment, ceph is deployed before the optimization process is performed for the server based on the target optimization strategy.
In a possible implementation, deploying Ceph specifically means, on the basis of the installed operating system, deploying the appropriate version of the Ceph distributed storage system on all nodes with a suitable deployment tool according to service requirements, forming a Ceph storage cluster of no fewer than 3 nodes, and creating a corresponding number of RBD images through the rbd create command to serve as block storage test interfaces.
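The image-creation step can be sketched as follows (pool name, image naming, and size are illustrative assumptions; rbd create is the standard Ceph CLI command mentioned above):

```shell
# Print the rbd create commands for a given pool, image count, and size in
# MiB; on a deployed cluster the printed commands would be executed directly.
make_rbd_create_cmds() {
  pool=$1; count=$2; size_mb=$3
  i=1
  while [ "$i" -le "$count" ]; do
    printf 'rbd create %s/img%s --size %s\n' "$pool" "$i" "$size_mb"
    i=$((i + 1))
  done
}
# e.g. make_rbd_create_cmds testpool 4 102400
```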
In an alternative embodiment, a base test is performed before the optimization process is performed for the server based on the target optimization strategy.
In one possible implementation, the FIO test tool (with its rbd engine) may be used on the client to run four basic benchmark tests against the RBD images of the server cluster under test, namely 4k random read, 4k random write, 128k sequential read, and 128k sequential write, and to record the baseline performance.
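The four baseline cases can be sketched as FIO command lines (pool and image names are assumptions; the rbd ioengine is the one named above, and the iodepth and runtime values are illustrative, not the patent's prescribed settings):

```shell
# Print one fio command line per baseline case:
# 4k random read/write and 128k sequential read/write.
fio_baseline_cmds() {
  pool=$1; image=$2
  for c in '4k randread' '4k randwrite' '128k read' '128k write'; do
    set -- $c    # after this, $1 = block size and $2 = I/O pattern
    printf 'fio --name=%s-%s --ioengine=rbd --pool=%s --rbdname=%s --bs=%s --rw=%s --iodepth=32 --runtime=60 --time_based\n' \
      "$2" "$1" "$pool" "$image" "$1" "$2"
  done
}
# e.g. fio_baseline_cmds testpool img1
```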
With the above optimization method, the target optimization strategy is acquired and the optimization process is executed on the server: the target NUMA node corresponding to the target network card in the server is determined; the target network card is bound with the target CPU core based on their correspondence; and the target NUMA node is bound with the target storage device according to the distance between the target NUMA node and each storage device in the server, so that the target CPU core responds to interrupt requests of the target network card by accessing the target storage device. Because interrupt handling and storage access both stay local to one NUMA node, cross-node memory access is avoided and the performance of the Ceph distributed block storage system is improved.
In one embodiment, as shown in fig. 2, binding the target network card with the target CPU core based on the corresponding relationship between the target network card and the target NUMA node, includes the following steps:
step 201, determining a binding script according to the type of the target network card.
Optionally, the type of the target network card refers to its vendor, and the binding script refers to a program provided by the network card vendor for binding network card interrupts.
In one possible implementation, the target network card may be an Intel network card, whose corresponding interrupt binding script is the set_irq_affinity script.
In another possible implementation, the target network card may be a Mellanox network card, whose corresponding interrupt binding script is the set_irq_affinity_cpulist.sh script in the mlnx_tuning_scripts directory.
And 202, binding the target network card with the target CPU core by using the binding script.
Optionally, binding the target network card with the target CPU core refers to binding the network card interrupts of the target network card to the target CPU core.
In an alternative embodiment, binding the target network card with the target CPU core based on the correspondence between the target network card and the target NUMA node includes: binding the target network card with the target CPU core of the target NUMA node based on the corresponding relation between the target network card and the target NUMA node; or binding the target network card with the target CPU core contained in the whole CPU where the target NUMA node is located based on the corresponding relation between the target network card and the target NUMA node.
In one possible implementation, the NUMA node where the target network card resides (the target NUMA node) may be queried first, and then the lscpu command may be used to query the CPU cores corresponding to that node (the target CPU cores), say 0-7 and 32-39. The network card interrupt binding script is then invoked with the network card parameter and the query result, so that the network card interrupts are bound to the cores of the target CPU.
In another possible implementation, the NUMA node where the target network card resides (the target NUMA node) may likewise be queried first, and the lscpu command used to query the corresponding CPU cores, say 0-7 and 32-39; the binding script is then invoked with the network card parameter and the query result so that the network card interrupts are bound to all cores of the whole CPU, i.e., the whole socket, where the target NUMA node is located.
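A small helper for the CPU-list handling implied above can be sketched as follows (the expansion logic is illustrative; the vendor scripts from the previous embodiment accept kernel-style lists such as those printed by lscpu):

```shell
# Expand a kernel-style cpulist such as "0-7,32-39" into space-separated
# core ids, as taskset or a per-core IRQ affinity setup might need.
expand_cpulist() {
  echo "$1" | tr ',' '\n' | while IFS=- read lo hi; do
    seq "$lo" "${hi:-$lo}"   # a bare "8" expands to itself
  done | tr '\n' ' ' | sed 's/ $//'
}
# On a real server, the unexpanded list could be passed straight to the
# Mellanox script, e.g. ./set_irq_affinity_cpulist.sh 0-7,32-39 enp33s0f0
# (invocation shown as an assumption).
```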
With the above network card interrupt binding methods, for the Ceph public network card the interrupts may be bound either locally to the network card's NUMA node or to all cores of the CPU where it is located, whereas for the Ceph cluster network card the interrupts need to be bound locally to the network card's NUMA node, so as to achieve the fastest response time and minimize data synchronization overhead between nodes.
According to the method for binding the target network card and the target CPU core based on the corresponding relation between the target network card and the target NUMA node, the network card interrupt response time can be effectively shortened, and the performance is improved.
In one embodiment, the method further comprises: configuring a target number of request queues for interrupt requests of the target network card based on the number of target CPU cores, so that the target CPU cores respond to interrupt requests of the target network card in parallel through the target number of request queues.
In an alternative embodiment, configuring the target number of request queues of the interrupt request of the target network card based on the number of target CPU cores includes: configuring the target number to be the same as the number of target CPU cores; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of logical CPU cores of the target NUMA node; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of physical CPU cores of the target NUMA node.
The target number refers to the number of request queues of the interrupt request of the target network card.
In one possible implementation manner, the number of target CPU cores and the parameters of the target network card may be processed algorithmically based on the ethtool method, so as to configure a number of request queues equal to the number of target CPU cores, enabling the target CPU cores to respond to the network card interrupts.
In another possible implementation manner, the number of logical CPU cores of the target NUMA node and the parameters of the target network card may be processed algorithmically based on the ethtool method, so as to configure a number of request queues equal to the number of logical CPU cores of the target NUMA node, enabling the target CPU cores to respond to the network card interrupts.
In another possible implementation manner, the number of physical CPU cores of the target NUMA node and the parameters of the target network card may be processed algorithmically based on the ethtool method, so as to configure a number of request queues equal to the number of physical CPU cores of the target NUMA node, enabling the target CPU cores to respond to the network card interrupts.
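The three options can be sketched as follows. The helper function and the interface name eth0 are hypothetical; `ethtool -L <iface> combined <n>` is the standard ethtool invocation for setting the queue (channel) count:

```python
def choose_queue_count(num_target_cores, node_logical, node_physical, mode="cores"):
    """Pick the target number of NIC request queues per the three options above."""
    if mode == "cores":          # same as the number of target CPU cores
        return num_target_cores
    if mode == "node_logical":   # same as the target NUMA node's logical cores
        return node_logical
    if mode == "node_physical":  # same as the target NUMA node's physical cores
        return node_physical
    raise ValueError(f"unknown mode: {mode}")

# e.g. a socket with 16 target cores whose target node has 8 physical cores;
# the chosen count would then be applied with ethtool
n = choose_queue_count(16, 16, 8, mode="node_physical")
print(f"ethtool -L eth0 combined {n}")
```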
The method for configuring the target number of the request queues of the interrupt request of the target network card based on the number of the target CPU cores can accelerate the response of the target CPU cores to the interrupt request of the target network card.
In one embodiment, binding the target NUMA node with the target storage according to the distance between the target NUMA node and each storage in the server includes: and binding the target NUMA node with a storage device management process corresponding to the target storage device according to the distance between the target NUMA node and each storage device.
Alternatively, the storage device management process may be an OSD (Object Storage Daemon), i.e., the process used by Ceph to manage a storage device.
In an alternative embodiment, as shown in fig. 3, binding the target NUMA node with the storage device management process corresponding to the target storage device according to the distance between the target NUMA node and each storage device includes the following steps:
step 301, determining, from the storage devices, the target storage device closest to the target NUMA node according to the distance between the target NUMA node and the storage devices, and determining a storage device management process corresponding to the target storage device.
Step 302, binding the target NUMA node with a storage management process corresponding to the target storage device.
In an alternative embodiment, binding the target NUMA node with a storage management process that corresponds to the target storage includes: directly binding the target NUMA node with a storage device management process corresponding to the target storage device; or binding a CPU core in the target NUMA node with a storage device management process corresponding to the target storage device; or binding the tertiary cache in the target NUMA node with a storage device management process corresponding to the target storage device.
In one possible implementation manner, an OSD may be determined first and the following binding procedure executed for it: the storage device corresponding to the OSD is queried based on the lsblk method and algorithm processing; the NUMA node corresponding to that target storage device is determined as the target NUMA node (it can be understood that the target storage device is the storage device closest to the target NUMA node among the plurality of storage devices); and the target NUMA node is then bound to the OSD based on the numactl command, the service file corresponding to the OSD, and algorithm processing.
In another possible implementation manner, the same procedure is followed, except that in the final step a CPU core in the target NUMA node is bound to the OSD based on the numactl command, the service file corresponding to the OSD, and algorithm processing.
In another possible implementation manner, the same procedure is followed, except that in the final step the third-level cache in the target NUMA node is bound to the OSD based on the numactl command, the service file corresponding to the OSD, and algorithm processing. The third-level cache is the L3 Cache, which here can be understood as the CPU core or CPU core group corresponding to it; that is, the CPU cores sharing one L3 Cache are bound to the OSD as a unit.
In an alternative embodiment, after executing the binding procedure, the configuration file is also reloaded via the systemctl daemon-reload and systemctl restart ceph-osd@0 commands, and the OSD is restarted, i.e., the storage device management process is restarted.
In an alternative embodiment, the above binding procedure may be applied to all OSDs, and a corresponding script may be written to improve efficiency.
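A minimal sketch of the binding step, assuming the stock ceph-osd@.service unit layout: overriding ExecStart in a systemd drop-in with a numactl prefix is one common way to realize the "numactl command plus service file" processing described above. The exact ExecStart line may differ between Ceph releases, so treat the paths and arguments as assumptions:

```python
def osd_numa_dropin(osd_id, numa_node, cpus=None):
    """Render a systemd drop-in pinning ceph-osd@<osd_id> to a NUMA node.

    cpus=None binds the whole node (numactl --cpunodebind); a CPU list such
    as "0-7" narrows the binding to specific cores, which also covers the
    L3-cache-group variant (pass the cores sharing one L3 Cache).
    """
    bind = f"--physcpubind={cpus}" if cpus else f"--cpunodebind={numa_node}"
    return (
        "[Service]\n"
        "ExecStart=\n"  # systemd requires clearing the original ExecStart first
        f"ExecStart=/usr/bin/numactl {bind} --membind={numa_node} "
        f"/usr/bin/ceph-osd -f --cluster ceph --id {osd_id}\n"
    )

# e.g. write the rendered text to
# /etc/systemd/system/ceph-osd@0.service.d/numa.conf, then:
#   systemctl daemon-reload && systemctl restart ceph-osd@0
print(osd_numa_dropin(0, 1))
```

Looping this over all OSD ids, each with its nearest node from the lsblk query, corresponds to the script mentioned above.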
According to the method for binding the target NUMA node with the storage device management process corresponding to the target storage device according to the distance between the target NUMA node and each storage device, the cross-NUMA access can be reduced, and the performance is further effectively improved.
In one embodiment, as shown in fig. 4, before the target optimization strategy is obtained, the method further includes the following steps:
step 401, obtaining a plurality of candidate optimization strategies, executing the optimization process based on each candidate optimization strategy, and determining performance improvement parameters after executing the optimization process, wherein the performance improvement parameters are used for representing improvement conditions of read-write performance of a storage server cluster where the server is located before and after executing the optimization process.
In one possible implementation manner, the methods provided in the foregoing embodiments may be combined to obtain a plurality of candidate optimization strategies. The optimization process is then performed based on each candidate optimization strategy; after it is performed, four basic index tests (4k random read, 4k random write, 128k sequential read, and 128k sequential write) are run from the client against the RBD of the server cluster under test using the FIO test tool (rbd engine), the optimized performance value is recorded, and the performance improvement parameter is determined from the baseline performance value and the optimized performance value.
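A sketch of how the four basic index tests might be assembled for fio's rbd engine; the pool and image names are placeholders, and in real deployments a --clientname option is typically also required:

```python
def fio_rbd_args(pool, image, rw, bs, runtime=60, iodepth=32):
    """Assemble fio arguments for one basic index test using the rbd engine."""
    return [
        "fio", "--ioengine=rbd", f"--pool={pool}", f"--rbdname={image}",
        "--direct=1", f"--rw={rw}", f"--bs={bs}", f"--iodepth={iodepth}",
        f"--runtime={runtime}", "--time_based", "--name=bench",
    ]

# the four basic index tests: 4k random read/write, 128k sequential read/write
for rw, bs in [("randread", "4k"), ("randwrite", "4k"),
               ("read", "128k"), ("write", "128k")]:
    print(" ".join(fio_rbd_args("rbd", "bench-img", rw, bs)))
```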
Step 402, determining the target optimization strategy from the plurality of candidate optimization strategies according to the performance improvement parameters respectively corresponding to the candidate optimization strategies.
In one possible implementation manner, performance improvement parameters of all candidate optimization strategies are compared, and the candidate optimization strategy with the largest performance improvement parameter is taken as the target optimization strategy.
According to the above method for determining the target optimization strategy from the performance improvement parameters, the optimization process is executed with multiple candidate strategies and the candidate optimization strategy with the largest performance improvement parameter is selected as the target optimization strategy, so that the maximum performance improvement can be achieved.
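The selection in steps 401-402 reduces to computing each candidate's improvement over the baseline and taking the maximum. The strategy names and optimized values below are hypothetical, with the baseline taken from Table 1:

```python
def improvement_pct(baseline, optimized):
    """Performance-improvement parameter as a percentage over the baseline."""
    return (optimized - baseline) / baseline * 100

baseline = 641.0                                         # 4k random read baseline (kIOPS)
candidates = {"strategy-a": 700.0, "strategy-b": 800.0}  # hypothetical optimized values

# step 402: the candidate with the largest improvement becomes the target strategy
best = max(candidates, key=lambda k: improvement_pct(baseline, candidates[k]))
print(best, round(improvement_pct(baseline, candidates[best]), 2))
```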
In one embodiment, as shown in fig. 5, an optimization method is provided, which includes the following steps:
step 501, a target optimization strategy is obtained, and an optimization process is executed for the server based on the target optimization strategy.
Step 502, determining a binding script according to the type of the target network card.
And step 503, binding the target network card with the target CPU core by using the binding script.
Step 504, based on the number of the target CPU cores, configuring a target number of request queues of the interrupt request of the target network card.
Step 505, determining, from the storage devices, the target storage device closest to the target NUMA node according to the distance between the target NUMA node and the storage devices, and determining a storage device management process corresponding to the target storage device.
Step 506, binding the target NUMA node with a storage management process corresponding to the target storage device.
It should be noted that the inventors actually ran the method on a server platform with a multi-NUMA-node CPU architecture and obtained the related data results shown in Table 1 below.
TABLE 1
Test index                     Baseline    After optimization    Improvement
4k random read (kIOPS)         641         800                   24.80%
4k random write (kIOPS)        62          76.9                  24.03%
128k sequential read (MB/s)    2955        8234                  178.65%
128k sequential write (MB/s)   679         1449                  113.40%
It should be noted that the optimization method provided by the present application is suitable not only for Ceph block storage but also for interfaces such as file storage and object storage.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in those flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, an embodiment of the application also provides an optimizing device for implementing the above optimization method. The implementation of the solution provided by the device is similar to that described for the method, so for the specific limitations of the one or more optimizing-device embodiments provided below, reference may be made to the limitations of the optimization method above, which are not repeated here.
In one embodiment, as shown in fig. 6, there is provided an optimizing apparatus comprising: an acquisition module, wherein:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a target optimization strategy and executing an optimization process for a server based on the target optimization strategy, wherein the optimization process comprises the following steps: determining a target NUMA node corresponding to a target network card in the server, binding the target network card with a target CPU core based on the corresponding relation between the target network card and the target NUMA node, and binding the target NUMA node with target storage equipment according to the distance between the target NUMA node and each storage equipment in the server so that the target CPU core responds to the interrupt request of the target network card through the access to the target storage equipment.
In one embodiment, as shown in FIG. 7, another optimization apparatus is provided, the optimization apparatus 700 including a configuration module 602 in addition to the modules included in the optimization apparatus 600.
In one embodiment, the configuration module is configured to: and configuring the target number of the request queues of the interrupt request of the target network card based on the number of the target CPU cores, so that the target CPU cores respond to the interrupt request of the target network card in parallel based on the target number of the request queues.
In one embodiment, the configuration module is specifically configured to: configuring the target number to be the same as the number of target CPU cores; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of logical CPU cores of the target NUMA node; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of physical CPU cores of the target NUMA node.
In one embodiment, the obtaining module is specifically configured to: and binding the target NUMA node with a storage device management process corresponding to the target storage device according to the distance between the target NUMA node and each storage device.
In one embodiment, the obtaining module is further specifically configured to: according to the distance between the target NUMA node and each storage device, determining the target storage device closest to the target NUMA node from each storage device, and determining a storage device management process corresponding to the target storage device; binding the target NUMA node with a storage device management process corresponding to the target storage device.
In one embodiment, the obtaining module is further specifically configured to: directly binding the target NUMA node with a storage device management process corresponding to the target storage device; or binding a CPU core in the target NUMA node with a storage device management process corresponding to the target storage device; or binding the tertiary cache in the target NUMA node with a storage device management process corresponding to the target storage device.
In one embodiment, the obtaining module is further specifically configured to: binding the target network card with the target CPU core of the target NUMA node based on the corresponding relation between the target network card and the target NUMA node; or binding the target network card with the target CPU core contained in the whole CPU where the target NUMA node is located based on the corresponding relation between the target network card and the target NUMA node.
In one embodiment, the obtaining module is further specifically configured to: determining a binding script according to the type of the target network card; and binding the target network card with the target CPU core by using the binding script.
In one embodiment, as shown in fig. 8, another optimizing device is provided, and the optimizing device 800 includes an executing module 603 in addition to the respective modules included in the optimizing device 600 and the optimizing device 700.
In one embodiment, the execution module is configured to: acquiring a plurality of candidate optimization strategies, executing the optimization process based on each candidate optimization strategy, and determining performance improvement parameters after executing the optimization process, wherein the performance improvement parameters are used for representing improvement conditions of read-write performance of a storage server cluster where the server is located before and after executing the optimization process; and determining the target optimization strategy from the plurality of candidate optimization strategies according to the performance improvement parameters respectively corresponding to the candidate optimization strategies.
The respective modules in the above-described optimizing apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 9. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an optimization method.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
obtaining a target optimization strategy, and executing an optimization process for a server based on the target optimization strategy, wherein the optimization process comprises: determining a target NUMA node corresponding to a target network card in the server, binding the target network card with a target CPU core based on the corresponding relation between the target network card and the target NUMA node, and binding the target NUMA node with target storage equipment according to the distance between the target NUMA node and each storage equipment in the server so that the target CPU core responds to the interrupt request of the target network card through the access to the target storage equipment.
In one embodiment, the processor when executing the computer program further performs the steps of: and configuring the target number of the request queues of the interrupt request of the target network card based on the number of the target CPU cores, so that the target CPU cores respond to the interrupt request of the target network card in parallel based on the target number of the request queues.
In one embodiment, the processor, when executing the computer program, further performs the steps of: configuring the target number to be the same as the number of target CPU cores; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of logical CPU cores of the target NUMA node; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of physical CPU cores of the target NUMA node.
In one embodiment, the target NUMA node is bound to a target storage device according to a distance between the target NUMA node and each storage device in the server, and the processor when executing the computer program further performs the steps of: and binding the target NUMA node with a storage device management process corresponding to the target storage device according to the distance between the target NUMA node and each storage device.
In one embodiment, according to the distance between the target NUMA node and each storage device, binding the target NUMA node with a storage device management process corresponding to the target storage device, where the processor when executing the computer program further implements the following steps: according to the distance between the target NUMA node and each storage device, determining the target storage device closest to the target NUMA node from each storage device, and determining a storage device management process corresponding to the target storage device; binding the target NUMA node with a storage device management process corresponding to the target storage device.
In one embodiment, the target NUMA node is bound with a storage management process corresponding to the target storage, and the processor when executing the computer program further performs the following steps: directly binding the target NUMA node with a storage device management process corresponding to the target storage device; or binding a CPU core in the target NUMA node with a storage device management process corresponding to the target storage device; or binding the tertiary cache in the target NUMA node with a storage device management process corresponding to the target storage device.
In one embodiment, the target network card is bound to the target CPU core based on the correspondence between the target network card and the target NUMA node, and the processor further implements the following steps when executing the computer program: binding the target network card with the target CPU core of the target NUMA node based on the corresponding relation between the target network card and the target NUMA node; or binding the target network card with the target CPU core contained in the whole CPU where the target NUMA node is located based on the corresponding relation between the target network card and the target NUMA node.
In one embodiment, the target network card is bound to the target CPU core based on the correspondence between the target network card and the target NUMA node, and the processor further implements the following steps when executing the computer program: determining a binding script according to the type of the target network card; and binding the target network card with the target CPU core by using the binding script.
In one embodiment, before the target optimization strategy is obtained, the processor further performs the following steps when executing the computer program: acquiring a plurality of candidate optimization strategies, executing the optimization process based on each candidate optimization strategy, and determining performance improvement parameters after executing the optimization process, wherein the performance improvement parameters are used for representing improvement conditions of read-write performance of a storage server cluster where the server is located before and after executing the optimization process; and determining the target optimization strategy from the plurality of candidate optimization strategies according to the performance improvement parameters respectively corresponding to the candidate optimization strategies.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
obtaining a target optimization strategy, and executing an optimization process for a server based on the target optimization strategy, wherein the optimization process comprises: determining a target NUMA node corresponding to a target network card in the server, binding the target network card with a target CPU core based on the corresponding relation between the target network card and the target NUMA node, and binding the target NUMA node with target storage equipment according to the distance between the target NUMA node and each storage equipment in the server so that the target CPU core responds to the interrupt request of the target network card through the access to the target storage equipment.
In one embodiment, the computer program when executed by the processor further performs the steps of: and configuring the target number of the request queues of the interrupt request of the target network card based on the number of the target CPU cores, so that the target CPU cores respond to the interrupt request of the target network card in parallel based on the target number of the request queues.
In one embodiment, the configuring the target number of request queues of the interrupt request of the target network card based on the number of target CPU cores, when the computer program is executed by the processor, further implements the steps of: configuring the target number to be the same as the number of target CPU cores; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of logical CPU cores of the target NUMA node; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of physical CPU cores of the target NUMA node.
In one embodiment, the target NUMA node is bound to a target storage device according to a distance between the target NUMA node and each storage device in the server, and the computer program when executed by the processor further performs the steps of: and binding the target NUMA node with a storage device management process corresponding to the target storage device according to the distance between the target NUMA node and each storage device.
In one embodiment, the target NUMA node is bound to a storage management process corresponding to the target storage according to a distance between the target NUMA node and each storage, and the computer program when executed by the processor further performs the steps of: according to the distance between the target NUMA node and each storage device, determining the target storage device closest to the target NUMA node from each storage device, and determining a storage device management process corresponding to the target storage device; binding the target NUMA node with a storage device management process corresponding to the target storage device.
In one embodiment, the target NUMA node is bound to a storage management process corresponding to the target storage, and the computer program when executed by the processor further performs the steps of: directly binding the target NUMA node with a storage device management process corresponding to the target storage device; or binding a CPU core in the target NUMA node with a storage device management process corresponding to the target storage device; or binding the tertiary cache in the target NUMA node with a storage device management process corresponding to the target storage device.
In one embodiment, the target network card is bound to the target CPU core based on the correspondence between the target network card and the target NUMA node, and the computer program when executed by the processor further implements the steps of: binding the target network card with the target CPU core of the target NUMA node based on the corresponding relation between the target network card and the target NUMA node; or binding the target network card with the target CPU core contained in the whole CPU where the target NUMA node is located based on the corresponding relation between the target network card and the target NUMA node.
In one embodiment, the target network card is bound to the target CPU core based on the correspondence between the target network card and the target NUMA node, and the computer program when executed by the processor further implements the steps of: determining a binding script according to the type of the target network card; and binding the target network card with the target CPU core by using the binding script.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring a plurality of candidate optimization strategies, executing the optimization process based on each candidate optimization strategy, and determining performance improvement parameters after executing the optimization process, wherein the performance improvement parameters are used for representing improvement conditions of read-write performance of a storage server cluster where the server is located before and after executing the optimization process; and determining the target optimization strategy from the plurality of candidate optimization strategies according to the performance improvement parameters respectively corresponding to the candidate optimization strategies.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
obtaining a target optimization strategy, and executing an optimization process for a server based on the target optimization strategy, wherein the optimization process comprises: determining a target NUMA node corresponding to a target network card in the server, binding the target network card with a target CPU core based on the corresponding relation between the target network card and the target NUMA node, and binding the target NUMA node with target storage equipment according to the distance between the target NUMA node and each storage equipment in the server so that the target CPU core responds to the interrupt request of the target network card through the access to the target storage equipment.
In one embodiment, the computer program when executed by the processor further performs the steps of: and configuring the target number of the request queues of the interrupt request of the target network card based on the number of the target CPU cores, so that the target CPU cores respond to the interrupt request of the target network card in parallel based on the target number of the request queues.
In one embodiment, the configuring the target number of request queues of the interrupt request of the target network card based on the number of target CPU cores, when the computer program is executed by the processor, further implements the steps of: configuring the target number to be the same as the number of target CPU cores; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of logical CPU cores of the target NUMA node; alternatively, the target number is configured to be less than the number of target CPU cores and the same as the number of physical CPU cores of the target NUMA node.
In one embodiment, the target NUMA node is bound to a target storage device according to the distance between the target NUMA node and each storage device in the server, and the computer program, when executed by the processor, further performs the steps of: binding the target NUMA node with a storage device management process corresponding to the target storage device according to the distance between the target NUMA node and each storage device.
In one embodiment, the target NUMA node is bound to a storage management process corresponding to the target storage according to a distance between the target NUMA node and each storage, and the computer program when executed by the processor further performs the steps of: according to the distance between the target NUMA node and each storage device, determining the target storage device closest to the target NUMA node from each storage device, and determining a storage device management process corresponding to the target storage device; binding the target NUMA node with a storage device management process corresponding to the target storage device.
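Linux reports inter-node NUMA distances (the ACPI SLIT values, where 10 conventionally means local) via `numactl --hardware` or `/sys/devices/system/node/node<N>/distance`. A sketch of the nearest-device selection described above, over a hypothetical two-node distance matrix:

```python
def closest_storage_device(target_node, device_nodes, distance):
    """Pick the storage device whose NUMA node is nearest to target_node.

    device_nodes: mapping of device name -> NUMA node it attaches to
    distance:     square matrix; distance[a][b] is the reported distance
                  between nodes a and b.
    """
    return min(device_nodes, key=lambda dev: distance[target_node][device_nodes[dev]])

# Hypothetical two-node machine: local distance 10, remote 21
# (common SLIT values), one NVMe drive attached to each node.
distance = [[10, 21], [21, 10]]
devices = {"nvme0n1": 0, "nvme1n1": 1}
print(closest_storage_device(1, devices, distance))  # nvme1n1
```

The selected device's management process could then be pinned to the target node, for example with `numactl --cpunodebind`, so interrupt handling and storage access stay node-local.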
In one embodiment, the target NUMA node is bound to the storage device management process corresponding to the target storage device, and the computer program, when executed by the processor, further performs the steps of: directly binding the target NUMA node with the storage device management process corresponding to the target storage device; or binding a CPU core in the target NUMA node with the storage device management process corresponding to the target storage device; or binding the third-level (L3) cache in the target NUMA node with the storage device management process corresponding to the target storage device.
In one embodiment, the target network card is bound to the target CPU core based on the correspondence between the target network card and the target NUMA node, and the computer program, when executed by the processor, further performs the steps of: binding the target network card with the target CPU core of the target NUMA node based on the correspondence between the target network card and the target NUMA node; or binding the target network card with the target CPU cores contained in the entire physical CPU where the target NUMA node is located, based on the correspondence between the target network card and the target NUMA node.
In one embodiment, the target network card is bound to the target CPU core based on the correspondence between the target network card and the target NUMA node, and the computer program when executed by the processor further implements the steps of: determining a binding script according to the type of the target network card; and binding the target network card with the target CPU core by using the binding script.
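Vendor network drivers commonly ship their own IRQ-affinity helper scripts (for example, Mellanox/NVIDIA and Intel driver packages each include a `set_irq_affinity` style helper), so dispatching on the card type is a natural way to pick the binding script. The mapping below is purely illustrative; the script names and argument order are placeholders, not confirmed interfaces:

```python
def select_binding_command(nic_driver, interface, numa_node):
    """Choose an affinity-binding command line by network card driver type.

    The script names are hypothetical placeholders; a real deployment
    would point at the helper shipped with the installed driver package.
    """
    scripts = {
        "mlx5_core": "set_irq_affinity.sh",  # placeholder for a Mellanox-style helper
        "ixgbe": "set_irq_affinity",         # placeholder for an Intel-style helper
    }
    script = scripts.get(nic_driver, "generic_irq_bind.sh")  # hypothetical fallback
    return f"{script} {numa_node} {interface}"

print(select_binding_command("mlx5_core", "eth0", 1))  # set_irq_affinity.sh 1 eth0
```

The driver name for a given interface can be read on Linux from `ethtool -i <interface>` or the `/sys/class/net/<interface>/device/driver` symlink.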
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring a plurality of candidate optimization strategies, executing the optimization process based on each candidate optimization strategy, and determining a performance improvement parameter after executing the optimization process, wherein the performance improvement parameter is used to characterize the improvement in read-write performance of the storage server cluster where the server is located before and after the optimization process is executed; and determining the target optimization strategy from the plurality of candidate optimization strategies according to the performance improvement parameters respectively corresponding to the candidate optimization strategies.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may perform the steps of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM) or an external cache, and the like. By way of illustration and not limitation, RAM may take various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. The non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, and data processing logic units based on quantum computing.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples represent only a few embodiments of the present application; their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the application. It should be noted that those skilled in the art could make various modifications and improvements without departing from the spirit of the present application, and such modifications and improvements fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method of optimization, the method comprising:
obtaining a target optimization strategy, and executing an optimization process for a server based on the target optimization strategy, wherein the optimization process comprises:
determining a target NUMA node corresponding to a target network card in the server, binding the target network card with a target CPU core based on the correspondence between the target network card and the target NUMA node, and binding the target NUMA node with a target storage device according to the distance between the target NUMA node and each storage device in the server, so that the target CPU core responds to an interrupt request of the target network card by accessing the target storage device.
2. The method according to claim 1, wherein the method further comprises:
configuring a target number of request queues for the interrupt requests of the target network card based on the number of target CPU cores, so that the target CPU cores respond to the interrupt requests of the target network card in parallel based on the target number of request queues.
3. The method of claim 2, wherein configuring the target number of request queues of the interrupt request of the target network card based on the number of target CPU cores comprises:
configuring the target number to be the same as the number of target CPU cores; or,
configuring the target number to be less than the number of target CPU cores and the same as the number of logical CPU cores of the target NUMA node; or,
the target number is configured to be less than the number of target CPU cores and the same as the number of physical CPU cores of the target NUMA node.
4. The method of claim 1, wherein binding the target NUMA node with a target storage device based on a distance between the target NUMA node and storage devices in the server comprises:
binding the target NUMA node with a storage device management process corresponding to the target storage device according to the distance between the target NUMA node and each storage device.
5. The method of claim 4, wherein binding the target NUMA node with a storage device management process corresponding to the target storage device according to a distance between the target NUMA node and each storage device comprises:
determining the target storage device closest to the target NUMA node from the storage devices according to the distance between the target NUMA node and the storage devices, and determining a storage device management process corresponding to the target storage device;
binding the target NUMA node with a storage device management process corresponding to the target storage device.
6. The method according to any one of claims 1 to 5, wherein binding the target network card with the target CPU core based on a correspondence between the target network card and the target NUMA node includes:
determining a binding script according to the type of the target network card;
binding the target network card with the target CPU core by using the binding script.
7. An optimization device, the device comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a target optimization strategy and executing an optimization process for a server based on the target optimization strategy, and the optimization process comprises the following steps:
determining a target NUMA node corresponding to a target network card in the server, binding the target network card with a target CPU core based on the correspondence between the target network card and the target NUMA node, and binding the target NUMA node with a target storage device according to the distance between the target NUMA node and each storage device in the server, so that the target CPU core responds to an interrupt request of the target network card by accessing the target storage device.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202211731340.2A 2022-12-30 2022-12-30 Optimization method, optimization device, optimization computer device, optimization storage medium, and optimization program product Pending CN116088758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211731340.2A CN116088758A (en) 2022-12-30 2022-12-30 Optimization method, optimization device, optimization computer device, optimization storage medium, and optimization program product

Publications (1)

Publication Number Publication Date
CN116088758A 2023-05-09

Family

ID=86186338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211731340.2A Pending CN116088758A (en) 2022-12-30 2022-12-30 Optimization method, optimization device, optimization computer device, optimization storage medium, and optimization program product

Country Status (1)

Country Link
CN (1) CN116088758A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117311994A (en) * 2023-11-28 2023-12-29 苏州元脑智能科技有限公司 Processing core isolation method and device, electronic equipment and storage medium
CN117311994B (en) * 2023-11-28 2024-02-23 苏州元脑智能科技有限公司 Processing core isolation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110147407B (en) Data processing method and device and database management server
CN111324303B (en) SSD garbage recycling method, SSD garbage recycling device, computer equipment and storage medium
WO2017050064A1 (en) Memory management method and device for shared memory database
US20190199794A1 (en) Efficient replication of changes to a byte-addressable persistent memory over a network
CN110716845B (en) Log information reading method of Android system
US20170277439A1 (en) Techniques for Path Optimization in Storage Networks
US20230306010A1 (en) Optimizing Storage System Performance Using Data Characteristics
US20240061712A1 (en) Method, apparatus, and system for creating training task on ai training platform, and medium
CN115686932B (en) Backup set file recovery method and device and computer equipment
CN108052622A (en) A kind of storage method based on non-relational database, device and equipment
CN116088758A (en) Optimization method, optimization device, optimization computer device, optimization storage medium, and optimization program product
CN108475201A (en) A kind of data capture method in virtual machine start-up course and cloud computing system
CN115662489A (en) Hard disk test method and device, electronic equipment and storage medium
CN104598161A (en) Data reading and writing method and device and data storage structure
US9588884B2 (en) Systems and methods for in-place reorganization of device storage
CN113051102A (en) File backup method, device, system, storage medium and computer equipment
US9164678B2 (en) Merging data volumes and derivative versions of the data volumes
CN115576743B (en) Operating system recovery method, operating system recovery device, computer equipment and storage medium
CN104536764A (en) Program running method and device
US20200142995A1 (en) Intelligently Scheduling Resynchronization Jobs in a Distributed Object-Based Storage System
CN115760405A (en) Transaction execution method, device, computer equipment and medium
US11567698B2 (en) Storage device configured to support multi-streams and operation method thereof
CN115793957A (en) Method and device for writing data and computer storage medium
CN107102898B (en) Memory management and data structure construction method and device based on NUMA (non Uniform memory Access) architecture
CN112445413A (en) Data storage method and device and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination