CN117596212B - Service processing method, device, equipment and medium - Google Patents


Info

Publication number
CN117596212B
CN117596212B (application CN202410072350.2A)
Authority
CN
China
Prior art keywords
service
target
control node
processing unit
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410072350.2A
Other languages
Chinese (zh)
Other versions
CN117596212A (en)
Inventor
李宁
苑忠科
张在理
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd filed Critical Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202410072350.2A priority Critical patent/CN117596212B/en
Publication of CN117596212A publication Critical patent/CN117596212A/en
Application granted granted Critical
Publication of CN117596212B publication Critical patent/CN117596212B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06 Management of faults, events, alarms or notifications
    • H04L 41/0654 Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0659 Management of faults, events, alarms or notifications using network fault recovery by isolating or reconfiguring faulty entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06 Management of faults, events, alarms or notifications
    • H04L 41/0654 Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0663 Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Hardware Redundancy (AREA)

Abstract

The invention discloses a service processing method, device, equipment and medium in the technical field of data processing. When the services of a failed control node are transferred to the standby control node, each target virtual port is distributed in a balanced manner to a corresponding target central processing unit (CPU) core among the current CPU cores of the standby control node; this first balanced distribution ensures that the service threads corresponding to the virtual ports are processed evenly across the CPU cores. Then, according to a service balancing mechanism, the service data are distributed in a balanced manner to the corresponding target CPU cores for processing; this second balanced distribution ensures that the service data under each service thread are processed evenly by the CPU cores. While the services of the failed control node continue to be processed normally, the load is balanced, CPU resources are fully utilized, and service throughput is improved.

Description

Service processing method, device, equipment and medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a service processing method, apparatus, device, and medium.
Background
In a dual-controller storage system, when one control node fails, the surviving control node takes over the port information of the failed control node. After the links of the failed control node are transferred to the surviving node, its services are recovered, ensuring that the services of the failed control node continue to be processed normally.
Fig. 1 is a schematic diagram of path transfer in a current dual controller. As shown in Fig. 1, virtual port 2 of node A (standby control node 10) takes over the globally unique identifier (World Wide Port Name, WWPN) of virtual port 1 of node B (failed control node 9); link 2 is lost, a new link 3 is established between the host/server 8 and virtual port 2 of node A, and the traffic of link 2 is transferred to link 3. Since the traffic of both link 1 and link 3 is then processed by core 1 of the central processing unit (Central Processing Unit, CPU), the processing load on CPU core 1 of node A increases, the original service throughput cannot be maintained, and the service processing capability of node A is reduced.
Therefore, how to maintain the service throughput of the controller, and thus its service processing capability, is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The object of the invention is to provide a service processing method, device, equipment and medium that solve the technical problem that, after path transfer in a current dual controller, the standby controller cannot maintain its original service throughput, so its service processing capability is reduced.
In order to solve the above technical problems, the present invention provides a service processing method, which is applied to a storage system of a dual control node, and the method includes:
under the condition that the service corresponding to a failed control node is transferred to a standby control node, acquiring the service data of a service thread, each target virtual port of the standby control node, and each current central processing unit core;
distributing each target virtual port in a balanced manner to a corresponding target central processing unit core among the current central processing unit cores, wherein there are a plurality of target central processing unit cores;
and distributing the service data in a balanced manner to the corresponding target central processing unit cores for service processing according to a service balancing mechanism, wherein the service balancing mechanism is determined based on the disk number property of the service data and/or the usage rate of each target central processing unit core.
In one aspect, the balanced distribution of each target virtual port to a corresponding target central processing unit core among the current central processing unit cores includes:
numbering each target virtual port and each current central processing unit core;
taking the number of each target virtual port modulo the total number of current central processing unit cores to determine the remainder corresponding to each target virtual port number;
and matching the remainder corresponding to each target virtual port number against the numbers of the current central processing unit cores to determine the corresponding target central processing unit core.
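The modulo mapping described above can be sketched in a few lines. This is an illustrative reading of the claim, not code from the patent; all names are assumptions.

```python
# A minimal sketch of the first balanced distribution: each target
# virtual port number is taken modulo the total number of current CPU
# cores, and the remainder selects the target CPU core.

def assign_ports_to_cores(port_numbers, core_count):
    """Map each virtual port number to a CPU core index via port % core_count."""
    return {port: port % core_count for port in port_numbers}

# Example: 6 target virtual ports spread over 4 current CPU cores.
mapping = assign_ports_to_cores(range(6), 4)
# ports 0-3 land on cores 0-3; ports 4 and 5 wrap around to cores 0 and 1
```

Because consecutive port numbers land on consecutive cores, no single core receives all ports, which is exactly the imbalance the claim is written to avoid.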
On the other hand, when the service balancing mechanism is determined based on the disk number property of the service data, distributing the service data in a balanced manner to the corresponding target central processing unit core for service processing includes:
obtaining the virtual disk number corresponding to the service data, wherein the virtual disk number is distinct from the disk identifier used by the host;
numbering the target central processing unit cores corresponding to the target virtual ports;
taking each virtual disk number modulo the total number of target central processing unit cores to determine the remainder corresponding to each virtual disk number;
and matching the remainder corresponding to each virtual disk number against the numbers of the target central processing unit cores to determine the corresponding target central processing unit core.
On the other hand, when the service balancing mechanism is determined based on the usage rate of the central processing unit cores, distributing the service data in a balanced manner to the corresponding target central processing unit core for service processing includes:
obtaining the usage rate of each target central processing unit core;
identifying, among those usage rates, the target central processing unit cores with the maximum and the minimum usage rate;
determining the difference between the maximum and minimum usage rates;
judging whether the difference is larger than a difference threshold;
and if it is, distributing the service data to the target central processing unit core with the minimum usage rate for service processing.
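The usage-rate branch above can be sketched as follows. This is a hedged illustration of the described mechanism; the function name and the None convention for "already balanced" are assumptions, not the patent's implementation.

```python
# If the gap between the most-used and least-used target CPU core
# exceeds the threshold, route the next service data to the least-used
# core; otherwise signal that another mechanism should decide.

def route_by_usage(usages, threshold):
    """Return the index of the least-used core if the usage gap exceeds
    the threshold, else None (the cores are already balanced)."""
    gap = max(usages) - min(usages)
    if gap > threshold:
        return usages.index(min(usages))
    return None
```

For example, with usage rates [0.9, 0.2, 0.5] and a threshold of 0.3, the gap 0.7 exceeds the threshold, so the data goes to core 1.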
On the other hand, when the service balancing mechanism is determined based on both the disk number property of the service data and the usage rate of each target central processing unit core, distributing the service data in a balanced manner to the corresponding target central processing unit core for service processing includes:
obtaining the usage rate of each target central processing unit core;
identifying, among those usage rates, the target central processing unit cores with the maximum and the minimum usage rate;
determining the difference between the maximum and minimum usage rates;
judging whether the difference is larger than a difference threshold;
if it is, dropping the virtual disk corresponding to the service data onto the target central processing unit core with the minimum usage rate;
if it is not, obtaining the virtual disk number corresponding to the service data, wherein the virtual disk number is distinct from the disk identifier used by the host;
numbering the target central processing unit cores corresponding to the target virtual ports;
taking each virtual disk number modulo the total number of target central processing unit cores to determine the remainder corresponding to each virtual disk number;
and matching the remainder corresponding to each virtual disk number against the numbers of the target central processing unit cores to determine the corresponding target central processing unit core.
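The combined mechanism just described checks usage rates first and falls back to disk-number modulo when the cores are already balanced. A minimal sketch, with the function name and data layout assumed for illustration:

```python
def pick_core(virtual_disk_number, usages, threshold):
    """Choose a target CPU core index for one piece of service data."""
    if max(usages) - min(usages) > threshold:
        # unbalanced: drop the data onto the least-used core
        return usages.index(min(usages))
    # balanced: fall back to disk-number modulo over the core count
    return virtual_disk_number % len(usages)
```

With three cores, disk number 5 normally maps to core 5 % 3 = 2, but is diverted to the least-used core whenever the usage gap exceeds the threshold.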
On the other hand, the process of determining that the service corresponding to the failed control node is transferred to the standby control node includes:
obtaining the virtual ports of the failed control node, the virtual ports to be transferred of the standby control node, and a port transfer list, wherein the port transfer list stores the port mapping between the virtual ports of the failed control node and the virtual ports to be transferred of the standby control node;
and transferring the links of the virtual ports of the failed control node in parallel to the corresponding virtual ports to be transferred of the standby control node according to the port transfer list.
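The parallel transfer step can be sketched as below, modeling the port transfer list as a dict from a failed-node virtual port to the standby node's reserved virtual port. `transfer_link` is a placeholder for the actual WWPN takeover; nothing here comes from the patent's code.

```python
from concurrent.futures import ThreadPoolExecutor

def transfer_link(src_port, dst_port):
    # stand-in for moving the WWPN of src_port onto dst_port
    return (src_port, dst_port)

def transfer_all(port_transfer_list):
    """Transfer every mapped link concurrently and collect the results."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(transfer_link, src, dst)
                   for src, dst in port_transfer_list.items()]
        return [f.result() for f in futures]
```

Running the transfers concurrently rather than one by one shortens the window during which host links are down.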
On the other hand, the process of determining that the service corresponding to the failed control node is transferred to the standby control node includes:
obtaining the link priority corresponding to each virtual port of the failed control node, wherein the link priority is determined by the importance of the service thread carried by the link of that virtual port;
obtaining the virtual ports of the failed control node, the virtual ports to be transferred of the standby control node, and a port transfer list, wherein the port transfer list stores the port mapping between the virtual ports of the failed control node and the virtual ports to be transferred of the standby control node;
and transferring the links of the virtual ports of the failed control node to the corresponding virtual ports to be transferred of the standby control node according to the port transfer list and the link priorities.
In another aspect, the process of determining the current central processing unit cores includes:
obtaining all central processing unit cores of the standby control node;
determining the processing type and the processing capacity of each of those cores;
selecting, from all the cores, the cores whose processing type is the service processing type;
selecting, from the remaining cores whose processing type is not the service processing type, the reserve central processing unit cores whose processing capacity is greater than the preset processing capacity;
and taking the cores of the service processing type together with the reserve cores as the current central processing unit cores.
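The screening step above can be sketched as follows, assuming each core is described by a (core_id, processing_type, capacity) tuple; the "service" type string and the tuple layout are illustrative assumptions.

```python
def select_current_cores(cores, preset_capacity):
    """Keep service-type cores, plus reserve cores of other types whose
    processing capacity exceeds the preset threshold."""
    service = [c for c in cores if c[1] == "service"]
    reserve = [c for c in cores if c[1] != "service" and c[2] > preset_capacity]
    return service + reserve
```

A system-type core with ample capacity is thus pressed into service as a reserve core, while an under-provisioned one is left out.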
On the other hand, after the corresponding target central processing unit core has been determined for the service data according to the service balancing mechanism, and before the service data is dropped onto that core, the method further includes:
obtaining the current throughput of each target central processing unit core;
if a current throughput is smaller than the preset throughput, obtaining the processing types and throughputs of the remaining target central processing unit cores, excluding the core whose current throughput is below the preset throughput;
and selecting the target central processing unit core that is of the service processing type and whose throughput is greater than the preset throughput and the largest, to replace the core whose current throughput is below the preset throughput for service processing.
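A hedged sketch of this pre-drop throughput check follows; the dict layout (core_id mapped to a (processing_type, throughput) pair) is an assumption for illustration.

```python
def replace_low_throughput_core(cores, chosen, preset):
    """cores: core_id -> (processing_type, current_throughput).
    If the chosen core's throughput is below the preset value, redirect
    to the service-type core with the largest throughput above the
    preset; otherwise keep the chosen core."""
    if cores[chosen][1] >= preset:
        return chosen  # chosen core is healthy, keep it
    candidates = [(tp, cid) for cid, (ptype, tp) in cores.items()
                  if cid != chosen and ptype == "service" and tp > preset]
    return max(candidates)[1] if candidates else chosen
```

The check runs after the balancing mechanism has picked a core but before the data lands, so a struggling core can be bypassed without disturbing the port-to-core mapping.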
On the other hand, after the service data has been distributed to the corresponding target central processing unit core for service processing according to the service balancing mechanism, the method further includes:
determining the processing load of each target central processing unit core;
judging whether each processing load exceeds the preset processing capacity;
and if it does, marking the corresponding target central processing unit core so that the next service data is not dropped onto that core.
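The overload-marking step can be sketched as follows; the set-based bookkeeping and all names are illustrative assumptions rather than the patent's implementation.

```python
def mark_overloaded(loads, preset_capacity, marked):
    """loads: core_id -> current processing load. Cores over the preset
    capacity are added to `marked` so the next service data skips them."""
    for core_id, load in loads.items():
        if load > preset_capacity:
            marked.add(core_id)
    return marked

def eligible_cores(loads, marked):
    """Cores the next piece of service data may still be dropped onto."""
    return [cid for cid in loads if cid not in marked]
```

Marked cores would be unmarked again once their load drops, though the claim only specifies the marking side.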
In another aspect, the process of obtaining the target virtual ports of the standby control node includes:
obtaining the virtual ports to be transferred that correspond to the services of the failed control node transferred to the standby control node;
obtaining the initial virtual ports of the links that the host has formed with service processing threads on the standby control node;
and taking the virtual ports to be transferred together with the initial virtual ports as the target virtual ports.
On the other hand, after the service data has been distributed to the corresponding target central processing unit core for service processing according to the service balancing mechanism, the method further includes:
checking, at a preset interval, the task completion progress of each target central processing unit core;
and clearing and reclaiming the memory occupied by the service data of completed tasks.
In order to solve the above technical problem, the present invention further provides a service processing device applied to a storage system of dual control nodes, the device including:
a first acquisition module, configured to acquire the service data of a service thread, each target virtual port of the standby control node, and each current central processing unit core when the service corresponding to the failed control node is transferred to the standby control node;
a first balanced distribution module, configured to distribute each target virtual port in a balanced manner to a corresponding target central processing unit core among the current central processing unit cores, wherein there are a plurality of target central processing unit cores;
and a second balanced distribution module, configured to distribute the service data in a balanced manner to the corresponding target central processing unit cores for service processing according to a service balancing mechanism, wherein the service balancing mechanism is determined based on the disk number property of the service data and/or the usage rate of each target central processing unit core.
In order to solve the above technical problem, the present invention further provides a service processing device, including:
a memory for storing a computer program;
and a processor for implementing the steps of the service processing method as described above when executing the computer program.
To solve the above technical problem, the present invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the service processing method as described above.
The invention provides a service processing method applied to a storage system of dual control nodes. When the services of a failed control node are transferred to the standby control node, each target virtual port is distributed in a balanced manner to a corresponding target CPU core among the current CPU cores of the standby control node, as a first balanced distribution; then, according to a service balancing mechanism, the service data are distributed in a balanced manner to the corresponding target CPU cores for service processing, as a second balanced distribution. The first balanced distribution ensures that the service threads corresponding to the virtual ports are processed evenly across the CPU cores; the second ensures that the service data under each service thread are processed evenly by the CPU cores. These two balancing steps avoid the situation in current dual-controller path transfer where the extra load lands on a single CPU core, balance the load of the original and transferred services on the standby control node, and, while the services of the failed control node continue to be processed normally, fully utilize CPU resources to improve service processing capability and throughput.
In addition, the invention also provides a service processing device, equipment and medium, which have the same beneficial effects as the service processing method.
Drawings
For a clearer description of the embodiments of the present invention, the drawings required by the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention; other drawings may be derived from them by those skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of one path transfer of a current dual controller;
FIG. 2 is a schematic diagram of another path transfer of the present dual controller;
fig. 3 is a flowchart of a service processing method according to an embodiment of the present invention;
fig. 4 is a block diagram of a service processing device according to an embodiment of the present invention;
fig. 5 is a block diagram of a service processing device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a service processing scenario provided in an embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
The core of the invention is to provide a service processing method, device, equipment and medium that solve the technical problem that, after path transfer in a current dual controller, the standby controller cannot maintain its original service throughput and its service processing capability is therefore reduced.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description.
During the path transfer of the dual controller, the host cannot perceive the path change. The WWPN on a host or server port together with the WWPN on a storage port uniquely determines one path; one physical port can carry several virtual ports, each virtual port has its own WWPN, and one physical port on the storage side can therefore form multiple paths with one physical port of the host or server. Fig. 2 is a schematic diagram of another path transfer of the current dual controller. As shown in Fig. 2, node A (standby control node 10) has one physical port containing two virtual ports, virtual port 1 and virtual port 2, whose services are polled by a service processing thread; the physical port of node A is connected to the host/server to process its services, while virtual port 2 provides port redundancy and takes over the service of virtual port 1 of node B when node B (failed control node 9) fails. Node B likewise has one physical port containing virtual port 1 and virtual port 2, polled by a service processing thread; the physical port of node B is connected to the host/server 8 to process its services, and virtual port 2 provides port redundancy.
After node B fails, virtual port 2 of node A takes over the WWPN of virtual port 1 of node B. Link 2 is thus lost, a new link 3 is established between the host/server and virtual port 2 of node A, and the service on link 2 naturally transfers to link 3, so the host service is not interrupted by the loss of link 2. For the specific transfer process, assume the host port WWPN is WWPN-h, the WWPN on virtual port 1 of node A is WWPN-a, and the WWPN on virtual port 1 of node B is WWPN-b. WWPN-h and WWPN-a form link 1, and WWPN-h and WWPN-b form link 2. After node B fails, WWPN-b moves from virtual port 1 of node B to virtual port 2 of node A, and WWPN-b and WWPN-h form the new link 3; the logical path between host port WWPN-h and storage WWPN-b is unchanged, only its physical location has changed.
In the above process, each service thread runs on one CPU core. When the service load on link 1 and link 3 grows, the processing load of that CPU core grows; node A can then no longer maintain its original service throughput, and overall the service processing capability of the storage decreases because of the failure of node B. The service processing method provided by the invention solves this technical problem.
Fig. 3 is a flowchart of a service processing method according to an embodiment of the present invention, where, as shown in fig. 3, the method is applied to a storage system of a dual control node, and includes:
S11: under the condition that the service corresponding to the failed control node is transferred to the standby control node, acquiring the service data of a service thread, each target virtual port of the standby control node, and each current central processing unit core;
S12: distributing each target virtual port in a balanced manner to a corresponding target central processing unit core among the current central processing unit cores;
wherein there are a plurality of target central processing unit cores;
S13: distributing the service data in a balanced manner to the corresponding target central processing unit cores for service processing according to the service balancing mechanism;
wherein the service balancing mechanism is determined based on the disk number property of the service data and/or the usage rate of each target central processing unit core.
Specifically, the invention is applied to a storage system with dual control nodes. The dual-control redundancy mode of the nodes is not limited: it may be a mutual-standby mode or a dual-active mode, set according to the actual situation. In mutual-standby mode both controllers are active and each takes over for the other after a failure, so the two controllers back each other up. In dual-active mode the system balances load at the granularity of Input/Output (IO), so the situation where one controller is busy while the other is idle, with the load on one far higher than on the other, does not occur.
During service transfer, when links change, the standby control node reserves some virtual ports for the failed control node to facilitate the transfer. For example, if the standby control node has 10 virtual ports and its own links occupy at most 5 of them, the other 5 are virtual ports reserved for the failed control node. The failed control node configures its virtual ports in the same way. The two control nodes share cluster data and synchronize it in real time.
In this embodiment, the service corresponding to the failed control node is transferred to the standby control node regardless of which controller plays which role: either node A or node B may be the failed control node. The transfer takes place whenever the failed control node stops operating. It should be noted that in this embodiment the transfer path and the load balancing between CPU cores and service threads during service processing are independent of each other. Once the transfer to the standby control node has occurred, the service data of the service threads, each target virtual port of the standby control node, and each current central processing unit core are acquired.
The service data are the logical operations that process the read/write instructions of a host or server: completing one write operation, or one read operation, completes one round of service processing. For example, one CPU core may handle three services. Service 1: fetch the host read/write instruction from the driver buffer. Service 2: parse the instruction content, such as the logical address of the read/write instruction and the length of the data to be written. Service 3: for a read instruction, send the data from local memory to the host; for a write instruction, write the host's data into memory.
Each target virtual port of the standby control node may be all of the virtual ports in the standby control node, or only the virtual ports occupied by links, which include the original virtual ports of the standby control node and the virtual ports transferred to it from the failed control node; this is not limited here and may be set according to the actual situation.
The current CPU cores may be all CPU cores of the controller hosting the standby control node, or only the CPU cores responsible for service processing and other processing (such as system processing), and so on; this is not limited here and may be set according to the actual situation.
In step S12, each target virtual port is distributed in a balanced manner to a corresponding target CPU core among the current CPU cores. There are a plurality of target CPU cores, in contrast to path transfer under the current dual controller, where the service threads are processed on a single CPU core. In this embodiment the balanced distribution may be an even split, or the target CPU cores may be determined by a balancing algorithm; the number of current CPU cores is greater than or equal to the number of target CPU cores. The purpose of the balanced distribution in step S12 is to spread the target virtual ports over the target CPU cores and to avoid all target virtual ports mapping to a single target CPU core. In this embodiment one target CPU core may correspond to one target virtual port or to several.
The balanced distribution in step S13 is based on the specific service data: service data in this embodiment are handled flexibly, with the assignment made after a given piece of service data has been issued. Rather than first fixing a mapping between service data and virtual ports, the mapping between each virtual port and a CPU core is considered first, and then the mapping between service data and CPU cores is determined and issued.
The service balancing processing mechanism in this embodiment may be determined based on the disk number property of the service data, on the usage rate of each target CPU core, or on a combination of both; it may also be determined by other reference data besides these two, which is not limited herein.
The disk numbers on the host side, such as the E drive and the F drive, are names the host assigns to the disks; the disk numbers in this embodiment, by contrast, are the internal numbers of the corresponding virtual disks on the storage device. In general, a storage device can provide at most 2048 virtual disks to one host, a number far greater than the number of CPU cores. It is therefore necessary to determine which CPU core processes the data (service data) of each virtual disk so that the CPU load is balanced and the CPU cores are used evenly. The corresponding disk numbers may range from 0 to 2047, and which CPU core handles a disk is decided based on this number.
Since there are a plurality of target CPU cores, their usage rates must be considered for balancing: if the differences between the usage rates are small, the current target CPU cores have reached balance; if the differences are large, they have not, and the current service data needs to be directed to the specific under-loaded target CPU cores. This balanced allocation may follow the same procedure as the balanced allocation above or a different one, and a new balancing algorithm may also be used to determine the corresponding target CPU core so as to facilitate service processing.
The service processing method provided by the embodiment of the invention is applied to a storage system with dual control nodes. When the service of a failure control node is transferred to the standby control node, the target virtual ports are uniformly distributed to corresponding target CPU cores among the current CPU cores of the standby control node, as a first balanced allocation; then, according to the service balancing processing mechanism, the service data is distributed in a balanced manner to the corresponding target CPU cores for service processing, as a second balanced allocation. The first balanced allocation ensures balanced handling of the service threads corresponding to the virtual ports across the CPU cores, and the second ensures balanced handling of the service data on the CPU cores under those service threads. These two balancing processes avoid the situation in which the load caused by the dual-controller path transfer falls on only one CPU core, allow the original services and the transferred services on the standby control node to be processed with balanced load, and, while guaranteeing normal processing of the failure control node's services, make full use of CPU resources to improve service processing capacity and service throughput.
Based on the above embodiments, in some embodiments, uniformly distributing each target virtual port to the corresponding target central processing unit core in each current central processing unit core in step S12 includes:
numbering each target virtual port and each current central processing unit core respectively;
performing remainder processing on the total number of the current central processing unit cores by the serial numbers of the target virtual ports to determine remainder corresponding to the serial numbers of the target virtual ports;
and matching the remainder corresponding to the number of each target virtual port to the number of each current central processing unit core to determine the corresponding target central processing unit core.
Specifically, each target virtual port and each current CPU core are numbered, both starting from 0, with the two numbering sequences independent of each other. The number of each target virtual port is taken modulo the total number of current CPU cores to determine the corresponding remainder, and the remainder is matched against the numbers of the current CPU cores to complete the matching to the corresponding target CPU core.
For example, if the number of a target virtual port is 5 and the total number of current CPU cores is 3, the remainder is 2, and the target virtual port is matched to the current CPU core numbered 2.
The virtual ports and the CPU cores are matched in a balanced manner so as to establish the corresponding mapping mechanism, ensure the balance of the CPU cores in the subsequent service processing, and avoid the high load pressure caused by processing being concentrated in one CPU core.
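The remainder-based matching of virtual ports to CPU cores described above can be sketched as follows. This is an illustrative sketch, not part of the patent itself; the function name `assign_ports_to_cores` is hypothetical.

```python
def assign_ports_to_cores(port_numbers, core_count):
    """Map each target virtual port to a current CPU core by remainder.

    Ports and cores are numbered independently from 0; a port with
    number p is matched to the core numbered p % core_count.
    """
    return {p: p % core_count for p in port_numbers}

# The example from the text: port number 5 with 3 current CPU cores
# lands on the core numbered 2, since 5 % 3 == 2.
mapping = assign_ports_to_cores(range(6), 3)
```

With six ports and three cores, each core receives exactly two ports, which is the balanced spread the embodiment aims for.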
In some embodiments, the service balancing processing mechanism is determined based on the disk number property of the service data, and distributing the service data in a balanced manner to the corresponding target central processing unit cores for service processing according to the service balancing processing mechanism in step S13 includes:
obtaining a virtual disk number corresponding to the service data, wherein the virtual disk number is distinguished from a disk number corresponding to the host;
numbering the target central processing unit cores corresponding to the target virtual ports;
performing remainder processing on the total number of the target CPU cores by each virtual disk number to determine a remainder corresponding to each virtual disk number;
and matching the remainder corresponding to each virtual disk number to the number of each target central processing unit core to determine the corresponding target central processing unit core.
Specifically, when the service balancing processing mechanism is determined only by the disk number property of the service data, the virtual disk number corresponding to the service data is obtained; this virtual disk number is distinct from the disk number on the host. For example, when the service data of a plurality of virtual disks is provided to the host, the virtual disk numbers starting from 0 need to be obtained, and the target CPU cores are numbered as well, so that the core numbers and the virtual disk numbers form independent sequences.
Each virtual disk number is then taken modulo the total number of target CPU cores to determine the corresponding remainder, and each remainder is matched against the numbers of the target CPU cores to match to the corresponding target CPU core. The remainder processing in this embodiment is the same as that for the virtual ports and is not repeated here.
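The disk-number-based allocation above can be sketched in the same modulo style. This is an illustrative sketch, not part of the patent; `assign_disk_to_core` and its parameter names are hypothetical.

```python
def assign_disk_to_core(disk_number, target_core_numbers):
    """Pick the target CPU core for a virtual disk's service data.

    target_core_numbers are the independently assigned numbers
    (starting from 0) of the target CPU cores; the virtual disk
    number is taken modulo their total count, exactly as the
    virtual-port remainder matching does.
    """
    return target_core_numbers[disk_number % len(target_core_numbers)]

# With 4 target cores, virtual disk 2047 (the largest possible number
# when a storage device exports 2048 disks) maps to core 3, since
# 2047 % 4 == 3; the 2048 disks spread evenly over the cores.
```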
In some embodiments, the service balancing processing mechanism is determined based on the usage rate of the central processing unit core, and the service balancing processing mechanism distributes service data to the corresponding target central processing unit core for service processing, including:
acquiring the utilization rate of each target central processing unit core;
obtaining target central processor cores corresponding to the maximum utilization rate and the minimum utilization rate respectively from the utilization rates of the target central processor cores;
determining a difference between the maximum usage and the minimum usage;
judging whether the difference value is larger than a difference value threshold value or not;
and if so, distributing the service data to the target central processing unit core corresponding to the minimum utilization rate for service processing.
Specifically, the usage rates of the target CPU cores will differ, so the degree of balance across the target CPU cores can be judged from the usage-rate parameter. If the usage rates differ greatly, the balance across the target CPU cores is poor, and the issued service data needs to be distributed to the under-loaded target CPU core to ensure balanced allocation, so that the usage rates of the target CPU cores converge and the difference is reduced.
The target CPU cores corresponding to the maximum and minimum usage rates may be obtained by sorting or by screening, which is not limited here. When sorting is used, the usage rates of the target CPU cores may be ordered from largest to smallest or from smallest to largest, as set according to actual conditions. The difference threshold in this embodiment may be determined by an algorithm, or by parameters such as the difference in completion time of the same task during service processing, which is not limited herein.
In this embodiment, the balance of the usage rates of the target CPU cores is judged from the difference between the maximum and minimum usage rates. If the difference is greater than the difference threshold, the usage rates of the two corresponding target CPU cores differ greatly and the balance is poor, so the service data needs to be distributed to the target CPU core corresponding to the minimum usage rate for service processing.
When the difference is smaller than or equal to the difference threshold, the usage rates of the current target CPU cores are close to one another, and other balanced allocation methods may therefore be adopted.
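The usage-rate decision above can be sketched as follows. This is an illustrative sketch under the stated assumptions, not the patent's implementation; `pick_core_by_usage` and the `None` convention for "fall back to another method" are hypothetical choices.

```python
def pick_core_by_usage(usage_by_core, diff_threshold):
    """Return the core for the next service data based on usage rates.

    If the gap between the most- and least-loaded target CPU cores
    exceeds the threshold, the least-loaded core is chosen; otherwise
    None is returned to signal that another balanced-allocation
    method (e.g. the remainder scheme) should be used instead.
    """
    max_core = max(usage_by_core, key=usage_by_core.get)
    min_core = min(usage_by_core, key=usage_by_core.get)
    if usage_by_core[max_core] - usage_by_core[min_core] > diff_threshold:
        return min_core
    return None
```

For example, with usage rates {0: 0.9, 1: 0.2, 2: 0.5} and a threshold of 0.3, core 1 (minimum usage) is selected; if all cores sit near 0.5, the function defers to another allocation method.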
In some embodiments, the service balancing processing mechanism is determined based on both the disk number property of the service data and the usage rate of each target central processing unit core, and distributing the service data to the corresponding target central processing unit core for service processing according to the service balancing processing mechanism includes:
acquiring the utilization rate of each target central processing unit core;
obtaining target central processor cores corresponding to the maximum utilization rate and the minimum utilization rate respectively from the utilization rates of the target central processor cores;
determining a difference between the maximum usage and the minimum usage;
judging whether the difference value is larger than a difference value threshold value or not;
if so, dropping the service data of the corresponding virtual disk to the target central processing unit core corresponding to the minimum utilization rate;
if not, obtaining a virtual disk number corresponding to the service data, wherein the virtual disk number is distinguished from a disk number corresponding to the host;
numbering the target central processing unit cores corresponding to the target virtual ports;
performing remainder processing on the total number of the target CPU cores by each virtual disk number to determine a remainder corresponding to each virtual disk number;
and matching the remainder corresponding to each virtual disk number to the number of each target central processing unit core to determine the corresponding target central processing unit core.
Specifically, when the difference is not greater than the difference threshold, the remainder processing manner may be used; the specific procedure is the same as the remainder processing of the virtual disk numbers and is not repeated here, reference being made to the above embodiments.
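The combined mechanism, usage-rate check first with remainder fallback, can be sketched as follows. This is an illustrative sketch, not part of the patent; `assign_core_combined` and its parameters are hypothetical names.

```python
def assign_core_combined(disk_number, usage_by_core, diff_threshold):
    """Combined mechanism: usage-rate check first, remainder fallback.

    usage_by_core maps each target CPU core number to its usage rate.
    If max minus min usage exceeds the threshold, the least-loaded
    core receives the service data; otherwise the virtual disk number
    is taken modulo the number of target cores.
    """
    cores = sorted(usage_by_core)  # independent numbering from 0
    hi = max(usage_by_core.values())
    lo = min(usage_by_core.values())
    if hi - lo > diff_threshold:
        return min(usage_by_core, key=usage_by_core.get)
    return cores[disk_number % len(cores)]
```

The two branches correspond directly to the "if so" and "if not" steps of the embodiment: an unbalanced cluster is corrected first, and only a balanced cluster falls through to the deterministic modulo mapping.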
Because the service balancing processing mechanism can be determined in different ways, the corresponding balanced allocation modes for distributing the service data to the target central processing unit cores differ accordingly, which realizes diversity and flexibility of balanced allocation and ensures load balance across the target CPU cores.
On the basis of the above embodiments, in some embodiments, a transfer determining process for transferring the service corresponding to the failure control node in step S11 to the standby control node includes:
obtaining virtual ports of a fault control node, virtual ports to be transferred of a standby control node and a port transfer list, wherein the port transfer list stores port mapping relations between the virtual ports of the fault control node and the virtual ports to be transferred of the standby control node;
and parallelly transferring links of the virtual ports of the fault control node to the virtual ports to be transferred of the corresponding standby control node according to the port transfer list.
It can be understood that the virtual ports of the failure control node mainly refer to the virtual ports corresponding to the links formed between the host and the failure control node, while the virtual ports to be transferred of the standby control node are the receiving virtual ports on the standby control node to which the services of the failure control node are to be transferred. The port transfer list stores the port mapping relationship between the virtual ports of the failure control node and the virtual ports to be transferred of the standby control node. This port mapping is stored for each virtual port in the dual controller in advance; for example, virtual ports 2, 4, 6 and 8 of control node A take over virtual ports 1, 3, 5 and 7 of control node B, respectively, and the port mapping is formed in this way.
And the links of the virtual ports of the fault control node are respectively transferred to the virtual ports to be transferred of the corresponding standby control node in parallel based on the port transfer list, so that parallel processing of transfer paths is realized, and transfer time is saved.
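The parallel transfer driven by the port transfer list can be sketched as follows. This is an illustrative sketch, not the patent's implementation: `transfer_links_parallel` and the `move_link(src, dst)` callback are hypothetical, and a thread pool stands in for whatever concurrency primitive the controller firmware actually uses.

```python
from concurrent.futures import ThreadPoolExecutor

def transfer_links_parallel(port_transfer_list, move_link):
    """Transfer failed-node links to standby-node ports in parallel.

    port_transfer_list maps each virtual port of the failure control
    node to its receiving (to-be-transferred) port on the standby
    node; move_link(src, dst) performs one link transfer.
    """
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(move_link, src, dst)
                   for src, dst in port_transfer_list.items()]
        for f in futures:
            f.result()  # propagate any transfer error

# Example mapping from the text: ports 1, 3, 5, 7 of node B are taken
# over by ports 2, 4, 6, 8 of node A.
transfer_list = {1: 2, 3: 4, 5: 6, 7: 8}
```

Submitting every transfer before waiting on any of them is what saves transfer time relative to the sequential variant described below.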
In some embodiments, in contrast to the parallel path transfer of the above embodiment, the transfer determining process of transferring the service corresponding to the failure control node to the standby control node in step S11 may also be a sequential transfer, including:
Acquiring a link priority corresponding to a virtual port of a fault control node, wherein the link priority is determined by the importance degree of a service thread accepted by the virtual port corresponding to the link of the fault control node;
obtaining virtual ports of a fault control node, virtual ports to be transferred of a standby control node and a port transfer list, wherein the port transfer list stores port mapping relations between the virtual ports of the fault control node and the virtual ports to be transferred of the standby control node;
and transferring the links of the virtual ports of the fault control node to the virtual ports to be transferred of the corresponding standby control node according to the port transfer list and the link priority.
Specifically, the link priority corresponding to each virtual port of the failure control node is obtained, where the link priority may be determined by the priority of the corresponding service thread; in this embodiment, the service thread priority may be determined by the importance of the service thread, or by other factors such as the size of the space occupied by the service thread, which is not limited herein. The links of the virtual ports of the failure control node are transferred to the corresponding virtual ports to be transferred based on the port transfer list and the link priority; that is, links with higher priority are transferred first and links with lower priority afterwards, which realizes ordered link transfer.
In the link transfer processes provided by these embodiments, parallel transfer saves transfer time, while transferring in order of link priority realizes orderly link transfer.
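The priority-ordered sequential transfer can be sketched as follows. This is an illustrative sketch, not part of the patent; `transfer_links_by_priority`, the larger-is-more-important priority convention, and the `move_link` callback are all assumptions.

```python
def transfer_links_by_priority(port_transfer_list, link_priority, move_link):
    """Transfer links sequentially, highest link priority first.

    link_priority maps each failure-node virtual port to a priority
    value (larger = more important service thread); move_link(src, dst)
    performs one transfer.
    """
    ordered = sorted(port_transfer_list.items(),
                     key=lambda item: link_priority[item[0]],
                     reverse=True)
    for src, dst in ordered:
        move_link(src, dst)
```

With priorities {1: 10, 3: 30, 5: 20}, the links are transferred in the order 3, 5, 1, so the most important service thread regains its path first.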
On the basis of the above embodiments, in some embodiments, the determining process of each current cpu core in step S11 includes:
acquiring all central processing unit cores corresponding to the standby control nodes;
determining the processing types and the corresponding processing capacities of all the central processing unit cores;
screening the CPU cores with the processing types corresponding to the service processing types from all the CPU cores;
screening prepared CPU cores with corresponding processing capacities larger than the preset processing capacities from other CPU cores except the service processing types in all the CPU cores;
and taking the central processor core corresponding to the service processing type and the prepared central processor core as each current central processor core.
It can be understood that the current CPU cores in this embodiment may be all CPU cores of the entire controller, or specific CPU cores may be selected for the subsequent allocation; this embodiment considers restrictions on processing type and processing capacity.
The CPU cores corresponding to the service processing type are screened first; the screening may be done by means of flag bits, or the CPU cores that already handle service data may be designated as cores of the service processing type. In addition, from the remaining CPU cores outside the service processing type, prepared CPU cores whose processing capacity is greater than the preset processing capacity are screened out. The prepared CPU cores in this embodiment serve as alternatives for the service data processing of service threads, taking advantage of their higher processing capacity to improve subsequent service processing efficiency.
And taking the CPU core corresponding to the service processing type and the prepared CPU core as the current CPU core so as to facilitate the subsequent balanced processing distribution in the current CPU core for distribution.
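The two-stage screening of current CPU cores can be sketched as follows. This is an illustrative sketch, not the patent's implementation; `select_current_cores`, the tuple layout, and the `"service"` type label are hypothetical.

```python
def select_current_cores(cores, min_capacity):
    """Select the current CPU cores of the standby controller.

    cores is a list of (core_id, processing_type, capacity) tuples.
    Service-type cores are kept outright; the remaining cores are
    kept as "prepared" cores only if their processing capacity
    exceeds min_capacity.
    """
    service = [c for c, t, cap in cores if t == "service"]
    prepared = [c for c, t, cap in cores
                if t != "service" and cap > min_capacity]
    return service + prepared
```

For example, a system-type core with capacity above the preset value is admitted as a prepared core, while a weaker system-type core is excluded from the current CPU cores entirely.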
In some embodiments, after determining the corresponding target central processing unit core for service data balanced allocation according to the service balanced processing mechanism, before landing the service data on the target central processing unit core, the method further comprises:
acquiring the current throughput of each target central processing unit core;
if the current throughput is smaller than the preset throughput, acquiring the processing types and the throughput corresponding to the other target central processing unit cores except the target central processing unit core corresponding to the current throughput which is smaller than the preset throughput;
And screening out a target central processing unit core with a processing type of service processing, throughput greater than preset throughput and corresponding to the maximum throughput, and replacing the target central processing unit core with the current throughput smaller than the preset throughput so as to perform service processing.
Specifically, after the target CPU core is determined and before the service data is dropped onto it for service processing, the throughput of the target CPU core is checked again to ensure that the core receiving the data can accept the service data and that its throughput reaches the preset throughput.
When the current throughput is smaller than the preset throughput, the other target CPU cores, namely those whose throughput is greater than or equal to the preset throughput, are obtained, together with their processing types and throughputs, and are screened. The screening conditions are mainly that the processing type is the service processing type and that the throughput is greater than the preset throughput and is the largest among the other target CPU cores; the core so screened replaces the target CPU core whose current throughput is smaller than the preset throughput, for subsequent service processing.
It should be noted that the screening conditions may be others, such as processing capacity, and the number of replacement CPU cores may be one or more, which is not limited herein. If more cores need replacing, the target CPU core with the largest throughput may take over more than one target CPU core whose current throughput is below the preset value; if several other target CPU cores share the largest throughput, balanced matching may be performed, either following the matching process described above or using an entirely new matching method, which is not limited herein.
Before the service is processed by the landing, the target CPU core to be landed is checked, so that the balance of the CPU core is ensured, and the subsequent service processing efficiency is improved.
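The pre-landing throughput re-check can be sketched as follows. This is an illustrative sketch under stated assumptions, not the patent's implementation; `recheck_core`, its dictionaries, and the keep-if-no-candidate fallback are hypothetical.

```python
def recheck_core(chosen_core, throughput, proc_type, min_throughput):
    """Re-check the chosen target CPU core before landing service data.

    If the chosen core's current throughput is below the preset
    minimum, replace it with the service-type core whose throughput
    is above the minimum and largest overall; otherwise keep it.
    """
    if throughput[chosen_core] >= min_throughput:
        return chosen_core
    candidates = [c for c in throughput
                  if c != chosen_core
                  and proc_type[c] == "service"
                  and throughput[c] > min_throughput]
    if not candidates:
        return chosen_core  # no better core available; keep original
    return max(candidates, key=throughput.get)
```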
In some embodiments, after performing service processing on service data by uniformly distributing the service data to the corresponding target central processing unit core according to the service balance processing mechanism, the method further includes:
determining the corresponding processing capacity of each target central processing unit core;
judging whether each processing capacity exceeds a preset processing capacity;
if the processing capacity exceeds the preset processing capacity, marking the target central processing unit core corresponding to the exceeding of the preset processing capacity so that the next business data is not dropped to the target central processing unit core corresponding to the exceeding of the preset processing capacity.
Specifically, after the current service data is distributed to the target CPU cores to perform service processing, for the next balanced distribution of the service data, the processing capacity corresponding to each target CPU core needs to be checked, and if the processing capacity exceeds the preset processing capacity, the target CPU cores corresponding to the exceeding preset processing capacity need to be marked, so that the next service data cannot fall into the marked target CPU cores, the processing time and the computing time of balanced distribution are saved, and the overall service processing efficiency is improved.
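The marking step after each round of distribution can be sketched as follows. This is an illustrative sketch, not part of the patent; `mark_overloaded`, `eligible_cores`, and the load representation are hypothetical.

```python
def mark_overloaded(load_by_core, max_load):
    """Mark target CPU cores whose processing load exceeds the preset
    processing capacity, so the next service data skips them."""
    return {core for core, load in load_by_core.items() if load > max_load}

def eligible_cores(load_by_core, max_load):
    """Cores that the next batch of service data may still land on:
    every target core that was not marked in this round."""
    marked = mark_overloaded(load_by_core, max_load)
    return [c for c in load_by_core if c not in marked]
```

Excluding marked cores up front is what saves the balanced-allocation computation time mentioned above: the next round only considers cores with remaining capacity.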
In some embodiments, the process of obtaining the target virtual port of the standby control node includes:
obtaining a to-be-transferred virtual port of a service corresponding to a fault control node to be transferred to a standby control node;
acquiring an initial virtual port of a link corresponding to a service processing thread formed by a host in a standby control node;
and taking the virtual port to be transferred and the initial virtual port as target virtual ports.
Specifically, the target virtual port in this embodiment may be all virtual ports of the standby control node, or may be virtual ports actually forming a link. Considering the service processing between the balanced load of the CPU core and the service data, in this embodiment, only the virtual ports in the standby control node that actually form the link are subjected to subsequent balanced matching.
The virtual port of the link actually formed in the standby control node mainly comprises an initial virtual port of the link originally corresponding to the service processing thread formed by the host in the standby control node and a virtual port to be transferred of the standby control node, wherein the service corresponding to the fault control node is transferred to the virtual port to be transferred of the standby control node, so that the time for distributing balance is shortened, and resources are saved.
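Assembling the target virtual port set from the two sources described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; `target_virtual_ports` and its deduplication behavior are hypothetical.

```python
def target_virtual_ports(to_transfer_ports, initial_ports):
    """Target virtual ports of the standby node: its own ports that
    already carry host links plus the ports receiving the failure
    node's services (duplicates removed, order preserved)."""
    seen, result = set(), []
    for p in list(initial_ports) + list(to_transfer_ports):
        if p not in seen:
            seen.add(p)
            result.append(p)
    return result
```

Restricting the set to these link-forming ports, rather than all ports of the standby node, is what shortens the balanced-allocation time and saves resources.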
In some embodiments, after performing service processing on service data by uniformly distributing the service data to the corresponding target central processing unit core according to the service balance processing mechanism, the method further includes:
acquiring task completion progress corresponding to each target central processing unit core according to preset time;
and clearing and recycling the memory space to which the service data corresponding to the completed task belongs.
Specifically, the task completion progress corresponding to each target CPU core is obtained according to the preset time, and the memory space corresponding to the service data of the completed task is cleared and recovered, so that the timing cleaning of the memory space is ensured, and the memory space is saved.
The preset time in this embodiment is not limited to a specific value, and the number of services and specific data in each target CPU core may be comprehensively considered. The timing time for cleaning and recycling is not limited, and may be set according to actual conditions.
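A single cleanup pass of the timed reclamation above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; `reclaim_completed` and the task representation are hypothetical, and the timer that invokes the pass at the preset time is omitted.

```python
def reclaim_completed(tasks):
    """One periodic cleanup pass: free the memory of completed tasks.

    tasks maps task_id -> (done, buffer); returns the ids whose
    buffers were released and the tasks still pending. In a real
    system this would run on a timer with a configurable period.
    """
    freed = [tid for tid, (done, _) in tasks.items() if done]
    pending = {tid: entry for tid, entry in tasks.items() if not entry[0]}
    return freed, pending
```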
The invention further discloses a service processing device corresponding to the method, which is applied to a storage system of double control nodes, and fig. 4 is a structural diagram of the service processing device provided by the embodiment of the invention. As shown in fig. 4, the service processing apparatus includes:
a first obtaining module 11, configured to obtain service data of a service thread, each target virtual port of the standby control node, and each current central processing unit core when a service corresponding to the failure control node is transferred to the standby control node;
the first balanced distribution module 12 is configured to uniformly distribute each target virtual port to a corresponding target central processor core in each current central processor core, where the number of target central processor cores is a plurality of target central processor cores;
and the second balancing distribution module 13 is configured to balance and distribute the service data to the corresponding target central processing unit cores according to a service balancing processing mechanism for performing service processing, where the service balancing processing mechanism is determined based on the disk number property of the service data and/or the usage rate of each target central processing unit core.
In one aspect, the first balanced distribution module 12 includes:
The first coding submodule is used for numbering each target virtual port and each current central processing unit core respectively;
the first remainder processing submodule is used for carrying out remainder processing on the total number of the current central processing unit cores by the number of each target virtual port to determine a remainder corresponding to the number of each target virtual port;
and the first allocation submodule is used for matching the remainder corresponding to the number of each target virtual port with the number of each current central processing unit core so as to determine the corresponding target central processing unit core.
On the other hand, the service balancing processing mechanism is determined based on the landing number property of the service data, and the second balancing allocation module 13 includes:
the first acquisition sub-module is used for acquiring a virtual disk number corresponding to the service data, wherein the virtual disk number is distinguished from a disk number corresponding to the host;
the second coding submodule is used for numbering the target central processing unit cores corresponding to the target virtual ports;
the second remainder processing sub-module is used for carrying out remainder processing on the total number of the target central processing unit cores by each virtual disk number to determine the remainder corresponding to each virtual disk number;
and the second allocation submodule is used for matching the remainder corresponding to each virtual disk number to the number of each target central processing unit core so as to determine the corresponding target central processing unit core.
On the other hand, the service equalization processing mechanism is determined based on the usage rate of the central processor core, and the second equalization distribution module 13 includes:
the second acquisition submodule is used for acquiring the utilization rate of each target central processing unit core;
the third acquisition submodule is used for acquiring target central processor cores corresponding to the maximum utilization rate and the minimum utilization rate respectively from the utilization rates of the target central processor cores;
a first determining sub-module for determining a difference between the maximum usage and the minimum usage;
the first judging submodule is used for judging whether the difference value is larger than a difference value threshold value; if so, triggering the third allocation submodule;
and the third distribution sub-module is used for distributing the service data to the target central processing unit core corresponding to the minimum utilization rate for service processing.
On the other hand, the service balancing processing mechanism determines, based on the nature of the landing number of the service data and the usage rate of each target cpu core, the second balancing allocation module 13 includes:
the fourth acquisition submodule is used for acquiring the utilization rate of each target central processing unit core;
a fifth obtaining sub-module, configured to obtain target central processor cores corresponding to the maximum usage rate and the minimum usage rate respectively from the usage rates of the target central processor cores;
A second determining sub-module for determining a difference between the maximum usage and the minimum usage;
the second judging submodule is used for judging whether the difference value is larger than a difference value threshold value; if so, triggering the fourth allocation submodule; if not, triggering the sixth acquisition submodule;
a fourth allocation submodule, configured to drop a virtual disk corresponding to service data to a target central processing unit core corresponding to a minimum usage rate;
a sixth obtaining sub-module, configured to obtain a virtual disk number corresponding to the service data, where the virtual disk number is distinguished from a disk number corresponding to the host;
the third coding submodule is used for numbering the target central processing unit cores corresponding to the target virtual ports;
the third remainder processing sub-module is used for carrying out remainder processing on the total number of the target central processing unit cores by each virtual disk number to determine the remainder corresponding to each virtual disk number;
and the fifth allocation submodule is used for matching the remainder corresponding to each virtual disk number to the number of each target central processing unit core so as to determine the corresponding target central processing unit core.
On the other hand, a transfer determining process for transferring the service corresponding to the fault control node to the standby control node includes:
A seventh obtaining submodule, configured to obtain a virtual port of the failure control node, a virtual port to be transferred of the standby control node, and a port transfer list, where a port mapping relationship between the virtual port of the failure control node and the virtual port to be transferred of the standby control node is stored in the port transfer list;
and the first transfer submodule is used for parallelly transferring the links of the virtual ports of the fault control node to the virtual ports to be transferred of the corresponding standby control node according to the port transfer list.
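The parallel transfer of links according to the port transfer list can be sketched as below. This is a hedged illustration: the mapping and the `move_link` callable are assumed stand-ins for the node's actual link-migration primitive.

```python
from concurrent.futures import ThreadPoolExecutor

def transfer_links(port_transfer_list, move_link):
    # port_transfer_list: {failed-node virtual port: standby virtual port}.
    # Each link transfer is submitted concurrently, mirroring the parallel
    # transfer described for the first transfer sub-module.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(move_link, src, dst)
                   for src, dst in port_transfer_list.items()]
        return [f.result() for f in futures]
```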
On the other hand, a transfer determining process for transferring the service corresponding to the fault control node to the standby control node includes:
an eighth obtaining sub-module, configured to obtain a link priority corresponding to a virtual port of the failure control node, where the link priority is determined by an importance level of a service thread received by the virtual port corresponding to the link of the failure control node;
a ninth obtaining submodule, configured to obtain a virtual port of the failure control node, a virtual port to be transferred of the standby control node, and a port transfer list, where a port mapping relationship between the virtual port of the failure control node and the virtual port to be transferred of the standby control node is stored in the port transfer list;
And the second transfer submodule is used for transferring the links of the virtual ports of the fault control node to the virtual ports to be transferred of the corresponding standby control node according to the port transfer list and the link priority.
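The priority-ordered variant can be sketched as follows; it assumes the convention (not stated in the embodiment) that a larger priority value means a more important service thread, so those links are moved first.

```python
def transfer_by_priority(port_transfer_list, link_priority, move_link):
    # Transfer links of the failed node's virtual ports in descending
    # order of link priority, using the same port mapping list.
    ordered = sorted(port_transfer_list,
                     key=lambda port: link_priority.get(port, 0),
                     reverse=True)
    for src in ordered:
        move_link(src, port_transfer_list[src])
```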
On the other hand, the determining process of each current central processing unit core includes:
a tenth acquisition sub-module, configured to acquire all central processor cores corresponding to the standby control node;
a third determining submodule, configured to determine processing types and corresponding processing capacities of all central processing unit cores;
the first screening processing submodule is used for screening the central processor cores with the processing types corresponding to the service processing types from all the central processor cores;
the second screening processing sub-module is used for screening, from the central processor cores other than those of the service processing type, the prepared central processor cores whose corresponding processing capacities are greater than the preset processing capacity;
the first sub-module is used for taking the central processor core corresponding to the service processing type and the prepared central processor core as each current central processor core.
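The two-stage screening can be sketched as below, assuming each core is described by an illustrative record with `id`, `type` and `capacity` fields (a hypothetical schema, not the embodiment's actual data structure).

```python
def select_current_cores(cores, service_type, capacity_threshold):
    # Stage 1: cores whose processing type matches the service type.
    business = [c["id"] for c in cores if c["type"] == service_type]
    # Stage 2: among the remaining cores, keep "prepared" cores whose
    # processing capacity exceeds the preset threshold.
    prepared = [c["id"] for c in cores
                if c["type"] != service_type
                and c["capacity"] > capacity_threshold]
    return business + prepared
```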
On the other hand, after determining the corresponding target central processing unit core for service data balanced distribution according to the service balanced processing mechanism, before landing the service data on the target central processing unit core, the method further comprises:
An eleventh acquisition sub-module, configured to acquire a current throughput of each target central processing unit core;
a twelfth obtaining sub-module, configured to obtain, if there is a current throughput less than a preset throughput, a processing type and throughput corresponding to the remaining target central processor cores except the target central processor core corresponding to the current throughput less than the preset throughput;
and the third screening processing sub-module is used for screening out a target central processing unit core which is of a service processing type, has throughput larger than the preset throughput and corresponds to the maximum throughput, and replacing the target central processing unit core which has the current throughput smaller than the preset throughput so as to process the service.
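The replacement rule can be sketched as follows: for a core whose current throughput falls below the preset value, pick, from the remaining target cores, one of the service processing type whose throughput exceeds the preset value and is maximal. The function name and core schema are illustrative assumptions.

```python
def find_replacement(cores, low_core_id, service_type, preset_throughput):
    # cores: {core id: {"type": ..., "throughput": ...}}
    candidates = {cid: c for cid, c in cores.items()
                  if cid != low_core_id
                  and c["type"] == service_type
                  and c["throughput"] > preset_throughput}
    if not candidates:
        return None  # no eligible replacement core
    return max(candidates, key=lambda cid: candidates[cid]["throughput"])
```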
On the other hand, after the second equalization distribution module 13, further includes:
a fourth determining submodule, configured to determine a processing capability corresponding to each target central processing unit core;
the third judging submodule is used for judging whether each processing capacity exceeds a preset processing capacity or not; if the processing capacity exceeds the preset processing capacity, triggering a first marking sub-module;
the first marking sub-module is used for marking the target central processing unit core corresponding to the exceeding of the preset processing capacity so that the next business data does not fall into the target central processing unit core corresponding to the exceeding of the preset processing capacity.
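The marking step can be sketched as two small helpers; `capacity_limit` stands in for the preset processing capacity and is an assumed parameter.

```python
def mark_overloaded(core_loads, capacity_limit):
    # Mark every target core whose processing load exceeds the preset limit.
    return {cid for cid, load in core_loads.items() if load > capacity_limit}

def next_drop_targets(all_cores, marked):
    # The next batch of service data avoids the marked cores.
    return [cid for cid in all_cores if cid not in marked]
```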
In another aspect, the process of obtaining the target virtual port of the standby control node includes:
a thirteenth obtaining sub-module, configured to obtain the virtual port to be transferred of the standby control node, to which the service corresponding to the fault control node is transferred;
a fourteenth obtaining sub-module, configured to obtain an initial virtual port of a link corresponding to a service processing thread formed by the host in the standby control node;
and the second sub-module is used for taking the virtual port to be transferred and the initial virtual port as target virtual ports.
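The target-port set is simply the combination of the transferred ports and the standby node's own initial ports. A minimal sketch (order-preserving, deduplicated; names illustrative):

```python
def target_virtual_ports(ports_to_transfer, initial_ports):
    # Combine transferred and initial virtual ports without duplicates,
    # preserving first-seen order.
    seen = []
    for port in list(ports_to_transfer) + list(initial_ports):
        if port not in seen:
            seen.append(port)
    return seen
```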
On the other hand, after the second equalization distribution module 13, further includes:
a fifteenth acquisition sub-module, configured to acquire task completion progress corresponding to each target central processing unit core according to a preset time;
and the first recycling sub-module is used for cleaning and recycling the memory space to which the service data corresponding to the completed task belongs.
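The progress-driven recycling can be sketched as below; the progress map and buffer store are assumed abstractions of the per-core task state, not the embodiment's actual structures.

```python
def recycle_completed(progress, buffers):
    # progress: {core id: completion fraction}; buffers: {core id: service
    # data buffer}. Buffers of completed tasks are cleaned and reclaimed.
    for cid, fraction in progress.items():
        if fraction >= 1.0 and cid in buffers:
            del buffers[cid]
    return buffers
```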
Since the embodiments of the device portion correspond to the above method embodiments, reference is made to the description of the method embodiments; details are not repeated here.
For the description of the service processing device provided by the present invention, reference is made to the above method embodiments; details are not repeated here. The device has the same beneficial effects as the above service processing method.
Fig. 5 is a block diagram of a service processing device according to an embodiment of the present invention, as shown in fig. 5, where the device includes:
a memory 21 for storing a computer program;
a processor 22 for implementing the steps of the service processing method when executing the computer program.
The service processing device provided in this embodiment may include, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
Processor 22 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 22 may be implemented in hardware in at least one of a digital signal processor (Digital Signal Processor, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA), and a programmable logic array (Programmable Logic Array, PLA). The processor 22 may also include a main processor and a coprocessor: the main processor, also called a CPU, is a processor for processing data in an awake state, while the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 22 may be integrated with a graphics processor (Graphics Processing Unit, GPU) responsible for rendering the content to be displayed on the display screen. In some embodiments, the processor 22 may also include an artificial intelligence (Artificial Intelligence, AI) processor for processing computing operations related to machine learning.
Memory 21 may include one or more computer-readable storage media, which may be non-transitory. Memory 21 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 21 is at least used for storing a computer program 211, which, when loaded and executed by the processor 22, implements the relevant steps of the service processing method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 21 may further include an operating system 212, data 213, and the like, and the storage manner may be transient or permanent. The operating system 212 may include Windows, Unix, Linux, and the like. The data 213 may include, but is not limited to, data related to the service processing method.
In some embodiments, the service processing device may further include a display 23, an input/output interface 24, a communication interface 25, a power supply 26, and a communication bus 27.
Those skilled in the art will appreciate that the structure shown in fig. 5 does not limit the service processing device, which may include more or fewer components than illustrated.
The processor 22 implements the service processing method provided in any of the above embodiments by calling instructions stored in the memory 21.
For the description of the service processing equipment provided by the present invention, reference is made to the above method embodiments; details are not repeated here. The equipment has the same beneficial effects as the above service processing method.
Further, the present invention also provides a computer-readable storage medium having a computer program stored thereon, which, when executed by the processor 22, implements the steps of the service processing method described above.
It will be appreciated that the methods of the above embodiments, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored on a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and used to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
For the description of the computer-readable storage medium provided by the present invention, reference is made to the above method embodiments; details are not repeated here. The medium has the same beneficial effects as the above service processing method.
Fig. 6 is a schematic diagram of a service processing scenario provided by an embodiment of the present invention. As shown in fig. 6, node A (standby control node 10) has a physical port comprising two virtual ports, virtual port 1 and virtual port 2. The service of virtual port 1 is processed by service processing thread 1, which is affinity-bound to central processing unit core 1; the service of virtual port 2 is processed by service processing thread 2, which is affinity-bound to central processing unit core 2. The physical port of node A is connected to the host/server 8 for processing the service of the host/server 8; virtual port 2 provides port redundancy, so that when node B (fault control node 9) fails, node A takes over the service of virtual port 1 of node B. Symmetrically, node B has a physical port comprising virtual port 1 and virtual port 2, whose services are processed by service processing threads 1 and 2 respectively, each affinity-bound to its corresponding central processing unit core. The physical port of node B is likewise connected to a host/server for processing its service, virtual port 2 provides port redundancy, and node B takes over the service of virtual port 1 of node A when node A fails.
Compared with the original design, when one node in the dual-control system fails, the number of links accessible to the host side is not reduced: the service of the failed node is transferred to the partner node (which has not failed) without overloading some of the partner node's central processing unit cores (the cores are sufficient). Through reasonable affinity binding of virtual ports to central processing unit cores, CPU resources are effectively utilized, the service load is balanced, and the original service throughput is preserved.
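The port-to-core affinity policy of Fig. 6 can be sketched as a simple mapping; the modulo wrap for the case of more ports than cores is an assumed policy, not stated in the figure.

```python
def build_affinity_map(virtual_ports, cores):
    # One service processing thread per virtual port, affinity-bound to a
    # central processing unit core; wrap around when ports outnumber cores.
    return {port: cores[i % len(cores)] for i, port in enumerate(virtual_ports)}
```

For Fig. 6's node A, this yields virtual port 1 bound to core 1 and virtual port 2 bound to core 2.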
The service processing method, device, equipment and medium provided by the present invention have been described in detail above. In this description, each embodiment is described in a progressive manner, each embodiment focusing on its differences from the others, so that the same or similar parts among the embodiments may be referred to each other. For the device disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, the description is relatively brief, and relevant points can be found in the description of the method section. It should be noted that those skilled in the art may modify and practice the present invention without departing from its spirit.
It should also be noted that in this specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (15)

1. A method of service processing, applied to a storage system of dual control nodes, the method comprising:
under the condition that a service corresponding to a fault control node is transferred to a standby control node, acquiring service data of a service thread, each target virtual port of the standby control node, and each current central processing unit core; wherein the target virtual ports, formed by occupied links, comprise original virtual ports of the standby control node and virtual ports transferred from the fault control node to the standby control node;
uniformly distributing each target virtual port to a corresponding target central processing unit core among the current central processing unit cores, wherein the target central processing unit cores are plural in number;
and uniformly distributing the service data to the corresponding target central processing unit cores for service processing according to a service balance processing mechanism, wherein the service balance processing mechanism is determined based on the disk number property of the service data and/or the utilization rate of each target central processing unit core.
2. The traffic processing method according to claim 1, wherein said equally distributing each of said target virtual ports to a corresponding target central processor core within each of said current central processor cores comprises:
numbering each target virtual port and each current central processing unit core respectively;
performing remainder processing on the total number of the current central processing unit cores by the number of each target virtual port to determine a remainder corresponding to the number of each target virtual port;
and matching the remainder corresponding to the number of each target virtual port to the number of each current central processing unit core to determine the corresponding target central processing unit core.
3. The service processing method according to claim 2, wherein the service balancing processing mechanism determines based on a landing number property of the service data, and the distributing the service data to the corresponding target central processor core for service processing according to the service balancing processing mechanism includes:
obtaining a virtual disk number corresponding to the service data, wherein the virtual disk number is distinguished from a disk number corresponding to a host;
Numbering the target central processing unit cores corresponding to the target virtual ports;
performing remainder processing on the total number of the target central processing unit cores by each virtual disk number to determine a remainder corresponding to each virtual disk number;
and matching the remainder corresponding to each virtual disk number to the number of each target central processing unit core to determine the corresponding target central processing unit core.
4. The service processing method according to claim 2, wherein the service balancing processing mechanism determines based on the usage rate of the central processing unit core, and the distributing the service data to the corresponding target central processing unit core for service processing according to the service balancing processing mechanism includes:
acquiring the utilization rate of each target central processing unit core;
obtaining target central processor cores corresponding to the maximum utilization rate and the minimum utilization rate respectively from the utilization rates of the target central processor cores;
determining a difference between the maximum usage and the minimum usage;
judging whether the difference is larger than a difference threshold;
and if the difference is greater than the difference threshold, distributing the service data to the target central processing unit core corresponding to the minimum utilization rate for service processing.
5. The service processing method according to claim 2, wherein the service balancing processing mechanism determines based on a landing number property of the service data and a usage rate of each of the target central processing cores, and the distributing the service data to the corresponding target central processing core for service processing according to the service balancing processing mechanism includes:
acquiring the utilization rate of each target central processing unit core;
obtaining target central processor cores corresponding to the maximum utilization rate and the minimum utilization rate respectively from the utilization rates of the target central processor cores;
determining a difference between the maximum usage and the minimum usage;
judging whether the difference is larger than a difference threshold;
if the difference is greater than the difference threshold, dropping the virtual disk corresponding to the service data onto the target central processing unit core corresponding to the minimum utilization rate;
if not, obtaining a virtual disk number corresponding to the service data, wherein the virtual disk number is distinguished from a disk number corresponding to a host;
numbering the target central processing unit cores corresponding to the target virtual ports;
performing remainder processing on the total number of the target central processing unit cores by each virtual disk number to determine a remainder corresponding to each virtual disk number;
And matching the remainder corresponding to each virtual disk number to the number of each target central processing unit core to determine the corresponding target central processing unit core.
6. The service processing method according to claim 1, wherein the transfer determination process for transferring the service corresponding to the failure control node to the backup control node includes:
obtaining a virtual port of the fault control node, a virtual port to be transferred of the standby control node and a port transfer list, wherein the port transfer list stores a port mapping relation between the virtual port of the fault control node and the virtual port to be transferred of the standby control node;
and parallelly transferring links of the virtual ports of the fault control node to corresponding virtual ports to be transferred of the standby control node according to the port transfer list.
7. The service processing method according to claim 1, wherein the transfer determination process for transferring the service corresponding to the failure control node to the backup control node includes:
acquiring a link priority corresponding to a virtual port of the fault control node, wherein the link priority is determined by the importance degree of the service thread received by the virtual port corresponding to the link of the fault control node;
Obtaining a virtual port of the fault control node, a virtual port to be transferred of the standby control node and a port transfer list, wherein the port transfer list stores a port mapping relation between the virtual port of the fault control node and the virtual port to be transferred of the standby control node;
and transferring the links of the virtual ports of the fault control node to the corresponding virtual ports to be transferred of the standby control node according to the port transfer list and the link priority.
8. The traffic processing method according to claim 1, wherein the determining process of each of the current central processing unit cores includes:
acquiring all central processing unit cores corresponding to the standby control nodes;
determining the processing types and the corresponding processing capacities of all the CPU cores;
screening the CPU cores with the processing types corresponding to the service processing types from all the CPU cores;
screening the prepared CPU cores with corresponding processing capacities larger than the preset processing capacities from other CPU cores except the service processing type in all the CPU cores;
And taking the central processor core corresponding to the service processing type and the prepared central processor core as each current central processor core.
9. The service processing method according to any one of claims 3 to 5, further comprising, after determining the corresponding target cpu core for the service data balanced allocation according to the service balanced processing mechanism, before landing the service data on the target cpu core:
acquiring the current throughput of each target central processing unit core;
if the current throughput is smaller than the preset throughput, acquiring the processing types and the throughput corresponding to the rest target central processing unit cores except the target central processing unit core corresponding to the current throughput which is smaller than the preset throughput;
and screening out the target central processing unit core which is of the service processing type, has the throughput larger than the preset throughput and corresponds to the throughput with the maximum throughput, and replacing the target central processing unit core which has the current throughput smaller than the preset throughput so as to perform service processing.
10. The service processing method according to claim 9, further comprising, after said distributing said service data to said corresponding target cpu core for service processing according to a service balancing processing mechanism:
determining the corresponding processing capacity of each target central processing unit core;
judging whether each processing capacity exceeds a preset processing capacity;
if the processing capacity exceeds the preset processing capacity, marking the target central processing unit core corresponding to the exceeding of the preset processing capacity so that the next service data is not dropped to the target central processing unit core corresponding to the exceeding of the preset processing capacity.
11. The service processing method according to claim 6 or 7, wherein the process of acquiring the target virtual port of the standby control node includes:
acquiring a virtual port to be transferred, corresponding to the fault control node, of a service transferred to the standby control node;
acquiring an initial virtual port of a link corresponding to a service processing thread formed by a host in the standby control node;
and taking the virtual port to be transferred and the initial virtual port as the target virtual port.
12. The service processing method according to claim 1, further comprising, after said distributing said service data to said corresponding target cpu core for service processing according to a service balancing processing mechanism:
acquiring task completion progress corresponding to each target central processing unit core according to preset time;
and clearing and recycling the memory space to which the service data corresponding to the completed task belongs.
13. A service processing apparatus, characterized by a storage system applied to a dual control node, said apparatus comprising:
the first acquisition module is used for acquiring service data of a service thread, each target virtual port of the standby control node, and each current central processing unit core under the condition that a service corresponding to the fault control node is transferred to the standby control node; wherein the target virtual ports, formed by occupied links, comprise original virtual ports of the standby control node and virtual ports transferred from the fault control node to the standby control node;
the first balanced distribution module is used for uniformly distributing each target virtual port to a corresponding target central processing unit core among the current central processing unit cores, wherein the target central processing unit cores are plural in number;
and the second balanced distribution module is used for carrying out balanced distribution on the service data to the corresponding target central processing unit cores according to a service balanced processing mechanism for carrying out service processing, wherein the service balanced processing mechanism is determined based on the disk number property of the service data and/or the utilization rate of each target central processing unit core.
14. A service processing apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the business processing method according to any of claims 1 to 12 when executing said computer program.
15. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the service processing method according to any of claims 1 to 12.
CN202410072350.2A 2024-01-18 2024-01-18 Service processing method, device, equipment and medium Active CN117596212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410072350.2A CN117596212B (en) 2024-01-18 2024-01-18 Service processing method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN117596212A (en) 2024-02-23
CN117596212B (en) 2024-04-09

Family

ID=89911900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410072350.2A Active CN117596212B (en) 2024-01-18 2024-01-18 Service processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117596212B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101446885A (en) * 2005-09-05 2009-06-03 株式会社日立制作所 Storage system and access control method of storage system
US8626967B1 (en) * 2012-06-29 2014-01-07 Emc Corporation Virtualization of a storage processor for port failover
CN103645864A (en) * 2013-12-26 2014-03-19 深圳市迪菲特科技股份有限公司 Magnetic disc array dual-control system and realization method thereof
CN108924272A (en) * 2018-06-26 2018-11-30 新华三信息安全技术有限公司 A kind of port resource distribution method and device
CN110275760A (en) * 2019-06-27 2019-09-24 深圳市网心科技有限公司 Process based on fictitious host computer processor hangs up method and its relevant device
CN110362402A (en) * 2019-06-25 2019-10-22 苏州浪潮智能科技有限公司 A kind of load-balancing method, device, equipment and readable storage medium storing program for executing
CN111866210A (en) * 2020-07-08 2020-10-30 苏州浪潮智能科技有限公司 Virtual IP balance distribution method, system, terminal and storage medium
CN113742098A (en) * 2021-08-20 2021-12-03 苏州浪潮智能科技有限公司 Kernel message processing method, device, equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant