CN113094199A - Service switching strategy management method, device and equipment in disaster area

Info

Publication number
CN113094199A
Authority
CN
China
Prior art keywords
capacity
target
area
node group
service
Prior art date
Legal status
Granted
Application number
CN202110409594.1A
Other languages
Chinese (zh)
Other versions
CN113094199B (en)
Inventor
张严诺
Current Assignee
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202110409594.1A
Publication of CN113094199A
Application granted
Publication of CN113094199B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0709Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0793Remedial or corrective actions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Quality & Reliability (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Technology Law (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Computer Hardware Design (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Provided are a method, an apparatus, and a device for managing a service switching policy in a disaster area. The method includes: responding to a service switching instruction to determine a disaster area and a target area, where the disaster area and the target area share at least one service of at least one application; determining at least one target service of at least one target application shared by the disaster area and the target area; acquiring the required capacity of all target services in the disaster area and the remaining capacity in the target area; and performing service switching according to the required capacity and the remaining capacity in combination with a preset switching policy, so that the target area takes over all target services of the disaster area.

Description

Service switching strategy management method, device and equipment in disaster area
Technical Field
The present disclosure relates to the field of communications, and in particular, to a method, an apparatus, and a device for managing a service switching policy in a disaster area.
Background
As customers place ever higher requirements on the continuity of Internet financial services, highly available deployment of the application architecture has become particularly important for reducing the service impact of various faults. On this basis, there is an urgent need to complete various environment switchovers and degradations reasonably and efficiently, by scientifically configuring judgment conditions such as monitoring indicators and by dividing different service scenarios and key transactions. At present, however, for an area-level disaster a single area must take over the disaster area's transaction scenarios in full; because the capacity of a single area is limited, it is difficult for it to take over all service scenarios of the disaster area. Moreover, the production operators on duty can hardly have a full understanding of all application groups in an area, and the real required capacity of the disaster area is hard to obtain, so accurate service switching is difficult to provide. How to improve the management efficiency of disaster-area service switching has therefore become a technical problem that urgently needs to be solved.
Disclosure of Invention
In view of the foregoing problems in the prior art, an object of the present invention is to provide a method, an apparatus and a device for managing a service switching policy in a disaster area, which can improve the efficiency of service switching management in a disaster area.
In order to solve the technical problems, the specific technical scheme is as follows:
in one aspect, a method for traffic switching policy management in a disaster area is provided herein, the method comprising:
responding to a service switching instruction to determine a disaster area and a target area, wherein the disaster area and the target area share at least one service of at least one application;
determining at least one target business of at least one target application shared by the disaster area and the target area;
acquiring all target service demand capacity in the disaster area and residual capacity in the target area;
and switching services according to a preset switching strategy according to the required capacity and the residual capacity so that the target area takes over all target services of the disaster area.
Further, the target service is determined by the following steps:
acquiring a grade sequence of services in the common application of the disaster area and the target area;
and determining at least one target service of at least one target application shared by the disaster area and the target area according to a preset grade threshold value.
Further, the acquiring of the total target business demand capacity in the disaster area comprises:
for each target service, performing pressure measurement processing on each preset node group on the target service to obtain the maximum transaction performance of all the preset node groups, marking the node group with the lowest maximum transaction performance in all the preset node groups as a bottleneck node group, and marking the rest node groups as key node groups;
acquiring historical transaction processing capacity of the key node group and historical transaction processing capacity of the bottleneck node group, and calculating the demand proportion of the key node group relative to the demand capacity of the bottleneck node group according to the historical transaction processing capacity;
calculating the transaction demand capacity of each key node group according to the transaction demand capacity of the bottleneck node group and the demand proportion, wherein the transaction demand capacity of the bottleneck node group is determined by the maximum transaction performance and the current usage of the bottleneck node group in a target area;
and calculating the required capacity of each target service according to the transaction required capacities of the bottleneck node group and the key node group, so as to obtain the required capacity of all target services in the disaster area.
Further, the calculating the required capacity of each target service according to the transaction required capacities of the bottleneck node group and the key node group includes:
aiming at any node group A in the target service;
when the node group A is marked as a bottleneck node group at least once among all target services, taking the difference between the maximum transaction performance of the node group A in the target area and its current usage as the transaction demand capacity of the node group A; when the node group A is marked as a key node group in all target services, determining the maximum of the transaction demand capacities of the node group A in all target services, comparing the maximum with the difference between the maximum transaction performance of the node group A in the target area and its current usage, and taking the smaller value as the transaction demand capacity of the node group A;
and taking the sum of the transaction demand capacities of the node groups on the target service as the demand capacity of the target service.
Further, the performing service switching according to a preset switching strategy according to the required capacity and the remaining capacity includes:
judging whether the residual capacity is smaller than the required capacity;
if the residual capacity is not less than the required capacity, controlling the target area to take over all target services in all target applications in the disaster area;
if the residual capacity is smaller than the required capacity, carrying out capacity reduction on the existing application in the target area to obtain the capacity after capacity reduction, and controlling the target area to take over all target services in all target applications in the disaster area until the capacity after capacity reduction is not smaller than the required capacity.
Further, the obtaining of the residual capacity after the capacity reduction by performing the capacity reduction processing on the target area includes:
determining a capacity reduction object of the target area, wherein the capacity reduction object is a non-target application and/or a non-target service in the target area;
acquiring a capacity reduction coefficient of the capacity reduction object, wherein the capacity reduction coefficient is determined according to a preset grade of the capacity reduction object;
carrying out capacity reduction processing on the capacity reduction object according to the capacity reduction coefficient to obtain capacity reduction;
and calculating to obtain the residual capacity after the capacity reduction according to the residual capacity and the capacity reduction.
Optionally, if the remaining capacity is smaller than the required capacity, performing capacity reduction processing on the target area to obtain a capacity-reduced remaining capacity, and controlling the target area to take over all target services in all target applications in the disaster area until the capacity-reduced remaining capacity is not smaller than the required capacity includes:
if the residual capacity after capacity reduction is not less than the required capacity, controlling the target area to take over all target services in all target applications in the disaster area;
and if the residual capacity after capacity reduction is smaller than the required capacity, performing degradation processing on all target services in the disaster area to obtain the required capacity after degradation, so that the residual capacity after capacity reduction is not smaller than the required capacity after degradation, and the target area is controlled to take over all target services in all target applications in the disaster area.
Further, the step of performing degradation processing on all target services in the disaster area to obtain the degraded required capacity includes:
calculating degradation coefficients of all the target services according to the residual capacity after capacity reduction and the required capacity;
and performing degradation processing on the required capacity of all the target services according to the degradation coefficient to obtain the degraded required capacity.
In another aspect, this document also provides an apparatus for managing traffic switching policies in a disaster area, the apparatus comprising:
a switching instruction response module, configured to respond to a service switching instruction to determine a disaster area and a target area, where the disaster area and the target area share at least one service of at least one application;
a target service determination module, configured to determine at least one target service of at least one target application shared by the disaster area and the target area;
the calculation module is used for acquiring all target service demand capacity in the disaster area and the residual capacity in the target area;
and the strategy execution module is used for switching services according to the required capacity and the residual capacity and a preset switching strategy so that the target area takes over all target services of the disaster area.
In another aspect, a computer device is also provided herein, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method steps as described above when executing the computer program.
Finally, a computer-readable storage medium is also provided herein, which stores a computer program that, when executed by a processor, carries out the method steps as described above.
By adopting the above technical solution, the method, apparatus, and device for managing a service switching policy in a disaster area described herein determine areas that have a service sharing relationship, so that when a disaster occurs in one area the other area takes over the services shared with the disaster area and the shared services keep running normally. Further, the required capacity of the services shared by the disaster area is compared with the remaining capacity of the take-over area and combined with a preset switching policy, which realizes fast and flexible takeover of the shared services and improves the efficiency of service switching management for area-level disasters.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 shows a schematic representation of an implementation environment for a method provided by embodiments herein;
FIG. 2 is a schematic diagram illustrating the relationship of regions in which a common relationship exists in the embodiments herein;
FIG. 3 is a schematic diagram illustrating steps of a method for managing a traffic switching policy in a disaster area according to an embodiment of the present disclosure;
fig. 4 shows a schematic diagram of a target service determination step in the embodiments herein;
FIG. 5 is a schematic diagram illustrating the required capacity determination step in an embodiment herein;
FIG. 6 shows a schematic diagram of taking over a disaster area in an embodiment herein;
FIG. 7 is a schematic diagram illustrating a target area reduction process in an embodiment herein;
FIG. 8 illustrates a disaster area degradation processing schematic in an embodiment herein;
FIG. 9 is a flow diagram that illustrates a method for traffic switching policy management in a disaster area, in one embodiment herein;
FIG. 10 is a schematic diagram of a traffic switching policy management device in a disaster area provided by an embodiment herein;
fig. 11 shows a schematic structural diagram of a computer device provided in embodiments herein.
Description of the symbols of the drawings:
10. a control unit;
20. a data acquisition unit;
30. a parameter configuration unit;
40. a server;
100. a switching instruction response module;
200. a target service determination module;
300. a calculation module;
400. a policy enforcement module;
1102. a computer device;
1104. a processor;
1106. a memory;
1108. a drive mechanism;
1110. an input/output module;
1112. an input device;
1114. an output device;
1116. a presentation device;
1118. a graphical user interface;
1120. a network interface;
1122. a communication link;
1124. a communication bus.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments herein without making any creative effort, shall fall within the scope of protection.
It should be noted that the terms "first," "second," and the like in the description and claims herein and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments herein described are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or device.
In the prior art, in order to ensure continuity of service operation and reduce the service impact of disaster events such as service failures, a highly available deployment is usually adopted for the application architecture. A cold-standby solution is common, but it occupies many resources, and the standby system may remain unused for a long time, which wastes resources. An active-active high-availability scheme can therefore be adopted: in particular, when an area-level disaster occurs, the transaction scenarios of the disaster area are taken over by a single area, which avoids the large impact of interrupting part of the disaster area's core services and greatly improves the utilization of service resources. However, because the capacity of a single area is limited, it is difficult to take over all service scenarios of the disaster area, and at present the production operators on duty can hardly fully understand all application groups in the area or obtain the real required capacity of the disaster area, which makes it difficult to give an accurate service switchover.
In order to solve the above problem, an embodiment of the present disclosure provides a method for managing a service switching policy in a disaster area. Fig. 1 is a schematic diagram of an implementation environment of the method. Area a and area b are two areas that have a service sharing relationship. A data acquisition unit 20 acquires the operating parameters and service resources of area a and area b in real time and stores the acquired data in a server 40; a parameter configuration unit 30 configures the switching policy to be used when a disaster occurs in area a or area b and stores it in the server 40; and a control unit 10 implements the corresponding switching instruction according to the specific data information. For example, when a disaster occurs in area a, the control unit 10 receives an instruction indicating that a disaster has occurred in area a (the disaster area) and a service switchover is needed, quickly determines area b (the target area) as the take-over area, and further determines at least one target service of at least one target application shared by the disaster area and the target area; it then acquires the required capacity of all target services in the disaster area and the remaining capacity in the target area from the data information in the server, and performs service switching according to the required capacity and the remaining capacity in combination with a preset switching policy, so that the target area takes over all target services of the disaster area. The method can adjust the switching policy according to the actual conditions of the disaster area and the target area, and improves the efficiency of service switching management for area-level disasters.
Specifically, embodiments herein provide a method for managing a service switching policy in a disaster area, which can improve the efficiency of service switching management for area-level disasters. Fig. 3 is a schematic step diagram of the method provided in an embodiment of the present disclosure. This specification provides the method operation steps as described in the embodiments or the flowcharts, but more or fewer operation steps may be included based on routine or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. When an actual system or apparatus product executes, the steps may be executed sequentially or in parallel according to the methods shown in the embodiments or the drawings. Specifically, as shown in fig. 3, the method may include:
s101: responding to a service switching instruction to determine a disaster area and a target area, wherein the disaster area and the target area share at least one service of at least one application;
s102: determining at least one target business of at least one target application shared by the disaster area and the target area;
s103: acquiring all target service demand capacity in the disaster area and residual capacity in the target area;
s104: and switching services according to a preset switching strategy according to the required capacity and the residual capacity so that the target area takes over all target services of the disaster area.
In the embodiments of the present description, when an area-level disaster occurs, a disaster area (the area to be taken over) and a target area (the take-over area) that have an application sharing relationship are quickly determined; then at least one target service of at least one target application that needs to be taken over is further determined, and the target services are quickly taken over according to the calculated relationship between the required capacity of all target services in the disaster area and the remaining capacity of the target area, in combination with a preset switching policy.
It can be understood that, when an area-level disaster occurs, the total transaction flow of a single area is so large that it can hardly be received in full by the take-over area. To ensure that the core services of the disaster area keep running continuously and normally, part of the disaster area's services can be taken over, which protects the core rights and interests of the disaster area. Since the disaster area and the target area are in an application sharing relationship, when an application cannot keep running because a disaster occurs in the disaster area, it can be taken over by the target area; and because of the sharing relationship, the architecture of the shared application has already been deployed in the target area, so a rapid takeover by the target area can be achieved, improving both takeover efficiency and success rate.
It should be noted that, in practical applications, an application may be shared by a plurality of areas. Therefore, when a disaster occurs in one of the areas, the take-over area may be determined according to preset configuration information; for example, which area takes over the disaster area may be determined by the distance between that area and the disaster area, or by a preset takeover sequence. In some other embodiments, the takeover sequence may also be determined according to the remaining capacity of the other areas; the specific takeover rule is not limited in this specification.
The target application can be an important application among the applications shared by the disaster area and the target area, and the target service can be a key service in the target application. In this way, even when the whole disaster area cannot be taken over, its core services can still be taken over, which strengthens the ability to cope with risks and further improves the recovery efficiency of the disaster area.
As shown in fig. 4, the target service may be determined by the following steps:
s201: acquiring a grade sequence of services in the common application of the disaster area and the target area;
s202: and determining at least one target service of at least one target application shared by the disaster area and the target area according to a preset grade threshold value.
It can be understood that the rank sequence may represent the importance of the services: the services in all shared applications are ranked by importance, and when a disaster occurs, the target services are determined in order from high rank to low rank, so the applications in which the target services reside are the target applications. The preset rank threshold may be information configured in advance; in some other embodiments, different thresholds may also be set for different target areas, so that a suitable number of target services can be selected according to the characteristics of each target area, ensuring that service switching is both reasonable and efficient.
In some other embodiments, the importance of the applications and of the services within each application may also be determined; this can be set in advance during service configuration. In a specific implementation, the ranks of the different applications may be determined first, and then the ranks of the different services within each application. When determining the target services, an allocation proportion may be determined according to the ranks of the different applications to obtain the number of services to select from each application, and the target services may then be determined according to the ranks of the services within each application; the way in which the target services are determined is therefore not limited in this specification.
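As an illustration of this selection step, the following is a minimal Python sketch, assuming a hypothetical rank table in which a smaller rank number means a more important service; the names (service_ranks, select_target_services) and figures are illustrative assumptions, not part of the patent.

def select_target_services(service_ranks, rank_threshold):
    """Pick target services from the applications shared by the disaster
    area and the target area, keeping only services whose rank meets the
    preset threshold (smaller rank number = more important service)."""
    targets = []
    for (application, service), rank in service_ranks.items():
        if rank <= rank_threshold:
            targets.append((application, service))
    # The target applications are simply the applications that contain
    # at least one selected target service.
    target_apps = {app for app, _ in targets}
    return targets, target_apps

# Example: ranks of services in the shared applications (hypothetical data).
service_ranks = {
    ("payment", "transfer"): 1,
    ("payment", "statement"): 3,
    ("account", "balance_query"): 2,
}
target_services, target_apps = select_target_services(service_ranks, rank_threshold=2)
print(target_services)  # [('payment', 'transfer'), ('account', 'balance_query')]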
In this embodiment, as shown in fig. 5, the acquiring all target business demand capacity in the disaster area includes:
s301: for each target service, performing pressure measurement processing on each preset node group on the target service to obtain the maximum transaction performance of all the preset node groups, marking the node group with the lowest maximum transaction performance in all the preset node groups as a bottleneck node group, and marking the rest node groups as key node groups;
s302: acquiring historical transaction processing capacity of the key node group and historical transaction processing capacity of the bottleneck node group, and calculating to obtain a required capacity demand proportion of the key node group relative to the bottleneck node group according to the historical transaction processing capacity;
s303: calculating the transaction demand capacity of each key node group according to the transaction demand capacity of the bottleneck node group and the demand capacity demand proportion, wherein the transaction demand capacity of the bottleneck node group is determined by the maximum transaction performance and the current usage of the bottleneck node group in a target area;
s304: and calculating the required capacity of each target service according to the transaction required capacities of the bottleneck node group and the key node group, so as to obtain the required capacity of all target services in the disaster area.
The node groups may be configured to process the micro-services involved in each service, and all the micro-services are executed in a certain order to implement the corresponding service. By performing full-link pressure measurement on a single service scenario, the transaction processing capability (TPS) of each node group is recorded. In an actual test, a single node group can be tested, or all node groups of the whole link can be tested together; as the number of node groups in the link increases, not all groups will be able to reach their maximum performance. This is like a wooden bucket whose staves differ in height: the maximum amount of water the bucket can hold depends on the shortest stave, which is the bottleneck.
Exemplarily, as shown in fig. 2, since area a and area b have a parallel sharing relationship, the performance of a node group in a service shared by the two areas can be obtained from either area a or area b; meanwhile, each node group is labeled according to its function on the service processing link, for example as a key node group, a bottleneck node group, or a non-key node group.
The preset node group can be a main link node group in a single service scene and can be configured in advance according to actual conditions, so that other non-key node groups can be prevented from being taken over, and more key applications can be guaranteed to be taken over.
In each target service, after the bottleneck node group is determined, the maximum capacity to which the bottleneck node group can be expanded in the target area is taken as its transaction demand capacity. From this transaction demand capacity and the calculated demand proportions, the expansion capacity theoretically required by the other key node groups in the target area, i.e., the transaction demand capacity of each key node group of the disaster area, can be obtained. Finally, the sum of the transaction demand capacities of the bottleneck node group and the key node groups on the transaction link of a target service is taken as the required capacity of that target service. Because the actual demand is calculated per node group, the expansion capacity actually required is calculated accurately and redundant capacity is avoided.
The transaction demand capacity of the bottleneck node group (the bottleneck node group determined in the target service) can be obtained by the following formula:

R_bottleneck = H_bottleneck / M_bottleneck − N_bottleneck   (1)

where R_bottleneck is the transaction demand capacity of the bottleneck node group in any target service, H_bottleneck is the background bearable transaction amount of the bottleneck node group obtained by pressure measurement, M_bottleneck is the single-server transaction processing capability (TPS) of the bottleneck node group, which may be its current real-time transaction processing capability, and N_bottleneck is the capacity already occupied by the bottleneck node group.
In detail, the maximum transaction performance of any node group can be obtained by the ratio of the acceptable transaction amount of the background to the daily transaction processing capacity, and then the difference between the maximum transaction performance and the occupied capacity of the existing node group is used as the theoretically maximum transaction demand capacity of the node group.
Illustratively, the demand proportion of each key node group in a single service scenario is modeled from historical data. In a service scenario with bottleneck node group C and any key node group A:

Transaction amount of bottleneck node group C per unit time: T_C
Total transaction amount of key node group A per unit time: T_A
Demand proportion of the required capacity of key node group A in this service scenario: X_A

The demand proportion X_A of node group A in the service scenario is obtained by formula (2) as the mean, over N historical samples, of the ratio between the transaction amount of key node group A and that of bottleneck node group C; the demand proportions X_B, X_C, X_D, ..., X_N of all other key node groups in the service scenario are obtained in the same way. Correspondingly, the demand proportion X_critical of a key node group in any other service scenario can be obtained by formula (3).

X_A = (1/N) · Σ_{i=1..N} ( T_A(i) / T_C(i) )   (2)

X_critical = (1/N) · Σ_{i=1..N} ( T_critical(i) / T_bottleneck(i) )   (3)

where X_critical is the demand proportion of the transaction demand capacity of the key node group in any service scenario, T_critical is the transaction throughput of the key node group per unit time, and T_bottleneck is the transaction throughput of the bottleneck node group per unit time.
Further, in the above service scenario, the transaction demand capacity of key node group A is its demand proportion multiplied by the transaction demand capacity (R_C) of bottleneck node group C, as in formula (4); correspondingly, the transaction demand capacity of a key node group in any other service scenario can be obtained by formula (5).

R_A = X_A × R_C   (4)

R_critical = X_critical × R_bottleneck   (5)

where R_critical is the transaction demand capacity of the key node group in any service scenario, X_critical is the demand proportion of its transaction demand capacity, R_bottleneck is the transaction demand capacity (i.e., the maximum obtainable demand capacity) of the bottleneck node group, and R_C and R_bottleneck can be obtained by formula (1).
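To make formulas (1) through (5) concrete, here is a minimal Python sketch of the calculation for a single service scenario; the node-group figures and names are hypothetical and only illustrate the arithmetic, assuming H is the background bearable transaction amount from pressure measurement, M the per-server TPS, and N the occupied capacity.

def demand_capacity(h, m, n):
    """Formula (1): maximum transaction performance H/M minus the capacity
    already occupied N gives the transaction demand capacity of a node group."""
    return h / m - n

def demand_proportion(samples):
    """Formulas (2)/(3): mean, over N historical samples, of the ratio between a
    key node group's transaction amount and the bottleneck node group's
    transaction amount in the same unit time."""
    return sum(t_key / t_bottleneck for t_key, t_bottleneck in samples) / len(samples)

# Bottleneck node group C in one service scenario (hypothetical figures).
r_c = demand_capacity(h=120_000, m=400, n=100)   # formula (1)

# Key node group A: historical (T_A, T_C) pairs per unit time.
x_a = demand_proportion([(900, 600), (1_100, 550), (1_000, 500)])
r_a = x_a * r_c                                  # formulas (4)/(5)
print(round(x_a, 2), round(r_a, 1))              # 1.83 366.7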
In actual work, the same node group may exist on different service scenario operation links, so when calculating the transaction demand capacity of a single node group, the transaction demand capacity corresponding to the node group under different scenarios needs to be considered, and optionally, the calculating the demand capacity of each target service according to the transaction demand capacities of the bottleneck node group and the key node group includes:
aiming at any node group A in the target service;
when the node group A is marked as a bottleneck node group at least once among all target services, taking the difference between the maximum transaction performance of the node group A in the target area and its current usage as the transaction demand capacity of the node group A; when the node group A is marked as a key node group in all target services, determining the maximum of the transaction demand capacities of the node group A in all target services, comparing the maximum with the difference between the maximum transaction performance of the node group A in the target area and its current usage, and taking the smaller value as the transaction demand capacity of the node group A;
and taking the sum of the transaction demand capacities of the node groups on the target service as the demand capacity of the target service.
It can be understood that the marking information may indicate a key node group or a bottleneck node group, and a single node group may lie on the operation links of different target services. Therefore, only one transaction demand capacity needs to be determined for it, which simplifies calculating the required capacity of each target service; the determined transaction demand capacity can fully carry the demands of all target services while not exceeding the node group's own maximum transaction performance, thereby avoiding a situation where larger transaction demands among them cannot be met.
Illustratively, the single transaction demand capacity determined for any node group A across all service scenarios may be its actual demand capacity Q_A. If the marking information of node group A contains a bottleneck node group, its actual demand capacity is the capacity by which it can be expanded in the target area (obtained by formula (1)), as in formula (6); when all of the marking information of node group A indicates a key node group, it can be obtained by formula (7):

When A is marked as a bottleneck node group at least once: Q_A = R_A_bottleneck   (6)

When A is marked as a key node group everywhere: Q_A = min( max(R_key1, R_key2, ..., R_keyN), R_A_max )   (7)

where R_keyN is the transaction demand capacity calculated for node group A under the N-th target service, and R_A_max is the difference between the maximum transaction performance of node group A in the target area and its current usage (i.e., the maximum obtainable transaction demand capacity).
Therefore, through the above steps, the unique transaction demand capacity (actual transaction demand capacity) corresponding to the node group in all the target services can be obtained, and then, for each target service, the actual transaction demand capacity corresponding to the node group on each target service is added to obtain the demand capacity of the target service.
The calculation formula of the required capacity can be as follows:
Y_T = Σ_{j} ( Q_bottleneck_j + Σ_{i} Q_key_ij )   (8)

where Y_T is the required capacity of all target services in the disaster area, Q_key_ij is the actual demand capacity of the i-th key node group under the j-th target service, and Q_bottleneck_j is the actual demand capacity of the bottleneck node group under the j-th target service.
For example, if three target services are determined in the disaster area, the required capacity of each target service may be:

Required capacity of target service 1: Y_1 = Q_A + Q_B + Q_C + Q_D + Q_E
Required capacity of target service 2: Y_2 = Q_B + Q_O + Q_C + Q_W
Required capacity of target service 3: Y_3 = Q_B + Q_O + Q_H + Q_W

The total required capacity of the disaster area is then: Y_T = Y_1 + Y_2 + Y_3.
It should be noted that, in the process of obtaining the required capacity of all target services, a single node group may lie on the operation links of multiple target services, so its actual transaction demand capacity may be superimposed several times. To further reduce the required capacity of the disaster area, the distinct node groups involved in all target services may instead be determined, and the actual transaction demand capacities of all these node groups added up to obtain the required capacity of all target services. This avoids superimposing the same node group repeatedly when calculating the final required capacity and, on the basis of ensuring that the target services in the target applications can run normally after takeover, reduces the capacity that needs to be taken over and hence the difficulty of the takeover.
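The following minimal Python sketch illustrates formulas (6) to (8): collapsing each node group's per-service values into one actual demand capacity Q and summing per target service. The input structure, group names, and figures are assumptions made for illustration only.

def actual_demand_capacity(labels, per_service_r, r_max):
    """Formulas (6)/(7): if a node group is labelled as a bottleneck group in
    at least one target service, its actual demand capacity is its expandable
    capacity in the target area; otherwise take the largest per-service demand
    capacity, capped by the maximum it can expand to."""
    if "bottleneck" in labels:
        return r_max                       # formula (6)
    return min(max(per_service_r), r_max)  # formula (7)

# Hypothetical per-node-group data: labels across target services, the R values
# computed per service, and R_max (max transaction performance minus current usage).
node_groups = {
    "A": ({"bottleneck"}, [200.0],        200.0),
    "B": ({"key"},        [150.0, 180.0], 220.0),
    "C": ({"key"},        [90.0],         100.0),
}
q = {name: actual_demand_capacity(*data) for name, data in node_groups.items()}

# Formula (8): required capacity of one target service is the sum of the actual
# demand capacities of the node groups on its transaction link.
service_links = {"service_1": ["A", "B", "C"], "service_2": ["B"]}
y = {svc: sum(q[g] for g in link) for svc, link in service_links.items()}
y_total = sum(y.values())   # may count shared groups such as B more than once
print(q, y, y_total)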
In the embodiment of the present specification, the remaining capacity in the target area may be obtained by the following formula:
P_B = T_B − M_B   (9)

where P_B is the remaining capacity in the target area, T_B is the total amount of resources in the target area, and M_B is the amount of occupied resources in the target area.
The required capacity of the disaster area and the remaining capacity of the target area are obtained through the above steps, and according to the relationship between the required capacity and the remaining capacity, the fast and accurate service switching is realized according to a preset switching policy, as shown in fig. 6, optionally, the method may include the following steps:
s401: judging whether the residual capacity is smaller than the required capacity;
s402: if the residual capacity is not less than the required capacity, controlling the target area to take over all target services in all target applications in the disaster area;
s403: if the residual capacity is smaller than the required capacity, carrying out capacity reduction on the existing application in the target area to obtain the capacity after capacity reduction, and controlling the target area to take over all target services in all target applications in the disaster area until the capacity after capacity reduction is not smaller than the required capacity.
It can be understood that, when the remaining capacity is not less than (i.e. greater than or equal to) the required capacity, it indicates that the target area can take over all the target services in the disaster area, so that the target area can be controlled to take over the disaster area, and the capability of responding to risks and reacting quickly is improved.
When the remaining capacity is smaller than the required capacity, in order to ensure that all target services in the disaster area are received, capacity reduction processing may be performed on the target area, where the capacity reduction processing may be to reduce currently occupied (existing application) resources of the target area, so as to obtain more available capacity, and optionally, as shown in fig. 7, the capacity reduction processing performed on the target area to obtain the capacity after capacity reduction includes:
s501: determining a capacity reduction object of the target area, wherein the capacity reduction object is a non-target application and/or a non-target service in the target area;
s502: acquiring a capacity reduction coefficient of the capacity reduction object, wherein the capacity reduction coefficient is determined according to a preset grade of the capacity reduction object;
s503: carrying out capacity reduction processing on the capacity reduction object according to the capacity reduction coefficient to obtain capacity reduction;
s504: and calculating to obtain the residual capacity after the capacity reduction according to the residual capacity and the capacity reduction.
The capacity reduction coefficients can be set in advance for different applications. To ensure that other key applications run normally when resources are scarce, non-key applications (and/or non-key services) can be scaled down to a certain degree according to their capacity reduction coefficients, with different applications or services corresponding to different coefficients. Specifically, preset ranks of the different applications or services, representing their importance, can be configured in advance; then, once the target applications or target services are determined, the lower-ranked non-target applications or services can be scaled down, thereby increasing the available capacity of the target area.
The capacity reduction object is scaled down according to its capacity reduction coefficient to obtain the reduced capacity, which represents the extra remaining capacity gained in the target area; the remaining capacity after capacity reduction is obtained by combining it with the initially calculated remaining capacity.
It should be noted that after the capacity reduction processing, the capacity reduction object should still run normally, so that the target area's own services are not disrupted. The capacity reduction coefficient is therefore set according to the characteristics of each capacity reduction object: for example, at initial configuration time an application architect or developer may configure a certain capacity reduction coefficient (for example, 50%) in advance and store the mapping between each application and its coefficient, so that once the capacity reduction objects (i.e., the non-target applications) are determined, the corresponding coefficient can be looked up for each application and the capacity reduction performed to release capacity.
Exemplarily, the non-target applications in the target area are obtained, together with the occupied capacity and the corresponding capacity reduction coefficient of each; the reduced capacity of each non-target application is calculated from its occupied capacity and coefficient, and summing these gives the reduced capacity of all non-target applications, which can be obtained by the following formula:

U_T = Σ_{k} I_k × O_k   (10)

where U_T is the total reduced capacity of all non-target applications, I_k is the occupied capacity of the k-th non-target application, and O_k is the capacity reduction coefficient of the k-th non-target application.

Correspondingly, the remaining capacity after capacity reduction is P_B + U_T.
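A minimal Python sketch of this capacity reduction step (formula (10)); the application names, occupied capacities, and reduction coefficients are hypothetical.

def reduced_capacity(non_target_apps):
    """Formula (10): total capacity released by scaling down non-target
    applications, each by its preset capacity reduction coefficient."""
    return sum(occupied * coeff for occupied, coeff in non_target_apps.values())

# Hypothetical non-target applications in the target area:
# name -> (occupied capacity I_k, capacity reduction coefficient O_k).
non_target_apps = {
    "reporting": (400, 0.5),
    "batch_jobs": (600, 0.3),
}
u_t = reduced_capacity(non_target_apps)
p_b = 1_000                      # remaining capacity before reduction, formula (9)
print(u_t, p_b + u_t)            # remaining capacity after reduction: P_B + U_T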
In the embodiments of the present specification, once the non-target applications have released, through their capacity reduction coefficients, all of the capacity they can (the theoretical maximum) in the target area: if the remaining capacity after capacity reduction is not less than the required capacity of the disaster area, the target area may be controlled to take over all target services of the disaster area; but when the remaining capacity after capacity reduction is still less than the required capacity of the disaster area, the target area still cannot take over everything, as shown in fig. 8, and the following steps are therefore further required:
s601: if the residual capacity after capacity reduction is not less than the required capacity, controlling the target area to take over all target services in all target applications in the disaster area;
s602: and if the residual capacity after capacity reduction is smaller than the required capacity, performing degradation processing on all target services in the disaster area to obtain the required capacity after degradation, so that the residual capacity after capacity reduction is not smaller than the required capacity after degradation, and the target area is controlled to take over all target services in all target applications in the disaster area.
It can be understood that, since the target area cannot accept the entire transaction volume of the disaster area's target services and cannot release any more resources of its own, the percentage of the transaction volume corresponding to the resources it can provide is taken over instead. The degradation processing can be understood as performing a "capacity reduction" on the target services of the disaster area, thereby further reducing the capacity that needs to be taken over. For example, if the calculated required capacity of the disaster area is 2000 containers but the remaining capacity of the target area can only reach 1000, the services need to be degraded by half.
Therefore, the degrading all the target services in the disaster area to obtain the degraded required capacity includes:
calculating degradation coefficients of all the target services according to the residual capacity after capacity reduction and the required capacity;
and performing degradation processing on the required capacity of all the target services according to the degradation coefficient to obtain the degraded required capacity.
The capacity reduction processing can be realized on all target services in the disaster area by setting the degradation coefficient, so that all the target services can be ensured to be finally taken over, and the flexibility and the convenience of a disaster switching strategy are improved.
In some other embodiments, different degradation coefficients may be set for target services according to a certain rank order, which further ensures that more important services reserve more resources and thus protects more of the disaster area's core rights and interests; the setting of the different degradation coefficients is not limited in this specification, as long as the degraded target services can be received in full by the target area.
Illustratively, the degradation coefficient may be obtained by the following formula:
V_B = (P_B + U_T) / Y_T   (11)

where V_B is the degradation coefficient and Y_T is the required capacity of the disaster area.

After the degradation coefficient is obtained, the final degraded required capacity may be Y_T × V_B, which ensures that all target services in the disaster area can be successfully taken over by the target area.
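As a small illustration of formula (11), a Python sketch with hypothetical figures:

def degradation_coefficient(p_b, u_t, y_t):
    """Formula (11): fraction of the disaster area's required capacity that the
    target area can actually host after its own capacity reduction."""
    return (p_b + u_t) / y_t

y_t = 2_000                                   # required capacity of the disaster area
v_b = degradation_coefficient(p_b=800, u_t=200, y_t=y_t)
print(v_b, y_t * v_b)                          # 0.5, degraded required capacity 1000.0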
In an embodiment of the present specification, there is further provided a method for managing a service switching policy in a disaster area, which involves park a (the disaster area) and park b (the target area). As shown in fig. 9, the method may include the following steps:

S701: when a disaster accident happens in park a, trigger a park-level switching instruction and determine that park b takes over park a;

S702: calculate the required capacity Y_T of all target services of park a and the remaining capacity P_B of park b;

S703: judge whether Y_T is less than P_B;

S704: if Y_T is less than P_B, adjust the expansion capacity Z_T of park b to be equal to Y_T, and control park b to take over all target services of park a;

S705: if Y_T is not less than P_B, perform capacity reduction processing on park b, and calculate the reduced capacity U_T of park b and the remaining capacity after reduction P_B + U_T;

S706: judge whether Y_T is less than P_B + U_T; if Y_T is less than P_B + U_T, go to step S704;

S707: if Y_T is not less than P_B + U_T, perform degradation processing on park a and adjust the expansion capacity Z_T of park b to be equal to P_B + U_T, so as to control park b to take over all target services of park a, where the degraded required capacity Y_T × V_B of park a is equal to the remaining capacity P_B + U_T of park b after capacity reduction.
In the embodiments of the present specification, when park a and park b have a parallel sharing relationship, different switching policies are selected so that park b takes over smoothly when a disaster failure occurs in park a. Specifically, by calculating the comparison between the required capacity of the key services of park a and the remaining capacity of park b, and combining it with the corresponding capacity reduction and degradation rules, quick decision and switching are achieved when a park-level disaster occurs. Under conditions of rapid iteration and complicated service scenario logic, the flexible parameter configuration effectively improves the ability to cope with risks, adjust automatically, and respond quickly; meanwhile, the service impact can be evaluated effectively from the capacity reduction coefficients and the degradation coefficient, so the efficiency and level of operation-and-maintenance automation can be improved remarkably.
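The branching logic of steps S701 to S707 can be summarised in the following minimal Python sketch; it is an illustration of the decision flow only, with all capacity values assumed to be computed beforehand by the formulas above, and the function and field names are not defined by the patent.

def plan_takeover(y_t, p_b, u_t, v_b=None):
    """Branching of steps S703-S707: decide how park b takes over park a.
    y_t: required capacity of park a's target services;
    p_b: remaining capacity of park b;
    u_t: capacity park b can release by reducing non-target applications;
    v_b: degradation coefficient, only needed on the degradation branch."""
    if y_t < p_b:                           # S703 / S704
        return {"action": "take_over", "expand_to": y_t}
    if y_t < p_b + u_t:                     # S705 / S706 -> back to S704
        return {"action": "reduce_then_take_over", "expand_to": y_t}
    # S707: degrade park a's target services so the demand fits P_B + U_T.
    v_b = v_b if v_b is not None else (p_b + u_t) / y_t
    return {"action": "degrade_then_take_over",
            "expand_to": p_b + u_t,
            "degraded_demand": y_t * v_b}

print(plan_takeover(y_t=2_000, p_b=800, u_t=200))
# {'action': 'degrade_then_take_over', 'expand_to': 1000, 'degraded_demand': 1000.0}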
On the basis of the method provided above, as shown in fig. 10, an embodiment of the present specification provides a device for managing a service switching policy in a disaster area, where the device includes:
a switching instruction response module 100, configured to respond to a service switching instruction to determine a disaster area and a target area, where the disaster area and the target area share at least one service of at least one application;
a target service determination module 200, configured to determine at least one target service of at least one target application shared by the disaster area and the target area;
a calculating module 300, configured to obtain all target service demand capacities in the disaster area and remaining capacities in the target area;
and a policy executing module 400, configured to perform service switching according to a preset switching policy according to the required capacity and the remaining capacity, so that the target area takes over all target services of the disaster area.
The beneficial effects that can be obtained by the device are consistent with the beneficial effects obtained by the method, and the description is omitted.
As shown in fig. 11, for a computer device provided for embodiments herein, the computer device 1102 may include one or more processors 1104, such as one or more Central Processing Units (CPUs), each of which may implement one or more hardware threads. The computer device 1102 may also include any memory 1106 for storing any kind of information, such as code, settings, data, etc. For example, and without limitation, memory 1106 may include any one or more of the following in combination: any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, etc. More generally, any memory may use any technology to store information. Further, any memory may provide volatile or non-volatile retention of information. Further, any memory may represent fixed or removable components of computer device 1102. In one case, when the processor 1104 executes the associated instructions, which are stored in any memory or combination of memories, the computer device 1102 can perform any of the operations of the associated instructions. The computer device 1102 also includes one or more drive mechanisms 1108, such as a hard disk drive mechanism, an optical disk drive mechanism, etc., for interacting with any memory.
Computer device 1102 may also include an input/output module 1110 (I/O) for receiving various inputs (via input device 1112) and for providing various outputs (via output device 1114). One particular output mechanism may include a presentation device 1116 and an associated Graphical User Interface (GUI) 1118. In other embodiments, the input/output module 1110 (I/O), the input device 1112 and the output device 1114 may be omitted, in which case the computer device acts only as one device in a network. Computer device 1102 can also include one or more network interfaces 1120 for exchanging data with other devices via one or more communication links 1122. One or more communication buses 1124 couple the above-described components together.
Communication link 1122 may be implemented in any manner, e.g., via a local area network, a wide area network (e.g., the Internet), a point-to-point connection, etc., or any combination thereof. Communications link 1122 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, etc., governed by any protocol or combination of protocols.
Corresponding to the methods in fig. 3-9, the embodiments herein also provide a computer-readable storage medium having stored thereon a computer program, which, when executed by a processor, performs the steps of the above-described method.
Embodiments herein also provide computer-readable instructions which, when executed by a processor, cause the processor to perform the method shown in fig. 3-9.
It should be understood that, in various embodiments herein, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments herein.
It should also be understood that, in the embodiments herein, the term "and/or" is only an association relation describing associated objects, and indicates that three kinds of relations may exist. For example, A and/or B may represent: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two. To illustrate the interchangeability of hardware and software clearly, the components and steps of the examples have been described above in general terms of their functions. Whether such functions are implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of this disclosure.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided herein, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purposes of the embodiments herein.
In addition, functional units in the embodiments herein may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present invention may be implemented in a form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The principles and embodiments of this document are explained herein using specific examples, which are presented only to aid in understanding the method and its core concepts. Meanwhile, a person of ordinary skill in the art may, based on the ideas of this document, make changes to the specific implementation and the scope of application; in summary, the contents of this description should not be construed as limiting this document.

Claims (11)

1. A method for managing a service switching policy in a disaster area, the method comprising:
responding to a service switching instruction to determine a disaster area and a target area, wherein the disaster area and the target area share at least one service of at least one application;
determining at least one target business of at least one target application shared by the disaster area and the target area;
acquiring all target service demand capacity in the disaster area and residual capacity in the target area;
and performing service switching according to a preset switching strategy based on the required capacity and the residual capacity, so that the target area takes over all target services of the disaster area.
2. The method of claim 1, wherein the target traffic is determined by:
acquiring a grade sequence of services in the common application of the disaster area and the target area;
and determining at least one target service of at least one target application shared by the disaster area and the target area according to a preset grade threshold value.
3. The method of claim 1, wherein the obtaining of the total target business demand capacity in the disaster area comprises:
for each target service, performing pressure measurement processing on each preset node group on the target service to obtain the maximum transaction performance of all the preset node groups, marking the node group with the lowest maximum transaction performance in all the preset node groups as a bottleneck node group, and marking the rest node groups as key node groups;
acquiring historical transaction processing capacity of the key node group and historical transaction processing capacity of the bottleneck node group, and calculating the demand proportion of the key node group relative to the demand capacity of the bottleneck node group according to the historical transaction processing capacity;
calculating the transaction demand capacity of each key node group according to the transaction demand capacity of the bottleneck node group and the demand proportion, wherein the transaction demand capacity of the bottleneck node group is determined by the maximum transaction performance and the current usage of the bottleneck node group in a target area;
and calculating the required capacity of each target service according to the transaction required capacities of the bottleneck node group and the key node group, so as to obtain the required capacity of all target services in the disaster area.
4. The method of claim 3, wherein the calculating the required capacity of each target service according to the transaction required capacities of the bottleneck node group and the key node group comprises:
aiming at any node group A in the target service;
when the node group A is marked as a bottleneck node group at least once among all the target services, taking the difference between the maximum transaction performance of the node group A in the target area and its current usage as the transaction demand capacity of the node group A; when the node group A is marked as a key node group in all the target services, determining the maximum value of the transaction demand capacities of the node group A across all the target services, comparing that maximum value with the difference between the maximum transaction performance of the node group A in the target area and its current usage, and taking the smaller of the two as the transaction demand capacity of the node group A;
and taking the sum of the transaction demand capacities of the node groups on the target service as the demand capacity of the target service.
5. The method according to claim 1, wherein the performing service switching according to a preset switching strategy based on the required capacity and the residual capacity comprises:
judging whether the residual capacity is smaller than the required capacity;
if the residual capacity is not less than the required capacity, controlling the target area to take over all target services in all target applications in the disaster area;
if the residual capacity is smaller than the required capacity, carrying out capacity reduction on the existing application in the target area to obtain a capacity after capacity reduction, and controlling the target area to take over all target services in all target applications in the disaster area once the capacity after capacity reduction is not smaller than the required capacity.
6. The method according to claim 5, wherein the reducing the target area to obtain the reduced remaining capacity comprises:
determining a capacity reduction object of the target area, wherein the capacity reduction object is a non-target application and/or a non-target service in the target area;
acquiring a capacity reduction coefficient of the capacity reduction object, wherein the capacity reduction coefficient is determined according to a preset grade of the capacity reduction object;
carrying out capacity reduction processing on the capacity reduction object according to the capacity reduction coefficient to obtain capacity reduction;
and calculating to obtain the residual capacity after the capacity reduction according to the residual capacity and the capacity reduction.
7. The method according to claim 5, wherein the step of performing capacity reduction on the target area to obtain a capacity after capacity reduction if the residual capacity is smaller than the required capacity, and controlling the target area to take over all target services in all target applications in the disaster area once the capacity after capacity reduction is not smaller than the required capacity, comprises:
if the residual capacity after capacity reduction is not less than the required capacity, controlling the target area to take over all target services in all target applications in the disaster area;
and if the residual capacity after capacity reduction is smaller than the required capacity, performing degradation processing on all target services in the disaster area to obtain the required capacity after degradation, so that the residual capacity after capacity reduction is not smaller than the required capacity after degradation, and the target area is controlled to take over all target services in all target applications in the disaster area.
8. The method of claim 7, wherein the downgrading all target traffic in the disaster area to obtain a downgraded demanded capacity comprises:
calculating degradation coefficients of all the target services according to the residual capacity after capacity reduction and the required capacity;
and performing degradation processing on the required capacity of all the target services according to the degradation coefficient to obtain the degraded required capacity.
9. An apparatus for managing a traffic switching policy in a disaster area, the apparatus comprising:
a switching instruction response module, configured to respond to a service switching instruction to determine a disaster area and a target area, wherein the disaster area and the target area share at least one service of at least one application;
a target service determination module, configured to determine at least one target service of at least one target application shared by the disaster area and the target area;
the calculation module is used for acquiring all target service demand capacity in the disaster area and the residual capacity in the target area;
and the strategy execution module is used for switching services according to the required capacity and the residual capacity and a preset switching strategy so that the target area takes over all target services of the disaster area.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method steps of any of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when being executed by a processor, carries out the method steps of any one of claims 1 to 8.
CN202110409594.1A 2021-04-16 2021-04-16 Method, device and equipment for managing service switching strategy in disaster area Active CN113094199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110409594.1A CN113094199B (en) 2021-04-16 2021-04-16 Method, device and equipment for managing service switching strategy in disaster area


Publications (2)

Publication Number Publication Date
CN113094199A true CN113094199A (en) 2021-07-09
CN113094199B CN113094199B (en) 2024-07-09

Family

ID=76678109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110409594.1A Active CN113094199B (en) 2021-04-16 2021-04-16 Method, device and equipment for managing service switching strategy in disaster area

Country Status (1)

Country Link
CN (1) CN113094199B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103888277A (en) * 2012-12-19 2014-06-25 中国移动通信集团公司 Gateway disaster recovery backup method, apparatus and system
US20180365116A1 (en) * 2017-06-19 2018-12-20 International Business Machines Corporation Scaling out a hybrid cloud storage service
CN111209178A (en) * 2020-01-13 2020-05-29 中信银行股份有限公司 Full link bottleneck testing method and system


Also Published As

Publication number Publication date
CN113094199B (en) 2024-07-09

Similar Documents

Publication Publication Date Title
EP3627767B1 (en) Fault processing method and device for nodes in cluster
EP2493118B1 (en) Information processing system
JP4304535B2 (en) Information processing apparatus, program, modular system operation management system, and component selection method
CN108965014B (en) QoS-aware service chain backup method and system
CN105337780B (en) A kind of server node configuration method and physical node
JP6664812B2 (en) Automatic virtual resource selection system and method
EP3793206B1 (en) Physical optical network virtualization mapping method and apparatus, and controller and storage medium
EP3117315A1 (en) Management of resource allocation in a mobile telecommunication network
CN113032185B (en) Backup task management method, device, equipment and storage medium
Di Mauro et al. Service function chaining deployed in an NFV environment: An availability modeling
CN108319618B (en) Data distribution control method, system and device of distributed storage system
WO2021136335A1 (en) Method for controlling edge node, node, and edge computing system
CN113114491B (en) Method, device and equipment for constructing network topology
EP3152659B1 (en) Scheduling access to resources for efficient utilisation of network capacity and infrastructure
KR20150124642A (en) Communication failure recover method of parallel-connecte server system
CN113079427B (en) ASON network service availability evaluation method based on network evolution model
CN114244713A (en) Resource backup method and device for power 5G network slice
CN113094199A (en) Service switching strategy management method, device and equipment in disaster area
US20050022048A1 (en) Fault tolerance in networks
CN112269693B (en) Node self-coordination method, device and computer readable storage medium
US10382301B2 (en) Efficiently calculating per service impact of ethernet ring status changes
CN111371600B (en) Method and device for determining expansion rationality
CN106161068B (en) Recovery prompting and distributing method for network resources and controller
CN112231142B (en) System backup recovery method, device, computer equipment and storage medium
CN113515524A (en) Automatic dynamic allocation method and device for distributed cache access layer nodes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant