CN112398945A - Service processing method and device based on backpressure - Google Patents

Service processing method and device based on backpressure

Info

Publication number
CN112398945A
Authority
CN
China
Prior art keywords
service
identifier
service identifier
capacity
flow control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011276950.9A
Other languages
Chinese (zh)
Other versions
CN112398945B (en)
Inventor
吴冕冠
周文泽
陆新龙
王磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202011276950.9A priority Critical patent/CN112398945B/en
Publication of CN112398945A publication Critical patent/CN112398945A/en
Application granted granted Critical
Publication of CN112398945B publication Critical patent/CN112398945B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/20 Traffic policing
    • H04L47/29 Flow control; Congestion control using a combination of thresholds
    • H04L47/30 Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5011 Allocation of resources, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources, the resource being the memory
    • G06F9/5027 Allocation of resources, the resource being a machine, e.g. CPUs, Servers, Terminals

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a backpressure-based service processing method and device, which can be used in the financial field or in other technical fields. The method comprises the following steps: receiving service requests, wherein each service request comprises a service identifier; if the current flow control policy is normal, caching each service request in the cache corresponding to its service identifier and counting the number of service requests in the cache corresponding to each service identifier; and if the number of cached service requests corresponding to any service identifier is determined to be greater than the corresponding expansion threshold, updating the flow control policy to limiting and expanding the service. The device is configured to execute the method. The backpressure-based service processing method and device provided by the embodiments of the invention improve the reliability of the service.

Description

Service processing method and device based on backpressure
Technical Field
The invention relates to the field of computer technology, and in particular to a backpressure-based service processing method and device.
Background
In the era of internet finance, financial products and service models are undergoing a revolution: the number of services keeps growing, and service models iterate frequently.
Faced with this new landscape, the traditional monolithic IT architecture increasingly exposes its inefficiency, which has driven the development of distributed technology. Transforming a monolithic application into distributed services relieves the pressure that rapid business growth places on the system. However, the distributed service model complicates the call relationships between services, and different services can carry different performance capacities: when the request volume for one service surges, if any service in the call chain cannot bear the load and fails, the entire transaction fails. To prevent transaction failures caused by a sudden surge in concurrent service requests, in the prior art each service ensures its own high availability, for example by setting usage thresholds for the CPU and memory and triggering a capacity expansion strategy when their utilization rises above a certain value.
Disclosure of Invention
To solve the problems in the prior art, embodiments of the present invention provide a service processing method and apparatus based on backpressure, which can at least partially solve the problems in the prior art.
In one aspect, the present invention provides a service processing method based on backpressure, including:
receiving service requests, wherein each service request comprises a service identifier;
if the current flow control policy is normal, caching each service request in the cache corresponding to its service identifier and counting the number of service requests in the cache corresponding to each service identifier;
and if the number of cached service requests corresponding to any service identifier is determined to be greater than the corresponding expansion threshold, updating the flow control policy to limiting and expanding the service.
In another aspect, the present invention provides a service processing apparatus based on backpressure, including:
a receiving unit, configured to receive service requests, where each service request comprises a service identifier;
a first statistical unit, configured, after learning that the current flow control policy is normal, to cache each service request in the cache corresponding to its service identifier and count the number of service requests in the cache corresponding to each service identifier;
and a first updating unit, configured, after determining that the number of cached service requests corresponding to any service identifier is greater than the corresponding expansion threshold, to update the flow control policy to limiting and expand the service.
In another aspect, the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the backpressure-based service processing method of any of the above embodiments.
In yet another aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the backpressure-based service processing method of any of the above embodiments.
The backpressure-based service processing method and device provided by the embodiments of the invention receive service requests; when the current flow control policy is normal, cache each service request in the cache corresponding to its service identifier and count the number of service requests in each such cache; and, upon determining that the number of cached service requests corresponding to any service identifier is greater than the corresponding expansion threshold, update the flow control policy to limiting and expand the service. This ensures that the distributed service system is not dragged down by a sudden surge in requests for one service or by the failure of one service, improving the reliability of the service.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort. In the drawings:
fig. 1 is a schematic structural diagram of a distributed service system according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating a backpressure-based service processing method according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating a backpressure-based service processing method according to another embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a backpressure-based service processing apparatus according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a backpressure-based service processing apparatus according to another embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a backpressure-based service processing apparatus according to yet another embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a backpressure-based service processing apparatus according to still another embodiment of the present invention.
Fig. 8 is a schematic structural diagram of a backpressure-based service processing apparatus according to still another embodiment of the present invention.
Fig. 9 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
To facilitate understanding of the technical solutions provided in the present application, the relevant background is first described. To prevent transaction failures caused by services that cannot bear a sudden surge in concurrent service requests, the embodiments of the invention place a flow control server in the distributed service system; all service requests first pass through the flow control server and are then distributed to the service servers of the distributed service system for processing.
Fig. 1 is a schematic structural diagram of a distributed service system according to an embodiment of the present invention. As shown in Fig. 1, the distributed service system includes a flow control server 1, a plurality of service servers 2, and a capacity adjustment server 3, where:
the flow control server 1 is connected to each of the service servers 2 in communication, and the flow control server 1 is connected to the capacity adjustment server 3 in communication. The traffic control server 1 is configured to execute the service processing method based on backpressure provided in the embodiment of the present invention, and the service server 2 is configured to receive a service processing request sent by the traffic control server 1 and process the service processing request. The capacity adjustment server 3 is configured to perform service capacity expansion according to the service capacity expansion request sent by the traffic control server 1, or perform service capacity reduction according to the service capacity reduction request sent by the traffic control server 1. The specific process of the capacity adjustment server 3 expanding the capacity of the service and reducing the capacity of the service is the prior art, and the embodiment of the present invention is not described in detail.
Fig. 2 is a schematic flow chart of a backpressure-based service processing method according to an embodiment of the present invention. As shown in Fig. 2, the method includes:
s201, receiving each service request, wherein the service request comprises a service identifier;
specifically, each user may send a service request to a traffic control server through a client, and the traffic control server may receive each service request, where the service request includes a service identifier. The service identification corresponds to the service one by one.
S202, if the current flow control policy is normal, caching each service request in the cache corresponding to its service identifier and counting the number of service requests in the cache corresponding to each service identifier;
specifically, the traffic control server may obtain the current traffic control policy after receiving the service request, and if the current traffic control policy is normal, it indicates that each service request may be processed normally, the traffic control server may cache each service request and store the cached service request in the cache corresponding to the service identifier included in each service request, and the service request stored in the cache corresponding to the service identifier may be sent to the service server for processing according to a first-in first-out principle. The traffic control server may count the number of the service requests in the cache corresponding to each service identifier, that is, count the number of the service requests stored in the cache corresponding to each service identifier. And the service identification corresponds to the cache one by one. The individual business servers are deployed through a distributed service model.
S203, if the number of service requests in the cache corresponding to any service identifier is determined to be greater than the corresponding expansion threshold, updating the flow control policy to limiting and expanding the service.
Specifically, after obtaining the number of service requests in the cache corresponding to each service identifier, the flow control server compares each count with the expansion threshold corresponding to that service identifier. If the count for some service identifier is greater than its expansion threshold, the service corresponding to that identifier cannot handle more requests and must be expanded: the flow control server updates the flow control policy to limiting and sends a service expansion request to the capacity adjustment server, which expands the service. The specific capacity by which the service is expanded may be preset.
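The threshold check in S203 can be sketched as a pure function over the per-identifier counts; the function and variable names are illustrative:

```python
def check_expansion(counts, thresholds):
    """Sketch of S203: given cached-request counts and expansion thresholds
    keyed by service identifier, return the new flow control policy and the
    list of services that need expansion."""
    to_expand = [sid for sid, n in counts.items() if n > thresholds[sid]]
    policy = "limiting" if to_expand else "normal"
    return policy, to_expand

counts = {"A": 120, "B": 30}       # cached requests per service identifier
thresholds = {"A": 80, "B": 150}   # expansion threshold per identifier
policy, expand = check_expansion(counts, thresholds)
print(policy, expand)  # limiting ['A']
```

Note that a single over-threshold identifier flips the policy to limiting for the whole system, which is what lets the flow control server exert backpressure on all incoming traffic while the bottleneck service is expanded.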
The backpressure-based service processing method provided by the embodiments of the invention receives service requests; when the current flow control policy is normal, caches each service request in the cache corresponding to its service identifier and counts the number of requests in each cache; and, upon determining that the count for any service identifier exceeds the corresponding expansion threshold, updates the flow control policy to limiting and expands the service. This ensures that the distributed service system is not dragged down by a sudden surge in requests for one service or by the failure of one service, improving the reliability of the service.
On the basis of the foregoing embodiments, further, the service processing method based on backpressure provided in an embodiment of the present invention further includes:
and if the current flow control strategy is the current limit, rejecting each service request.
Specifically, the flow control server obtains the current flow control policy. If the policy is limiting, accepting further requests would risk the service requests corresponding to at least one service identifier exceeding the processing capacity of the current distributed service system, so the flow control server rejects each incoming service request. This ensures the distributed service system keeps running normally and does not crash under a sudden surge in request volume.
In addition, a service may itself have no performance problem while one of its downstream services fails. The service then waits for results from the downstream service while new service requests keep arriving. Because many threads are suspended waiting on the failed downstream service, CPU and memory utilization are not necessarily high, yet the thread pool is exhausted; in the prior art this can cause cascading failures or even an avalanche in the distributed service system.
Fig. 3 is a schematic flow chart of a backpressure-based service processing method according to another embodiment of the present invention. As shown in Fig. 3, on the basis of the foregoing embodiments, the method further includes:
s301, counting the number of the service requests in the cache corresponding to each service identifier again;
specifically, after the flow control policy is updated to be flow-limited and service expansion is performed, the service request is processed continuously and gradually reduced, and after the service expansion, the corresponding expansion threshold may also be increased. The traffic control server may re-count the number of service requests in the cache corresponding to each service identifier.
S302, if the re-counted number of service requests in the cache corresponding to every service identifier is determined to be smaller than the corresponding expansion threshold, updating the flow control policy to normal.
Specifically, the flow control server compares the re-counted number of service requests in each cache with the expansion threshold corresponding to that service identifier. If every count is below its threshold, the flow control server changes the flow control policy from limiting back to normal, so that new service requests can again be received and processed.
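Steps S301-S302 amount to the inverse of the expansion check: the policy returns to normal only once every identifier's backlog is below its (possibly raised) threshold. An illustrative sketch, with assumed names:

```python
def recheck_policy(counts, thresholds):
    """Sketch of S301-S302: after limiting and expansion, re-count the cached
    requests; restore the policy to 'normal' only if every service identifier
    is back below its expansion threshold."""
    if all(counts[sid] < thresholds[sid] for sid in counts):
        return "normal"
    return "limiting"

# Backlog drained and threshold for A raised after expansion:
print(recheck_policy({"A": 50, "B": 30}, {"A": 160, "B": 150}))   # normal
# One identifier still over threshold keeps the policy at limiting:
print(recheck_policy({"A": 200, "B": 30}, {"A": 160, "B": 150}))  # limiting
```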
On the basis of the foregoing embodiments, further, the service processing method based on backpressure provided in an embodiment of the present invention further includes:
and after the service expansion is completed, updating the expansion threshold value corresponding to the expanded service.
Specifically, after the capacity adjustment server has expanded the service, it feeds the expanded capacity back to the flow control server, which updates the expansion threshold corresponding to the expanded service accordingly. The expansion threshold may be updated as a fixed proportion of the service capacity, or kept at a fixed offset below the service capacity; it is set according to actual needs.
For example, service A is expanded from a capacity of 100 to 200. The expansion threshold corresponding to service A before expansion is 80; since the capacity of service A has doubled, its expansion threshold may be updated to 160.
For another example, service C is expanded from a capacity of 200 to 300. The expansion threshold corresponding to service C before expansion is 150, which is 50 below its capacity of 200; since the capacity of service C after expansion is 300, its expansion threshold may be updated to 300 - 50 = 250.
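The two worked examples above correspond to two threshold-update rules, sketched below with illustrative function names (integer arithmetic is assumed for simplicity):

```python
def threshold_proportional(old_threshold, old_capacity, new_capacity):
    """Keep the expansion threshold at the same proportion of capacity
    (the service-A example: 80 of 100 becomes 160 of 200)."""
    return old_threshold * new_capacity // old_capacity

def threshold_fixed_gap(old_threshold, old_capacity, new_capacity):
    """Keep the threshold a fixed offset below capacity
    (the service-C example: gap 200 - 150 = 50, so 300 - 50 = 250)."""
    return new_capacity - (old_capacity - old_threshold)

print(threshold_proportional(80, 100, 200))  # 160
print(threshold_fixed_gap(150, 200, 300))    # 250
```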
On the basis of the foregoing embodiments, further, the service processing method based on backpressure provided in an embodiment of the present invention further includes:
and if the number of the service requests in the cache corresponding to the service identifier in a preset time period is judged and obtained to meet a capacity reduction rule, carrying out capacity reduction on the service corresponding to the service identifier and updating a capacity expansion threshold corresponding to the service identifier.
Specifically, the flow control server may count the number of service requests in the cache corresponding to a service identifier within a preset time period. If that number satisfies a reduction rule, the flow control server sends a service reduction request to the capacity adjustment server to shrink the service corresponding to that identifier; after the reduction of the service is completed, the flow control server updates the expansion threshold corresponding to that identifier. The reduction rule is preset, and the preset time period is set according to practical experience; the embodiments of the invention do not limit it.
For example, under one reduction rule, the service corresponding to a service identifier is shrunk if the number of service requests in its cache stays below the corresponding reduction threshold throughout a preset time period. The flow control server may periodically sample the number of service requests in the cache during the preset time period, obtaining counts at several time points; it compares the count at each time point with the reduction threshold corresponding to the service identifier, and shrinks the service only if every sampled count is below that threshold. The reduction threshold is set according to actual needs; the embodiments of the invention do not limit it.
For example, if the reduction threshold is 20% of the capacity of the service corresponding to the service identifier and the reduction halves the service's capacity, the reduction threshold corresponding to the service identifier may also be halved after the reduction.
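The sampled reduction rule and the accompanying example can be sketched as follows; the names and the integer arithmetic are illustrative assumptions:

```python
def should_scale_down(samples, reduction_threshold):
    """Reduction rule: the cached-request count is sampled at several time
    points within the preset period; scale down only if every sample is
    below the reduction threshold."""
    return all(n < reduction_threshold for n in samples)

capacity = 100
reduction_threshold = capacity * 20 // 100   # 20% of capacity, per the example
samples = [5, 12, 8, 3]                      # periodic counts within the window

if should_scale_down(samples, reduction_threshold):
    capacity //= 2             # halve the service capacity, per the example
    reduction_threshold //= 2  # halve the threshold to track the new capacity

print(capacity, reduction_threshold)  # 50 10
```

Requiring every sample to be under the threshold, rather than just the latest one, guards against shrinking a service during a brief lull in traffic.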
The backpressure-based service processing method provided by the embodiments of the invention resolves the instability that a sudden surge in request volume causes in a distributed service system during distributed service calls, improves the system's overall control over traffic, and avoids fault cascades between services. Each service can set its expansion threshold according to its own performance capacity. Once the number of service requests in a service's cache exceeds its expansion threshold, the traffic entering the distributed service system is throttled while the overloaded service is urgently expanded; after the expansion completes, the throttling is lifted. This ensures that a bottleneck service is not crushed by a sudden surge in load, and that an upstream service does not suffer cascading failure through thread-pool exhaustion caused by threads suspended for long periods waiting on an unresponsive downstream service.
Fig. 4 is a schematic structural diagram of a backpressure-based service processing apparatus according to an embodiment of the present invention. As shown in Fig. 4, the apparatus includes a receiving unit 401, a first statistical unit 402, and a first updating unit 403, where:
the receiving unit 401 is configured to receive each service request, where the service request includes a service identifier; the first statistical unit 402 is configured to, after it is known that the current flow control policy is normal, cache the service request corresponding to each service identifier according to the service identifier included in each service request, and count the number of service requests in the cache corresponding to each service identifier; the first updating unit 403 is configured to update the flow control policy to be limited and perform service expansion after determining that the number of cached service requests corresponding to any service identifier is greater than the corresponding expansion threshold.
Specifically, each user may send a service request to the receiving unit 401 through a client, and the receiving unit 401 receives each service request, where the service request comprises a service identifier. Service identifiers correspond one-to-one to services.
After receiving the service requests, the first statistical unit 402 obtains the current flow control policy. If the policy is normal, each service request can be processed normally: the unit stores each request in the cache corresponding to the service identifier it carries, and the requests in each cache are sent to a service server for processing on a first-in, first-out basis. The first statistical unit 402 counts the number of service requests in the cache corresponding to each service identifier, that is, the number of requests currently stored in that cache. Service identifiers correspond one-to-one to caches, and the service servers are deployed in a distributed service model.
After obtaining the counts, the first updating unit 403 compares the number of service requests in each cache with the expansion threshold corresponding to that service identifier. If the count for some service identifier exceeds its expansion threshold, the corresponding service cannot handle more requests and must be expanded: the first updating unit 403 updates the flow control policy to limiting and sends a service expansion request to the capacity adjustment server, which expands the service. The specific capacity by which the service is expanded may be preset.
The backpressure-based service processing apparatus provided by the embodiment of the present invention can receive each service request; after learning that the current flow control policy is normal, it caches the request corresponding to each service identifier according to the identifier included in that request and counts the number of requests in each identifier's cache; and after determining that the number of cached requests corresponding to any service identifier is greater than the corresponding expansion threshold, it updates the flow control policy to flow limiting and performs service expansion. This ensures that the distributed service system is not dragged down by a surge in the request volume of one service or by the failure of one service, improving the reliability of the services.
Fig. 5 is a schematic structural diagram of a backpressure-based service processing apparatus according to another embodiment of the present invention. As shown in Fig. 5, on the basis of the foregoing embodiments, the backpressure-based service processing apparatus according to the embodiment of the present invention further includes:
the rejecting unit 404, configured to reject each service request after learning that the current flow control policy is flow limiting.
Specifically, the rejecting unit 404 obtains the current flow control policy. If the current flow control policy is flow limiting, then accepting further service requests would risk the requests corresponding to at least one service identifier exceeding the processing capacity of the current distributed service system, so the flow control server rejects each service request. This preserves normal operation of the distributed service system and prevents it from crashing under a sudden surge in request volume.
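A minimal sketch of this rejection behavior is given below, assuming the two policy values "limited" and "normal" from the text; the function name and the shape of the returned result are illustrative, not from the patent.

```python
def handle_request(policy, request):
    """Reject outright while the flow control policy is flow limiting."""
    if policy == "limited":
        return {"accepted": False, "reason": "flow control policy is limited"}
    return {"accepted": True, "request": request}

assert handle_request("limited", {"service": "transfer"})["accepted"] is False
assert handle_request("normal", {"service": "transfer"})["accepted"] is True
```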
Fig. 6 is a schematic structural diagram of a backpressure-based service processing apparatus according to yet another embodiment of the present invention. As shown in Fig. 6, on the basis of the foregoing embodiments, the backpressure-based service processing apparatus according to the embodiment of the present invention further includes a second counting unit 405 and a second updating unit 406, where:
the second counting unit 405 is configured to re-count the number of service requests in the cache corresponding to each service identifier; the second updating unit 406 is configured to update the flow control policy to normal after determining that the re-counted number of service requests in the cache corresponding to each service identifier is less than or equal to the corresponding expansion threshold.
Specifically, after the flow control policy is updated to flow limiting and service expansion is performed, the cached service requests continue to be processed and their number gradually decreases; after expansion, the corresponding expansion threshold may also have been raised. The second counting unit 405 may then re-count the number of service requests in the cache corresponding to each service identifier.
The second updating unit 406 compares the re-counted number of service requests in each service identifier's cache with the expansion threshold corresponding to that identifier. If every re-counted number is less than or equal to its corresponding expansion threshold, the second updating unit 406 changes the flow control policy from flow limiting back to normal, so that new service requests can again be received and processed.
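The restore-to-normal check can be sketched as follows; the function name and example numbers are assumptions for illustration, and the comparison mirrors the "less than or equal to" condition used by the second updating unit.

```python
def maybe_restore_normal(counts, thresholds):
    """Return to the normal policy once every cache is back within its threshold."""
    if all(n <= thresholds[sid] for sid, n in counts.items()):
        return "normal"
    return "limited"

# Backlog drained below the (possibly raised) threshold -> policy restored:
assert maybe_restore_normal({"transfer": 80}, {"transfer": 150}) == "normal"
# Backlog still above threshold -> flow limiting stays in force:
assert maybe_restore_normal({"transfer": 200}, {"transfer": 150}) == "limited"
```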
Fig. 7 is a schematic structural diagram of a backpressure-based service processing apparatus according to still another embodiment of the present invention. As shown in Fig. 7, on the basis of the foregoing embodiments, the backpressure-based service processing apparatus according to the embodiment of the present invention further includes a third updating unit 407, where:
the third updating unit 407 is configured to update the expansion threshold corresponding to the expanded service after the service expansion is completed.
Specifically, after the capacity adjustment server expands a service that needs expansion, it may feed the expanded capacity back to the third updating unit 407, and the third updating unit 407 may update the expansion threshold corresponding to the expanded service according to that capacity. The expansion threshold may be updated as a proportion of the service capacity, or set to a fixed difference from the service capacity; it is set according to actual needs.
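The two threshold-update rules just mentioned can be sketched as below. The 0.8 ratio and the offset of 20 are assumed example values, not taken from the patent, which leaves the choice to actual needs.

```python
def threshold_by_ratio(capacity, ratio=0.8):
    """Expansion threshold as a fixed proportion of the expanded service capacity."""
    return int(capacity * ratio)

def threshold_by_offset(capacity, offset=20):
    """Expansion threshold as a fixed difference below the expanded service capacity."""
    return capacity - offset

# After the capacity adjustment server reports an expanded capacity of 200:
assert threshold_by_ratio(200) == 160
assert threshold_by_offset(200) == 180
```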
Fig. 8 is a schematic structural diagram of a backpressure-based service processing apparatus according to yet another embodiment of the present invention. As shown in Fig. 8, on the basis of the foregoing embodiments, the backpressure-based service processing apparatus according to the embodiment of the present invention further includes:
the determining unit 408, configured to, after determining that the number of cached service requests corresponding to a service identifier within a preset time period satisfies the capacity reduction rule, reduce the capacity of the service corresponding to that identifier and update the expansion threshold corresponding to that identifier.
Specifically, the determining unit 408 may count the number of service requests in the cache corresponding to a service identifier within a preset time period; if that number satisfies the capacity reduction rule, the determining unit 408 may send a service capacity reduction request to the capacity adjustment server to reduce the capacity of the service corresponding to that identifier. After the capacity reduction of the service corresponding to the service identifier is completed, the determining unit 408 updates the expansion threshold corresponding to that identifier. The capacity reduction rule is preset. The preset time period is set according to practical experience and is not limited by the embodiment of the present invention.
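One plausible reading of the preset capacity reduction rule is sketched below: the cached-request count for an identifier is sampled over the preset time period, and the service is scaled down only if every sample stays below an assumed reduction threshold. Both the rule and the numbers are illustrative assumptions; the patent leaves the rule itself preset and unspecified.

```python
def satisfies_reduction_rule(samples, reduction_threshold):
    """True if every count sampled during the preset period stays below the threshold."""
    return all(n < reduction_threshold for n in samples)

# Counts observed once per interval during the preset time period:
assert satisfies_reduction_rule([5, 3, 8, 2], reduction_threshold=10) is True
assert satisfies_reduction_rule([5, 30, 8], reduction_threshold=10) is False
```

When the rule holds, the determining unit would send a capacity reduction request to the capacity adjustment server and, once the reduction completes, update the expansion threshold.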
The apparatus embodiments provided by the embodiments of the present invention may be specifically configured to execute the processing flows of the above method embodiments; their functions are not repeated here, and reference is made to the detailed description of the above method embodiments.
According to the backpressure-based service processing method and apparatus provided by the embodiments of the present invention, a user only needs to set the expansion threshold and reduction threshold of each service before the distributed service system is started. After the system is started, no manual intervention is needed at all: self-adjustment ensures the normal operation of each service and of the whole distributed service system, solving the service fault cascade and avalanche problems that distributed services find difficult to prevent, and avoiding system paralysis caused by excessive service pressure. The capacity of each service can be adjusted dynamically according to the size of the service traffic; when the traffic of a service falls below the reduction threshold set by the user, the service can be scaled down, reducing resource waste and effectively improving resource utilization.
It should be noted that the service processing method and apparatus based on backpressure provided by the embodiment of the present invention may be used in the financial field, and may also be used in any technical field other than the financial field.
Fig. 9 is a schematic diagram of the physical structure of an electronic device according to an embodiment of the present invention. As shown in Fig. 9, the electronic device may include: a processor (processor) 901, a communication interface (Communications Interface) 902, a memory (memory) 903 and a communication bus 904, wherein the processor 901, the communication interface 902 and the memory 903 communicate with one another through the communication bus 904. The processor 901 may call logic instructions in the memory 903 to perform the following method: receiving each service request, wherein each service request includes a service identifier; if the current flow control policy is normal, caching the service request corresponding to each service identifier according to the service identifier included in each service request, and counting the number of service requests in the cache corresponding to each service identifier; and if the number of cached service requests corresponding to any one service identifier is judged to be greater than the corresponding expansion threshold, updating the flow control policy to flow limiting and performing service expansion.
In addition, the logic instructions in the memory 903 may be implemented in the form of a software functional unit and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The present embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method provided by the above method embodiments, for example: receiving each service request, wherein each service request includes a service identifier; if the current flow control policy is normal, caching the service request corresponding to each service identifier according to the service identifier included in each service request, and counting the number of service requests in the cache corresponding to each service identifier; and if the number of cached service requests corresponding to any one service identifier is judged to be greater than the corresponding expansion threshold, updating the flow control policy to flow limiting and performing service expansion.
The present embodiment provides a computer-readable storage medium storing a computer program that causes a computer to execute the method provided by the above method embodiments, for example: receiving each service request, wherein each service request includes a service identifier; if the current flow control policy is normal, caching the service request corresponding to each service identifier according to the service identifier included in each service request, and counting the number of service requests in the cache corresponding to each service identifier; and if the number of cached service requests corresponding to any one service identifier is judged to be greater than the corresponding expansion threshold, updating the flow control policy to flow limiting and performing service expansion.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the description herein, reference to the description of the terms "one embodiment," "a particular embodiment," "some embodiments," "for example," "an example," "a particular example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (12)

1. A backpressure-based service processing method, characterized by comprising:
receiving each service request, wherein each service request includes a service identifier;
if the current flow control policy is normal, caching the service request corresponding to each service identifier according to the service identifier included in each service request, and counting the number of service requests in the cache corresponding to each service identifier; and
if the number of cached service requests corresponding to any one service identifier is judged to be greater than the corresponding capacity expansion threshold, updating the flow control policy to flow limiting and performing service capacity expansion.
2. The method of claim 1, further comprising:
if the current flow control policy is flow limiting, rejecting each service request.
3. The method of claim 1, further comprising:
re-counting the number of service requests in the cache corresponding to each service identifier; and
if the re-counted number of service requests in the cache corresponding to each service identifier is judged to be smaller than the corresponding capacity expansion threshold, updating the flow control policy to normal.
4. The method of claim 1, further comprising:
after the service capacity expansion is completed, updating the capacity expansion threshold corresponding to the expanded service.
5. The method of any of claims 1 to 4, further comprising:
if it is judged that the number of cached service requests corresponding to a service identifier within a preset time period satisfies the capacity reduction rule, reducing the capacity of the service corresponding to the service identifier and updating the capacity expansion threshold corresponding to the service identifier.
6. A backpressure-based service processing apparatus, comprising:
a receiving unit, configured to receive each service request, wherein each service request includes a service identifier;
a first statistical unit, configured to, after learning that the current flow control policy is normal, cache the service request corresponding to each service identifier according to the service identifier included in each service request, and count the number of service requests in the cache corresponding to each service identifier; and
a first updating unit, configured to, after judging that the number of cached service requests corresponding to any one service identifier is greater than the corresponding capacity expansion threshold, update the flow control policy to flow limiting and perform service capacity expansion.
7. The apparatus of claim 6, further comprising:
a rejecting unit, configured to reject each service request after learning that the current flow control policy is flow limiting.
8. The apparatus of claim 6, further comprising:
a second counting unit, configured to re-count the number of service requests in the cache corresponding to each service identifier; and
a second updating unit, configured to update the flow control policy to normal after judging that the re-counted number of service requests in the cache corresponding to each service identifier is less than or equal to the corresponding capacity expansion threshold.
9. The apparatus of claim 6, further comprising:
a third updating unit, configured to update the capacity expansion threshold corresponding to the expanded service after the service capacity expansion is completed.
10. The apparatus of any one of claims 6 to 9, further comprising:
a judging unit, configured to, after judging that the number of cached service requests corresponding to a service identifier within a preset time period satisfies the capacity reduction rule, reduce the capacity of the service corresponding to the service identifier and update the capacity expansion threshold corresponding to the service identifier.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 5 are implemented when the computer program is executed by the processor.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN202011276950.9A 2020-11-16 2020-11-16 Service processing method and device based on backpressure Active CN112398945B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011276950.9A CN112398945B (en) 2020-11-16 2020-11-16 Service processing method and device based on backpressure

Publications (2)

Publication Number Publication Date
CN112398945A true CN112398945A (en) 2021-02-23
CN112398945B CN112398945B (en) 2022-12-20

Family

ID=74601092

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113589881A (en) * 2021-08-02 2021-11-02 北京汇钧科技有限公司 Method and device for solving clock callback problem
CN113806068A (en) * 2021-07-30 2021-12-17 上海晶赞融宣科技有限公司 Method and device for expanding business system, readable storage medium and terminal
CN114679412A (en) * 2022-04-19 2022-06-28 浪潮卓数大数据产业发展有限公司 Method, device, equipment and medium for forwarding traffic to service node
CN114745329A (en) * 2022-03-30 2022-07-12 青岛海尔科技有限公司 Flow control method and apparatus, storage medium, and electronic apparatus
CN115016952A (en) * 2022-08-10 2022-09-06 中邮消费金融有限公司 Dynamic capacity expansion and reduction method and system based on service calling terminal

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109885399A (en) * 2019-01-17 2019-06-14 平安普惠企业管理有限公司 Data processing method, electronic device, computer equipment and storage medium
CN110677459A (en) * 2019-09-02 2020-01-10 金蝶软件(中国)有限公司 Resource adjusting method and device, computer equipment and computer storage medium
US20200052957A1 (en) * 2018-08-07 2020-02-13 International Business Machines Corporation Centralized rate limiters for services in cloud based computing environments
CN110933097A (en) * 2019-12-05 2020-03-27 美味不用等(上海)信息科技股份有限公司 Multi-service gateway oriented current limiting and automatic capacity expanding and shrinking method


Also Published As

Publication number Publication date
CN112398945B (en) 2022-12-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant