CN114138357A - Request processing method and device, electronic equipment, storage medium and product - Google Patents

Request processing method and device, electronic equipment, storage medium and product

Info

Publication number
CN114138357A
Authority
CN
China
Prior art keywords
request
service
preheating
type
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111276977.2A
Other languages
Chinese (zh)
Inventor
江之鉴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority claimed from CN202111276977.2A
Publication of CN114138357A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/445: Program loading or initiating
    • G06F 9/44505: Configuring for program initiating, e.g. using registry, configuration files

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The disclosure provides a request processing method and device, electronic equipment, a storage medium and a product, and belongs to the technical field of networks. In the embodiments of the present disclosure, when a first service has enabled request current limiting (rate limiting), the request type of a received first request is detected first. When the request type of the first request is a warm-up request, that is, when the first request is a warm-up (preheating) request sent by a second service that depends on the first service, the first request is responded to directly. When the request type of the first request is a non-warm-up request and the first request meets the discarding condition of the current limiting policy, the first request is discarded. In this way, warm-up requests are guaranteed not to be discarded, which avoids the problem that a warm-up request cannot be processed normally and the start-up of the dependent service is slowed down.

Description

Request processing method and device, electronic equipment, storage medium and product
Technical Field
The present disclosure relates to network technologies, and in particular, to a request processing method and apparatus, an electronic device, a storage medium, and a product.
Background
Currently, when a service is started, a preheating (warm-up) step is performed during start-up in order to speed up the start of the service instances included in the service. Specifically, a warm-up request may be sent to a depended-on service, and the warm-up is completed when the depended-on service responds to the warm-up request. The depended-on service is the service indicated by the call information defined in the starting service.
However, in the related art, when the depended-on service has enabled request current limiting, the received warm-up request may not be processed normally, the starting service cannot complete start-up normally, and the start-up speed is therefore slowed down.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a request processing method, apparatus, electronic device, storage medium, and product.
According to a first aspect of the present disclosure, there is provided a request processing method applied to a first service that has enabled request current limiting, the method including:
detecting a request type of a received first request; the first request is used for indicating the first service to execute preset operation;
if the request type of the first request is a preheating request, responding to the first request; the preheating request is a request sent by a second service, and the second service defines the calling information of the first service;
and if the request type of the first request is a non-preheating request and the first request meets the discarding condition of the current limiting strategy, discarding the first request.
Optionally, before detecting the request type of the first request, the method further includes:
determining whether the received second request meets the discarding condition based on the current limiting policy; the second request comprises the first request;
and if the second request meets the discarding condition, taking the second request meeting the discarding condition as the first request, and executing the step of detecting the request type of the first request.
Optionally, the detecting the request type of the first request includes:
detecting whether a specified preheating request mark parameter is included in the first request;
if the first request comprises the preheating request marking parameter, determining that the request type is a preheating request;
if the first request does not include the warming request marking parameter, determining that the request type is a non-warming request.
Optionally, the first service and the second service belong to the same service cluster; the detecting the request type of the first request comprises:
reading an IP address of a service instance currently started in the service cluster based on a specified interface provided by the service cluster;
if the IP address carried by the first request belongs to the IP address of the service instance which is started currently, determining that the request type is a preheating request;
and if the IP address carried by the first request does not belong to the IP address of the service instance which is started currently, determining that the request type is a non-preheating request.
Optionally, if the request type is a warm-up request, the method further includes: setting the processing priority of the first request as a designated priority; the assigned priority is used to indicate that the first service prioritizes processing the request.
Optionally, the detecting the request type of the first request includes:
detecting whether the service instance receiving the first request is a preset preheating request processing instance or not, or detecting whether an access interface receiving the first request is a preset preheating request processing interface or not; the service instance and the access interface belong to the first service;
if so, determining the request type of the first request as a preheating request;
if not, determining that the request type of the first request is a non-preheating request.
Optionally, the preheating request is a request sent during restart and/or capacity expansion (scale-out) of a service instance in the second service.
Optionally, the current-limiting policy includes discarding the subsequently received requests within a predetermined time period when the number of the received requests within the predetermined time period is greater than a preset number threshold.
According to a second aspect of the present disclosure, there is provided a request processing apparatus applied to a first service that has enabled request current limiting, the apparatus comprising:
a detection module configured to detect, for a received first request, a request type of the first request; the first request is used for indicating the first service to execute preset operation;
the response module is configured to respond to the first request if the request type of the first request is a preheating request; the preheating request is a request sent by a second service, and the second service defines the calling information of the first service;
a discarding module configured to discard the first request if a request type of the first request is a non-warm-up request and the first request meets a discarding condition of a current limit policy.
Optionally, the apparatus further comprises:
a judging module configured to judge whether the received second request meets the discarding condition based on the current limiting policy before detecting the request type of the first request; the second request comprises the first request;
and the execution module is configured to take the second request meeting the discarding condition as the first request and execute the step of detecting the request type of the first request if the second request meets the discarding condition.
Optionally, the detection module is specifically configured to:
detecting whether a specified preheating request mark parameter is included in the first request;
if the first request comprises the preheating request marking parameter, determining that the request type is a preheating request;
if the first request does not include the warming request marking parameter, determining that the request type is a non-warming request.
Optionally, the first service and the second service belong to the same service cluster; the detection module is specifically configured to:
reading an IP address of a service instance currently started in the service cluster based on a specified interface provided by the service cluster;
if the IP address carried by the first request belongs to the IP address of the service instance which is started currently, determining that the request type is a preheating request;
and if the IP address carried by the first request does not belong to the IP address of the service instance which is started currently, determining that the request type is a non-preheating request.
Optionally, the apparatus further comprises: a setting module configured to set a processing priority of the first request to a designated priority if the request type is a warm-up request; the assigned priority is used to indicate that the first service prioritizes processing the request.
Optionally, the detection module is specifically configured to:
detecting whether the service instance receiving the first request is a preset preheating request processing instance or not, or detecting whether an access interface receiving the first request is a preset preheating request processing interface or not; the service instance and the access interface belong to the first service;
if so, determining the request type of the first request as a preheating request;
if not, determining that the request type of the first request is a non-preheating request.
Optionally, the preheating request is a request sent during restart and/or capacity expansion (scale-out) of a service instance in the second service.
Optionally, the current-limiting policy includes discarding the subsequently received requests within a predetermined time period when the number of the received requests within the predetermined time period is greater than a preset number threshold.
In accordance with a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the request processing method of any of the first aspects.
According to a fourth aspect of the present disclosure, there is provided a storage medium, wherein instructions, when executed by a processor of an electronic device, cause the electronic device to perform the request processing method according to any one of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising readable program instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the request processing method of any of the first aspects.
Compared with the related art, the method has the following advantages and positive effects:
in the request processing method provided by the embodiments of the present disclosure, when the first service has enabled request current limiting, the request type of a received first request is detected first. When the request type of the first request is a warm-up request, that is, when the first request is a warm-up request sent by a second service that depends on the first service, the first request is responded to directly. When the request type of the first request is a non-warm-up request and the first request meets the discarding condition of the current limiting policy, the first request is discarded. In this way, warm-up requests are guaranteed not to be discarded, which avoids the problem that a warm-up request cannot be processed normally and the start-up speed is slowed down.
The foregoing description is only an overview of the technical solutions of the present disclosure, and the embodiments of the present disclosure are described below in order to make the technical means of the present disclosure more clearly understood and to make the above and other objects, features, and advantages of the present disclosure more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the disclosure. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart illustrating steps of a method for processing a request according to an embodiment of the present disclosure;
FIG. 2 is a schematic process flow diagram provided by an embodiment of the present disclosure;
fig. 3 is a block diagram of a request processing apparatus provided in an embodiment of the present disclosure;
FIG. 4 is a block diagram illustrating an apparatus for request processing in accordance with an exemplary embodiment;
FIG. 5 is a block diagram illustrating an apparatus for request processing in accordance with an example embodiment.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
An exemplary application scenario related to the embodiments of the present disclosure is described below. At present, every large website or application is supported by a large-scale service cluster behind it, in which front-end services, back-end services, RPC services and the like run on physical servers or container cloud hosts. To ensure availability, these services typically deploy multiple identical service instances in one or more machine rooms and support scaling, so as to cope with traffic that shows pronounced peaks and valleys over time as well as sudden host failures. Normally, an occasional abnormal service instance does not affect service availability, and the failure can be eliminated simply by restarting that service instance.
However, a service may face a sudden surge in traffic, for example when part of a machine room fails entirely, a competitor's service has problems and a large number of users rush in, or a traffic-driving promotion performs beyond expectations. When traffic surges, hosts may go down over a large area, and a current limiting policy needs to be turned on to protect the surviving service instances and prevent a cluster avalanche. At this time, in order to keep the service running normally, operations and development personnel need not only to restart the services that have gone down, but also to expand capacity in time to handle the traffic that exceeds the current service capacity, thereby achieving fault recovery.
Further, the start-up procedure of a service usually includes a warm-up (warmup) step. For example, because there are implementation dependencies among many modules in the service code, lazy initialization (lazy init) is used in many places to avoid the resource waste caused by pulling up implementations that are never needed. This delayed loading means that the first few requests are processed very slowly, or even time out, when the service has just started. It is therefore necessary to initialize whatever the service is going to use, i.e. to perform a warm-up step, at process start-up. A preheating step is required both when restarting a service instance and when starting a newly added (scaled-out) service instance. At this time, if the depended-on service has enabled request current limiting, the preheating request sent by the starting service may not be responded to normally, which slows down the restart and capacity expansion of the service cluster and therefore slows down fault recovery.
For example, assume there are two services, where service A depends on service B, and each of A and B has multiple service instances. When traffic suddenly increases, both services face traffic that exceeds their processing capacity and some service instances go down. To avoid a complete collapse, the surviving instances enable a current limiting policy. At this point, the downed instances of A and B need to be restarted and the services need to be scaled out. However, if service A is restarted and scaled out first, its preheating step needs to call the depended-on service B, and because service B has already enabled its current limiting policy, a normal response cannot be guaranteed. As a result, the instance of A may fail to start because the warm-up procedure cannot be completed. In this case, A can only keep resending the warm-up request until a response is received, or the restart and scale-out of service B must be completed first and its current limiting policy closed before service A can finish restarting and scaling out normally. That is, restart/scale-out has to be performed strictly in the order of the call topology corresponding to the dependency relationship. This reduces the start-up speed and slows down fault recovery.
For this reason, the following describes the request processing method provided by the embodiments of the present disclosure in detail.
Fig. 1 is a flowchart of the steps of a request processing method provided by an embodiment of the present disclosure. The method may be applied to a first service that has enabled request current limiting. As shown in Fig. 1, the method may include:
step 101, aiming at a received first request, detecting a request type of the first request; the first request is used for indicating the first service to execute preset operation.
In the embodiments of the present disclosure, a service may consist of multiple service instances, and a service instance may be deployed on a physical machine, for example a server deployed in a machine room. Different services may implement different business functions; for example, a service may implement a login function, an authentication function, a payment function, and so on. The first request may be some or all of the requests received by the first service; the preset operation indicated by the first request may be set in advance according to actual requirements, and different first requests may instruct the first service to perform different preset operations. For example, the first request may instruct the first service to return survival information, return preset preloaded data, verify login information, verify payment information, turn off a specific function, and so on. Further, the first requests may include warm-up requests sent by a second service that depends on the first service. Specifically, the first service and the second service are services with a dependency relationship in which the second service depends on the first service: the second service defines call information for invoking the first service, that is, the second service implements some of its functions by calling the first service, and is therefore said to depend on the first service. In a specific implementation scenario, from the perspective of program development the first service may be regarded as the upstream service and the second service as the downstream service, whereas from the perspective of the calling relationship the second service may be regarded as the upstream service and the first service as the downstream service. Further, the first service may enable request current limiting when the current traffic is too heavy and there are too many requests to process, so as to limit the flow; here, a request can be regarded as a unit of traffic.
Further, since the first service has currently enabled request current limiting, the first request may be regarded as a request entering the first service that is subject to limiting, and a first request sent by the second service might be discarded by the current limiting. The request type of each first request is therefore further identified. The request types may be predefined in the embodiments of the present disclosure; for example, requests may be divided into warm-up traffic and normal traffic, that is, into warm-up requests and non-warm-up requests.
Step 102, if the request type of the first request is a preheating request, responding to the first request; the preheating request is a request sent by a second service, and the second service defines the calling information of the first service.
In the embodiments of the present disclosure, if the request type of the first request is a warm-up request, it can be determined that the first request was sent by a service instance of the second service in order to complete the warm-up step during its start-up. Correspondingly, the first request can be responded to directly, ensuring that the warm-up request is not discarded by current limiting and that the received warm-up request can be processed normally to a certain extent, thereby avoiding the problem that start-up cannot complete normally and fault recovery is slowed down. The manner of responding to the warm-up request may be set according to actual requirements; for example, survival information may be returned to the second service, preset preloaded data may be returned, a data connection may be established, and the like, which is not limited in the embodiments of the present disclosure.
Step 103, if the request type of the first request is a non-preheating request and the first request meets the discarding condition of the current limiting policy, discarding the first request.
In the embodiments of the present disclosure, the current limiting policy may be preset for the request current limiting function. Optionally, the current limiting policy may include discarding subsequently received requests within a predetermined time period if the number of requests received within that time period is greater than a preset number threshold. The predetermined time period may be set according to actual requirements; for example, it may be 5 minutes or 10 minutes. One current limiting algorithm may correspond to one current limiting policy; for example, the algorithm corresponding to the policy may be a counter (fixed window) algorithm, a sliding window algorithm, a leaky bucket algorithm, a token bucket algorithm, and so on. These algorithms judge the state of the service by the number of requests at present or over a period of time. If there are too many requests and the currently configured quota is exceeded, the service is considered overloaded and the requests beyond the quota are considered to meet the discarding condition; if the number of requests does not exceed the currently configured quota, the discarding condition is considered not to be met. That is, the discarding condition is that the request is received within the current time window and is the Xth request received in that window, where X is greater than the currently configured quota; the length of the current time window may be a predetermined length of time. By adopting the policy of discarding the subsequently received requests within the predetermined time period once the number of received requests exceeds the preset threshold, the problem of the first service being brought down by excessive request concurrency can be avoided to a certain extent.
Further, if the request type is a non-warm-up request, the first request may be subjected to limiting: specifically, it may be determined whether the first request meets the discarding condition of the current limiting policy, and if so, the first request may be discarded so as to limit the flow.
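As a concrete illustration of the fixed-window (counter) variant of this discarding condition, the following Go sketch shows one possible, hypothetical limiter; the window length, threshold and type names are assumptions for illustration and are not prescribed by the disclosure.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// FixedWindowLimiter discards requests once more than `limit` requests
// have been received inside the current time window.
type FixedWindowLimiter struct {
	mu          sync.Mutex
	window      time.Duration // predetermined time period, e.g. 5 or 10 minutes
	limit       int           // preset number threshold (configured quota)
	windowStart time.Time
	count       int
}

func NewFixedWindowLimiter(window time.Duration, limit int) *FixedWindowLimiter {
	return &FixedWindowLimiter{window: window, limit: limit, windowStart: time.Now()}
}

// ShouldDiscard reports whether a newly received request meets the
// discarding condition: it is the Xth request of the current window,
// with X greater than the configured quota.
func (l *FixedWindowLimiter) ShouldDiscard() bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	now := time.Now()
	if now.Sub(l.windowStart) >= l.window {
		// A new window begins; reset the counter.
		l.windowStart = now
		l.count = 0
	}
	l.count++
	return l.count > l.limit
}

func main() {
	// Assumed configuration: at most 3 requests per 1-second window.
	limiter := NewFixedWindowLimiter(time.Second, 3)
	for i := 1; i <= 5; i++ {
		fmt.Printf("request %d discarded: %v\n", i, limiter.ShouldDiscard())
	}
}
```

Sliding-window, leaky-bucket and token-bucket limiters differ only in how the discarding decision is computed.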
To sum up, in the request processing method provided by the embodiments of the present disclosure, when the first service has enabled request current limiting, the request type of a received first request is detected first, where the first request is used to instruct the first service to execute a preset operation. When the request type of the first request is a warm-up request, that is, when the first request is a warm-up request sent by a second service that depends on the first service, the first request is responded to directly. When the request type of the first request is a non-warm-up request and the first request meets the discarding condition of the current limiting policy, the first request is discarded. In this way, warm-up requests are guaranteed not to be discarded, which avoids the problem that a warm-up request cannot be processed normally and start-up is slowed down.
Optionally, in one application scenario, the preheating request in the embodiments of the present disclosure may be a request sent during the restart and/or capacity expansion (scale-out) of a service instance in the second service. In the scenario where the second service recovers from a fault by restarting or scaling out and the preheating request is sent as part of that restart/scale-out process, responding to the preheating request ensures that the restart/scale-out proceeds normally and at full speed, thereby avoiding slower fault recovery. At the same time, because the preheating request is guaranteed a normal response and the preheating step is guaranteed to complete, restart/scale-out no longer has to follow the order of the call topology, and multiple services can be restarted/scaled out simultaneously, which speeds up fault recovery.
Optionally, before detecting the request type of the first request, an embodiment of the present disclosure may further include:
step S21, judging whether the received second request meets the discarding condition based on the current limiting strategy; the second request comprises the first request.
In the embodiment of the present disclosure, the request participating in the request type detection may be a request determined to need to be discarded, the second request may be a request sent by any service to the first service, the second request may include a request sent by the second service, and the second request may include the first request. Specifically, after receiving the second request, the first service may directly determine whether the second request meets the discarding condition based on the current limiting policy. Specifically, the determination method may refer to the foregoing related description, and is not described herein again.
Step S22, if the second request meets the discarding condition, taking the second request meeting the discarding condition as the first request, and executing the step of detecting the request type of the first request.
If the second request meets the discarding condition, it would normally not be responded to but discarded. However, the second requests determined to meet the discarding condition may include warm-up requests sent by the second service. Therefore, these second requests are treated as first requests and warm-up traffic is rescued from them: the request type of each first request is identified, and the first requests that are warm-up traffic are retained and responded to, while the first requests that are not warm-up requests are discarded.
In the embodiments of the present disclosure, the judgment is first made based on the preset current limiting policy, and only the second requests judged to meet the discarding condition are taken as the first requests that participate in the rescue, in order to identify preheating requests among them. In this way, the amount of request type detection is reduced while preheating requests are still prevented from being discarded, which improves processing efficiency.
Meanwhile, in the embodiments of the present disclosure, the rescue of preheating requests is performed only on requests that hit the current limiting policy, namely the requests meeting the discarding condition. Therefore, when the first service enables request current limiting, neither the original service code logic nor the original current limiting algorithm needs to be modified or is affected, and the approach is only weakly intrusive.
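To make this two-stage arrangement concrete, the following Go sketch keeps the conventional limiter untouched and applies warm-up rescue only to requests that hit the discarding condition; the Request type, its IsWarmup field and the toy limiter are illustrative assumptions rather than the disclosure's actual interfaces.

```go
package main

import "fmt"

// Request is a simplified stand-in for an incoming request to the first service.
type Request struct {
	Body     string
	IsWarmup bool // in practice derived from a marker parameter, source IP, etc.
}

// handleRequest applies the conventional limiter first; only requests that
// would be discarded enter the warm-up rescue step, so the original
// limiting logic is left untouched.
func handleRequest(req Request, shouldDiscard func() bool) string {
	if !shouldDiscard() {
		return "respond (passed the limiter)"
	}
	// Rescue step: re-examine only the requests that hit the limiting policy.
	if req.IsWarmup {
		return "respond (warm-up request rescued from the discard path)"
	}
	return "discard (normal traffic over quota)"
}

func main() {
	// Assumed toy limiter: allow the first two requests, discard the rest.
	count := 0
	shouldDiscard := func() bool {
		count++
		return count > 2
	}

	requests := []Request{
		{Body: "user call 1"},
		{Body: "user call 2"},
		{Body: "warm-up from service B", IsWarmup: true},
		{Body: "user call 3"},
	}
	for _, r := range requests {
		fmt.Printf("%-24s -> %s\n", r.Body, handleRequest(r, shouldDiscard))
	}
}
```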
Optionally, in a first implementation manner, the operation of detecting the request type of the first request may specifically include:
and step S31, detecting whether the first request includes the specified preheating request mark parameter.
In the embodiment of the present disclosure, the preheating request flag parameter may be pre-agreed, and the specific content of the preheating request flag parameter may be set according to actual requirements, for example, the preheating request flag parameter may be a number, a letter, a special flag, and the like, which is not limited in the present disclosure. When the service sends a request, a preheat request flag parameter may be added to a designated flag bit of the preheat request to represent that the request is a preheat request, whereas if the sent request is not a preheat request, the preheat request flag parameter may not be added to the designated flag bit to represent that the request is not a preheat request. The designated flag bit may be designated according to actual requirements, which is not limited by this disclosure. Accordingly, when the first service receives the request, the first service may parse the first request and then detect whether a warm-up request flag parameter is added to the designated flag bit.
Step S32, if the first request includes the warming request flag parameter, determining that the request type is a warming request.
Step S33, if the first request does not include the warming request flag parameter, determining that the request type is a non-warming request.
In the embodiments of the present disclosure, by agreeing on the preheating request marker parameter in advance, whether a request is a preheating request can be conveniently identified simply by checking whether the request carries the marker parameter, so processing efficiency can be ensured to a certain extent.
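A minimal sketch of this first implementation manner, assuming purely for illustration that the agreed marker parameter travels as an HTTP header named X-Warmup-Request; the disclosure does not fix the flag's content, its position, or the transport protocol.

```go
package main

import (
	"fmt"
	"net/http"
)

// warmupHeader is an assumed name for the agreed warm-up marker parameter.
const warmupHeader = "X-Warmup-Request"

// isWarmupByMarker detects the request type by checking whether the
// pre-agreed marker parameter is present in the designated position
// (here: an HTTP header).
func isWarmupByMarker(r *http.Request) bool {
	return r.Header.Get(warmupHeader) != ""
}

func main() {
	warm, _ := http.NewRequest(http.MethodGet, "http://first-service/ping", nil)
	warm.Header.Set(warmupHeader, "1")

	normal, _ := http.NewRequest(http.MethodGet, "http://first-service/pay", nil)

	fmt.Println("warm-up request:", isWarmupByMarker(warm))  // true
	fmt.Println("normal request:", isWarmupByMarker(normal)) // false
}
```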
Optionally, the first service and the second service may belong to the same service cluster. In a second implementation manner, the operation of detecting the request type of the first request may specifically include:
step S41, based on the specified interface provided by the service cluster, reading an IP address of the service instance currently being started in the service cluster.
In the embodiments of the present disclosure, an IP address (Internet Protocol Address) is the address defined by the Internet Protocol. The IP address is a uniform address format provided by the IP protocol; it allocates a logical address to every network and every host on the Internet, masking differences in physical addresses. The specified interface provided by the service cluster may be set up in advance. For example, the specified interface may be used to provide the dynamically configured IP addresses of the service instances currently being started. A component dedicated to collecting the IP addresses of the service instances currently being started and providing the specified interface may be configured in the service cluster; for example, a service instance may actively report its IP address to this component at start-up. Service instances of different services deployed on the same host correspond to different ports, and correspondingly their addresses differ. A service instance being started may be one restarted for fault recovery or one added during capacity expansion.
Further, the first service may call the specified interface in real time when the request type needs to be detected, so as to obtain the IP address of the service instance currently being started.
Step S42, if the IP address carried by the first request belongs to the IP address of the service instance currently being started, determining that the request type is a warm-up request.
Specifically, the first request may be parsed to obtain an IP address carried by the first request. And the IP address carried by the first request is the IP address of the service instance sending the first request. Then, the IP address carried by the first request may be compared with the IP address of the service instance currently being started, and if there is an IP address identical to the IP address carried by the first request, it may be determined that the request type of the first request is a warm-up request. Otherwise, it may be determined not to be a warm-up request.
Step S43, if the IP address carried by the request does not belong to the IP address of the service instance currently being started, determining that the request type is a non-warm-up request.
In the embodiment of the disclosure, the IP address of the currently started service instance in the service cluster is read based on the specified interface provided by the service cluster. In the case that the IP address carried by the first request belongs to the IP address of the service instance currently being started, it may be determined that the request type is a warm-up request. In the case that the IP address carried by the first request does not belong to the IP address of the service instance currently being started, it may be determined that the request type is a non-warm-up request. Therefore, whether the request type is a preheating request can be conveniently identified by only comparing the IP addresses, and the processing efficiency can be ensured to a certain extent.
Furthermore, the implementation cost of the first and second implementation manners is low, they do not intrude significantly into the original code, and code redundancy can be avoided.
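The following Go sketch illustrates the second implementation manner under stated assumptions: a hypothetical cluster endpoint supplies the set of IP addresses of instances currently starting, and the caller's IP is taken from the request's remote address; the addresses and the endpoint are invented for the example.

```go
package main

import (
	"fmt"
	"net"
)

// startingInstanceIPs stands in for the result of calling the specified
// interface provided by the service cluster; in practice it would be
// fetched over the network whenever a request type needs to be detected.
func startingInstanceIPs() map[string]bool {
	return map[string]bool{
		"10.0.3.17": true, // assumed: instance of service B being restarted
		"10.0.3.42": true, // assumed: newly scaled-out instance
	}
}

// isWarmupBySourceIP treats a request as a warm-up request when its
// source IP belongs to a service instance that is currently starting.
func isWarmupBySourceIP(remoteAddr string, starting map[string]bool) bool {
	host, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		// remoteAddr carried no port; use it as-is.
		host = remoteAddr
	}
	return starting[host]
}

func main() {
	starting := startingInstanceIPs()
	fmt.Println(isWarmupBySourceIP("10.0.3.17:53122", starting)) // true: warm-up request
	fmt.Println(isWarmupBySourceIP("10.0.9.8:41200", starting))  // false: normal request
}
```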
Optionally, in the case that the request type of the first request is a warm-up request, the following operation may further be performed: setting the processing priority of the first request to a designated priority, where the designated priority is used to indicate that the first service processes the request preferentially. The designated priority may be the highest priority. In a practical application scenario, a request that meets the discarding condition of the current limiting policy is simply discarded and has no processing priority at all. In the embodiments of the present disclosure, a preheating request rescued from the requests meeting the discarding condition is additionally given a designated priority indicating preferential processing, that is, its priority is raised, so that the preheating request can be responded to as soon as possible. This accelerates the preheating step to a certain extent, increases the start-up speed, and improves fault recovery efficiency.
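One possible way to realize this designated priority is to place rescued warm-up requests at the head of a priority queue in front of the worker pool, as in the Go sketch below; the queue-based scheduling and the priority values are assumptions, since the disclosure does not mandate a specific mechanism.

```go
package main

import (
	"container/heap"
	"fmt"
)

type job struct {
	name     string
	priority int // higher value = processed earlier; warm-up jobs get the highest value
}

// jobQueue implements heap.Interface as a max-heap on priority.
type jobQueue []job

func (q jobQueue) Len() int            { return len(q) }
func (q jobQueue) Less(i, j int) bool  { return q[i].priority > q[j].priority }
func (q jobQueue) Swap(i, j int)       { q[i], q[j] = q[j], q[i] }
func (q *jobQueue) Push(x interface{}) { *q = append(*q, x.(job)) }
func (q *jobQueue) Pop() interface{} {
	old := *q
	n := len(old)
	item := old[n-1]
	*q = old[:n-1]
	return item
}

const warmupPriority = 100 // assumed designated (highest) priority
const normalPriority = 1

func main() {
	q := &jobQueue{}
	heap.Init(q)
	heap.Push(q, job{name: "normal request A", priority: normalPriority})
	heap.Push(q, job{name: "rescued warm-up request", priority: warmupPriority})
	heap.Push(q, job{name: "normal request B", priority: normalPriority})

	// The rescued warm-up request is dequeued first and responded to as soon as possible.
	for q.Len() > 0 {
		fmt.Println("processing:", heap.Pop(q).(job).name)
	}
}
```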
Optionally, in another implementation manner, the operation of detecting the request type of the first request may specifically include:
step S51, detecting whether the instance receiving the first request is a preset preheating request processing instance, or detecting whether the access interface receiving the first request is a preset preheating request processing interface; the service instance and the access interface belong to the first service.
In the embodiments of the present disclosure, the preset preheating request processing instance may be a service instance specially allocated to the first service for receiving preheating requests; this instance is responsible only for processing preheating requests and does not participate in processing the access traffic of normal online users. Accordingly, when the request to be sent is a preheating request, the second service may use the IP address of the preheating request processing instance as the target address and send the preheating request directly to it. Conversely, if the request to be sent is not a preheating request, it may be sent to a non-preheating processing instance, which determines whether the request meets the discarding condition of the current limiting policy.
Alternatively, in the embodiments of the present disclosure, a preheating request processing interface may be provided specially in advance, and the second service may call this interface to send a request when the request to be sent is a preheating request; otherwise, if the request to be sent is not a preheating request, the normal interface is called. An interface may also be referred to as a method: within its service instances, the first service may split the external interfaces/methods it provides into two types, normal interfaces/methods and warm-up interfaces (i.e., warm-up request processing interfaces)/methods, where the warm-up interfaces do not enable the current limiting policy, ensuring that warm-up requests can be responded to directly and are not discarded. It should be noted that this implementation manner requires a dedicated warm-up request processing instance, and if that instance has a problem, regular service start-up may be disturbed; also, because the interfaces need to be split into two types, redundant code is introduced. The first or second implementation manner may therefore be preferred in practical applications. Of course, the required recognition logic, that is, the implementation manner, can also be selected according to requirements, which gives strong adaptability.
Further, the first service may determine whether the request type of the first request is a warm-up request by detecting whether the service instance receiving the first request is the preset warm-up request processing instance, or by detecting whether the access interface receiving the first request, that is, the interface called when the first request was sent, is the preset warm-up request processing interface.
Step S52, if yes, determining that the request type of the first request is a warm-up request.
Step S53, if not, determining that the request type of the first request is a non-warm-up request.
In the embodiments of the present disclosure, it is detected whether the service instance receiving the first request is the preset preheating request processing instance, or whether the access interface receiving the first request is the preset preheating request processing interface. If so, the request type of the first request is determined to be a preheating request; if not, it is determined to be a non-preheating request. In this way, whether the request type is a preheating request can be determined conveniently by checking the receiving instance/interface, which provides an additional implementation manner and improves selectivity.
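A sketch of this implementation manner, assuming hypothetically that a deployment flag marks the dedicated warm-up processing instance and that a reserved path such as /warmup serves as the warm-up request processing interface; the flag, path, and names are illustrative only.

```go
package main

import "fmt"

// instanceConfig describes how this service instance was deployed.
type instanceConfig struct {
	isWarmupInstance bool   // true if this is the dedicated warm-up processing instance
	warmupPath       string // access interface reserved for warm-up requests, e.g. "/warmup"
}

// isWarmupByDeployment determines the request type from where the request
// arrived: the dedicated warm-up instance, or the dedicated warm-up interface.
func isWarmupByDeployment(cfg instanceConfig, requestPath string) bool {
	if cfg.isWarmupInstance {
		return true
	}
	return cfg.warmupPath != "" && requestPath == cfg.warmupPath
}

func main() {
	// Assumed deployment: a normal instance that also exposes a warm-up interface.
	cfg := instanceConfig{isWarmupInstance: false, warmupPath: "/warmup"}

	fmt.Println(isWarmupByDeployment(cfg, "/warmup")) // true: warm-up request
	fmt.Println(isWarmupByDeployment(cfg, "/login"))  // false: normal request
}
```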
Fig. 2 is a schematic view of a processing flow provided by an embodiment of the present disclosure. As shown in Fig. 2, the whole processing link may be divided into a conventional current limiting part and a warm-up traffic rescue part. The logic of the conventional current limiting part does not need to be changed: whether the current state of the first service allows it to respond to an access request (namely, the second request) is still judged by the conventional current limiting algorithm, and if the result of the algorithm is to let the request pass, the request is responded to normally, that is, request processing is carried out. If the current load of the first service is high and the access request hits the current limiting policy, namely it meets the original discarding condition, the flow enters the warm-up traffic rescue part.
Further, the warm-up traffic rescue part reclassifies the traffic that was originally judged to meet the discarding condition, namely it performs warm-up traffic identification to determine whether the request type of the first request is warm-up traffic (namely, a warm-up request). A non-warm-up request is regarded as normal traffic and may be discarded directly, while warm-up traffic, which may come from an upstream service instance undergoing restart or scale-out start-up, is regarded as special traffic and is processed to ensure that the warm-up request can be responded to normally.
In a conventional situation, a conventional current limiting policy is adopted to protect service instances, but when traffic surges over a large area and the service cluster needs to be restarted/scaled out, the conventional policy prevents preheating requests from being processed normally, which hurts the restart/scale-out speed. In the embodiments of the present disclosure, while the conventional current limiting policy is still used and service instances remain protected, warm-up traffic rescue identifies the preheating requests generated by other services that depend on this service during their restart/scale-out, raises the execution priority of those requests, prevents them from being discarded, and ensures they are responded to normally. This solves the problem that, when a traffic surge causes large-scale downtime in the service cluster, upstream service instances that need to be restarted/scaled out fail to start because the current limiting policy downstream in the call chain intercepts their preheating requests, which slows down fault recovery.
Meanwhile, the preheating request is guaranteed not to be discarded by the current limiting policy. Thus, as long as each depended-on service in the chain has at least one live instance, start-up is guaranteed to proceed properly. Accordingly, there is no need to restart/scale out in the order of the call topology corresponding to the dependency relationship; multiple services can be restarted/scaled out simultaneously, which increases the service recovery speed.
Fig. 3 is a block diagram of a request processing apparatus according to an embodiment of the present disclosure. The apparatus is applied to a first service that has enabled request current limiting. As shown in Fig. 3, the apparatus 20 may include:
a detection module 201 configured to detect, for a received first request, a request type of the first request; the first request is used for indicating the first service to execute preset operation;
a response module 202 configured to respond to the first request if the request type of the first request is a warm-up request; the preheating request is a request sent by a second service, and the second service defines the calling information of the first service;
a discarding module 203 configured to discard the first request if the request type of the first request is a non-warm-up request and the first request meets a discarding condition of a current limiting policy.
With the request processing apparatus provided in the embodiments of the present disclosure, when the first service has enabled request current limiting, the request type of a received first request is detected, and the first request is responded to directly when its request type is a warm-up request, that is, when it is a warm-up request sent by a second service that depends on the first service. When the request type of the first request is a non-warm-up request and the first request meets the discarding condition of the current limiting policy, the first request is discarded. In this way, warm-up requests are guaranteed not to be discarded, which avoids the problem that a warm-up request cannot be processed normally and start-up is slowed down.
Optionally, the apparatus 20 further includes:
a judging module configured to judge whether the received second request meets the discarding condition based on the current limiting policy before detecting the request type of the first request; the second request comprises the first request;
and the execution module is configured to take the second request meeting the discarding condition as the first request and execute the step of detecting the request type of the first request if the second request meets the discarding condition.
Optionally, the detection module 201 is specifically configured to:
detecting whether a specified preheating request mark parameter is included in the first request;
if the first request comprises the preheating request marking parameter, determining that the request type is a preheating request;
if the first request does not include the warming request marking parameter, determining that the request type is a non-warming request.
Optionally, the first service and the second service belong to the same service cluster; the detection module 201 is specifically configured to:
reading an IP address of a service instance currently started in the service cluster based on a specified interface provided by the service cluster;
if the IP address carried by the first request belongs to the IP address of the service instance which is started currently, determining that the request type is a preheating request;
and if the IP address carried by the first request does not belong to the IP address of the service instance which is started currently, determining that the request type is a non-preheating request.
Optionally, the apparatus 20 further includes: a setting module configured to set a processing priority of the first request to a designated priority if the request type is a warm-up request; the assigned priority is used to indicate that the first service prioritizes processing the request.
Optionally, the detection module 201 is specifically configured to:
detecting whether the service instance receiving the first request is a preset preheating request processing instance or not, or detecting whether an access interface receiving the first request is a preset preheating request processing interface or not; the service instance and the access interface belong to the first service;
if so, determining the request type of the first request as a preheating request;
if not, determining that the request type of the first request is a non-preheating request.
Optionally, the preheating request is a request sent during restart and/or capacity expansion (scale-out) of a service instance in the second service.
Optionally, the current-limiting policy includes discarding the subsequently received requests within a predetermined time period when the number of the received requests within the predetermined time period is greater than a preset number threshold.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
According to an embodiment of the present disclosure, there is provided an electronic apparatus including: a processor and a memory for storing processor-executable instructions, wherein the processor is configured to execute the instructions to implement the steps of the request processing method in any of the above embodiments.
There is also provided, in accordance with an embodiment of the present disclosure, a storage medium, in which instructions are executed by a processor of an electronic device, so that the electronic device can perform the steps in the request processing method as in any one of the above embodiments.
There is also provided, according to an embodiment of the present disclosure, a computer program product comprising readable program instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the steps of the request processing method as in any one of the above embodiments.
FIG. 4 is a block diagram illustrating an apparatus for request processing in accordance with an example embodiment. For example, the apparatus 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, the apparatus 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to perform all or a portion of the steps of the request processing method described above. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, audio component 710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the apparatus 700. For example, sensor assembly 714 may detect an open/closed state of device 700, the relative positioning of components, such as a display and keypad of device 700, the change in position of device 700 or a component of device 700, the presence or absence of user contact with device 700, the orientation or acceleration/deceleration of device 700, and the change in temperature of device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the request processing methods described above.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 704 including instructions, which are executable by the processor 720 of the apparatus 700 to perform the request processing method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
FIG. 5 is a block diagram illustrating an apparatus for request processing in accordance with an example embodiment. For example, the apparatus 800 may be provided as a server. Referring to FIG. 5, the apparatus 800 includes a processing component 822, which further includes one or more processors, and memory resources, represented by memory 832, for storing instructions, such as applications, that are executable by the processing component 822. The application programs stored in memory 832 may include one or more modules that each correspond to a set of instructions. Further, the processing component 822 is configured to execute instructions to perform the request processing method described above.
The apparatus 800 may also include a power component 826 configured to perform power management of the apparatus 800, a wired or wireless network interface 850 configured to connect the apparatus 800 to a network, and an input/output (I/O) interface 858. The apparatus 800 may operate based on an operating system stored in the memory 832, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A request processing method, applied to a first service for which request current limiting (rate limiting) is enabled, the method comprising:
detecting a request type of a received first request, wherein the first request is used for instructing the first service to perform a preset operation;
if the request type of the first request is a warm-up request, responding to the first request, wherein the warm-up request is a request sent by a second service, and the second service defines calling information of the first service; and
if the request type of the first request is a non-warm-up request and the first request meets a discarding condition of a current limiting policy, discarding the first request.
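By way of illustration only, and not as part of the claimed subject matter, the following minimal Python sketch shows the flow of claim 1. It assumes a token-bucket limiter as the current limiting policy and a caller-supplied is_warmup_request() predicate (which may be implemented by any of the detection strategies of claims 3, 4, and 6); all names are illustrative and do not come from the disclosure.

import time

class TokenBucketLimiter:
    """Token-bucket limiter: a request meets the discarding condition when no token is available."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate                      # tokens replenished per second
        self.capacity = capacity              # maximum burst size
        self.tokens = capacity                # start full so early requests pass
        self.last_refill = time.monotonic()

    def should_discard(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return False
        return True

def handle_request(request, limiter, is_warmup_request, process):
    """Warm-up requests are always answered; other requests are subject to the current limiting policy."""
    if is_warmup_request(request):
        return process(request)               # never discard warm-up traffic
    if limiter.should_discard():
        return {"status": 429, "body": "discarded by current limiting policy"}
    return process(request)

The detection predicate is passed in as a parameter so that any of the strategies sketched below can be plugged in without changing the throttling logic.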
2. The method of claim 1, wherein, before the detecting the request type of the first request, the method further comprises:
determining, based on the current limiting policy, whether a received second request meets the discarding condition, wherein the second request comprises the first request; and
if the second request meets the discarding condition, taking the second request that meets the discarding condition as the first request, and performing the step of detecting the request type of the first request.
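A minimal sketch of the ordering described in claim 2, reusing the token-bucket limiter from the previous sketch: the limiter is consulted first, and only a request that would otherwise be discarded is inspected for the warm-up type (illustrative only, not the disclosed implementation).

def handle_request_with_prefilter(request, limiter, is_warmup_request, process):
    """Claim 2 ordering: type detection runs only for requests that meet the discarding condition."""
    if not limiter.should_discard():
        return process(request)               # within the rate limit: serve without type detection
    # The request meets the discarding condition; rescue it if it turns out to be a warm-up request.
    if is_warmup_request(request):
        return process(request)
    return {"status": 429, "body": "discarded by current limiting policy"}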
3. The method of claim 1 or 2, wherein the detecting the request type of the first request comprises:
detecting whether the first request includes a specified warm-up request marker parameter;
if the first request includes the warm-up request marker parameter, determining that the request type is a warm-up request; and
if the first request does not include the warm-up request marker parameter, determining that the request type is a non-warm-up request.
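A minimal sketch of the marker-based detection of claim 3. The marker name "x-warmup" and the request attributes params/headers are assumptions made for illustration; the claim only requires that the two services agree on a specified marker parameter.

WARMUP_MARK_PARAM = "x-warmup"   # hypothetical marker name agreed between the first and second services

def is_warmup_by_marker(request) -> bool:
    """Claim 3: the request is a warm-up request if and only if it carries the marker parameter."""
    params = getattr(request, "params", {}) or {}
    headers = getattr(request, "headers", {}) or {}
    return WARMUP_MARK_PARAM in params or WARMUP_MARK_PARAM in headers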
4. The method of claim 1 or 2, wherein the first service and the second service belong to a same service cluster, and the detecting the request type of the first request comprises:
reading, based on a specified interface provided by the service cluster, IP addresses of service instances that are currently starting in the service cluster;
if an IP address carried in the first request belongs to the IP addresses of the service instances that are currently starting, determining that the request type is a warm-up request; and
if the IP address carried in the first request does not belong to the IP addresses of the service instances that are currently starting, determining that the request type is a non-warm-up request.
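A minimal sketch of the IP-based detection of claim 4, assuming the service cluster exposes an HTTP endpoint that lists instances currently starting up; the endpoint URL and the response shape are hypothetical, since the claim only requires a specified interface provided by the cluster.

import json
import urllib.request

# Hypothetical cluster-registry endpoint returning the instances that are still starting up.
CLUSTER_STARTING_INSTANCES_URL = "http://cluster-registry.local/api/instances?state=starting"

def fetch_starting_instance_ips() -> set:
    """Read the IP addresses of service instances currently starting in the service cluster."""
    with urllib.request.urlopen(CLUSTER_STARTING_INSTANCES_URL, timeout=2) as resp:
        instances = json.load(resp)           # assumed shape: [{"ip": "10.0.0.7"}, ...]
    return {item["ip"] for item in instances}

def is_warmup_by_source_ip(request, starting_ips: set) -> bool:
    """Claim 4: a request whose carried IP address belongs to a starting instance is a warm-up request."""
    return getattr(request, "source_ip", None) in starting_ips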
5. The method of claim 2, wherein, if the request type is a warm-up request, the method further comprises: setting a processing priority of the first request to a designated priority, wherein the designated priority is used to indicate that the first service processes the request preferentially.
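A minimal sketch of the priority handling of claim 5, assuming the first service drains accepted requests from a priority queue; the numeric priority values and the sequence-number tie-breaker are illustrative choices.

import queue

WARMUP_PRIORITY = 0      # designated priority: lower value is served first
NORMAL_PRIORITY = 10

request_queue = queue.PriorityQueue()

def enqueue(request, is_warmup: bool, seq: int) -> None:
    """Claim 5: warm-up requests receive the designated priority so the first service processes them first."""
    priority = WARMUP_PRIORITY if is_warmup else NORMAL_PRIORITY
    request_queue.put((priority, seq, request))   # seq breaks ties so request objects are never compared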
6. The method of claim 1, wherein the detecting the request type of the first request comprises:
detecting whether a service instance that receives the first request is a preset warm-up request processing instance, or detecting whether an access interface that receives the first request is a preset warm-up request processing interface, wherein the service instance and the access interface belong to the first service;
if so, determining that the request type of the first request is a warm-up request; and
if not, determining that the request type of the first request is a non-warm-up request.
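A minimal sketch of claim 6, where warm-up traffic is recognized by where it arrives rather than by its content: either a service instance reserved for warm-up processing or a dedicated access interface (modelled here as a URL path). Both identifiers are hypothetical.

WARMUP_INSTANCE_IDS = {"first-service-warmup-0"}   # instances preset as warm-up request processing instances
WARMUP_ACCESS_PATHS = {"/internal/warmup"}         # access interfaces preset as warm-up request processing interfaces

def is_warmup_by_entry_point(current_instance_id: str, request_path: str) -> bool:
    """Claim 6: classify by the receiving service instance or the receiving access interface."""
    return current_instance_id in WARMUP_INSTANCE_IDS or request_path in WARMUP_ACCESS_PATHS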
7. A request processing apparatus, applied to a first service for which request current limiting is enabled, the apparatus comprising:
a detection module configured to detect, for a received first request, a request type of the first request, wherein the first request is used for instructing the first service to perform a preset operation;
a response module configured to respond to the first request if the request type of the first request is a warm-up request, wherein the warm-up request is a request sent by a second service, and the second service defines calling information of the first service; and
a discarding module configured to discard the first request if the request type of the first request is a non-warm-up request and the first request meets a discarding condition of a current limiting policy.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the request processing method of any of claims 1 to 6.
9. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, cause the electronic device to perform the request processing method of any one of claims 1 to 6.
10. A computer program product comprising readable program instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the request processing method of any of claims 1 to 6.
CN202111276977.2A 2021-10-29 2021-10-29 Request processing method and device, electronic equipment, storage medium and product Pending CN114138357A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111276977.2A CN114138357A (en) 2021-10-29 2021-10-29 Request processing method and device, electronic equipment, storage medium and product

Publications (1)

Publication Number Publication Date
CN114138357A (en) 2022-03-04

Family

ID=80391938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111276977.2A Pending CN114138357A (en) 2021-10-29 2021-10-29 Request processing method and device, electronic equipment, storage medium and product

Country Status (1)

Country Link
CN (1) CN114138357A (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160092252A1 (en) * 2014-09-30 2016-03-31 Amazon Technologies, Inc. Threading as a service
CN106598687A (en) * 2015-10-19 2017-04-26 阿里巴巴集团控股有限公司 Script preheating method and device
CN106708819A (en) * 2015-07-17 2017-05-24 阿里巴巴集团控股有限公司 Data caching preheating method and device
CN108881367A (en) * 2018-04-09 2018-11-23 阿里巴巴集团控股有限公司 A kind of service request processing method, device and equipment
CN109756782A (en) * 2017-11-06 2019-05-14 阿里巴巴集团控股有限公司 A kind of method for processing resource, device and streaming media server
CN109842565A (en) * 2018-12-15 2019-06-04 平安科技(深圳)有限公司 Interface current-limiting method, device, electronic equipment and storage medium
CN109873718A (en) * 2019-01-23 2019-06-11 平安科技(深圳)有限公司 A kind of container self-adapting stretching method, server and storage medium
CN110149364A (en) * 2019-04-15 2019-08-20 厦门市美亚柏科信息股份有限公司 Method, apparatus, the storage medium of micro services are provided based on data service platform
CN110399212A (en) * 2018-04-25 2019-11-01 北京京东尚科信息技术有限公司 Task requests processing method, device, electronic equipment and computer-readable medium
CN111639276A (en) * 2020-04-23 2020-09-08 北京达佳互联信息技术有限公司 Resource preloading method and device and storage medium
CN111901249A (en) * 2020-07-31 2020-11-06 深圳前海微众银行股份有限公司 Service current limiting method, device, equipment and storage medium
CN111953772A (en) * 2020-08-11 2020-11-17 北京达佳互联信息技术有限公司 Request processing method, device, server and storage medium
CN112069386A (en) * 2020-09-07 2020-12-11 北京奇艺世纪科技有限公司 Request processing method, device, system, terminal and server
CN112311776A (en) * 2020-10-21 2021-02-02 浪潮云信息技术股份公司 System and method for preventing flooding attack of API gateway
CN112449005A (en) * 2020-11-11 2021-03-05 北京健康之家科技有限公司 Request distribution method and device, electronic equipment and readable storage medium
US20210168178A1 (en) * 2019-12-03 2021-06-03 Microsoft Technology Licensing, Llc Reducing setup time for online meetings
CN113268360A (en) * 2021-05-14 2021-08-17 北京三快在线科技有限公司 Request processing method, device, server and storage medium

Similar Documents

Publication Publication Date Title
EP3010187B1 (en) Method for upgrading and device and apparatus thereof
CN109451880B (en) Network connection method and device
CN109314913B (en) Access control limiting method and device
CN108052822B (en) Terminal control method, device and system
CN108702763B (en) Method and device for sending lead code and scheduling request
CN107094094B (en) Application networking method and device and terminal
CN112671897B (en) Access method, device, storage medium, equipment and product of distributed system
CN106792892B (en) Access control method and device for application program
CN106302528B (en) Short message processing method and device
CN113934331A (en) Information processing method, device and storage medium
CN112866022B (en) Method, device and medium for reducing system breakdown times of modem
RU2632396C2 (en) Method and device to control router plug-in module
EP3863321B1 (en) Method, device and medium for handling network connection abnormality of terminal
CN109218375B (en) Application interaction method and device
CN110933773B (en) Link monitoring method and device
CN112256424A (en) Virtual resource processing method, device and system, electronic equipment and storage medium
CN109491655B (en) Input event processing method and device
WO2020042010A1 (en) Access control barring method and apparatus
CN114138357A (en) Request processing method and device, electronic equipment, storage medium and product
CN114584261B (en) Data processing method, device and storage medium
CN112929271B (en) Route configuration method and device for configuring route
CN106060104B (en) Application management method and device
CN112083841B (en) Information input method, device and storage medium
CN107819871B (en) Application state determination method and device
CN109976563B (en) Misoperation determining method and device and touch operation response method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination