CN114844947B - Request processing method and device, electronic equipment and computer readable medium - Google Patents


Info

Publication number
CN114844947B
Authority
CN
China
Prior art keywords
service; time corresponding; request; normalization; time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210475015.8A
Other languages
Chinese (zh)
Other versions
CN114844947A (en)
Inventor
刘纯彰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210475015.8A
Publication of CN114844947A
Application granted
Publication of CN114844947B
Legal status: Active


Abstract

The disclosure relates to a request processing method and device, an electronic device, and a computer readable medium, and belongs to the technical field of computer networks. The method comprises the following steps: in response to a network request for data reporting, acquiring request data corresponding to the network request, wherein the request data comprises a service primary key and a reporting time; acquiring a preconfigured time unit length, and obtaining a normalized time corresponding to the service primary key according to the time unit length, the service primary key, and the reporting time; judging whether the network request passes according to the normalized time corresponding to the service primary key and the cached data related to the service primary key in the cache; and when the network request is judged to pass, forwarding the network request to a service server for processing. By computing the normalized time corresponding to the service primary key, the method and device address the problem of efficiently filtering effective traffic under massive requests.

Description

Request processing method and device, electronic equipment and computer readable medium
Technical Field
The present disclosure relates to the field of computer networks, and in particular, to a method for processing a request, a device for processing a request, an electronic device, and a computer readable medium.
Background
In the process of reporting massive data, when the data traffic is heavy, problems of traffic jitter and spikes can occur, and they must be handled on the premise that the request data remains correct.
In view of this, there is a need in the art for a request processing method that can efficiently filter effective traffic under massive requests.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure aims to provide a request processing method and device, an electronic device, and a computer readable medium, so as to solve, at least to some extent, the problem of efficiently filtering effective traffic under massive requests.
According to a first aspect of the present disclosure, there is provided a method of processing a request, comprising:
in response to a network request for data reporting, acquiring request data corresponding to the network request, wherein the request data comprises a service primary key and a reporting time;
acquiring a preconfigured time unit length, and obtaining a normalized time corresponding to the service primary key according to the time unit length, the service primary key, and the reporting time;
judging whether the network request passes according to the normalized time corresponding to the service primary key and the cached data related to the service primary key in the cache;
and when the network request is judged to pass, forwarding the network request to a service server for processing.
In an exemplary embodiment of the present disclosure, the obtaining, according to the time unit length, the service primary key, and the reporting time, a normalized time corresponding to the service primary key includes:
obtaining a drift time corresponding to the hash value of the service primary key according to the hash value of the service primary key, the reporting time, and the time unit length;
and normalizing the drift time corresponding to the hash value of the service primary key to obtain the normalized time corresponding to the service primary key.
In an exemplary embodiment of the present disclosure, normalizing the drift time corresponding to the hash value of the service primary key to obtain the normalized time corresponding to the service primary key includes:
normalizing the drift time corresponding to the hash value of the service primary key according to the reporting time and the time unit length to obtain the normalized time corresponding to the service primary key.
In an exemplary embodiment of the present disclosure, the determining, according to the normalized time corresponding to the service primary key and the cached data related to the service primary key in the cache, whether the network request passes includes:
judging whether cached data related to the service primary key exists in a cache;
if the cached data does not exist, judging that the network request passes, and caching the service primary key and the normalized time corresponding to the service primary key;
if the cached data exists, judging whether the network request passes according to the normalized time corresponding to the service primary key and the cached normalized time corresponding to the service primary key in the cached data.
In an exemplary embodiment of the present disclosure, the judging, according to the normalized time corresponding to the service primary key and the cached normalized time corresponding to the service primary key in the cached data, whether the network request passes includes:
if the cached normalized time corresponding to the service primary key in the cached data is smaller than the normalized time corresponding to the service primary key, judging that the network request passes;
and if the cached normalized time corresponding to the service primary key in the cached data is greater than or equal to the normalized time corresponding to the service primary key, judging that the network request does not pass.
In an exemplary embodiment of the disclosure, if the cached normalized time corresponding to the service primary key in the cached data is smaller than the normalized time corresponding to the service primary key, after judging that the network request passes, the method further includes:
updating the cached normalized time corresponding to the service primary key in the cached data according to the normalized time corresponding to the service primary key.
According to a second aspect of the present disclosure, there is provided a processing apparatus of a request, comprising:
a request data acquisition module configured to, in response to a network request for data reporting, acquire request data corresponding to the network request, wherein the request data comprises a service primary key and a reporting time;
a normalization time determining module configured to acquire a preconfigured time unit length and obtain a normalized time corresponding to the service primary key according to the time unit length, the service primary key, and the reporting time;
a network request judging module configured to judge whether the network request passes according to the normalized time corresponding to the service primary key and the cached data related to the service primary key in the cache;
and a network request forwarding module configured to, when the network request is judged to pass, forward the network request to a service server for processing.
In one exemplary embodiment of the present disclosure, the normalization time determining module includes:
a drift time determining unit configured to obtain the drift time corresponding to the hash value of the service primary key according to the hash value of the service primary key, the reporting time, and the time unit length;
and a drift time normalization unit configured to normalize the drift time corresponding to the hash value of the service primary key to obtain the normalized time corresponding to the service primary key.
In an exemplary embodiment of the present disclosure, the drift time normalization unit is further configured to normalize the drift time corresponding to the hash value of the service primary key according to the reporting time and the time unit length, so as to obtain the normalized time corresponding to the service primary key.
In one exemplary embodiment of the present disclosure, the network request determination module includes:
a cached data judging unit configured to judge whether cached data related to the service primary key exists in a cache;
a first network request judging unit configured to, if the cached data does not exist, judge that the network request passes, and cache the service primary key and the normalized time corresponding to the service primary key;
and a second network request judging unit configured to, if the cached data exists, judge whether the network request passes according to the normalized time corresponding to the service primary key and the cached normalized time corresponding to the service primary key in the cached data.
In an exemplary embodiment of the present disclosure, the second network request judging unit includes:
a third network request judging unit configured to judge that the network request passes if the cached normalized time corresponding to the service primary key in the cached data is smaller than the normalized time corresponding to the service primary key;
and a fourth network request judging unit configured to judge that the network request does not pass if the cached normalized time corresponding to the service primary key in the cached data is greater than or equal to the normalized time corresponding to the service primary key.
In an exemplary embodiment of the present disclosure, the second network request judging unit further includes:
a cached data updating unit configured to update the cached normalized time corresponding to the service primary key in the cached data according to the normalized time corresponding to the service primary key.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of processing a request as described in any one of the above.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the request processing method of any one of the above.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the request processing method of any one of the above.
Exemplary embodiments of the present disclosure may have the following advantageous effects:
in the request processing method of the disclosed example embodiments, the service primary key and the reporting time in the network request data are acquired, the normalized time corresponding to the service primary key is obtained in combination with a preconfigured time unit length, whether the network request passes is then judged according to that normalized time and the cached data related to the service primary key in the cache, and requests judged to pass are forwarded for processing. On the one hand, by computing the normalized time of the service primary key within intervals of the time unit length, the distribution of filtered requests is scattered by service primary key, peaks are shaved and valleys filled, effective traffic is filtered efficiently, massive, ever-growing traffic is reduced to a constant level, and the problems of traffic jitter and spikes are solved; on the other hand, server resource consumption no longer grows linearly with the volume of repeated traffic in the network requests.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without creative effort.
FIG. 1 shows a flow diagram of a method of processing a request according to an example embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating determining normalized time corresponding to a business primary key according to an example embodiment of the present disclosure;
FIG. 3 shows a flow diagram of determining whether a network request passes according to an example embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of an exemplary system architecture to which a request processing method and apparatus of embodiments of the present disclosure may be applied;
FIG. 5 illustrates a flow diagram of a method of processing a request in one embodiment of the present disclosure;
FIG. 6 illustrates a graph of results from a method of processing a request according to an example embodiment of the present disclosure;
FIG. 7 illustrates a block diagram of a request processing device of an example embodiment of the present disclosure;
fig. 8 shows a schematic diagram of a computer system suitable for use in implementing embodiments of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein.
The following example embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
In the process of reporting massive data, when the traffic is heavy, the request frequency of each primary key value is very high; the traffic therefore needs to be uniformly filtered and scattered on the premise of ensuring the correctness of the request data, while the problems of traffic jitter and spikes must also be solved.
In some related embodiments, when the server receives a network request, it first queries a local cache; if cached data corresponding to the service primary key of the current request exists in the cache, the current request is filtered out. If not, the service primary key is stored in the cache with an expiration time, such as 3 minutes, and the current request is processed. However, this method cannot scatter the distribution of filtered requests by service primary key.
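The related-art approach above can be sketched as follows. This is a hypothetical minimal illustration in Python; the class and method names are my own and do not come from the patent:

```python
import time

class TtlFilter:
    """Related-art filter sketch: pass the first request for a key,
    then drop repeats until a fixed expiration time elapses."""

    def __init__(self, ttl_seconds=180):  # e.g. a 3-minute expiration
        self.ttl = ttl_seconds
        self.cache = {}  # service primary key -> expiry timestamp

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        expiry = self.cache.get(key)
        if expiry is not None and now < expiry:
            return False  # cached and not yet expired: filter out
        self.cache[key] = now + self.ttl
        return True  # first request, or cache expired: pass
```

Because every key that first appears at the same moment also expires at the same moment, the passing requests are not scattered by key, which is exactly the drawback noted above.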
The present exemplary embodiment first provides a method for processing a request. Referring to fig. 1, the method for processing the request may include the steps of:
and S110, responding to a network request of data reporting, and acquiring request data corresponding to the network request, wherein the request data comprises a service main key and reporting time.
And S120, acquiring a preset time unit length, and obtaining a normalization time corresponding to the service primary key according to the time unit length, the service primary key and the reporting time.
And S130, judging whether the network request passes or not according to the normalization time corresponding to the service main key and the cache data related to the service main key in the cache.
And S140, when the network request is judged to pass, forwarding the network request to a service server for processing the request for processing.
In the request processing method of the disclosed example embodiments, the service primary key and the reporting time in the network request data are acquired, the normalized time corresponding to the service primary key is obtained in combination with a preconfigured time unit length, whether the network request passes is then judged according to that normalized time and the cached data related to the service primary key in the cache, and requests judged to pass are forwarded for processing. On the one hand, by computing the normalized time of the service primary key within intervals of the time unit length, the distribution of filtered requests is scattered by service primary key, peaks are shaved and valleys filled, effective traffic is filtered efficiently, massive, ever-growing traffic is reduced to a constant level, and the problems of traffic jitter and spikes are solved; on the other hand, server resource consumption no longer grows linearly with the volume of repeated traffic in the network requests.
The above steps of the present exemplary embodiment will be described in more detail with reference to fig. 2 to 5.
In step S110, in response to a network request for reporting data, request data corresponding to the network request is obtained, where the request data includes a service primary key and reporting time.
In this example embodiment, the server that filters and forwards network requests is the first server. When the first server receives a network request reported from a client or another service server, it may first obtain the request data corresponding to the current request, which at least includes the service primary key and the reporting time of the current request. The service primary key may be, for example, a user identifier userId or a device identifier deviceId.
In step S120, a preconfigured time unit length is obtained, and a normalized time corresponding to the service primary key is obtained according to the time unit length, the service primary key and the reporting time.
In this example embodiment, the first server may periodically pull configuration from the configuration center, the configuration data including the normalized time unit length, denoted unit. According to the time unit length, the service primary key, and the reporting time, the normalized time corresponding to the current service primary key can be obtained.
In this example embodiment, as shown in fig. 2, obtaining the normalized time corresponding to the service primary key according to the time unit length, the service primary key, and the reporting time may specifically include the following steps:
and S210, obtaining drift time corresponding to the hash value of the service main key according to the hash value of the service main key, the reporting time and the time unit length.
Firstly, calculating a hash value c=hash code (key) of a service main key, and then, according to the hash value of the service main key and the time unit length, different time drift amounts can be allocated for different users. If the normalized time unit length is expressed in units, the drift amount may be c% units, which is in the range of [0, units ], where% represents the remainder.
And then according to the reporting time and the time drift amount, calculating to obtain drift time corresponding to the hash value of the service primary key, wherein a calculation formula of the drift time is shiftts=ts+c% unit, and ts is the reporting time of the request.
S220, normalizing the drift time corresponding to the hash value of the service primary key to obtain the normalized time corresponding to the service primary key.
The drift time corresponding to the hash value of the service primary key is normalized according to the reporting time and the time unit length to obtain the normalized time corresponding to the service primary key: normalizedTs = ts - shiftTs % unit. Because the normalized time windows of different users differ, each user is allowed only one passing request (the first) within the same time window, so the traffic shrinks to a constant level. By allocating different time drift amounts to different users, the moments at which different users' normalized times jump also differ, which scatters traffic across users and solves the problem of traffic spikes.
For example, if only normalization were performed, suppose the normalized time unit is set to 60 seconds under massive requests; then every user's normalized time takes the values 0, 60, 120, …, 60n, where n indexes the normalization periods, and requests are passed according to each user's normalized time. When the request volume is large, however, the normalized time of every user jumps from 60n to 60(n+1) at the same point 60(n+1), and at that jump moment the requests of all users pass simultaneously, producing a traffic spike every 60 seconds.
If instead, by the above steps of this example embodiment, different time drift amounts are allocated to different users, for example user A is allocated a drift of 0 and user B a drift of 27, then user A's normalized time jumps at points 60n while user B's jumps at points 60n + 27. Under heavy traffic the jump point is approximately the point at which a request passes, so the passing moments of users A and B are scattered; the jump moments of all users are prevented from coinciding, and the problem of traffic spikes is solved.
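The drift and normalization computation of steps S210 and S220 can be sketched in Python as follows. This is a hypothetical illustration: `zlib.crc32` merely stands in for the unspecified hashCode function, and the names are my own:

```python
import zlib

def normalized_time(key: str, ts: int, unit: int) -> int:
    """Per-key normalized time of steps S210/S220:
    shiftTs = ts + c % unit, normalizedTs = ts - shiftTs % unit."""
    c = zlib.crc32(key.encode())  # hash value of the service primary key
    drift = c % unit              # per-key time drift amount, in [0, unit)
    shift_ts = ts + drift         # drift time
    return ts - shift_ts % unit   # constant within each key's window
```

Two keys with different drift amounts see their windows jump at different moments, which scatters the instants at which their requests are allowed to pass.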
In step S130, it is determined whether the network request passes or not according to the normalized time corresponding to the service primary key and the cached data related to the service primary key in the cache.
After the normalized time corresponding to the service primary key is obtained, the mapping between the service primary key and its normalized time can be compared against and stored in the local cache, so as to judge whether the current network request passes.
In this example embodiment, as shown in fig. 3, according to the normalized time corresponding to the service primary key and the cached data related to the service primary key in the cache, determining whether the network request passes may specifically include the following steps:
and S310, judging whether cache data related to the business primary key exists in the cache.
Firstly, judging whether the local cache has the cache data related to the business main key.
And S320, if the cached data does not exist, judging that the network request passes, and caching the service main key and the normalization time corresponding to the service main key.
If the local cache does not have the cache data related to the service primary key, caching the normalized time (key= > normalized dTs) corresponding to the service primary key in the request, and judging that the network request passes.
And S330, if the cached data exists, judging whether the network request passes or not according to the normalization time corresponding to the service main key and the cache normalization time corresponding to the service main key in the cached data.
If the cache data related to the service primary key exists in the local cache, judging whether the network request passes or not by comparing the normalization time corresponding to the service primary key in the request with the cache normalization time corresponding to the service primary key in the cache data.
Specifically, if the buffer normalization time corresponding to the service primary key in the buffer data is smaller than the normalization time corresponding to the service primary key, the network request is judged to pass. After the network request is judged to pass, updating the cache normalization time corresponding to the service primary key in the cache data according to the normalization time corresponding to the service primary key. And if the buffer normalization time corresponding to the service main key in the buffer data is greater than or equal to the normalization time corresponding to the service main key, judging that the network request does not pass.
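The cache comparison just described can be sketched as follows (a hypothetical illustration with names of my own; the patent does not prescribe an implementation):

```python
def decide(cache: dict, key: str, normalized_ts: int) -> bool:
    """Decision of steps S310-S330: pass iff the key is unseen or its
    cached normalized time is smaller; update the cache on pass."""
    cached = cache.get(key)
    if cached is None or cached < normalized_ts:
        cache[key] = normalized_ts  # cache or update key => normalizedTs
        return True                 # forward to the service server
    return False                    # cached >= current: filter out
```

Within one normalization window the normalized time does not change, so per key only the first request of the window passes.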
In step S140, when the network request is judged to pass, the network request is forwarded to a service server for processing.
When the network request is judged to pass, the first server forwards it to a service server for processing. A network request judged not to pass is returned directly.
FIG. 4 is a schematic diagram of the system architecture of an exemplary application environment to which a request processing method and apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 4, in the system architecture, the first server is the server that filters and forwards network requests, and the client may be any of various electronic devices with processors, including but not limited to smartphones, tablets, portable computers, and the like. The first server receives, through the traffic gateway, network requests reported by other service servers or clients, whose request data at least include a service primary key and a reporting time, and it periodically pulls configuration from the configuration center, the configuration data including the normalized time unit length. The first server can obtain the normalized time corresponding to the service primary key according to the time unit length, the service primary key, and the reporting time, then judge whether the network request passes according to that normalized time and the cached data related to the service primary key in the cache, and, when the request is judged to pass, forward it to the second server, i.e., the real service server, for processing.
It should be understood that the number of clients and servers in fig. 4 is merely illustrative. There may be any number of clients and servers, as desired for an implementation. For example, the first server may be a server cluster formed by a plurality of servers.
A complete flowchart of a method of processing a request in one embodiment of the present disclosure is shown in fig. 5, which is an illustration of the above steps in this example embodiment, and the specific steps of the flowchart are as follows:
s502, calculating a hash value of a service primary key.
Hash value c=hashcode (key) of the business primary key.
S504, calculating the hash value and the drift time under the length of the time unit.
In an optional embodiment, according to the hash value, the reporting time and the time unit length of the service primary key, the drift time corresponding to the hash value of the service primary key may be calculated. Further, the hash value, the reporting time, the time unit length and the preset linear function relation of the service main key are used for obtaining the drift time corresponding to the hash value of the service main key. The drift time in the preset linear function relation is positively correlated with the hash value and the reporting time, and the drift time is negatively correlated with the time unit length. The larger the positive correlation, i.e. the hash value, the larger the drift time value. The preset linear function relation may be calculated according to the following formula, where the calculation formula of the drift time of the hash value is: shiftts=ts+c% unit, where ts is the reporting time of the current request and unit is the normalized time unit length. It will be appreciated that the correlation coefficient may be increased before c% unit in the calculation formula.
And S506, normalizing according to the drift time and the time unit length.
The normalization time is calculated as: normalizedTs = ts - shiftTs % unit.
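Since the disclosure gives no reference implementation, the two formulas above can be sketched in Python as follows. This is an illustrative sketch only: `hash_code` is a hypothetical stand-in for the gateway's hashcode(key), and all times are assumed to be integer seconds.

```python
def hash_code(key: str) -> int:
    """Hypothetical stand-in for hashcode(key); any stable integer hash works."""
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) & 0x7FFFFFFF
    return h

def normalized_ts(key: str, ts: int, unit: int) -> int:
    """Map a (service primary key, reporting time) pair onto its per-key window.

    shiftTs      = ts + c % unit        (drift time, offset by the key's hash)
    normalizedTs = ts - shiftTs % unit  (constant within each unit-length window)
    """
    c = hash_code(key)
    shift_ts = ts + c % unit          # % binds tighter than +, as in the formula
    return ts - shift_ts % unit
```

Within one unit-length window the normalized time stays constant, and it jumps by `unit` when ts crosses the key's boundary (ts ≡ -c mod unit), which is what lets the gateway pass only one request per key per window.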
S508, comparing and storing the mapping relation between the service primary key and the normalized time in a local cache.
And S510, judging whether the business main key exists in the cache.
If not, go to step S512; if yes, the process proceeds to step S514.
Step S512, caching the result.
And caching the service primary key and the current normalized time, and proceeding to step S520.
And S514, judging whether the cached value is smaller than the current normalization time.
If yes, go to step S516; if not, the process proceeds to step S518.
And S516, updating the cached value.
And updating the cached value according to the current normalization time, and proceeding to step S520.
Step S518, determining that the network request does not pass.
Step S520, it is determined that the network request passes, and the request is sent to the second server.
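Steps S510 to S520 above amount to a per-key compare-and-update against the local cache. A minimal Python sketch, assuming the cache is a plain in-process dict and `normalized_ts` is the normalization time computed in S502 to S506:

```python
cache: dict[str, int] = {}  # service primary key -> cached normalization time

def should_pass(key: str, normalized_ts: int) -> bool:
    """Decide whether a request passes and keep the cache up to date.

    S510: key absent in cache        -> S512 cache the result, S520 pass.
    S514: cached value < current normalization time -> S516 update, S520 pass.
    Otherwise                        -> S518 reject (this window already
                                        let one request through).
    """
    cached = cache.get(key)
    if cached is None:            # S510 -> S512: first request for this key
        cache[key] = normalized_ts
        return True               # S520
    if cached < normalized_ts:    # S514 -> S516: a new window has started
        cache[key] = normalized_ts
        return True               # S520
    return False                  # S518: duplicate within the current window
```

For a given key, the first request of each window passes and updates the cache; every later request in the same window carries the same normalization time and is rejected.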
Fig. 6 is a graph of results obtained by a request processing method according to an exemplary embodiment of the present disclosure. With a preconfigured time unit length unit = 10, fig. 6 shows the relationship between the reporting times ts of three different service primary keys, namely those with c % unit = 8, c % unit = 5, and c % unit = 3. Assuming each service primary key reports one request per second, then according to the request processing method in the exemplary embodiment of the present disclosure, only one of the 10 requests a service primary key issues within 10 seconds will be judged to pass.
As can be seen from fig. 6, because the hash values of the service primary keys differ, the ts time points at which a reporting request passes also differ, achieving the effect of scattering the passing requests by service primary key. For example, suppose the average number of online service primary keys is 1.2 million and the average number of requests per second per service primary key is n = 10 (n will continue to increase as the service grows), so the total request rate is QPS = 12 million; if unit = 120 seconds = 2 minutes, the request rate after filtering by the traffic gateway is QPS = 10 thousand. Here QPS (Queries Per Second) denotes the query rate per second. The method therefore reduces massive, unboundedly growing traffic to filtering at a constant level.
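The per-key reduction can be checked with a short self-contained simulation. This is a sketch under the same assumptions as fig. 6: one request per second per key, with an integer `offset` standing in for that key's c % unit value.

```python
def simulate(offset: int, unit: int, seconds: int) -> int:
    """Count how many of `seconds` once-per-second requests pass for one key
    whose hash offset (c % unit) equals `offset`."""
    cached = None
    passed = 0
    for ts in range(seconds):
        shift_ts = ts + offset               # drift time: shiftTs = ts + c % unit
        norm = ts - shift_ts % unit          # normalization time
        if cached is None or cached < norm:  # first request in a new window passes
            cached = norm
            passed += 1
    return passed
```

With unit = 10 and 100 seconds of once-per-second traffic, roughly one request per 10-second window passes regardless of the offset; the offset only shifts which ts within the window is the one that passes, which is the scattering effect shown in fig. 6.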
It should be noted that although the steps of the methods in the present disclosure are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
The user information (including but not limited to user equipment information, user personal information, etc.) related to the present disclosure is information authorized by the user or sufficiently authorized by each party.
Further, the disclosure also provides a device for processing the request. Referring to fig. 7, the processing device of the request may include a request data acquisition module 710, a normalization time determination module 720, a network request decision module 730, and a network request forwarding module 740. Wherein:
the request data obtaining module 710 is configured to obtain, in response to a network request for reporting data, request data corresponding to the network request, where the request data includes a service primary key and a reporting time;
the normalization time determining module 720 is configured to obtain a preconfigured time unit length, and obtain a normalization time corresponding to the service primary key according to the time unit length, the service primary key and the reporting time;
the network request determining module 730 is configured to determine whether the network request passes according to the normalized time corresponding to the service primary key and the cached data related to the service primary key in the cache;
the network request forwarding module 740 is configured to forward the network request to a traffic server for processing the request for processing when the network request is determined to pass.
In some example embodiments of the present disclosure, the normalization time determination module 720 may include a drift time determination unit and a drift time normalization unit. Wherein:
the drift time determining unit is configured to obtain drift time corresponding to the hash value of the service primary key according to the hash value of the service primary key, the reporting time and the time unit length;
the drift time normalization unit is configured to normalize the drift time corresponding to the hash value of the service primary key to obtain normalized time corresponding to the service primary key.
In some exemplary embodiments of the present disclosure, the drift time normalization unit may be further configured to normalize the drift time corresponding to the hash value of the service primary key according to the reporting time and the time unit length, to obtain a normalized time corresponding to the service primary key.
In some example embodiments of the present disclosure, the network request determination module 730 may include a cache data determination unit, a first network request determination unit, and a second network request determination unit. Wherein:
the cache data judging unit is configured to judge whether cache data related to the business main key exists in the cache;
the first network request judging unit is configured to judge that the network request passes if the cache data does not exist, and cache the service main key and the normalization time corresponding to the service main key;
the second network request judging unit is configured to judge, if the cached data exists, whether the network request passes according to the normalization time corresponding to the service primary key and the cached normalization time corresponding to the service primary key in the cached data.
In some example embodiments of the present disclosure, the second network request determining unit may include a third network request determining unit and a fourth network request determining unit. Wherein:
the third network request judging unit is configured to judge that the network request passes if the cached normalization time corresponding to the service primary key in the cached data is smaller than the normalization time corresponding to the service primary key;
the fourth network request judging unit is configured to judge that the network request does not pass if the cached normalization time corresponding to the service primary key in the cached data is greater than or equal to the normalization time corresponding to the service primary key.
In some exemplary embodiments of the present disclosure, the second network request determining unit may further include a cache data updating unit configured to update a cache normalization time corresponding to the service primary key in the cache data according to the normalization time corresponding to the service primary key.
The specific details of each module/unit in the above-mentioned request processing apparatus are already described in the corresponding method embodiment section, and will not be repeated here.
Fig. 8 shows a schematic diagram of a computer system suitable for implementing an embodiment of the present disclosure.
It should be noted that the computer system 800 of the electronic device shown in fig. 8 is only an example, and should not impose any limitation on the functions and application scope of the embodiments of the present disclosure.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU) 801 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for system operation are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, mouse, etc.; an output portion 807 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk or the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. The drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as needed so that a computer program read out therefrom is mounted into the storage section 808 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809, and/or installed from the removable media 811. When executed by the Central Processing Unit (CPU) 801, the computer program performs the various functions defined in the system of the present application.
It should be noted that the computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the method as described in the above embodiments.
It should be noted that although in the above detailed description several modules of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules described above may be embodied in one module in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module described above may be further divided into a plurality of modules to be embodied.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of processing a request, comprising:
responding to a network request of data reporting, and acquiring request data corresponding to the network request, wherein the request data comprises a service main key and reporting time;
acquiring a preset time unit length, and acquiring a normalization time corresponding to the service primary key according to the time unit length, the service primary key and the reporting time;
judging whether cache data related to the business main key exists in a cache;
if the cached data does not exist, judging that the network request passes, and caching the service main key and the normalization time corresponding to the service main key;
if the cache data exist and the cache normalization time corresponding to the service main key in the cache data is smaller than the normalization time corresponding to the service main key, judging that the network request passes, and updating the normalization time corresponding to the service main key in the cache data;
if the cache data exist and the cache normalization time corresponding to the service main key in the cache data is greater than or equal to the normalization time corresponding to the service main key, judging that the network request does not pass;
and when the network request is judged to pass, forwarding the network request to a service server for processing the request for processing.
2. The method for processing a request according to claim 1, wherein the obtaining the normalized time corresponding to the service primary key according to the time unit length, the service primary key and the reporting time includes:
obtaining drift time corresponding to the hash value of the service main key according to the hash value of the service main key, the reporting time and the time unit length;
and normalizing the drift time corresponding to the hash value of the service main key to obtain the normalization time corresponding to the service main key.
3. The method for processing the request according to claim 2, wherein normalizing the drift time corresponding to the hash value of the service primary key to obtain the normalized time corresponding to the service primary key comprises:
and normalizing the drift time corresponding to the hash value of the service main key according to the reporting time and the time unit length to obtain the normalized time corresponding to the service main key.
4. The method according to claim 1, wherein if the cache normalization time corresponding to the service primary key in the cache data is smaller than the normalization time corresponding to the service primary key, the method further comprises, after determining that the network request passes:
and updating the cache normalization time corresponding to the service primary key in the cache data according to the normalization time corresponding to the service primary key.
5. A device for processing a request, comprising:
the request data acquisition module is configured to execute a network request responding to data reporting and acquire request data corresponding to the network request, wherein the request data comprises a service main key and reporting time;
the normalization time determining module is configured to acquire a preset time unit length and obtain normalization time corresponding to the service main key according to the time unit length, the service main key and the reporting time;
the network request judging module is configured to execute judging whether cache data related to the business main key exists in the cache; if the cached data does not exist, judging that the network request passes, and caching the service main key and the normalization time corresponding to the service main key; if the cache data exist and the cache normalization time corresponding to the service main key in the cache data is smaller than the normalization time corresponding to the service main key, judging that the network request passes, and updating the normalization time corresponding to the service main key in the cache data; if the cache data exist and the cache normalization time corresponding to the service main key in the cache data is greater than or equal to the normalization time corresponding to the service main key, judging that the network request does not pass;
and the network request forwarding module is configured to forward the network request to a service server for processing the request for processing when the network request is judged to pass.
6. The apparatus for processing a request according to claim 5, wherein the normalization time determination module comprises:
a drift time determining unit configured to obtain the drift time corresponding to the hash value of the service primary key according to the hash value of the service primary key, the reporting time and the time unit length;
and the drift time normalization unit is configured to perform normalization processing on the drift time corresponding to the hash value of the service main key to obtain the normalization time corresponding to the service main key.
7. The apparatus according to claim 6, wherein the drift time normalization unit is further configured to perform normalization processing on the drift time corresponding to the hash value of the service primary key according to the reporting time and the time unit length, so as to obtain a normalized time corresponding to the service primary key.
8. The apparatus according to claim 5, wherein the network request determination module includes:
and the cache data updating unit is configured to execute the updating of the cache normalization time corresponding to the service main key in the cache data according to the normalization time corresponding to the service main key.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of processing a request as claimed in any one of claims 1 to 4.
10. A computer readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the method of processing a request according to any of claims 1 to 4.
CN202210475015.8A 2022-04-29 2022-04-29 Request processing method and device, electronic equipment and computer readable medium Active CN114844947B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210475015.8A CN114844947B (en) 2022-04-29 2022-04-29 Request processing method and device, electronic equipment and computer readable medium


Publications (2)

Publication Number Publication Date
CN114844947A CN114844947A (en) 2022-08-02
CN114844947B true CN114844947B (en) 2024-03-26

Family

ID=82568327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210475015.8A Active CN114844947B (en) 2022-04-29 2022-04-29 Request processing method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN114844947B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017092351A1 (en) * 2015-12-01 2017-06-08 乐视控股(北京)有限公司 Cache data update method and device
WO2018095187A1 (en) * 2016-11-22 2018-05-31 北京京东尚科信息技术有限公司 Document online preview method and device
CN109918191A (en) * 2017-12-13 2019-06-21 北京京东尚科信息技术有限公司 A kind of method and apparatus of the anti-frequency of service request
CN110113384A (en) * 2019-04-15 2019-08-09 深圳壹账通智能科技有限公司 Network request processing method, device, computer equipment and storage medium
CN110399212A (en) * 2018-04-25 2019-11-01 北京京东尚科信息技术有限公司 Task requests processing method, device, electronic equipment and computer-readable medium
CN111125112A (en) * 2019-12-25 2020-05-08 京东数字科技控股有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN113114768A (en) * 2021-04-14 2021-07-13 北京京东振世信息技术有限公司 Service request processing method, device and system



Similar Documents

Publication Publication Date Title
US10547618B2 (en) Method and apparatus for setting access privilege, server and storage medium
CN107480277B (en) Method and device for collecting website logs
US20200073913A1 (en) Method and apparatus for processing data sequence
CN108810047B (en) Method and device for determining information push accuracy rate and server
CN110620681B (en) Network connection timeout time setting method, device, equipment and medium
CN112818371A (en) Resource access control method, system, device, equipment and medium
CN114844947B (en) Request processing method and device, electronic equipment and computer readable medium
CN110245014B (en) Data processing method and device
CN111310242B (en) Method and device for generating device fingerprint, storage medium and electronic device
CN114465919B (en) Network service testing method, system, electronic equipment and storage medium
US7822748B2 (en) Method and system for delivering information with caching based on interest and significance
CN112825519B (en) Method and device for identifying abnormal login
CN113824675A (en) Method and device for managing login state
CN112307071A (en) Monitoring data acquisition method and device, electronic equipment and computer readable medium
CN111125112A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN113066479A (en) Method and device for evaluating model
CN107666497B (en) Data access method and device
CN113778909B (en) Method and device for caching data
CN112367324B (en) CDN attack detection method and device, storage medium and electronic equipment
CN116108132B (en) Method and device for auditing text of short message
CN113791828A (en) Prompt message generation method and device
CN117112151A (en) Task scheduling method and device, electronic equipment and computer readable medium
CN116450674A (en) Data query method, device, electronic equipment and computer readable medium
CN113282471A (en) Equipment performance testing method and device and terminal equipment
CN114448715A (en) Token-based authentication method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant