CN116627968A - Concurrent request processing method, processing device, computer equipment and medium

Info

Publication number: CN116627968A
Application number: CN202310511521.2A
Authority: CN (China)
Prior art keywords: request, requests, concurrency, dictionary, pool
Prior art date: 2023-05-08
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 刘欣毅, 燕浩宇, 姚舜
Current assignee: Beijing Insight Network Co ltd
Original assignee: Beijing Insight Network Co ltd
Priority date: 2023-05-08
Filing date: 2023-05-08
Publication date: 2023-08-22
Application filed by Beijing Insight Network Co ltd
Priority to CN202310511521.2A
Publication of CN116627968A

Classifications

    • G06F 16/2272: Information retrieval; indexing structures; management thereof
    • G06F 16/24552: Information retrieval; query execution; database cache management
    • G06F 16/24578: Information retrieval; query processing with adaptation to user needs using ranking
    • G06F 16/25: Information retrieval; integrating or interfacing systems involving database management systems
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a concurrent request processing method, a concurrent request processing device, computer equipment and a medium. The method comprises the following steps: first, receiving first feedback data sent by a back end in response to a first request, wherein the first feedback data comprises a plurality of first dictionary codes corresponding to a form to be rendered; then, performing de-duplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes, and generating a plurality of second requests based on the plurality of target dictionary codes, so that repeated requests are effectively reduced, wherein a second request is used for requesting, from the back end, the real information corresponding to a target dictionary code; and finally, controlling the second requests to be sent to the back end in a preset manner, wherein the preset manner is that the concurrency number of the second requests is smaller than or equal to a preset number, so that concurrency of multiple requests is effectively controlled and the processing efficiency of concurrent requests is improved.

Description

Concurrent request processing method, processing device, computer equipment and medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a concurrent request processing method, a processing apparatus, a computer device, and a medium.
Background
A data dictionary is a maintenance and management form describing the data table structure used by a form, together with its field information and field attributes. When building a large editable form, the front end requests from the back end a large number of dictionary codes corresponding to the form, where a dictionary code is the code in the data dictionary that corresponds to a component of the form. After a large number of dictionary codes are obtained, the front end requests the real information corresponding to them, which produces front-end multi-request concurrency. When many front-end requests are concurrent, a large number of network requests are sent to the back end at the same time, and the back end processes them and returns the real information corresponding to the dictionary codes. In this case, if the number of concurrent network requests is too large, requests pile up, response times grow slow, and requests may even time out; the front-end page may also crash because its network requests receive no response. In addition, many of the concurrent network requests carry repeated dictionary codes, so the back end has to process repeated network requests, which further reduces request processing efficiency. The current method of reducing repeated back-end requests is generally to control the back end to accept an identical request only once within a certain period of time; however, this method conflicts with the business logic of the front end.
Thus, a method is needed for handling both multi-request concurrency and repeated-request concurrency.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a concurrent request processing method, processing apparatus, computer device, and medium, so as to solve the problem of low processing efficiency of concurrent requests caused by multi-request concurrency and repeated request concurrency.
According to a first aspect, an embodiment of the present invention provides a method for processing concurrent requests, where the method includes:
receiving first feedback data sent by a back end in response to a first request; the first feedback data comprise a plurality of first dictionary codes corresponding to the form to be rendered;
performing de-duplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes;
generating a plurality of second requests based on a plurality of the target dictionary codes; the second request is used for requesting real information corresponding to the target dictionary code from the back end;
and controlling the second requests to be sent to the back end in a preset manner, wherein the preset manner is that the concurrency number of the second requests is smaller than or equal to a preset number.
In some embodiments, the controlling the second request to be sent to the backend in a preset manner includes:
writing a plurality of the second requests into a task queue;
reading a preset number of the second requests from the task queue, and adding the preset number of the second requests into a concurrency pool;
after any one of the second requests in the concurrency pool is sent, removing the sent second request from the concurrency pool;
determining whether unread second requests exist in the task queue or not under the condition that the number of the second requests in the concurrency pool is smaller than a preset number;
and under the condition that the unread second requests exist in the task queue, reading a new second request from the task queue to join the concurrency pool so as to keep the number of the second requests in the concurrency pool to be a preset number.
In some embodiments, the reading a preset number of the second requests from the task queue and adding the preset number of the second requests to the concurrency pool includes:
reading one second request from the task queue, and adding the read second request into the concurrency pool;
determining whether the number of the second requests in the concurrency pool is smaller than or equal to the preset number;
returning to execute the step of reading one second request from the task queue and adding the read second request into the concurrency pool under the condition that the number of the second requests in the concurrency pool is smaller than the preset number;
and under the condition that the number of the second requests in the concurrency pool is equal to a preset number, suspending reading the second requests from the task queue.
In some embodiments, the performing deduplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes includes:
for each first dictionary code, determining an initial index and a current index corresponding to the first dictionary code based on a preset function;
and removing the first dictionary code of which the current index is inconsistent with the initial index, and reserving the first dictionary code of which the current index is consistent with the initial index as the target dictionary code.
In some embodiments, after controlling the second request to be sent to the backend in a preset manner, the method further includes:
receiving second feedback data returned by the back end in response to the second request; the second feedback data comprises real information corresponding to the target dictionary code;
and storing the second feedback data into a local cache.
In some embodiments, after storing the second feedback data in the local cache, the method further includes:
under the condition of preparing to render a new form, acquiring a second dictionary code corresponding to the new form;
inquiring whether the local cache stores real information corresponding to the second dictionary code or not;
under the condition that the real information corresponding to the second dictionary code is stored in the local cache, extracting the real information corresponding to the second dictionary code from the local cache for rendering the new form;
generating a corresponding third request based on the second dictionary code under the condition that the real information corresponding to the second dictionary code is not stored in the local cache; the third request is used for requesting real information corresponding to the second dictionary code from the back end;
and controlling the third request to be sent to the back end in the preset manner.
In some embodiments, after receiving the second feedback data returned by the backend in response to the second request, the method further includes:
and rendering the form to be rendered based on the second feedback data so as to update initial values of all components in the form to be rendered into the real information.
According to a second aspect, an embodiment of the present invention provides a concurrency request processing apparatus, including:
the receiving module is used for receiving first feedback data sent by the back end in response to the first request; the first feedback data comprise a plurality of first dictionary codes corresponding to the form to be rendered;
the de-duplication module is used for performing de-duplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes;
a request generation module for generating a plurality of second requests based on a plurality of the target dictionary codes; the second request is used for requesting real information corresponding to the target dictionary code from the back end;
the sending control module is used for controlling the second requests to be sent to the back end in a preset manner, and the preset manner is that the concurrency number of the second requests is smaller than or equal to a preset number.
According to a third aspect, an embodiment of the present invention provides a computer device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the concurrent request processing method as provided in the first aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor implements the steps of the concurrent request processing method as provided in the first aspect.
The technical scheme of the invention has the following advantages.
The invention provides a concurrent request processing method, a concurrent request processing device, computer equipment and a medium. The method comprises the following steps: first, receiving first feedback data sent by a back end in response to a first request, wherein the first feedback data comprises a plurality of first dictionary codes corresponding to a form to be rendered; then, performing de-duplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes, and generating a plurality of second requests based on the plurality of target dictionary codes, so that repeated requests are effectively reduced, wherein a second request is used for requesting, from the back end, the real information corresponding to a target dictionary code; and finally, controlling the second requests to be sent to the back end in a preset manner, wherein the preset manner is that the concurrency number of the second requests is smaller than or equal to a preset number, so that concurrency of multiple requests is effectively controlled and the processing efficiency of concurrent requests is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a concurrent request processing method according to an embodiment of the present invention.
Fig. 2 is a flowchart of a method for performing deduplication processing on a plurality of first dictionary codes according to an embodiment of the present invention.
Fig. 3 is a flowchart of a method for controlling a second request to be sent according to an embodiment of the present invention.
Fig. 4 is a flowchart of another concurrent request processing method according to an embodiment of the present invention.
Fig. 5 is a flowchart of another method for processing concurrent requests according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a concurrent request processing apparatus according to an embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
When building a large editable form, the front end requests from the back end a large number of dictionary codes corresponding to the form, where a dictionary code is the code in the data dictionary that corresponds to a component of the form. After a large number of dictionary codes are obtained, the front end requests the real information corresponding to them, which produces front-end multi-request concurrency.
Front-end multi-request concurrency causes several problems. In some cases, the front end sends a large number of network requests at once, some of which are repeated, so the front-end page enters a falsely frozen state because the concurrent requests receive no response: the page cannot be rendered or operated, giving the illusion that it has died. In addition, issuing very many concurrent requests causes interface timeouts: earlier requests respond quickly, but later requests are still pending, and if their waiting time exceeds the configured interface timeout, the interface times out and the page crashes outright. Furthermore, jitter in front-end operations, rapid operations, network communication issues, or slow back-end responses increase the probability that the back end processes a network request repeatedly.
In other cases, although query-type interfaces are almost always idempotent, idempotence is difficult to achieve for operations such as data insertion or multi-module data updates, especially under high concurrency. Scenarios such as foreground and background callbacks of batch dictionary queries, presentation of large foreground table data, slow business logic, or a slow network environment are all scenarios in which network requests are highly likely to be processed repeatedly.
All of the above cases occur in the scenario of creating a large editable form. The current method of reducing repeated back-end requests is generally to control the back end to accept an identical request only once within a certain period of time; however, this method conflicts with the business logic of the front end. A method for handling multi-request concurrency and repeated-request concurrency at the front end is therefore needed to improve the processing efficiency of concurrent requests.
Fig. 1 is a flowchart of a concurrent request processing method according to an embodiment of the present invention. In the embodiment of the invention, the concurrent request processing method is executed by the front end. The front end is the side on which the foreground program of a website runs, such as a user's client on a computer or mobile device; it interacts with the user by displaying the pages the user browses in a browser on the computer, mobile device, or the like. The back end is the side on which the background program runs, such as a server, and provides various service supports for the front end.
As shown in fig. 1, the concurrent request processing method includes: step S1-step S4.
Step S1, receiving first feedback data sent by a back end in response to a first request.
The first request is sent to the back end by the front end and is used for requesting a plurality of first dictionary codes corresponding to the form to be rendered.
The form to be rendered refers to a page that is currently displayed to the user at the front end and can be operated by the user. Rendering refers to the process of displaying the corresponding content on the form page according to a certain specification, based on pre-written code.
The first dictionary code refers to codes corresponding to all components in a form to be rendered in a pre-constructed data dictionary, and the data dictionary refers to a data table structure used by the form and a maintenance management form of field information and field attributes of the data table structure.
The first feedback data comprises a plurality of first dictionary codes corresponding to the form to be rendered.
In some embodiments, the first feedback data may be returned in the form of a list, including the correspondence between each component in the form to be rendered and the first dictionary code.
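For illustration only, a minimal TypeScript sketch of this list form is shown below; the field names (componentId, dictCode) and the sample values are assumptions and do not appear in the application.

```typescript
// Hypothetical shape of the first feedback data: a list mapping each component
// of the form to be rendered to its first dictionary code (field names assumed).
interface DictCodeEntry {
  componentId: string; // identifier of a component in the form to be rendered
  dictCode: string;    // the first dictionary code for that component
}

type FirstFeedbackData = DictCodeEntry[];

// Two components that happen to share the same first dictionary code.
const firstFeedback: FirstFeedbackData = [
  { componentId: "customer_1_gender", dictCode: "10" },
  { componentId: "customer_2_gender", dictCode: "10" },
];
```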
Because different components in the form to be rendered may need to request the same content (for example, the gender of each client), different components may correspond to the same first dictionary code. When each component requests the real information corresponding to its first dictionary code, the front end may therefore generate multiple network requests containing the same first dictionary code (repeated network requests). To avoid the front end generating duplicate network requests, the following steps are performed.
And S2, performing de-duplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes.
The deduplication processing refers to an operation of removing a repeated first dictionary code from a plurality of first dictionary codes. The target dictionary codes retain mutually different first dictionary codes.
And S3, generating a plurality of second requests based on the plurality of target dictionary codes.
The second request is a network request for requesting real information corresponding to the target dictionary code from the back end. The second request includes at least one of the target dictionary codes.
Dictionary codes and real information are stored in the back-end database as key-value pairs, so the real information represented by a dictionary code can be queried by the code; for example, dictionary code 10 corresponds to the real information "gender: female", and dictionary code 11 corresponds to "gender: male".
In this embodiment, a plurality of second requests are generated according to a plurality of target dictionary codes obtained after the deduplication processing, so that generation of repeated network requests can be avoided, and the number of network requests that need to be concurrent is reduced.
And S4, controlling the second requests to be sent to the back end in a preset manner.
The preset manner is that the concurrency number of the second requests is smaller than or equal to a preset number. The preset number is the maximum concurrency number of the second requests, and may be set according to the actual application situation, which is not specifically limited in this embodiment.
It should be noted that, even if the second requests are generated after the deduplication processing is performed on the plurality of first dictionary codes, the number of second requests may still be huge. An excessive number of second requests are sent out and may also cause congestion of the network, possibly with the previous second request still being processed and the subsequent second request having timed out. Therefore, in this embodiment, the second request is controlled to be sent to the back end in a preset manner, so that the concurrency number of the second request is ensured to be smaller than or equal to the preset number, the second request is sequentially and orderly sent in a controlled manner, the processing efficiency of the concurrency request is effectively improved, the pressure of the back end can be relieved, and the problems of response timeout caused by occupation of memory resources of the back end and occupation surge of resources of the central processor are prevented.
The embodiment of the invention provides a concurrent request processing method, which comprises the following steps: first, receiving first feedback data sent by a back end in response to a first request, wherein the first feedback data comprises a plurality of first dictionary codes corresponding to a form to be rendered; then, performing de-duplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes, and generating a plurality of second requests based on the plurality of target dictionary codes, so that repeated requests are effectively reduced, wherein a second request is used for requesting, from the back end, the real information corresponding to a target dictionary code; and finally, controlling the second requests to be sent to the back end in a preset manner, wherein the preset manner is that the concurrency number of the second requests is smaller than or equal to a preset number, so that concurrency of multiple requests is effectively controlled and the processing efficiency of concurrent requests is improved.
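As a non-authoritative sketch of steps S1 to S4, assuming the FirstFeedbackData shape above, a hypothetical /api/dict endpoint, and the helper functions dedupe and sendWithLimit sketched in the following sections:

```typescript
// Overall flow: take the dictionary codes from the first feedback data (S1),
// de-duplicate them (S2), build one lazy second request per target code (S3),
// and send them with the concurrency held at or below the preset number (S4).
async function handleFirstFeedback(firstFeedback: FirstFeedbackData): Promise<void> {
  const firstCodes = firstFeedback.map((entry) => entry.dictCode);           // S1
  const targetCodes = dedupe(firstCodes);                                    // S2
  const secondRequests = targetCodes.map(
    (code) => () => fetch(`/api/dict?code=${encodeURIComponent(code)}`),     // S3
  );
  await sendWithLimit(secondRequests, 5);                                    // S4: preset number assumed to be 5
}
```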
Fig. 2 is a flowchart of a method for performing deduplication processing on a plurality of first dictionary codes according to an embodiment of the present invention. As shown in fig. 2, the step of performing the de-duplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes (step S2 described above) includes: step S21-step S22.
Step S21, determining an initial index and a current index corresponding to each first dictionary code based on a preset function.
The preset function is a preset function for searching the initial index and the current index corresponding to a first dictionary code. The preset function may be set according to the actual application situation, for example, a find-index (findIndex) function, a filter (filter) function, and the like. The initial index is the index position found when the first dictionary code appears for the first time among the plurality of first dictionary codes, and the current index is the index position at which the first dictionary code is currently being examined.
Step S22, removing the first dictionary code of which the current index is inconsistent with the initial index, and reserving the first dictionary code of which the current index is consistent with the initial index as a target dictionary code.
Wherein, in the case that the current index of the first dictionary code is inconsistent with the initial index, it is explained that the first dictionary code does not appear for the first time, that is, the first dictionary code is a duplicate dictionary code, and therefore, the first dictionary code whose current index is inconsistent with the initial index is removed. In the case where the current index of the first dictionary code coincides with the initial index, it is explained that the first dictionary code is the first occurrence, and therefore, the first dictionary code whose current index coincides with the initial index is retained as the target dictionary code.
In one embodiment, the plurality of first dictionary codes are de-duplicated by the filter method. In this process, a callback function is passed as a parameter, and the condition for identifying the dictionary codes to be kept is written in the callback. The filter method calls the callback to judge whether each element in the array should be retained: if the current first dictionary code is its first occurrence, the callback returns true and the filter method keeps that first dictionary code among the target dictionary codes; if the current element is not a first occurrence, the callback returns false and the filter method filters it out. Finally, a new array containing all of the target dictionary codes is returned.
In another embodiment, the plurality of first dictionary codes may be deduplicated by a findIndex method. The deduplication process of the findIndex method is similar to the filter method described above, and will not be repeated here.
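A minimal sketch of this de-duplication, assuming plain string dictionary codes; it keeps a code only when its current index equals the index of its first occurrence, matching the filter/findIndex approach described above:

```typescript
// Keep a first dictionary code only if its current index equals its initial
// index, i.e. this is the first time the code appears in the array.
function dedupe(firstCodes: string[]): string[] {
  return firstCodes.filter(
    (code, currentIndex) => firstCodes.findIndex((c) => c === code) === currentIndex,
  );
}

// Example: dedupe(["10", "11", "10"]) returns ["10", "11"].
```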
The embodiment of the invention provides a method for carrying out de-duplication processing on a plurality of first dictionary codes, which can effectively avoid the front end from generating a plurality of network requests (repeated network requests) containing the same first dictionary code by carrying out the de-duplication processing on the first dictionary codes and realize the control of the repeated network requests from the source.
Fig. 3 is a flowchart of a method for controlling a second request to be sent according to an embodiment of the present invention. As shown in fig. 3, the step of controlling the second request to be sent to the back end in a preset manner (step S4 described above) includes: step S41-step S44.
Step S41, writing a plurality of second requests into the task queue.
Wherein the task queue is a read task queue for storing the second request. The front-end reader may read the second request from the task queue.
In this embodiment, writing the plurality of second requests into the task queue may assist in implementing ordered processing of the second requests.
Step S42, reading a preset number of second requests from the task queue, and adding the preset number of second requests into the concurrency pool.
Wherein the concurrency pool is used for controlling the concurrency amount of the second request.
In one embodiment, the preset number of second requests may be read at one time and added to the concurrency pool.
In another embodiment, reading a preset number of second requests from the task queue and adding the preset number of second requests to the concurrency pool includes: the following steps one to four.
Step one, reading a second request from the task queue, and adding the read second request into the concurrency pool.
Step two, determining whether the number of second requests in the concurrency pool is smaller than or equal to the preset number.
In some embodiments, there is a number determination mechanism in the concurrency pool, and when the maximum concurrency pool number (preset number) is not reached, then the second request may be added to the concurrency pool.
And step three, returning to execute the step of reading one second request from the task queue and adding the read second request into the concurrency pool (the step one) under the condition that the number of the second requests in the concurrency pool is smaller than the preset number.
And step four, under the condition that the number of the second requests in the concurrency pool is equal to the preset number, suspending reading the second requests from the task queue.
In some embodiments, adding the second request to the concurrency pool is stopped when the number of second requests in the concurrency pool reaches the maximum concurrency pool number (the preset number described above).
And step S43, after the second request in the concurrency pool is sent, removing the sent second request from the concurrency pool.
To prevent a second request whose sending has completed from continuing to occupy a position in the concurrency pool, the completed second request needs to be removed from the concurrency pool after any second request in the pool finishes sending.
In some embodiments, the concurrency pool may be blocked by the Promise.race method in ECMAScript 6.0 (a programming language standard); combined with the await keyword, reading of second requests is suspended once the preset number is reached. The mechanism of the Promise.race method can be understood as a race: when any one of the second requests produces a result, Promise.race also produces a result, which here corresponds to the moment a second request finishes sending.
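A minimal sketch of such a concurrency pool, assuming each second request is represented as a function returning a Promise; error handling is omitted, and the helper name sendWithLimit is an assumption rather than terminology from the application:

```typescript
// Hold at most `presetNumber` in-flight second requests. Promise.race plus
// await blocks further reading from the task queue until one pooled request
// settles; the settled request is then removed and a new one is read.
async function sendWithLimit<T>(
  taskQueue: Array<() => Promise<T>>,
  presetNumber: number,
): Promise<T[]> {
  const results: Promise<T>[] = [];
  const pool = new Set<Promise<T>>(); // the concurrency pool

  for (const task of taskQueue) {     // read second requests from the task queue in order
    const inFlight = task();          // send the second request
    results.push(inFlight);
    pool.add(inFlight);
    // remove the request from the pool once it settles
    inFlight.then(
      () => pool.delete(inFlight),
      () => pool.delete(inFlight),
    );
    if (pool.size >= presetNumber) {
      await Promise.race(pool);       // wait until any pooled request settles
    }
  }
  return Promise.all(results);        // wait for the remaining pooled requests
}
```

In use, `await sendWithLimit(secondRequests, 5)` would issue at most five second requests at a time, which mirrors the race behaviour described in the preceding paragraph.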
Step S44, determining whether unread second requests exist in the task queue or not under the condition that the number of the second requests in the concurrency pool is smaller than the preset number.
When the number of second requests in the concurrency pool is smaller than the preset number, it is indicated that a new second request which is not sent can be added in the concurrency pool at this time, so that it is required to determine whether there is an unread second request currently in the task queue.
In one embodiment, the number determination mechanism in the concurrency pool may be further configured to detect, in real time, the number of second requests in the concurrency pool, and determine whether there is an unread second request in the task queue if the number of second requests in the concurrency pool is less than a preset number.
In another embodiment, the step of determining whether the unread second request exists in the task queue may be triggered each time one second request is removed from the concurrency pool, that is, the step of removing the second request and the step of adding the second request may be automatically associated, without detecting the number of second requests in the concurrency pool in real time, so as to reduce system resource consumption.
Step S45, under the condition that the unread second requests exist in the task queue, reading a new second request from the task queue to join the concurrency pool so as to keep the number of the second requests in the concurrency pool to be a preset number.
Wherein in case there is an unread second request in the task queue, it is indicated that there is a second request that has not yet been sent, and therefore a new second request is read from the task queue to join the concurrency pool to send the second request.
In one embodiment, if there is no unread second request in the task queue, all second requests in the task queue have been read and only the second requests still in the concurrency pool remain to be sent, so sending of all second requests is finished once the second requests remaining in the concurrency pool complete.
According to the embodiment of the invention, by this method of controlling the sending of the second requests, the number of second requests in the concurrency pool is kept from exceeding the preset number, the second requests are sent in an orderly and measured way, the processing efficiency of concurrent requests is effectively improved, the pressure on the back end can be relieved, and response timeouts caused by back-end memory occupation and surges in CPU occupation are prevented.
Fig. 4 is a flowchart of another concurrent request processing method according to an embodiment of the present invention. As shown in fig. 4, after controlling the second request to be sent to the back end in a preset manner (step S4 above), the method further includes: step S5-step S6.
And S5, receiving second feedback data returned by the back end in response to the second request.
The second feedback data comprises real information corresponding to the target dictionary code.
In one embodiment, after receiving the second feedback data returned by the back-end in response to the second request, the front-end further includes: and rendering the form to be rendered based on the second feedback data so as to update the initial values of all components in the form to be rendered to corresponding real information.
The initial value is a value corresponding to the fact that the component in the form is not rendered, for example, the initial value can be a null value or a preset default value.
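For illustration, and assuming a hypothetical component model with a dictCode and a value field, updating initial values from the second feedback data might look like:

```typescript
// Replace each component's initial value (e.g. null) with the real information
// returned for its dictionary code in the second feedback data.
interface FormComponent {
  dictCode: string;
  value: string | null; // initial value before rendering
}

function renderForm(
  components: FormComponent[],
  realInfoByCode: Map<string, string>, // dictionary code to real information
): void {
  for (const component of components) {
    const realInfo = realInfoByCode.get(component.dictCode);
    if (realInfo !== undefined) {
      component.value = realInfo; // update the initial value to the real information
    }
  }
}
```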
In this embodiment, since the front end controls both multi-request concurrency and repeated-request concurrency, the back end's memory occupation and CPU occupation pressure are also reduced, so the response speed of the second requests can be effectively ensured and the speed of rendering the form to be rendered can be improved. In some practical tests, rendering can reach second-level speed.
And S6, storing the second feedback data into a local cache.
The local cache is a cache that can be directly read by the front end, and may be, for example, a cache space of a browser, a cache space of a client, or the like.
In the embodiment of the invention, after the second feedback data returned by the back end in response to the second request is received, the second feedback data can be stored in the local cache, so that it can be reused later when real information corresponding to dictionary codes is requested, which improves the speed of such requests.
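A minimal sketch of this caching step, assuming the browser localStorage API and an assumed key prefix "dict:" that is not specified in the application:

```typescript
// Store the real information from the second feedback data in the local cache,
// keyed by dictionary code, so later forms can reuse it without a new request.
function cacheSecondFeedback(realInfoByCode: Map<string, string>): void {
  for (const [code, realInfo] of realInfoByCode) {
    localStorage.setItem(`dict:${code}`, JSON.stringify(realInfo));
  }
}
```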
Fig. 5 is a flowchart of another method for processing concurrent requests according to an embodiment of the present invention. As shown in fig. 5, after storing the second feedback data in the local cache (step S6 above), the method further includes: step S7-step S11.
And S7, under the condition of preparing to render the new form, acquiring a second dictionary code corresponding to the new form.
The new form refers to a new page which is displayed to the user at the front end and can be operated by the user.
The second dictionary code refers to the corresponding code of each component in the new form in the pre-constructed data dictionary, and the data dictionary refers to the data table structure used by the form and the maintenance management form of the field information and the field attribute thereof.
In one embodiment, the second dictionary code may be subjected to a de-duplication process, and the detailed description of the de-duplication process may refer to the foregoing step of performing the de-duplication process on the first dictionary code, which is not described herein.
And S8, inquiring whether the local cache stores the real information corresponding to the second dictionary code.
The second feedback data is stored in the local cache, and the second feedback data includes real information corresponding to the dictionary code, so that before the network request is sent based on the second dictionary code, whether the real information corresponding to the second dictionary code is stored in the local cache can be queried.
In one embodiment, the data cached in the browser's local storage may be obtained by calling the local storage get-item instruction (localStorage.getItem), using the second dictionary code as the key to look up the locally cached second feedback data. If the real information corresponding to the second dictionary code exists in the local cache, the cache is hit, and the following step S9 is executed. If the real information corresponding to the second dictionary code does not exist in the local cache, the cache is missed, and the following step S10 is executed.
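Continuing the caching sketch above, the lookup for a new form's second dictionary code could be written as follows; on a miss the caller falls back to generating a third request (steps S10 and S11):

```typescript
// Query the local cache for a second dictionary code (step S8). A non-null
// return is a cache hit (step S9); null means a miss, so the caller builds a
// third request and sends it with the same concurrency limit (steps S10-S11).
function lookupCachedInfo(secondDictCode: string): string | null {
  const cached = localStorage.getItem(`dict:${secondDictCode}`);
  return cached === null ? null : (JSON.parse(cached) as string);
}

// Example (hypothetical): on a miss, queue a third request through sendWithLimit.
// if (lookupCachedInfo("10") === null) {
//   await sendWithLimit([() => fetch("/api/dict?code=10")], 5);
// }
```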
Step S9, under the condition that the real information corresponding to the second dictionary code is stored in the local cache, the real information corresponding to the second dictionary code is extracted from the local cache to be used for rendering the new form.
Step S10, when the real information corresponding to the second dictionary code is not stored in the local cache, a corresponding third request is generated based on the second dictionary code.
The third request is used for requesting real information corresponding to the second dictionary code from the back end.
Step S11, the third request is controlled to be sent to the back end in the preset manner.
The preset manner is that the concurrency number of the third requests is smaller than or equal to the preset number.
In this embodiment, the detailed description of the control of the sending of the third request to the back end in the preset manner may refer to the foregoing part of the embodiment of the present invention that sends the control of the second request to the back end in the preset manner, which is not described herein.
In still another concurrent request processing method provided by the embodiment of the present invention, when preparing to render a new form, a second dictionary code corresponding to the new form is acquired. Before a third request is generated based on the second dictionary code, whether real information corresponding to the second dictionary code is stored in the local cache is queried, and if it is stored there, the real information is extracted from the local cache for rendering the new form. A corresponding third request is generated based on the second dictionary code only when the real information corresponding to the second dictionary code is not stored in the local cache, which further reduces the number of concurrent requests and improves the processing efficiency of concurrent requests.
Fig. 6 is a schematic structural diagram of a concurrent request processing apparatus according to an embodiment of the present invention. As shown in fig. 6, the apparatus includes: a receiving module 61, a deduplication module 62, a request generation module 63, and a transmission control module 64.
The receiving module 61 is configured to receive first feedback data sent by the back end in response to the first request.
The first feedback data comprise a plurality of first dictionary codes corresponding to the form to be rendered.
The deduplication module 62 is configured to perform deduplication processing on the plurality of first dictionary codes, and obtain a plurality of target dictionary codes.
The request generating module 63 is configured to generate a plurality of second requests based on the plurality of target dictionary codes.
The second request is used for requesting real information corresponding to the target dictionary code from the back end.
The sending control module 64 is configured to control the second request to be sent to the back end in a preset manner.
The preset manner is that the concurrency number of the second requests is smaller than or equal to the preset number.
In one embodiment, the concurrent request processing apparatus further includes: the device comprises a storage module and a rendering module.
The receiving module 61 is further configured to receive second feedback data returned by the back end in response to the second request. The second feedback data comprises real information corresponding to the target dictionary code.
The storage module is used for storing the second feedback data into the local cache.
The local cache is a cache that can be directly read by the front end, and may be, for example, a cache space of a browser, a cache space of a client, or the like.
And the rendering module is used for rendering the form to be rendered based on the second feedback data so as to update the initial values of all components in the form to be rendered into corresponding real information.
In one embodiment, the concurrent request processing apparatus further includes: and a query module.
The receiving module 61 is further configured to obtain a second dictionary code corresponding to the new form when preparing to render the new form.
The query module is used for querying whether the real information corresponding to the second dictionary code is stored in the local cache, and extracting the real information corresponding to the second dictionary code from the local cache to be used for rendering the new form under the condition that the real information corresponding to the second dictionary code is stored in the local cache.
The request generating module 63 is further configured to generate, when the real information corresponding to the second dictionary code is not stored in the local cache, a corresponding third request based on the second dictionary code, where the third request is used to request, to the back end, the real information corresponding to the second dictionary code.
The transmission control module 64 is further configured to control the third request to be transmitted to the backend in a preset manner.
The embodiment of the invention provides a concurrent request processing device, wherein the receiving module is used for receiving first feedback data sent by a back end in response to a first request, and the first feedback data comprises a plurality of first dictionary codes corresponding to a form to be rendered; the de-duplication module is used for performing de-duplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes, and the request generation module is used for generating a plurality of second requests based on the plurality of target dictionary codes, so that repeated requests are effectively reduced, wherein a second request is used for requesting, from the back end, the real information corresponding to a target dictionary code; and the sending control module is used for controlling the second requests to be sent to the back end in a preset manner, wherein the preset manner is that the concurrency number of the second requests is smaller than or equal to a preset number, so that concurrency of multiple requests is effectively controlled and the processing efficiency of concurrent requests is improved.
Fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention. As shown in fig. 7, the computer device may include a processor 701 and a memory 702, where the processor 701 and the memory 702 may be connected by a bus or otherwise, as exemplified by a bus connection in fig. 7.
The processor 701 may be a central processing unit (Central Processing Unit, CPU). The processor 701 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination of such chips.
The memory 702 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the concurrent request processing method in the embodiment of the present invention. The processor 701 executes various functional applications of the processor and data processing by running non-transitory software programs, instructions, and modules stored in the memory 702, that is, implements the concurrent request processing method in the method embodiments described above.
Memory 702 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created by the processor 701, or the like. In addition, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 702 may optionally include memory located remotely from processor 701, such remote memory being connectable to processor 701 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in memory 702 that, when executed by processor 701, perform the concurrent request processing method in the embodiment shown in fig. 1.
The details of the above computer device may be understood correspondingly with respect to the corresponding relevant descriptions and effects in the embodiment shown in fig. 1, which are not repeated here.
It will be appreciated by those skilled in the art that all or part of the flow of the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer readable storage medium, and when the program is executed it may include the flow of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a flash memory (Flash Memory), a hard disk (Hard Disk Drive, HDD), or a solid state drive (Solid State Drive, SSD); the storage medium may also comprise a combination of memories of the kinds described above.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations are within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method for processing concurrent requests, the method comprising:
receiving first feedback data sent by a back end in response to a first request; the first feedback data comprise a plurality of first dictionary codes corresponding to the form to be rendered;
performing de-duplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes;
generating a plurality of second requests based on a plurality of the target dictionary codes; the second request is used for requesting real information corresponding to the target dictionary code from the back end;
and controlling the second requests to be sent to the back end in a preset manner, wherein the preset manner is that the concurrency number of the second requests is smaller than or equal to a preset number.
2. The method of claim 1, wherein the controlling the second request to be sent to the backend in a preset manner comprises:
writing a plurality of the second requests into a task queue;
reading a preset number of the second requests from the task queue, and adding the preset number of the second requests into a concurrency pool;
after any one of the second requests in the concurrency pool is sent, removing the sent second request from the concurrency pool;
determining whether unread second requests exist in the task queue or not under the condition that the number of the second requests in the concurrency pool is smaller than a preset number;
and under the condition that the unread second requests exist in the task queue, reading a new second request from the task queue to join the concurrency pool so as to keep the number of the second requests in the concurrency pool to be a preset number.
3. The method of claim 2, wherein the reading a preset number of the second requests from the task queue and adding a preset number of the second requests to a concurrency pool comprises:
reading one second request from the task queue, and adding the read second request into the concurrency pool;
determining whether the number of the second requests in the concurrency pool is smaller than or equal to the preset number;
returning to execute the step of reading one second request from the task queue and adding the read second request into the concurrency pool under the condition that the number of the second requests in the concurrency pool is smaller than the preset number;
and under the condition that the number of the second requests in the concurrency pool is equal to a preset number, suspending reading the second requests from the task queue.
4. The method of claim 1, wherein performing the de-duplication process on the plurality of first dictionary codes to obtain a plurality of target dictionary codes comprises:
for each first dictionary code, determining an initial index and a current index corresponding to the first dictionary code based on a preset function;
and removing the first dictionary code of which the current index is inconsistent with the initial index, and reserving the first dictionary code of which the current index is consistent with the initial index as the target dictionary code.
5. The method of claim 1, further comprising, after controlling the second request to be sent to the backend in a preset manner:
receiving second feedback data returned by the back end in response to the second request; the second feedback data comprises real information corresponding to the target dictionary code;
and storing the second feedback data into a local cache.
6. The method of claim 5, wherein after storing the second feedback data in the local cache, further comprising:
under the condition of preparing to render a new form, acquiring a second dictionary code corresponding to the new form;
inquiring whether the local cache stores real information corresponding to the second dictionary code or not;
under the condition that the real information corresponding to the second dictionary code is stored in the local cache, extracting the real information corresponding to the second dictionary code from the local cache for rendering the new form;
generating a corresponding third request based on the second dictionary code under the condition that the real information corresponding to the second dictionary code is not stored in the local cache; the third request is used for requesting real information corresponding to the second dictionary code from the back end;
and controlling the third request to be sent to the back end in the preset manner.
7. The method of claim 5, wherein after receiving the second feedback data returned by the backend in response to the second request, further comprising:
and rendering the form to be rendered based on the second feedback data so as to update initial values of all components in the form to be rendered into the real information.
8. A concurrent request processing apparatus, the apparatus comprising:
the receiving module is used for receiving first feedback data sent by the back end in response to the first request; the first feedback data comprise a plurality of first dictionary codes corresponding to the form to be rendered;
the de-duplication module is used for performing de-duplication processing on the plurality of first dictionary codes to obtain a plurality of target dictionary codes;
a request generation module for generating a plurality of second requests based on a plurality of the target dictionary codes; the second request is used for requesting real information corresponding to the target dictionary code from the back end;
the sending control module is used for controlling the second requests to be sent to the back end in a preset manner, and the preset manner is that the concurrency number of the second requests is smaller than or equal to a preset number.
9. A computer device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the concurrent request processing method according to any one of claims 1-7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the concurrent request processing method according to any one of claims 1-7.
CN202310511521.2A (filed 2023-05-08, priority 2023-05-08) Concurrent request processing method, processing device, computer equipment and medium - Pending - CN116627968A (en)

Priority Applications (1)

    • CN202310511521.2A (CN116627968A): Concurrent request processing method, processing device, computer equipment and medium

Applications Claiming Priority (1)

    • CN202310511521.2A (CN116627968A): Concurrent request processing method, processing device, computer equipment and medium

Publications (1)

    • CN116627968A, published 2023-08-22

Family

ID=87620502

Family Applications (1)

    • CN202310511521.2A (Pending, priority date 2023-05-08, filing date 2023-05-08): Concurrent request processing method, processing device, computer equipment and medium

Country Status (1)

    • CN: CN116627968A (en)

Legal Events

    • PB01: Publication
    • SE01: Entry into force of request for substantive examination