CN117453422A - Data processing method, device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN117453422A
Authority
CN
China
Prior art keywords
processing
data
asynchronous
requests
user request
Prior art date
Legal status
Granted
Application number
CN202311774471.3A
Other languages
Chinese (zh)
Other versions
CN117453422B (en)
Inventor
陈灏
毛宇
朱木须
邬建伟
Current Assignee
Shanghai Shouqianba Internet Technology Co ltd
Nanjing Yanli Technology Co ltd
Original Assignee
Shanghai Shouqianba Internet Technology Co ltd
Nanjing Yanli Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Shouqianba Internet Technology Co ltd and Nanjing Yanli Technology Co ltd
Priority claimed from CN202311774471.3A
Publication of CN117453422A
Application granted
Publication of CN117453422B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5018 Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a data processing method, a data processing device, an electronic device, and a computer readable storage medium. The method comprises the following steps: acquiring a user request, wherein the user request comprises a service processing request; processing the user request and/or the service processing request through an asynchronous sub-thread and/or an asynchronous queue; and obtaining a response result of the user request according to the processing result. The method and the device use asynchronous sub-threads and asynchronous queues to asynchronously process the user requests, service processing requests, and the like acquired by the data processing system. During asynchronous processing, once one service processing request or user request completes, other service processing requests or user requests can proceed, so parallel processing is achieved and the overall performance of the system is improved. Moreover, each task queue or asynchronous sub-thread can return its result directly after processing a service processing request or user request, unaffected by other task queues or asynchronous sub-threads, which improves the response speed of data processing.

Description

Data processing method, device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data processing method, a data processing device, an electronic device, and a computer readable storage medium.
Background
With the development of the micro-service architecture, its performance faces great challenges. At present, the performance of the micro-service architecture is improved mainly by adding capacity-expansion nodes. However, this capacity-expansion approach cannot effectively solve the service problem.
Disclosure of Invention
In view of the foregoing, an object of an embodiment of the present application is to provide a data processing method, apparatus, electronic device, and computer readable storage medium, which can effectively improve the overall performance of the system.
In a first aspect, an embodiment of the present application provides a data processing method, comprising: acquiring a user request, wherein the user request includes a service processing request; processing the user request and/or the service processing request through an asynchronous sub-thread and/or an asynchronous queue; and obtaining a response result of the user request according to the processing result.
In this implementation, asynchronous sub-threads and asynchronous queues are used to asynchronously process the user requests, service processing requests, and the like acquired by the data processing system. During asynchronous processing, once one service processing request or user request completes, other service processing requests or user requests can proceed, so parallel processing is achieved and the overall performance of the system is effectively improved. Moreover, each task queue or asynchronous sub-thread can return its result directly after processing a service processing request or user request, unaffected by other task queues or asynchronous sub-threads, which improves the response speed of the data processing system. Furthermore, the whole scheme requires no additional hardware resources, so the cost can be greatly reduced.
In one embodiment, processing the user request through an asynchronous sub-thread includes: forwarding one or more of the user requests to one or more asynchronous sub-threads, respectively, through a Servlet, wherein the Servlet returns control to the container after forwarding a user request to an asynchronous sub-thread; and processing the corresponding one or more user requests in parallel through the one or more asynchronous sub-threads.
In this implementation, one or more user requests are each handed off to one or more asynchronous sub-threads through the Servlet. Because a Servlet can share data among different programs, it can easily access server resources while processing requests and pass data on to other programs or components. In addition, the Servlet handles each request with a lightweight Java thread, which further improves processing efficiency under large numbers of concurrent requests.
In one embodiment, processing the service processing request through an asynchronous queue includes: submitting one or more user requests to one or more corresponding task queues respectively, wherein each user request includes one or more service processing requests and each task queue is an asynchronous queue; and processing the corresponding one or more service processing requests in parallel through the one or more task queues.
In this implementation, when one service processing request completes during the asynchronous processing of a task queue, other service processing requests can proceed, so parallel processing is achieved. Thus, by configuring one or more task queues to process the corresponding one or more service processing requests in parallel, the overall performance of the system is improved. In addition, a task queue that finishes a service processing request can return its result directly, without waiting for other task queues to finish their own service processing requests. For the same reason, configuring one or more task queues to process the corresponding service processing requests in parallel improves the response speed of the data processing system.
In one embodiment, each user request includes one or more service processing requests, each asynchronous sub-thread includes one or more task queues, and processing the user request and the service processing requests through the asynchronous sub-threads includes: forwarding one or more of the user requests to one or more asynchronous sub-threads, respectively, through a Servlet, wherein the Servlet returns control to the container after forwarding a user request to an asynchronous sub-thread; submitting the one or more service processing requests of the one or more user requests in each asynchronous sub-thread to the corresponding one or more task queues in that asynchronous sub-thread, wherein each task queue is an asynchronous queue; and processing the corresponding one or more service processing requests in parallel through the one or more task queues.
In this implementation, when there are multiple user requests, they are handed off to the corresponding multiple asynchronous sub-threads and processed asynchronously and in parallel. Within each asynchronous sub-thread, when the request contains multiple service processing requests, those requests can be submitted to the corresponding task queues in that sub-thread and processed in parallel through the multiple task queues. During asynchronous processing, once one service processing request or user request completes, the others can proceed, so parallel processing is achieved and the overall performance of the system is improved. Moreover, each task queue or asynchronous sub-thread can return its result directly after processing a service processing request or user request, unaffected by other task queues or asynchronous sub-threads, which improves the response speed of the data processing system.
In one embodiment, each task queue is provided with a corresponding processing period, and processing the corresponding one or more service processing requests in parallel through the one or more task queues includes: when a user request is submitted to its corresponding task queue, triggering and starting a timeout task for each task queue, wherein the timeout task is a timed thread that executes on a schedule; acquiring, through the timed thread, the response time of the corresponding task queue for processing the user request; judging whether the response time of each task queue exceeds its corresponding processing period; and, for any task queue whose response time exceeds its processing period, writing the current processing result of that task queue into the response and releasing the task queue's corresponding link.
In this implementation, a corresponding processing period is set for each task queue. When a task queue's response time exceeds its processing period, the queue's current processing result is written into the response and the queue's corresponding link is released early, which reduces the time links stay occupied and improves data processing efficiency.
In one embodiment, the user request includes one or more pieces of data, and after the user request is acquired, the method further includes: judging whether the data has a local cache; if not, judging whether the data has a Redis cache; if the data has a Redis cache, judging whether the data needs to be cached locally; and if so, caching the data locally.
In one embodiment, the method further includes: if the data has no Redis cache, querying a database to acquire the data resources corresponding to the data, and judging whether the data needs to be cached to Redis; if the data does not need to be cached to Redis, judging whether the data needs to be cached locally; and if so, caching the data locally.
In the above implementation, the closer data sits to the computing resource, the faster it can be obtained; but because system resources are limited, not all data can be kept in memory. By caching data locally and in Redis according to its usage frequency and how often it changes, the pressure on the local cache can be reduced, the volume of data accessing system resources at the same time can be controlled, system congestion occurs less often, and data processing efficiency is improved.
In one embodiment, there are multiple pieces of data, and the multiple pieces of data query their corresponding data resources in the database through a rate-limiting module.
In this implementation, the rate-limiting module limits the volume of data accessing the database, which prevents the database from being brought down by cache breakdown and improves the availability of the cache system.
In one embodiment, after the user request is acquired, the method further includes: splitting the different service processing requests in the user request across a plurality of asynchronous log queues; writing each service processing request into its corresponding asynchronous log queue and reading the log data of the corresponding task queue; and writing the log data into the corresponding file through the asynchronous log queue.
In this implementation, by setting up a plurality of asynchronous log queues, a log-distribution mechanism can split different service processing requests into different asynchronous log queues when many requests need to be processed, which eases troubleshooting and speeds up log writing. In addition, because the log queues are asynchronous, they do not affect one another during processing; when one log queue finishes writing, it can return its result directly without being affected by the others, which improves the response speed of the system.
In one embodiment, the user request is for generating an order number, and the method further includes: acquiring a pre-fetched current order number; judging whether the current order number belongs to the current batch; if so, judging whether the order numbers in the current batch are used up; if they are not used up, judging whether the order-number usage in the current batch exceeds a usage threshold; if the usage threshold is exceeded, acquiring order numbers within a set number range as the next batch; and returning the current order number.
In this implementation, order numbers are fetched a batch at a time, so the database does not need to be accessed each time an order number is generated, which reduces database accesses and improves order-number generation efficiency. In addition, by monitoring order-number usage, a new batch can be fetched in advance once the remaining order numbers in the current batch fall to a certain threshold. This reduces the risk of order-number confusion and makes efficient use of database resources.
In a second aspect, an embodiment of the present application further provides a data processing apparatus, including: the acquisition module is used for acquiring a user request, wherein the user request comprises a service processing request; the processing module is used for processing the user request and/or the service processing request through an asynchronous sub-thread and/or an asynchronous queue; and the response module is used for obtaining a response result of the user request according to the processing result.
In a third aspect, embodiments of the present application further provide an electronic device, including: a processor and a memory storing machine-readable instructions executable by the processor, wherein the instructions, when executed by the processor, perform the steps of the method in the first aspect or in any possible implementation of the first aspect.
In a fourth aspect, the present embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the data processing method of the first aspect, or any of the possible embodiments of the first aspect.
In order to make the above objects, features and advantages of the present application more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope, and that other related drawings may be obtained from these drawings without inventive effort by a person of ordinary skill in the art.
FIG. 1 is a flowchart of a data processing method according to an embodiment of the present application;
FIG. 2 is a flow chart of processing a user request through an asynchronous sub-thread provided in an embodiment of the present application;
FIG. 3 is a flow chart of processing a service processing request through an asynchronous queue according to an embodiment of the present application;
FIG. 4 is a flowchart of a specific implementation of a timeout task according to an embodiment of the present application;
FIG. 5 is a flow chart of data buffering provided in an embodiment of the present application;
FIG. 6 is a flowchart of order number generation provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of functional modules of a data processing apparatus according to an embodiment of the present application;
FIG. 8 is a block schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
To address needs such as network integration, system integration, and service integration, both within and between enterprises, enterprise system architectures are gradually shifting from the traditional monolithic architecture to the micro-service architecture. The main function of the micro-service architecture is to disperse the different service functions of an application into discrete services; each micro-service can be deployed independently and focuses only on its own service, which reduces the coupling of the system.
Through long-term research, the inventors of the present application found that the performance of the micro-service architecture is currently improved mainly by adding capacity-expansion nodes. However, when facing a large amount of concurrent traffic, expansion nodes merely shift the pressure to the database layer. Once the database reaches a critical point, continuing to add expansion nodes cannot effectively solve the service problem. Moreover, the capacity-expansion approach requires ever-increasing hardware resources and is costly.
In view of this, the inventors propose a data processing method that uses asynchronous sub-threads and asynchronous queues to asynchronously process the user requests, service processing requests, and the like acquired by the data processing system. During asynchronous processing, once one service processing request or user request completes, other service processing requests or user requests can proceed, so parallel processing is achieved and the overall performance of the system is effectively improved. Moreover, each task queue or asynchronous sub-thread can return its result directly after processing a service processing request or user request, unaffected by other task queues or asynchronous sub-threads, which improves the response speed of the data processing system. Furthermore, the whole scheme requires no additional hardware resources, so the cost can be greatly reduced.
Referring to fig. 1, a flowchart of a data processing method according to an embodiment of the present application is provided. The specific flow shown in fig. 1 will be described in detail.
Step 201, a user request is obtained.
The user request here includes a service processing request. For example, the service processing request may include an information acquisition request, a data transmission request, a data update request, a data deletion request, and the like. One or more user requests may be obtained at the same time, and each user request includes one or more service processing requests. In addition, the user request may include one or more pieces of data. The number and type of the user requests and of the service processing requests can be chosen according to the actual situation, and the application is not specifically limited.
The user request can be acquired in real time, at set time intervals, or when triggered by the user. The acquisition mode of the user request can be adjusted according to the actual situation, and the application is not specifically limited.
Step 202, processing user requests and/or processing business processing requests through an asynchronous sub-thread and/or an asynchronous queue.
It will be appreciated that the data processing method is generally used in a data processing system. As shown in FIG. 2, the data processing system includes at least one main thread and a plurality of sub-threads, and a plurality of sub-threads may be dispatched under each main thread. Each sub-thread under each main thread is an asynchronous sub-thread. Of course, all of the sub-threads in the data processing system may also be set as asynchronous sub-threads. The number and arrangement of the main threads and sub-threads in the data processing system can be adjusted according to the actual situation, and the application is not specifically limited.
It may be appreciated that when the main thread has multiple user requests to process, the user requests may be classified and each class distributed to a corresponding asynchronous sub-thread, so that the user requests are processed asynchronously by the asynchronous sub-threads. When one user request completes, the others can proceed, achieving parallel processing and improving the overall performance of the system. When an asynchronous sub-thread finishes a user request, it can return the result directly without waiting for the other asynchronous sub-threads to finish their tasks, which improves the response speed of the data processing system.
In one embodiment, each asynchronous sub-thread includes one or more task queues; the task queues are asynchronous queues, and each task queue can process one or more service processing requests. When the user request in an asynchronous sub-thread contains multiple service processing requests, those requests can be classified and each class distributed to a corresponding task queue, so that the service processing requests are processed asynchronously by the task queues. When one service processing request completes, the others can proceed, achieving parallel processing and improving the overall performance of the system. Likewise, a task queue that finishes a service processing request can return its result directly without waiting for the other task queues, which improves the response speed of the data processing system.
Each task queue here may set a corresponding processing period. When the response time of the task queue exceeds the processing period of the response, the corresponding link of the task queue can be released.
Step 203, obtaining a response result of the user request according to the processing result.
It should be understood that after the asynchronous sub-thread processes the user request and/or the asynchronous queue processes the service processing request, a corresponding processing result is returned, where the processing result is used to feed back a response result corresponding to the user request.
The response result may include the processing state of the user request or service processing request, such as response completed or response suspended, or it may include the result obtained after the user request is processed, such as returned data, deleted data, or updated data. The response result can be adjusted according to the actual situation, and the application is not specifically limited.
In this implementation, asynchronous sub-threads and asynchronous queues are used to asynchronously process the user requests, service processing requests, and the like acquired by the data processing system. During asynchronous processing, once one service processing request or user request completes, other service processing requests or user requests can proceed, so parallel processing is achieved and the overall performance of the system is effectively improved. Moreover, each task queue or asynchronous sub-thread can return its result directly after processing a service processing request or user request, unaffected by other task queues or asynchronous sub-threads, which improves the response speed of the data processing system. Furthermore, the whole scheme requires no additional hardware resources, so the cost can be greatly reduced.
In one possible implementation, processing a user request through an asynchronous sub-thread includes: forwarding one or more user requests to one or more asynchronous sub-threads, respectively, through a Servlet; and processing the corresponding one or more user requests in parallel through the one or more asynchronous sub-threads.
The Servlet returns control to the container after forwarding a user request to an asynchronous sub-thread.
It should be appreciated that, as shown in FIG. 2, after the main thread obtains the corresponding user requests, they may be handled by a Servlet, which forwards the one or more user requests to the one or more asynchronous sub-threads, respectively. After receiving its user request, each asynchronous sub-thread processes that request and outputs the corresponding response result.
A Servlet here is a Java applet running on the server side that handles client requests and returns dynamic content to the browser. Servlets can share data between different programs, which lets them easily access server resources while processing requests and pass data on to other programs or components. In addition, Servlets handle each request with a lightweight Java thread rather than a heavyweight operating-system process. This design makes Servlets more efficient when handling large numbers of concurrent requests.
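As an illustration of this mechanism, the following minimal sketch uses the Servlet asynchronous API (startAsync) to hand a user request to a pool of worker threads and return the container thread immediately. The class name, handler, pool size, and URL pattern are illustrative assumptions, not taken from the patent.

```java
import jakarta.servlet.AsyncContext;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: forward user requests to asynchronous sub-threads via a Servlet.
@WebServlet(urlPatterns = "/process", asyncSupported = true)
public class AsyncDispatchServlet extends HttpServlet {

    // Pool of asynchronous sub-threads; the size 8 is an assumed tuning parameter.
    private final ExecutorService subThreads = Executors.newFixedThreadPool(8);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // startAsync() detaches the exchange from the container thread, so the
        // Servlet can return control to the container right after forwarding.
        AsyncContext ctx = req.startAsync();
        subThreads.submit(() -> {
            try {
                String result = handleUserRequest(req); // placeholder business handler
                ctx.getResponse().getWriter().write(result);
            } catch (Exception e) {
                ((HttpServletResponse) ctx.getResponse())
                        .setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
            } finally {
                ctx.complete(); // hand the exchange back to the container
            }
        });
    }

    private String handleUserRequest(HttpServletRequest req) {
        return "ok"; // stands in for the real user-request processing
    }
}
```

With this pattern, completing one user request never blocks the container thread that accepted another, matching the parallelism described above.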
In one embodiment, before forwarding the one or more user requests to the one or more asynchronous sub-threads, respectively, by the Servlet, the method further comprises: multiple user requests are classified to forward multiple user requests of the same class to the same asynchronous sub-thread or corresponding multiple asynchronous sub-threads.
Illustratively, the user requests may be classified into class A, class B, and class C. If class A contains more user requests, it may be served by two asynchronous sub-threads: when forwarding the user requests to the corresponding asynchronous sub-threads, the user requests in class A may be forwarded to asynchronous sub-thread 1 and asynchronous sub-thread 2, the user requests in class B to asynchronous sub-thread 3, and the user requests in class C to asynchronous sub-thread 4.
In this implementation, one or more user requests are each handed off to one or more asynchronous sub-threads through the Servlet. Because a Servlet can share data among different programs, it can easily access server resources while processing requests and pass data on to other programs or components. In addition, the Servlet handles each request with a lightweight Java thread, which further improves processing efficiency under large numbers of concurrent requests.
In one possible implementation, processing a service processing request through an asynchronous queue includes: submitting one or more user requests to corresponding one or more task queues respectively; and processing corresponding one or more business processing requests in parallel through one or more task queues.
It should be understood that, as shown in fig. 3, when there are a plurality of acquired user requests, in order to improve the processing efficiency, the plurality of user requests may be respectively submitted to the corresponding one or more task queues. Each task queue may be used to process one or more business process requests of one or more user requests, and multiple task queues may process multiple business process requests in parallel at the same time.
In one embodiment, after one or more user requests are acquired, one or more service processing requests in the one or more user requests may be acquired, if the service processing requests are multiple, the multiple service processing requests may be classified, and the multiple service processing requests may be respectively submitted to the corresponding one or more task queues according to the classification result.
In this implementation, when one service processing request completes during the asynchronous processing of a task queue, other service processing requests can proceed, so parallel processing is achieved. Thus, by configuring one or more task queues to process the corresponding one or more service processing requests in parallel, the overall performance of the system is improved. In addition, a task queue that finishes a service processing request can return its result directly, without waiting for other task queues to finish their own service processing requests. For the same reason, configuring one or more task queues to process the corresponding service processing requests in parallel improves the response speed of the data processing system.
In one possible implementation, processing user requests through asynchronous sub-threads and processing service processing requests through asynchronous queues includes: forwarding one or more user requests to one or more asynchronous sub-threads, respectively, through a Servlet; submitting the one or more service processing requests of the one or more user requests in each asynchronous sub-thread to the corresponding one or more task queues in that asynchronous sub-thread; and processing the corresponding one or more service processing requests in parallel through the one or more task queues.
It should be appreciated that when both an asynchronous sub-thread and a task queue are provided in the data processing system, after a user request is obtained, a plurality of user requests may be submitted to corresponding asynchronous sub-threads, respectively. Further, the plurality of service processing requests in each asynchronous sub-thread may be respectively submitted to one or more task queues in the asynchronous sub-thread, so as to process the corresponding one or more service processing requests in parallel through the one or more task queues.
In this implementation, when there are multiple user requests, they are handed off to the corresponding multiple asynchronous sub-threads and processed asynchronously and in parallel. Within each asynchronous sub-thread, when the request contains multiple service processing requests, those requests can be submitted to the corresponding task queues in that sub-thread and processed in parallel through the multiple task queues. During asynchronous processing, once one service processing request or user request completes, the others can proceed, so parallel processing is achieved and the overall performance of the system is improved. Moreover, each task queue or asynchronous sub-thread can return its result directly after processing a service processing request or user request, unaffected by other task queues or asynchronous sub-threads, which improves the response speed of the data processing system.
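To make the nested structure concrete, here is a minimal sketch in which one asynchronous sub-thread splits the service processing requests of a user request across several task queues, modelled as single-threaded executors. The class name, queue count, and routing-by-index rule are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: task queues inside one asynchronous sub-thread.
public class TaskQueueDispatcher {

    // One single-threaded executor per task queue: requests within a queue stay
    // ordered, while different queues process their requests in parallel.
    private final List<ExecutorService> taskQueues = List.of(
            Executors.newSingleThreadExecutor(),
            Executors.newSingleThreadExecutor(),
            Executors.newSingleThreadExecutor());

    public List<String> process(List<String> serviceRequests) throws Exception {
        List<Future<String>> pending = new ArrayList<>();
        for (int i = 0; i < serviceRequests.size(); i++) {
            String request = serviceRequests.get(i);
            // Route by index for brevity; a real system would classify by request type.
            ExecutorService queue = taskQueues.get(i % taskQueues.size());
            pending.add(queue.submit(() -> handle(request)));
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : pending) {
            results.add(f.get()); // each queue returns its own result independently
        }
        return results;
    }

    private String handle(String request) {
        return "processed:" + request; // placeholder service processing
    }
}
```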
In one possible implementation, as shown in FIG. 4, processing the corresponding one or more service processing requests in parallel through one or more task queues includes: when a user request is submitted to its corresponding task queue, triggering and starting a timeout task for each task queue; acquiring, through a timed thread, the response time of the corresponding task queue for processing the user request; judging whether the response time of each task queue exceeds its corresponding processing period; and, for any task queue whose response time exceeds its processing period, writing the current processing result of that task queue into the response and releasing the task queue's corresponding link.
The timeout task here is a timed thread that executes on a schedule.
It should be appreciated that while a task queue is executing a service processing request, the link corresponding to that task is occupied and the task queue cannot receive other service processing requests. If processing takes too long, the task queue may encounter abnormal situations such as an interrupted, suspended, or failed processing procedure. These anomalies may keep the task queue's link occupied for a long time and prevent the queue from processing other service processing requests.
A timeout task is therefore set: when a user request is submitted to its corresponding task queue, the queue's timeout task is triggered and started, and the timeout task obtains the response time of the task queue for processing the user request. Once the response time is obtained, it is compared with the processing period corresponding to the task queue; if the response time exceeds the processing period, the task queue's current processing result for the service processing request is written into the response, and the task queue's corresponding link is released.
After the corresponding link of the task queue is released, the next user request can be submitted to the task queue continuously so as to process the next user request through the task queue.
In one embodiment, the method may further include determining whether the processing of the service processing request in each task queue is complete. If yes, writing the current processing result of the task queue with the response time not exceeding the corresponding processing period into the response, and releasing the corresponding link of the task queue.
Alternatively, the response time of the corresponding task queue for processing the user request may first be acquired (as shown in FIG. 4), and then whether the service processing requests in that task queue have finished processing is determined. When the response time is determined not to exceed the corresponding processing period, completion is checked only for the task queues whose response time does not exceed their processing period. The way of checking whether each task queue has finished its service processing requests can be adjusted according to the actual situation, and the application is not specifically limited.
It should be understood that if the response time of the task queue does not exceed the processing period corresponding to the task queue, the processing of the service processing request in the task queue is completed. The current processing result of the task queue to the service processing request can be written into the response, and the corresponding link of the task queue is released.
It will be appreciated that the current processing result of the task queue with a response time exceeding the corresponding processing period may not be the final result of the corresponding business processing request. The current processing result of the task queue for which the response time does not exceed the corresponding processing period is typically the final result of the corresponding business processing request. Of course, the current processing result may be adjusted according to practical situations, and the present application is not particularly limited.
In this implementation, a corresponding processing period is set for each task queue. When a task queue's response time exceeds its processing period, the queue's current processing result is written into the response and the queue's corresponding link is released early, which reduces the time links stay occupied and improves data processing efficiency.
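A minimal sketch of such a timeout guard follows, assuming the processing period is expressed in milliseconds and that the current (possibly partial) processing result can be captured as a string; the names are illustrative.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch: bound a task queue's response time by its processing period.
public class TimeoutGuard {

    private final ExecutorService taskQueue = Executors.newSingleThreadExecutor();

    public String processWithPeriod(Callable<String> serviceRequest,
                                    long processingPeriodMillis,
                                    String currentPartialResult) {
        Future<String> future = taskQueue.submit(serviceRequest);
        try {
            // Normal case: the queue finishes within its processing period.
            return future.get(processingPeriodMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException timedOut) {
            future.cancel(true);         // release the link held by this request
            return currentPartialResult; // write the current result into the response
        } catch (Exception e) {
            return "error";              // simplified error handling for the sketch
        }
    }
}
```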
In one possible implementation, after step 201, the method further includes: judging whether the data has local cache; if the data does not have the local cache, judging whether the data has the Redis cache; if the data has the Redis cache, judging whether the data needs to be cached to the local; if the data needs to be cached locally, the data is cached locally.
The locally cached data may be frequently used data that rarely changes. The data cached in Redis may be moderately frequently used data. That is, data in the local cache can be given a higher usage-frequency priority than data in the Redis cache.
Caching data locally, as described above, may be implemented with technologies such as Guava, Ehcache, and Caffeine.
It should be understood that each user request may include multiple pieces of data, and when a large amount of data from multiple user requests accesses system resources at the same time, congestion may arise among the data and affect the processing efficiency of the system. By setting up a local cache and a Redis cache, a large amount of data can be served from cache first, which reduces direct access to system resources and avoids congestion.
In one possible implementation, the method further includes: if the data does not have the Redis cache, inquiring a database, acquiring data resources corresponding to the data, and judging whether the data need to be cached to the Redis; if the data does not need to be cached to the Redis, judging whether the data needs to be cached to the local; if the data needs to be cached locally, the data is cached locally.
Specifically, as shown in FIG. 5, when data in a user request needs to access a system resource, whether the data has a local cache may be determined first; if it does, the locally cached data is obtained.
If the data has no local cache, judging whether the data has a Redis cache, if so, acquiring the Redis cache data, and further judging whether the data needs to be cached to the local; if the data needs to be cached locally, the data is cached locally.
If the data is not cached by the Redis, querying a database, acquiring a data resource corresponding to the data, and judging whether the data need to be cached to the Redis;
if the data does not need to be cached to the Redis, further judging whether the data needs to be cached to the local; if the data needs to be cached locally, the data is cached locally. If the data does not need to be cached locally, the data is returned.
If the data needs to be cached to the Redis, caching the data to the Redis, and further judging whether the data needs to be cached to the local; if the data needs to be cached locally, the data is cached locally. If the data does not need to be cached locally, the data is returned.
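The lookup order just described can be sketched as follows, assuming Caffeine for the local cache and the Jedis client for Redis (either could be swapped for Guava, Ehcache, or another Redis client); the key names, capacity, and TTLs are illustrative.

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import redis.clients.jedis.Jedis;
import java.time.Duration;

// Hypothetical sketch: local cache -> Redis cache -> database, in that order.
public class TwoLevelCache {

    private final Cache<String, String> local = Caffeine.newBuilder()
            .maximumSize(10_000)                       // assumed capacity
            .expireAfterWrite(Duration.ofMinutes(5))   // assumed TTL
            .build();
    private final Jedis redis = new Jedis("localhost", 6379);

    public String get(String key, boolean cacheLocally, boolean cacheInRedis) {
        String value = local.getIfPresent(key);   // 1) local cache
        if (value != null) return value;

        value = redis.get(key);                   // 2) Redis cache
        if (value == null) {
            value = queryDatabase(key);           // 3) database fallback
            if (cacheInRedis) redis.setex(key, 300, value);
        }
        if (cacheLocally) local.put(key, value);  // keep hot data near the compute
        return value;
    }

    private String queryDatabase(String key) {
        return "db:" + key; // stands in for the real database query
    }
}
```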
In the above implementation, the closer data sits to the computing resource, the faster it can be obtained; but because system resources are limited, not all data can be kept in memory. By caching data locally and in Redis according to its usage frequency and how often it changes, the pressure on the local cache can be reduced, the volume of data accessing system resources at the same time can be controlled, system congestion occurs less often, and data processing efficiency is improved.
In one possible implementation, the multiple pieces of data query their corresponding data resources in the database through a rate-limiting module.
The rate-limiting module sits in front of the database, and data may access the database only with the rate-limiting module's permission.
It will be appreciated that if many pieces of data access the database simultaneously, cache breakdown may occur and bring the database down, so the amount of data accessing the database needs to be limited. By providing the rate-limiting module, the volume of data accessing the database can be limited, database crashes caused by cache breakdown are avoided, and the availability of the cache system is improved.
In this implementation, the rate-limiting module limits the volume of data accessing the database, which prevents the database from being brought down by cache breakdown and improves the availability of the cache system.
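A rate-limiting gate of this kind can be as simple as a counting semaphore in front of the database; the permit count below is an assumed tuning parameter.

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: cap the number of queries touching the database at once.
public class DatabaseGate {

    private final Semaphore permits = new Semaphore(32); // assumed concurrency cap

    public String query(String key) throws InterruptedException {
        permits.acquire(); // block until the database can accept another query
        try {
            return runQuery(key);
        } finally {
            permits.release(); // always hand the permit back
        }
    }

    private String runQuery(String key) {
        return "db:" + key; // stands in for the real database access
    }
}
```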
In one possible implementation, after step 201, the method further includes: splitting the different service processing requests in the user request across a plurality of asynchronous log queues; writing each service processing request into its corresponding asynchronous log queue and reading the log data of the corresponding task queue; and writing the log data into the corresponding file through the asynchronous log queue.
It can be understood that logs can be used to monitor the current running state of the system, as well as to trace back historical operations and troubleshoot online problems.
It should be appreciated that by writing a service processing request into an asynchronous log queue, the queue can read the log data of the task queue corresponding to that service processing request and write the log data into the corresponding file. This serves to monitor the task queue and to store the data produced while the task queue processes the service processing request, for later tracing of historical operations.
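One asynchronous log queue can be sketched as a blocking queue drained by a dedicated writer thread, so business threads never block on disk I/O; one such pair would exist per service-request category, and the names are illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: one asynchronous log queue backed by a writer thread.
public class AsyncLogQueue {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Path logFile;

    public AsyncLogQueue(Path logFile) {
        this.logFile = logFile;
        Thread writer = new Thread(this::drainLoop, "log-writer-" + logFile.getFileName());
        writer.setDaemon(true);
        writer.start();
    }

    public void log(String line) {
        queue.offer(line); // returns immediately; the writer thread does the I/O
    }

    private void drainLoop() {
        try {
            while (true) {
                String line = queue.take();
                Files.writeString(logFile, line + System.lineSeparator(),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // stop draining on shutdown
        } catch (IOException e) {
            // A real system would report the failure; the sketch simply stops.
        }
    }
}
```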
In this implementation, by setting up a plurality of asynchronous log queues, a log-distribution mechanism can split different service processing requests into different asynchronous log queues when many requests need to be processed, which eases troubleshooting and speeds up log writing. In addition, because the log queues are asynchronous, they do not affect one another during processing; when one log queue finishes writing, it can return its result directly without being affected by the others, which improves the response speed of the system.
In one possible implementation, the user request is for generating an order number, and the method further includes: acquiring a pre-fetched current order number; judging whether the current order number belongs to the current batch; if so, judging whether the order numbers in the current batch are used up; if they are not used up, judging whether the order-number usage in the current batch exceeds a usage threshold; if the usage threshold is exceeded, acquiring order numbers within a set number range as the next batch; and returning the current order number.
In one embodiment, the system may pre-fetch a batch of order numbers within a set number range each time it starts or whenever the order numbers are exhausted. For example, 1000 order numbers may be pre-fetched each time, or 10000, and so on. When an order number is generated later, a pre-fetched order number can be used directly without accessing the database each time.
The pre-fetched current order number may be determined from the previous order number and a set rule. Illustratively, the pre-fetched order number is the next order number after the one generated by the previous order-number generation request. For example, if the previous request generated order number 0002, the pre-fetched order number may be 0003; if it generated 1115, the pre-fetched order number may be 1116.
The usage threshold is the minimum number of order numbers remaining in the current batch at which the next batch is fetched. For example, the usage threshold may be one third or one fifth of the number of order numbers in the current batch, or it may be a fixed value such as 5, 10, or 20. The usage threshold can be chosen according to the actual situation, and the application is not specifically limited.
Specifically, as shown in FIG. 6, when a user request for generating an order number is obtained, the current order number may first be determined from the pre-fetched numbers, and whether it belongs to the current batch is judged. If the current order number does not belong to the current batch, order numbers within the set number range of the next batch are fetched and the latest batch information is generated. The order number corresponding to the user request is then determined from the latest batch and marked as used. Further, whether the order-number usage in the latest batch exceeds the usage threshold may be judged; if it does not, the order number can be returned directly. If the usage threshold is exceeded, the next batch of order numbers is fetched, the order number corresponding to the user request is determined from that batch, and the current order number is returned at the same time.
If the current order number belongs to the current batch, whether the order numbers in the current batch are used up is judged. If they are not used up, the order number corresponding to the user request is determined from the current batch and marked as used. Further, whether the order-number usage in the current batch exceeds the usage threshold is judged; if it does, order numbers within the set number range are fetched as the next batch, the order number corresponding to the user request is determined from the next batch, and the current order number is returned at the same time.
If the order numbers in the current batch are used up, the order number corresponding to the user request is determined from the latest batch and marked as used. Further, whether the order-number usage in the latest batch exceeds the usage threshold may be judged; if it does not, the order number can be returned directly. If the usage threshold is exceeded, the next batch of order numbers is fetched, the order number corresponding to the user request is determined from that batch, and the current order number is returned at the same time.
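The batch-allocation logic can be sketched as follows; the batch size of 1000, the 80% prefetch threshold, and the in-memory stand-in for the database sequence are all illustrative assumptions.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: hand out order numbers from a pre-fetched batch and
// flag the next batch for prefetch once usage passes a threshold.
public class OrderNumberAllocator {

    private static final int BATCH_SIZE = 1000;
    private static final int PREFETCH_THRESHOLD = 800; // ~80% of the batch used

    // Stand-in for a database sequence that reserves whole number ranges.
    private static final AtomicLong dbSequence = new AtomicLong(0);

    private long batchStart;           // first number of the current batch
    private long next;                 // next number to hand out
    private boolean prefetchTriggered;

    public OrderNumberAllocator() {
        batchStart = fetchBatchFromDatabase();
        next = batchStart;
    }

    public synchronized long nextOrderNumber() {
        if (next >= batchStart + BATCH_SIZE) {     // current batch used up
            batchStart = fetchBatchFromDatabase(); // one DB trip per 1000 numbers
            next = batchStart;
            prefetchTriggered = false;
        }
        long number = next++;
        if (!prefetchTriggered && number - batchStart >= PREFETCH_THRESHOLD) {
            prefetchTriggered = true; // a real system would prefetch the next batch asynchronously here
        }
        return number;
    }

    private long fetchBatchFromDatabase() {
        // Placeholder: atomically reserve [start, start + BATCH_SIZE) in the database.
        return dbSequence.getAndAdd(BATCH_SIZE);
    }
}
```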
In this implementation, order numbers are fetched a batch at a time, so the database does not need to be accessed each time an order number is generated, which reduces database accesses and improves order-number generation efficiency. In addition, by monitoring order-number usage, a new batch can be fetched in advance once the remaining order numbers in the current batch fall to a certain threshold. This reduces the risk of order-number confusion and makes efficient use of database resources.
Based on the same inventive concept, an embodiment of the present application further provides a data processing apparatus corresponding to the data processing method. Since the principle by which the apparatus solves the problem is similar to that of the foregoing method embodiments, the implementation of the apparatus may refer to the description of the method embodiments, and repeated descriptions are omitted.
Fig. 7 is a schematic functional block diagram of a data processing apparatus according to an embodiment of the present application. The respective modules in the data processing apparatus in the present embodiment are configured to execute the respective steps in the above-described method embodiments. The data processing device comprises an acquisition module 301, a processing module 302 and a response module 303; wherein,
the acquiring module 301 is configured to acquire a user request, where the user request includes a service processing request.
The processing module 302 is configured to process the user request and/or the service processing request through an asynchronous sub-thread and/or an asynchronous queue.
The response module 303 is configured to obtain a response result of the user request according to the processing result.
In a possible implementation manner, the processing module 302 is specifically configured to: forward one or more of the user requests to one or more asynchronous sub-threads, respectively, through a Servlet, wherein the Servlet returns control to the container after forwarding a user request to an asynchronous sub-thread; and process the corresponding one or more user requests in parallel through the one or more asynchronous sub-threads.
In a possible implementation manner, the processing module 302 is specifically configured to: submit one or more user requests to one or more corresponding task queues respectively, wherein each user request includes one or more service processing requests and each task queue is an asynchronous queue; and process the corresponding one or more service processing requests in parallel through the one or more task queues.
In a possible implementation manner, the processing module 302 is specifically configured to: forward one or more of the user requests to one or more asynchronous sub-threads, respectively, through a Servlet, wherein the Servlet returns control to the container after forwarding a user request to an asynchronous sub-thread; submit the one or more service processing requests of the one or more user requests in each asynchronous sub-thread to the corresponding one or more task queues in that asynchronous sub-thread, wherein each task queue is an asynchronous queue; and process the corresponding one or more service processing requests in parallel through the one or more task queues.
In a possible implementation manner, the processing module 302 is specifically configured to: when a user request is submitted to its corresponding task queue, trigger and start a timeout task for each task queue, wherein the timeout task is a timed thread that executes on a schedule; acquire, through the timed thread, the response time of the corresponding task queue for processing the user request; judge whether the response time of each task queue exceeds its corresponding processing period; and, for any task queue whose response time exceeds its processing period, write the current processing result of that task queue into the response and release the task queue's corresponding link.
In a possible implementation manner, the data processing device further comprises a buffer module, configured to determine whether the data has a local buffer; if the data is not cached locally, judging whether the data is cached by Redis; if the data has the Redis cache, judging whether the data needs to be cached to the local; and if the data needs to be cached locally, caching the data locally.
In a possible implementation manner, the data processing device further includes a buffer module, and is further configured to query a database if the data is not subjected to Redis buffering, obtain a data resource corresponding to the data, and determine whether the data needs to be buffered to the Redis; if the data does not need to be cached to the Redis, judging whether the data needs to be cached to the local; and if the data needs to be cached locally, caching the data locally.
In a possible implementation manner, the data processing apparatus further comprises a splitting module, configured to split different service processing requests in the user request into a plurality of asynchronous log queues; writing the corresponding business processing requests into a plurality of asynchronous log queues respectively, and reading log data in the corresponding task queues; and writing the log data into a corresponding file through the asynchronous log queue.
In a possible implementation, the processing module 302 is further configured to: acquire a pre-acquired current single number; judge whether the current single number is a single number in the current batch; if the current single number is a single number in the current batch, judge whether the single numbers in the current batch are used up; if the single numbers in the current batch are not used up, judge whether the single-number usage in the current batch exceeds a usage threshold; if the usage threshold is exceeded, obtain single numbers within a set number range as the single numbers of the next batch; and return the current single number.
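A minimal sketch of this batch single-number flow; the batch size and usage threshold are illustrative assumptions, and fetching the next batch's range is left as a comment.

// Hypothetical sketch: single numbers are handed out from a pre-fetched
// batch; once usage in the batch crosses the threshold, the next batch's
// number range is obtained in advance so callers never wait for allocation.
public class BatchNumberAllocator {
    private static final long BATCH_SIZE = 1000;          // assumed range size
    private static final long PREFETCH_THRESHOLD = 800;   // assumed usage threshold
    private long batchStart = 0;       // first single number of the current batch
    private long next = 0;             // next single number to return
    private boolean nextBatchFetched = false;

    public synchronized long currentSingleNumber() {
        long n = next++;
        if (n >= batchStart + BATCH_SIZE) {
            // the current batch is used up: switch to the pre-fetched batch
            batchStart += BATCH_SIZE;
            nextBatchFetched = false;
        } else if (n - batchStart >= PREFETCH_THRESHOLD && !nextBatchFetched) {
            // usage threshold exceeded: obtain the next batch's number range
            // here (e.g. from a database sequence) before the batch runs out
            nextBatchFetched = true;
        }
        return n;   // return the current single number
    }
}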
To facilitate understanding of the present embodiment, an electronic device that performs the data processing method disclosed in the embodiments of the present application is described in detail below.
As shown in fig. 8, a block schematic diagram of an electronic device is provided. The electronic device 100 may include a memory 111 and a processor 113. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 8 is merely illustrative and is not intended to limit the configuration of the electronic device 100. For example, the electronic device 100 may also include more or fewer components than shown in fig. 8, or have a different configuration than shown in fig. 8.
The memory 111 and the processor 113 are directly or indirectly electrically connected to each other to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The processor 113 is used to execute executable modules stored in the memory.
The memory 111 may be, but is not limited to, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), etc. The memory 111 is configured to store a program, and the processor 113 executes the program after receiving an execution instruction. The method disclosed in any embodiment of the present application may be applied to the processor 113 or implemented by the processor 113.
The processor 113 may be an integrated circuit chip having signal processing capabilities. The processor 113 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP for short), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field Programmable Gate Array, FPGA for short) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The electronic device 100 in the present embodiment may be used to perform each step in each method provided in the embodiments of the present application.
Furthermore, the embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the steps of the data processing method described in the above method embodiments.
The computer program product of the data processing method provided in the embodiments of the present application includes a computer-readable storage medium storing program code, where the instructions included in the program code may be used to execute the steps of the data processing method described in the method embodiments; for details, reference may be made to the method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code. It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing description covers only the preferred embodiments of the present application and is not intended to limit it; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in the protection scope of the present application. It should be noted that like reference numerals and letters denote similar items in the figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
The foregoing is merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A method of data processing, comprising:
acquiring a user request, wherein the user request comprises a service processing request;
processing the user request and/or the service processing request through an asynchronous sub-thread and/or an asynchronous queue;
and obtaining a response result of the user request according to the processing result.
2. The method of claim 1, wherein processing the user request by an asynchronous sub-thread comprises:
forwarding one or more of the user requests to one or more asynchronous sub-threads, respectively, through a Servlet, wherein the Servlet feeds back to the container after forwarding the user request to the asynchronous sub-thread;
and processing corresponding one or more user requests in parallel through one or more asynchronous sub-threads.
3. The method of claim 1, wherein processing the service processing request through an asynchronous queue comprises:
submitting one or more user requests to one or more corresponding task queues respectively, wherein each user request comprises one or more service processing requests, and the task queues are asynchronous queues;
and processing corresponding one or more business processing requests in parallel through one or more task queues.
4. The method of claim 1, wherein each of the user requests includes one or more business process requests, each of the asynchronous sub-threads includes one or more task queues, and wherein processing the user requests and the business process requests through the asynchronous sub-threads and the asynchronous queues includes:
forwarding one or more of the user requests to one or more asynchronous sub-threads, respectively, through a Servlet, wherein the Servlet feeds back to the container after forwarding the user request to the asynchronous sub-thread;
submitting one or more service processing requests in one or more user requests in each asynchronous sub-thread to one or more corresponding task queues in the asynchronous sub-thread, wherein the task queues are asynchronous queues;
and processing corresponding one or more business processing requests in parallel through one or more task queues.
5. A method according to claim 3 or 4, wherein each of said task queues is provided with a corresponding processing period, and said processing the corresponding one or more business processing requests in parallel through one or more of said task queues comprises:
when submitting the user request to the corresponding task queue, respectively triggering and starting a timeout task of each task queue, wherein the timeout task is a timing thread executed on a schedule;
acquiring response time of the corresponding task queue for processing the user request through the timing thread;
judging whether the response time of each task queue exceeds the corresponding processing period;
and writing the current processing result of the task queue with the response time exceeding the corresponding processing period into a response, and releasing the corresponding link of the task queue.
6. The method according to claim 3 or 4, wherein the user request includes one or more data, and wherein, after obtaining the user request, the method further comprises:
judging whether the data has a local cache or not;
if the data is not cached locally, judging whether the data is cached by Redis;
if the data has the Redis cache, judging whether the data needs to be cached to the local;
and if the data needs to be cached locally, caching the data locally.
7. The method of claim 6, wherein the method further comprises:
if the data is not cached by Redis, querying a database, acquiring the data resource corresponding to the data, and judging whether the data needs to be cached to Redis;
if the data does not need to be cached to the Redis, judging whether the data needs to be cached to the local;
and if the data needs to be cached locally, caching the data locally.
8. The method of claim 7, wherein the data is a plurality of data, and wherein the plurality of data query the database for the corresponding data resources through a current-limiting module.
9. The method according to claim 3 or 4, wherein, after obtaining the user request, the method further comprises:
splitting different business processing requests in the user request into a plurality of asynchronous log queues;
writing the corresponding business processing requests into a plurality of asynchronous log queues respectively, and reading log data in the corresponding task queues;
and writing the log data into a corresponding file through the asynchronous log queue.
10. The method of claim 3 or 4, wherein the user request is to generate a single number, the method further comprising:
acquiring a pre-acquired current single number;
judging whether the current single number is a single number in the current batch or not;
if the current single number is a single number in the current batch, judging whether the single numbers in the current batch are used up;
if the single numbers in the current batch are not used up, judging whether the single-number usage in the current batch exceeds a usage threshold;
if the usage threshold is exceeded, obtaining single numbers within a set number range as the single numbers of the next batch;
and returning the current single number.
11. A data processing apparatus, comprising:
the acquisition module is used for acquiring a user request, wherein the user request comprises a service processing request;
the processing module is used for processing the user request and/or the service processing request through an asynchronous sub-thread and/or an asynchronous queue;
and the response module is used for obtaining a response result of the user request according to the processing result.
12. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, wherein the machine-readable instructions, when executed by the processor while the electronic device runs, perform the steps of the method of any one of claims 1 to 10.
13. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any of claims 1 to 10.
CN202311774471.3A 2023-12-22 2023-12-22 Data processing method, device, electronic equipment and computer readable storage medium Active CN117453422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311774471.3A CN117453422B (en) 2023-12-22 2023-12-22 Data processing method, device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311774471.3A CN117453422B (en) 2023-12-22 2023-12-22 Data processing method, device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN117453422A true CN117453422A (en) 2024-01-26
CN117453422B CN117453422B (en) 2024-03-01

Family

ID=89591464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311774471.3A Active CN117453422B (en) 2023-12-22 2023-12-22 Data processing method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117453422B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1589442A (en) * 2001-10-05 2005-03-02 BEA Systems, Inc. System for application server messaging with multiple dispatch pools
CN109672627A (en) * 2018-09-26 2019-04-23 深圳壹账通智能科技有限公司 Method for processing business, platform, equipment and storage medium based on cluster server
CN110069353A (en) * 2019-03-18 2019-07-30 中科恒运股份有限公司 Business asynchronous processing method and device
CN111104235A (en) * 2019-12-06 2020-05-05 江苏苏宁物流有限公司 Queue-based asynchronous processing method and device for service requests
CN111694681A (en) * 2020-06-12 2020-09-22 中国银行股份有限公司 Batch service processing method and device, electronic equipment and computer storage medium
CN111782996A (en) * 2020-05-29 2020-10-16 厦门市美亚柏科信息股份有限公司 Asynchronous request processing method and device
CN112995261A (en) * 2019-12-17 2021-06-18 中兴通讯股份有限公司 Configuration method and device of service table, network equipment and storage medium
CN113419824A (en) * 2021-01-25 2021-09-21 阿里巴巴集团控股有限公司 Data processing method, device, system and computer storage medium
CN116095005A (en) * 2023-01-30 2023-05-09 中国工商银行股份有限公司 Traffic management method, apparatus, device, medium, and program product
CN116595099A (en) * 2023-05-22 2023-08-15 北京言子初科技有限公司 Asynchronous processing method and device for high concurrency data

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1589442A (en) * 2001-10-05 2005-03-02 BEA Systems, Inc. System for application server messaging with multiple dispatch pools
CN109672627A (en) * 2018-09-26 2019-04-23 深圳壹账通智能科技有限公司 Method for processing business, platform, equipment and storage medium based on cluster server
CN110069353A (en) * 2019-03-18 2019-07-30 中科恒运股份有限公司 Business asynchronous processing method and device
CN111104235A (en) * 2019-12-06 2020-05-05 江苏苏宁物流有限公司 Queue-based asynchronous processing method and device for service requests
CN112995261A (en) * 2019-12-17 2021-06-18 中兴通讯股份有限公司 Configuration method and device of service table, network equipment and storage medium
WO2021121203A1 (en) * 2019-12-17 2021-06-24 中兴通讯股份有限公司 Method and apparatus for configuring service table, network device, and storage medium
CN111782996A (en) * 2020-05-29 2020-10-16 厦门市美亚柏科信息股份有限公司 Asynchronous request processing method and device
CN111694681A (en) * 2020-06-12 2020-09-22 中国银行股份有限公司 Batch service processing method and device, electronic equipment and computer storage medium
CN113419824A (en) * 2021-01-25 2021-09-21 阿里巴巴集团控股有限公司 Data processing method, device, system and computer storage medium
CN116095005A (en) * 2023-01-30 2023-05-09 中国工商银行股份有限公司 Traffic management method, apparatus, device, medium, and program product
CN116595099A (en) * 2023-05-22 2023-08-15 北京言子初科技有限公司 Asynchronous processing method and device for high concurrency data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, Lin; ZHOU, Xiaohui: "Research on Web Asynchronous Processing Based on Servlet 3.0" (基于Servlet3.0的Web异步处理的研究), 科技风 (Technology Wind), no. 20, 25 October 2011 (2011-10-25) *

Also Published As

Publication number Publication date
CN117453422B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
CN110489447B (en) Data query method and device, computer equipment and storage medium
CN109344172B (en) High-concurrency data processing method and device and client server
CN112134909B (en) Time sequence data processing method, device, system, server and readable storage medium
US10812322B2 (en) Systems and methods for real time streaming
CN112787999B (en) Cross-chain calling method, device, system and computer readable storage medium
CN112540829A (en) Container group eviction method, device, node equipment and storage medium
CN113360577A (en) MPP database data processing method, device, equipment and storage medium
US11762687B2 (en) Processing of messages and documents carrying business transactions
CN113378083B (en) Short link generation method, device, equipment and storage medium
CN117453422B (en) Data processing method, device, electronic equipment and computer readable storage medium
US11301255B2 (en) Method, apparatus, device, and storage medium for performing processing task
CN111242621B (en) Transaction data storage method, device, equipment and storage medium
CN110955461B (en) Processing method, device, system, server and storage medium for computing task
US9063858B2 (en) Multi-core system and method for data consistency by memory mapping address (ADB) to hash table pattern associated with at least one core
CN109800184B (en) Caching method, system, device and storable medium for small block input
CN113467935A (en) Method and system for realizing L1cache load forward
CN113419792A (en) Event processing method and device, terminal equipment and storage medium
CN111639129A (en) Transaction processing method and device, electronic equipment and computer-readable storage medium
CN111324438A (en) Request scheduling method and device, storage medium and electronic equipment
US11860788B2 (en) Prefetching data in a distributed storage system
CN103825842A (en) Data flow processing method and device for multi-CPU system
CN116896587A (en) Processing method and device for repeated network request, computer equipment and storage medium
CN111917572B (en) Transaction request processing method and device, electronic equipment and readable storage medium
KR101924466B1 (en) Apparatus and method of cache-aware task scheduling for hadoop-based systems
CN109474543B (en) Queue resource management method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant