CN110912958A - HTTP connection processing method, device, equipment and medium - Google Patents


Info

Publication number: CN110912958A
Application number: CN201811086759.0A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 杨伟
Original assignee: China Mobile Communications Group Co Ltd; China Mobile Group Chongqing Co Ltd
Current assignee: China Mobile Communications Group Co Ltd; China Mobile Group Chongqing Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Chongqing Co Ltd
Prior art keywords: task, processing, queue, batch, identification information
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/141: Setup of application sessions
    • H04L67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/143: Termination or inactivation of sessions, e.g. event-controlled end of session
    • H04L67/60: Scheduling or organising the servicing of application requests, e.g. using the analysis and optimisation of the required network resources

Abstract

The invention discloses a method, a device, equipment, and a medium for processing an HTTP connection. The method comprises the following steps: on the basis of establishing a HyperText Transfer Protocol (HTTP) connection with the initiating end of a task processing request, receiving the task processing request sent by the initiating end; determining that the task requested by the task processing request is a first task, and writing the first task into a batch processing queue; and releasing the HTTP connection. According to the HTTP connection processing method, device, equipment, and medium provided by the embodiments of the invention, the time consumed by an HTTP connection can be shortened.

Description

HTTP connection processing method, device, equipment and medium
Technical Field
The present invention relates to the field of communications, and in particular, to a method, an apparatus, a device, and a medium for processing an HTTP connection.
Background
Fig. 1 is a schematic diagram illustrating an HTTP connection process in the prior art. As shown in Fig. 1, taking the HyperText Transfer Protocol (HTTP) connection between a client and a server as an example, the conventional HTTP connection process is as follows: an HTTP connection is established between the client and the server, and the client sends a task processing request to the server over the established HTTP connection. After receiving the task processing request, the server processes the task, returns an HTTP response message to the client once the task processing is finished, and then releases the HTTP connection.
In this traditional task processing flow, the HTTP connection is released only after the server finishes processing the task, so the whole HTTP connection process takes a long time.
Disclosure of Invention
The embodiment of the invention provides a processing method, a processing device, processing equipment and a processing medium of HTTP connection, which can shorten the time consumption of HTTP connection.
The embodiment of the invention provides a processing method of HTTP connection, which comprises the following steps:
on the basis of establishing a hypertext transfer protocol (HTTP) connection with an initiating terminal of a task processing request, receiving the task processing request sent by the initiating terminal;
determining that a task requested to be processed by a task processing request is a first task, and writing the first task into a batch processing queue, wherein the first task is a task of which the sensitivity of the required processing delay is not higher than a preset processing delay sensitivity threshold;
the HTTP connection is released.
In an optional implementation manner, after receiving the task processing request sent by the initiator, the method further includes:
determining that the task requested to be processed by the task processing request is a second task, and writing the second task into a rapid processing queue, wherein the second task is a task of which the sensitivity of the required processing delay is higher than a preset processing delay sensitivity threshold;
the HTTP connection is released.
In an optional implementation, after writing the first task to the batch queue, the method further includes:
and sending the identification information of the first task and the identification information of the server node to which the batch processing queue belongs to the initiating terminal.
In an optional implementation, after writing the second task to the fast processing queue, the method further includes:
and sending the identification information of the second task and the identification information of the server node to which the rapid processing queue belongs to the initiating terminal.
In an optional implementation, after writing the first task to the batch queue, the method further includes:
determining that the number of batch processing tasks in the batch processing queue reaches a number threshold, and/or that a preset time period has elapsed since batch processing tasks in the batch processing queue were last sent to the batch processing thread; and
sending the batch processing tasks in the batch processing queue to the batch processing thread.
In an optional implementation, the method after writing the second task into the fast processing queue further includes:
the second task is sent to the fast processing thread.
In an optional embodiment, the method further comprises:
receiving a query request of a first task;
analyzing the identification information of the first task and the identification information of the server node to which the batch processing queue belongs from the received query request of the first task;
acquiring the processing progress of a first task in a task processing progress table of a server node to which a batch processing queue belongs based on identification information of the server node to which the batch processing queue belongs and identification information of the first task, wherein the task processing progress table comprises: a mapping relationship between identification information of tasks processed by the server node to which the batch queue belongs and a processing progress of the tasks processed by the server node to which the batch queue belongs, the tasks processed by the server node to which the batch queue belongs including a first task;
and feeding back the processing progress of the first task to the initiating end of the query request.
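The query flow above (parse the identifiers from the query request, look up the task processing progress table on the identified server node, and feed the progress back) can be sketched as follows. This is an illustrative assumption; the class, method, and field names are not from the patent.

```python
# Hypothetical sketch of the progress-query flow: the server node keeps a
# task processing progress table mapping task identification information to
# the processing progress of that task.
class ProgressService:
    def __init__(self):
        # task processing progress table: task id -> progress (0..100)
        self.progress_table = {}

    def update(self, task_id, progress):
        self.progress_table[task_id] = progress

    def query(self, query_request):
        # parse the task id and the server-node id from the query request
        task_id = query_request["task_id"]
        node_id = query_request["node_id"]
        # acquire the processing progress of the task from the table
        progress = self.progress_table.get(task_id)
        # feed the progress back to the initiator of the query request
        return {"node_id": node_id, "task_id": task_id, "progress": progress}

svc = ProgressService()
svc.update("task-001", 40)
print(svc.query({"task_id": "task-001", "node_id": "node-7"}))
```

In a real deployment the query would first be routed to the node named by `node_id`; here a single in-process table stands in for that node.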
In an optional embodiment, the method further comprises:
receiving a query request of a second task;
analyzing the identification information of the second task and the identification information of the server node to which the rapid processing queue belongs from the received query request of the second task;
acquiring the processing progress of a second task in a task processing progress table of a server node to which a rapid processing queue belongs on the basis of identification information of the server node to which the rapid processing queue belongs and identification information of the second task, wherein the task processing progress table comprises a mapping relation between the identification information of the task processed by the server node to which the rapid processing queue belongs and the processing progress of the task processed by the server node to which the rapid processing queue belongs, and the tasks processed by the server node to which the rapid processing queue belongs comprise the second task;
and feeding back the processing progress of the second task to the initiating end of the query request.
In an optional implementation manner, after determining that the task requested to be processed by the task processing request is the first task, the method further includes:
determining whether the number of tasks carried by the batch processing queue reaches the carrying-capacity threshold of the batch processing queue, and if so, releasing the HTTP connection;
or,
writing the first task into the batch processing queue when the number of tasks carried by the batch processing queue is less than the carrying-capacity threshold of the batch processing queue.
In an optional implementation manner, after determining that the task requested to be processed by the task processing request is the second task, the method further includes:
determining whether the number of tasks carried by the fast processing queue reaches the carrying-capacity threshold of the fast processing queue, and if so, releasing the HTTP connection;
or,
writing the second task into the fast processing queue when the number of tasks carried by the fast processing queue is less than the carrying-capacity threshold of the fast processing queue.
An embodiment of the present invention provides a processing apparatus for HTTP connection, including:
the first receiving module is used for receiving the task processing request sent by the initiating terminal on the basis of establishing HTTP connection with the initiating terminal of the task processing request;
the batch writing module is used for determining that a task requested to be processed by the task processing request is a first task and writing the first task into the batch processing queue, wherein the first task is a task of which the sensitivity of the required processing delay is not higher than a preset processing delay sensitivity threshold;
the first releasing module is used for releasing the HTTP connection.
In an alternative embodiment, the apparatus further comprises:
the fast writing module is used for determining that the task requested to be processed by the task processing request is a second task and writing the second task into the fast processing queue, wherein the second task is a task of which the sensitivity of the required processing delay is higher than a preset processing delay sensitivity threshold;
and the second releasing module is used for releasing the HTTP connection.
In an alternative embodiment, the apparatus further comprises:
and the first sending module is used for sending the identification information of the first task and the identification information of the server node to which the batch processing queue belongs to the initiating end.
In an alternative embodiment, the apparatus comprises:
a memory for storing a program;
and the processor is used for operating the program stored in the memory so as to execute the processing method of the HTTP connection provided by the embodiment of the invention.
An embodiment of the present invention further provides a computer storage medium, which stores computer program instructions; when the computer program instructions are executed by a processor, the HTTP connection processing method provided by the embodiment of the present invention is performed.
According to the HTTP connection processing method, device, equipment, and medium of the embodiments of the invention, the HTTP connection is released once the first task has been written into the batch processing queue. From the establishment of the HTTP connection, the entire connection process consumes only the time needed to receive the request and the time needed to write the first task into the batch processing queue. Unlike a traditional HTTP connection, there is no need to wait for task processing to finish before releasing the connection, which shortens the time consumed by the HTTP connection.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 shows a schematic diagram of a prior art HTTP connection process;
FIG. 2 is a schematic flow chart diagram illustrating a method of processing an HTTP connection in accordance with an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an HTTP connected processing apparatus according to an embodiment of the present invention;
fig. 4 is a block diagram illustrating an exemplary hardware architecture of an HTTP connected processing device that can implement the HTTP connected processing method and apparatus according to an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
HTTP is a set of rules for communicating over a network. Depending on the two parties to the communication, HTTP connections can be classified into HTTP connections in Client/Server (C/S) mode, Server/Server (S/S) mode, and Browser/Server (B/S) mode.
Specifically, the task initiating end in C/S mode is a client, in S/S mode a server, and in B/S mode a browser. The S/S mode differs from the other two in that the two ends of an S/S connection can access each other bidirectionally, whereas in the more common C/S and B/S modes the server cannot actively notify the client or browser.
In the conventional HTTP connection process, after the server receives the task processing request, the HTTP connection is not released until the task processing is completed or the HTTP connection times out. Each HTTP connection therefore lasts a long time, occupies a large amount of HTTP connection resources, and limits the throughput of the system. In particular, when task processing requests arrive with high concurrency, HTTP connection resources are easily exhausted, causing traffic congestion.
Therefore, a method, an apparatus, a device, and a medium for processing an HTTP connection, which can shorten the connection time, are required.
For better understanding of the present invention, a method, an apparatus, a device, and a medium for processing an HTTP connection according to embodiments of the present invention will be described in detail below with reference to the accompanying drawings, and it should be noted that these embodiments are not intended to limit the scope of the present disclosure.
Fig. 2 is a schematic flow chart illustrating a processing method of an HTTP connection according to an embodiment of the present invention. In this method, the execution subject of each step may be a server node on the network side, such as a physical server and/or a virtual server.
As shown in fig. 2, the processing method 200 of HTTP connection in the present embodiment may include the following steps S201 to S203:
S201, on the basis of establishing HTTP connection with an initiator of a task processing request, receiving the task processing request sent by the initiator.
In S201, when the originating end of the task processing request requests the server to perform task processing, an HTTP connection needs to be established with the server.
The initiating end of the task processing request may be a browser, a server, or a client. Accordingly, the HTTP connection may be in B/S mode, S/S mode, or C/S mode. The HTTP connection processing method of the embodiment of the present invention therefore places no special requirements on the network structure and has good applicability.
In some embodiments of the invention, the originator of the task processing request may be a client, a server, or a browser.
In some embodiments of the invention, the task processing request comprises a task.
The task processing request is used for indicating a receiver of the task processing request to process the task in the task processing request.
S202, determining that the task requested to be processed by the task processing request is a first task, and writing the first task into the batch processing queue.
The first task is a task of which the sensitivity of the required processing delay is not higher than a preset processing delay sensitivity threshold.
In S202, the sensitivity of the processing latency refers to a ratio of an actual processing latency of the task to an expected processing latency of the task.
In some embodiments, the actual processing latency of a task refers to the actual time it takes for the task to complete processing from the receipt of a task request.
In some embodiments, the expected processing latency of a task characterizes the processing time the task is expected to take. For example, it may be determined based on the user's degree of interest in, or the importance of, the task, or based on the processing delay that a subsequent task, which depends on this task's processing result, requires of it.
For example, if the user has a low level of interest in the task, the expected processing latency of the task may be set greater, or much greater, than its actual processing latency.
If a subsequent task that depends on this task's processing result requires it to be completed within N seconds, those N seconds may be used as the expected processing latency of the task.
In some embodiments, the preset processing delay sensitivity threshold may take a value not less than 1. For example, if the preset processing delay sensitivity threshold is 2, then a task whose actual processing delay is not higher than twice its expected processing delay is a first task, and a task whose actual processing delay is greater than twice its expected processing delay is a second task.
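The split above can be sketched in a few lines. The threshold of 2 follows the example in the text; the function and parameter names are illustrative assumptions.

```python
# Classify a task as "first" (batch queue) or "second" (fast queue) by its
# processing delay sensitivity: the ratio of actual to expected delay.
def classify_task(actual_delay_s, expected_delay_s, threshold=2.0):
    sensitivity = actual_delay_s / expected_delay_s
    # sensitivity not higher than the threshold -> first task (batch queue);
    # otherwise -> second task (fast queue)
    return "first" if sensitivity <= threshold else "second"

print(classify_task(actual_delay_s=0.05, expected_delay_s=1.0))  # log write -> first
print(classify_task(actual_delay_s=12.0, expected_delay_s=3.0))  # data export -> second
```

The two calls mirror the access-log and data-export examples given later in the description.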
In some embodiments of the invention, the task processing request includes, in addition to the task, an actual processing latency of the task and an expected processing latency of the task.
In this way, the actual and expected processing delays of the task can be parsed directly from the task processing request, the sensitivity of the task's processing delay can be calculated from them, and the task can be divided into a first task or a second task accordingly.
In other embodiments of the present invention, since the sensitivities of the processing delays required by different types of tasks are different, the task processing request further includes the type information of the task, and the sensitivities of the processing delays required by the tasks can be determined according to the type information of the task in the task processing request. Further, the task may be divided into a first task and a second task according to the sensitivity of the processing delay required by the task.
In some embodiments of the present invention, a method for dividing a first task and a second task according to a sensitivity of processing latency required by the tasks specifically includes:
a processing delay sensitivity threshold may be set first, and then a task whose required processing delay sensitivity is not higher than the preset processing delay sensitivity threshold is divided into a first task, that is, the first task is more sensitive to the time consumed by task processing. And dividing the task with the required processing time delay sensitivity higher than a preset processing time delay sensitivity threshold into a second task, wherein the second task is insensitive to the time consumed by task processing.
As an example, consider access-log writing: the actual write time of a single access log is short (milliseconds), but there are many log entries per second; that is, the actual processing delay of an access log is short. Meanwhile, because the user does not care how long the access-log write takes (whether the log is written in 50 milliseconds or 1 second makes little difference to the user), the expected processing delay of the access log can be set to a long duration. The access-log write task is therefore confirmed to be a first task that is insensitive to processing delay, and batch processing can be used for it.
As another example, a user may export 100,000 records at a time from a table of 10,000,000 records, which may take more than 10 seconds; that is, the actual processing delay of the export task is long. Meanwhile, the operation end waits to download the exported 100,000 records, so the expected processing delay can be set to a short duration based on how long the operation end is willing to wait (for example, if the operation end cannot wait 30 seconds for the export, the expected processing delay can be set to a value well below the actual processing delay, so that the sensitivity of the task's processing delay exceeds the processing delay sensitivity threshold). The task of exporting 100,000 records at a time is therefore determined to be a second task that is sensitive to processing delay and needs to be queued for fast processing; that is, once the 100,000 records are exported, the operation end is notified to download them.
In some embodiments of the present invention, the batch processing queue is used to temporarily store a plurality of first tasks waiting for batch processing, where batch processing refers to processing a batch of tasks simultaneously.
In some embodiments, the batch queue may be divided into a plurality of batch sub-queues for saving resource consumption when processing tasks. The first tasks of different types are respectively stored in different batch processing sub-queues.
For example, according to the type of the first task, the first tasks may be divided into first task A1, first task A2, and first task A3. First task A1 is then written into batch sub-queue a1, first task A2 into batch sub-queue a2, and first task A3 into batch sub-queue a3.
In some embodiments, the batch queue includes a plurality of first task slots. Each first task slot may temporarily store a first task to be processed.
Specifically, when the first task is received, the first task may be written directly into one of the first task slots of the batch queue.
Accordingly, if the batch processing queue comprises a plurality of batch processing sub-queues, each batch processing sub-queue comprises a plurality of first task slots.
Illustratively, if a first task A1 is received, it is written into a first task slot of batch sub-queue a1. When a second first task A1 is received, it may be written into another first task slot of batch sub-queue a1.
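The per-type sub-queues and bounded task slots described above can be modeled with Python's standard bounded `queue.Queue`, where `maxsize` plays the role of the number of first task slots. The class and parameter names are illustrative assumptions.

```python
from queue import Queue, Full

# Sketch of a batch queue split into one bounded sub-queue per first-task
# type; each sub-queue's maxsize models its number of first task slots.
class BatchQueue:
    def __init__(self, task_types, slots_per_subqueue=1000):
        self.sub_queues = {t: Queue(maxsize=slots_per_subqueue) for t in task_types}

    def write(self, task_type, task):
        try:
            # write the first task into a free slot of its type's sub-queue
            self.sub_queues[task_type].put_nowait(task)
            return True
        except Full:
            # no free slot: the caller may release the HTTP connection or retry
            return False

bq = BatchQueue(["A1", "A2", "A3"], slots_per_subqueue=2)
print(bq.write("A1", {"id": "t1"}))  # True
print(bq.write("A1", {"id": "t2"}))  # True
print(bq.write("A1", {"id": "t3"}))  # False: sub-queue a1 has no free slot
```

The `Full` branch corresponds to the carrying-capacity check discussed in the optional embodiments above.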
S203, the HTTP connection is released.
In S203, after the first task is written into the batch queue, the HTTP connection may be released to end the HTTP connection.
According to the HTTP connection processing method, device, equipment, and medium of the embodiments of the invention, the HTTP connection is released once the first task has been written into the batch processing queue. From the establishment of the HTTP connection, the entire connection process consumes only the time needed to receive the request and the time needed to write the first task into the batch processing queue. Unlike a traditional HTTP connection, there is no need to wait for task processing to finish before releasing the connection, which shortens the time consumed by the HTTP connection.
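A minimal sketch of steps S201 to S203 is shown below: the handler parses the request, enqueues the task, and returns at once (e.g. HTTP 202 Accepted), so the connection is released without waiting for the task to be processed. The function name, request fields, and response shape are assumptions for illustration.

```python
import json

batch_queue = []

def handle_task_request(raw_body: bytes) -> dict:
    """Sketch of S201-S203: receive the request, enqueue the first task,
    then respond immediately so the HTTP connection can be released."""
    # S201: parse the task processing request received over the HTTP connection
    task = json.loads(raw_body)
    # S202: write the first task into the batch processing queue
    batch_queue.append(task)
    # S203: return at once (202 Accepted: queued, not yet processed), so the
    # connection is released without waiting for task processing to finish
    return {"status": 202, "queued": len(batch_queue)}

print(handle_task_request(b'{"type": "log-write", "payload": "GET /index"}'))
```

The actual processing happens later in the batch thread; the connection's lifetime covers only receipt and enqueueing.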
In some embodiments of the present invention, since the task requested in the task processing request may be a second task in addition to the first task, at this time, after S201, the processing method 200 for HTTP connection further includes S202 'and S203':
S202', the task requested to be processed by the task processing request is determined to be a second task, and the second task is written into the fast processing queue.
The second task is a task of which the sensitivity of the required processing delay is higher than a preset threshold of the sensitivity of the processing delay.
In some embodiments of the present invention, a method for determining the second task is the same as the method for dividing the first task and the second task in the above embodiments, and details are not repeated here.
In some embodiments of the present invention, the fast processing queue is used to temporarily store the second task waiting for fast processing, where fast processing refers to processing the tasks directly one by one without waiting for other tasks to be processed together.
In some embodiments, unlike the batch processing queue, the fast processing queue may be a single queue and need not be divided into multiple fast processing sub-queues.
Specifically, after receiving the second task, the second task may be placed directly into the fast processing queue.
In some embodiments, the fast processing queue includes a plurality of second task slots, each second task slot for temporarily storing a second task to be processed.
Specifically, after receiving the second task, the second task is written into one second task slot, and when processing the second task, the second tasks in the second task slots can be processed one by one.
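The one-by-one handling of second tasks described above can be sketched with a standard FIFO queue; the worker drains the second task slots individually instead of waiting to form a batch. The function names and the `process` callback are illustrative assumptions.

```python
from queue import Queue

# Sketch of the fast processing queue: second tasks occupy second task slots
# and are processed one by one, without waiting for other tasks.
fast_queue = Queue()

def fast_worker(process):
    # drain the queue, handling each second task individually
    results = []
    while not fast_queue.empty():
        task = fast_queue.get()
        results.append(process(task))
        fast_queue.task_done()
    return results

fast_queue.put({"id": "export-1"})
fast_queue.put({"id": "export-2"})
print(fast_worker(lambda t: t["id"] + ":done"))
```

In a real server the worker would run in a dedicated fast processing thread blocking on `fast_queue.get()` rather than polling `empty()`.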
S203', the HTTP connection is released.
In S203', after writing the second task into the fast processing queue, the HTTP connection may be released to end the HTTP connection.
In some embodiments of the present invention, when the receiver of the task processing request is a server node, after S202, the method 200 for processing an HTTP connection further includes:
and S204, sending the identification information of the first task and the identification information of the server node to which the batch processing queue belongs to the initiating terminal.
In some embodiments of the invention, the identification information of the first task may be a number of the first task. Such as the task ID of the first task.
In some embodiments of the invention, the network side may include a plurality of server nodes that may contain batch queues. Therefore, in order to assist the originator of the task processing request in identifying the server node that processes the task of the present request, the identification information of the server node that received the present task processing request and writes the requested task into the batch queue may be transmitted to the originator of the present task processing request.
For example, the identification information of the server node may be a number of the server node, a Media Access Control (MAC) address of the server node, or an Internet Protocol (IP) address of interconnection between networks of the server node.
In some embodiments, after receiving a task processing request of an initiator, a server node on a network side writes a first task in the task processing request into a batch processing queue, and then sends identification information of the first task and identification information of the server node to the initiator.
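The acknowledgement in S204 can be sketched as follows: after enqueueing the first task, the server node returns the task's identification information together with its own, so the initiator can later query this specific node for progress. The field names and the choice of an IP address as the node identifier are assumptions (the text also allows a node number or MAC address).

```python
import uuid

NODE_ID = "192.0.2.10"  # e.g. the server node's IP address (illustrative)

def enqueue_and_ack(task, batch_queue):
    # identification information of the first task, e.g. a generated task ID
    task_id = str(uuid.uuid4())
    batch_queue.append((task_id, task))
    # send both identifiers back to the initiating end
    return {"task_id": task_id, "node_id": NODE_ID}

q = []
ack = enqueue_and_ack({"type": "log-write"}, q)
print(ack["node_id"])
```

The initiator stores both identifiers and includes them in any later query request, as described in the progress-query embodiments.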
In some embodiments of the present invention, after S202', the processing method 200 of the HTTP connection further includes:
S204', the identification information of the second task and the identification information of the server node to which the fast processing queue belongs are sent to the initiating terminal.
In some embodiments of the present invention, the category of the identification information of the second task is the same as the category of the identification information of the first task, and is not described herein again.
In some embodiments of the present invention, the type of the identification information of the server node in the fast processing queue is the same as the type of the identification information of the server node to which the batch processing queue belongs, and details are not repeated here.
In some embodiments of the present invention, since the batch processing refers to processing a plurality of tasks simultaneously, after S202, the processing method 200 of the HTTP connection further includes:
s205, when it is determined that the number of batch processing tasks in the batch processing queue reaches a number threshold, and/or a preset time period has elapsed since batch processing tasks in the batch processing queue were last sent to the batch processing thread, sending the batch processing tasks in the batch processing queue to the batch processing thread.
In S205, the batch processing thread is configured to call a plurality of first tasks waiting to be processed from the batch processing queue at a time, and process the called plurality of first tasks in the same batch.
In some embodiments of the invention, the batch processing thread comprises: a main control sub-thread, and one or more ordinary batch processing sub-threads.
The main control sub-thread is used for calling a plurality of first tasks in the batch processing queue from the batch processing queue and distributing the called first tasks to the common batch processing sub-thread.
And the common batch processing sub-thread is used for carrying out batch processing on the distributed first tasks.
In some embodiments, taking a batch processing queue that comprises n batch processing sub-queues a1, a2, …, an as an example, the specific working modes of the main control sub-thread and the ordinary batch processing sub-threads include:
the main control sub-thread monitors all the batch processing sub-queues contained in the batch processing queue. Specifically, the main control sub-thread can traverse the batch processing sub-queues a1, a2, …, an and sequentially judge whether the first task in each batch processing sub-queue should be called. n is a positive integer.
The detailed step of judging whether the first task in the batch processing sub-queue should be called by the main control sub-thread specifically comprises the following first to fourth steps:
firstly, the main control sub-thread judges whether a first task exists in the current batch processing sub-queue.
If the current batch processing sub-queue has no first task, the current calling of the current batch processing sub-queue is finished, the next batch processing sub-queue is used as the current batch processing sub-queue, the first step is executed again, and whether the first task in the new current batch processing sub-queue needs to be called or not is judged.
And if the current batch processing sub-queue has the first task, continuing to judge in the second step.
Secondly, judging whether the number of the first tasks in the current batch processing sub-queue is larger than a number threshold M. If not, the third step is executed to judge the relation between the time elapsed since the last calling and a preset time period; if yes, the fourth step is executed to call the first task in the current batch processing sub-queue. Wherein M is a positive integer.
Thirdly, judging whether the time elapsed between the current time and the last time a task was called exceeds a preset time period of N seconds. If so, the fourth step is executed to call the first task in the current batch processing sub-queue; if not, the calling of the current batch processing sub-queue is finished, the next batch processing sub-queue is taken as the current batch processing sub-queue, and the first step is executed again to judge whether the first task in the new current batch processing sub-queue should be called. Wherein N is not less than zero.
Fourthly, calling the first task in the current batch processing sub-queue. Suppose the maximum number of tasks processed in each batch is X, and the number of first tasks in the current batch processing sub-queue is Y. If Y is larger than or equal to X, X first tasks are taken out and handed to one ordinary sub-thread for batch processing, and the second step is executed again to judge whether the number of first tasks remaining in the current batch processing sub-queue is larger than the number threshold. If Y is smaller than X, Y first tasks are taken out and handed to one ordinary sub-thread for batch processing, the calling of the current batch processing sub-queue is finished, the next batch processing sub-queue is taken as the current batch processing sub-queue, and the first step is executed again to judge whether the first task in the new current batch processing sub-queue should be called. Wherein X and Y are both positive integers.
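Illustratively, one traversal pass of the main control sub-thread through the four steps above can be sketched as follows (a minimal Python sketch; the function name `dispatch_pass` and the dict-of-deques layout are illustrative assumptions, with M, N, X as in the text):

```python
import time
from collections import deque

def dispatch_pass(sub_queues, last_called, M, N, X, now=None):
    """One traversal of the main control sub-thread over the batch
    sub-queues (steps one to four). Returns the batches handed to
    ordinary batch processing sub-threads."""
    now = time.monotonic() if now is None else now
    batches = []
    for name, q in sub_queues.items():
        if not q:                       # step one: empty sub-queue, skip
            continue
        # steps two and three: call only if the count exceeds M or the
        # preset period of N seconds has elapsed since the last call
        if len(q) <= M and now - last_called.get(name, 0.0) < N:
            continue
        while q:                        # step four: drain in batches of at most X
            y = len(q)
            take = min(X, y)
            batches.append([q.popleft() for _ in range(take)])
            if take == y or len(q) <= M:
                break                   # sub-queue emptied, or leftovers below threshold
        last_called[name] = now
    return batches

qs = {"logs": deque(range(7))}
out = dispatch_pass(qs, {}, M=2, N=5.0, X=3, now=100.0)
assert out == [[0, 1, 2], [3, 4, 5]]    # two full batches dispatched
assert list(qs["logs"]) == [6]          # remainder waits for the next pass
```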
In this embodiment, the batch processing thread can take a plurality of first tasks at a time and process them simultaneously, so that resources are not unnecessarily consumed by processing each task separately. Taking log storage as an example, storing a plurality of logs together in one batch can save database resources and Input/Output (IO) overhead.
It should be noted that, when the batch queue is subdivided into a plurality of batch sub-queues, each queue buffers a type of the first task, so that a plurality of same type of first tasks can be processed at a time. Illustratively, a first batch sub-queue is used to temporarily store a first task of opinion feedback submitted by a user, and a second batch sub-queue is used to temporarily store a first task of log-in-storage.
In some embodiments of the present invention, after S202', the processing method 200 of the HTTP connection further includes:
s205', the second task is sent to the fast processing thread.
In S205', the fast processing thread is configured to call the second tasks one by one from the fast processing queue, and directly process one second task called each time.
In some embodiments of the invention, the fast processing thread includes a main control sub-thread, and one or more ordinary fast processing sub-threads.
And when the main control sub-thread determines that a second task exists in the fast processing queue, the second task is quickly taken out from the fast processing queue and is distributed to the common fast processing sub-thread.
The ordinary fast processing sub-thread is used for processing the second tasks one by one.
It should be noted that, in order to save resources, if there is no pending second task in the fast processing queue, or the ordinary fast processing sub-threads are exhausted, the main control sub-thread may enter a waiting state until a pending second task is written into the fast processing queue or an ordinary fast processing sub-thread becomes available.
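Illustratively, the fast-processing loop described above — take second tasks one by one and wait when the queue is empty — can be sketched as follows (a minimal Python sketch using the standard `queue` and `threading` modules; the single worker function stands in for the main control sub-thread plus one ordinary fast processing sub-thread, and all names are illustrative assumptions):

```python
import queue
import threading

def fast_worker(fast_queue, handle, stop):
    """Take second tasks from the fast processing queue one by one and
    process each directly; block briefly when the queue is empty so no
    CPU is burned while waiting."""
    while not stop.is_set():
        try:
            task = fast_queue.get(timeout=0.1)   # waiting state while empty
        except queue.Empty:
            continue
        handle(task)                             # process directly, one at a time
        fast_queue.task_done()

results = []
stop = threading.Event()
fq = queue.Queue()
t = threading.Thread(target=fast_worker, args=(fq, results.append, stop))
t.start()
for i in range(3):
    fq.put(i)
fq.join()      # wait until every second task has been processed
stop.set()
t.join()
assert results == [0, 1, 2]
```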
In some embodiments of the present invention, after writing the first task into the batch processing queue based on the initiating end of the task processing request, and after sending the identification information of the first task and the identification information of the server node to which the batch processing queue belongs, if the processing progress of the first task needs to be queried, the specific implementation manner of the HTTP connection processing method 200 further includes S206 to S209:
s206, receiving a query request of the first task.
In S206, the query request of the first task is used to instruct the receiving end of the query request of the first task to query the processing progress of the first task.
In some embodiments, the query request for the first task may be generated based on identification information of the first task received by an initiator of the task processing request and identification information of a server node to which the batch queue belongs.
For example, in S204, after the first task is written into the batch processing queue, the identification information of the first task and the identification information of the server node to which the batch processing queue belongs are sent to the initiator of the task processing request corresponding to the first task. If the processing progress of the batch processing queue for the first task needs to be queried, a query request of the first task can be generated based on the identification information of the first task received by the initiating terminal and the identification information of the server node to which the batch processing queue belongs.
It should be noted that the initiating end of the query request of the first task may be the initiating end of the corresponding task processing request, or may be another terminal, which is capable of obtaining the identification information of the first task and the identification information of the server node to which the batch processing queue belongs, besides the initiating end of the task processing request.
S207, analyzing the identification information of the first task and the identification information of the server node to which the batch processing queue belongs from the received query request of the first task.
In some embodiments, the identification information of the first task in S207 and the identification information of the server node to which the batch queue belongs are the same as the identification information of the first task in S204 and the identification information of the server node to which the batch queue belongs, and are not described herein again.
S208, acquiring the processing progress of the first task in the task processing progress table of the server node to which the batch processing queue belongs based on the identification information of the server node to which the batch processing queue belongs and the identification information of the first task.
In S208, the task processing schedule includes: and mapping relation between the identification information of the tasks processed by the server node to which the batch queue belongs and the processing progress of the tasks processed by the server node to which the batch queue belongs, wherein the tasks processed by the server node to which the batch queue belongs comprise first tasks.
As an example, a task processing schedule within a server node is shown in Table 1, wherein the percentage in the processing progress column represents the completion of processing of the first task. For example, 100% indicates that the corresponding first task has completed processing.
TABLE 1
Identification information of first task | Processing progress
Identification information of first task A1 | 100%
Identification information of first task A2 | 70%
…… | ……
Identification information of first task An | 1%
In some embodiments, the executing entity of S207 may be a server node, which is called B1 here. Since the server nodes on the network side are usually deployed as a cluster, after server node B1 parses out the identification information of the server node to which the batch processing queue belongs, there are two situations:
In the first case: the parsed identification information of the server node to which the batch processing queue belongs is the identification information of B1 itself. At this time, the processing progress of the first task is queried directly in the task processing schedule of server node B1 according to the identification information of the first task.
In the second case: the parsed identification information of the server node to which the batch processing queue belongs is the identification information of a server node other than B1, which is called B2 here. At this time, server node B1 needs to forward the query request to server node B2, and server node B2 queries the processing progress of the first task in its own task processing schedule according to the identification information of the first task in the query request.
In this embodiment, each task processing schedule is placed in the server node to which it belongs, rather than being placed uniformly on a remote host. Because the task processing schedule is local to the server node, the whole processing system is simpler in structure and more efficient. If the task processing schedule were placed on a remote host, although the flow of each query request would be the same, in addition to the need to deploy a remote host, extra serialization and network IO overhead would be incurred when threads write the task processing schedule and the processing results; and if a processing result contains information such as a local file, the application complexity would increase further.
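Illustratively, the two query-routing cases might be sketched as follows (a minimal Python sketch; the in-process dict lookup stands in for the network forward from server node B1 to server node B2, and all names are illustrative assumptions):

```python
def query_progress(local_node_id, schedules, req):
    """Route a query request: serve it locally when the parsed node ID is
    our own (case one), otherwise forward it to the owning node (case two).
    `schedules` maps node ID -> that node's task processing schedule."""
    node_id, task_id = req["node_id"], req["task_id"]
    if node_id == local_node_id:
        return schedules[local_node_id].get(task_id)   # case one: local lookup
    return schedules[node_id].get(task_id)             # case two: forwarded lookup

schedules = {"B1": {"t1": "70%"}, "B2": {"t2": "100%"}}
assert query_progress("B1", schedules, {"node_id": "B1", "task_id": "t1"}) == "70%"
assert query_progress("B1", schedules, {"node_id": "B2", "task_id": "t2"}) == "100%"
```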
S209, the processing progress of the first task is fed back to the initiating end of the query request.
In some embodiments, if the processing progress of the first task is less than 100%, the query request for the processing progress of the first task may be initiated again after a fixed time.
In some embodiments, in addition to the mapping relationship between the identification information of the first task and the processing progress of the first task, a mapping relationship between the processing result of the first task and the identification information of the first task is recorded in the task progress processing table.
Illustratively, with continued reference to Table 1, when the processing progress of first task A1 is queried and the task progress processing table shows that A1 has already been processed, the processing result of A1 can be further queried and returned to the initiating end of the query request.
Wherein, in order to relieve the storage pressure of the server node, after the processing result of first task A1 is returned, the processing progress of the first task may be deleted from the task processing schedule.
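Illustratively, a per-node task processing schedule that also records results and deletes an entry after the result is returned might look like the following (a minimal Python sketch; the class and method names are illustrative assumptions, not from the patent text):

```python
class TaskSchedule:
    """Sketch of the per-node task processing schedule (cf. Table 1),
    extended with result storage and deletion after the result is returned."""

    def __init__(self):
        self._rows = {}   # task_id -> {"progress": int, "result": object}

    def update(self, task_id, progress, result=None):
        self._rows[task_id] = {"progress": progress, "result": result}

    def query(self, task_id):
        row = self._rows.get(task_id)
        if row is None:
            return None
        if row["progress"] < 100:
            return {"progress": row["progress"]}
        # Task finished: return the result and drop the entry to relieve
        # the storage pressure of the server node.
        self._rows.pop(task_id)
        return {"progress": 100, "result": row["result"]}

s = TaskSchedule()
s.update("A1", 70)
assert s.query("A1") == {"progress": 70}
s.update("A1", 100, result="ok")
assert s.query("A1") == {"progress": 100, "result": "ok"}
assert s.query("A1") is None   # entry deleted after the result was returned
```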
In some embodiments of the present invention, the specific implementation of the processing method 200 for HTTP connection further includes S206 'to S209':
s206', a query request of the second task is received.
In S206', the query request of the second task is used to instruct the receiving end of the query request of the second task to query the processing progress of the second task.
In some embodiments, the query request for the second task may be generated based on identification information of the second task received by the initiator of the task processing request and identification information of the server node to which the fast processing queue belongs.
For example, in S204', after the second task is written into the fast processing queue, the identification information of the second task and the identification information of the server node to which the fast processing queue belongs are sent to the initiating end of the task processing request corresponding to the second task. If the processing progress of the fast processing queue for the second task needs to be queried, a query request of the second task may be generated based on the identification information of the second task received by the initiator and the identification information of the server node to which the fast processing queue belongs.
S207', the identification information of the second task and the identification information of the server node to which the fast processing queue belongs are analyzed from the received query request of the second task.
In some embodiments, the identification information of the second task in S207 'and the identification information of the server node to which the fast processing queue belongs are the same as the identification information of the second task in S204' and the identification information of the server node to which the fast processing queue belongs, and are not described herein again.
S208', based on the identification information of the server node to which the rapid processing queue belongs and the identification information of the second task, the processing progress of the second task is obtained in the task processing progress table of the server node to which the rapid processing queue belongs.
In S208', the task processing schedule includes a mapping relationship between identification information of tasks processed by the server node to which the fast processing queue belongs and processing schedules of tasks processed by the server node to which the fast processing queue belongs, and the tasks processed by the server node to which the fast processing queue belongs include the second task.
In some embodiments, the detailed implementation of S208' and the structural framework of the task processing schedule are the same as those in S208, and are not described herein again.
S209', the processing progress of the second task is fed back to the initiating end of the query request.
In some embodiments, the specific implementation of S209' is similar to that of S209, and is not described herein again.
In some embodiments of the invention, there is a limit to the number of first tasks that the batch processing queue can hold. When the number of first tasks stored in the batch processing queue reaches this upper limit, first tasks written later cannot be normally stored in the batch processing queue. At this time, after determining that the task requested to be processed by the task processing request is the first task in S202, the HTTP connection processing method 200 further includes:
s210, when it is judged that the task quantity carried by the batch processing queue has reached the threshold of the task quantity carried by the batch processing queue, releasing the HTTP connection; or, waiting to write the first task into the batch processing queue when the task quantity carried by the batch processing queue falls below the threshold of the task quantity carried by the batch processing queue.
The following is divided into two examples to specifically explain two processing methods for the first task in S210, respectively.
As a first example, after the number of first tasks stored in the batch processing queue reaches the threshold of the task quantity carried by the batch processing queue, if a first task requests to be written into the batch processing queue, the write may be directly rejected, and feedback information indicating that the first task failed to be written is returned to the initiating end of the task processing request.
As a second example, after the number of first tasks stored in the batch processing queue reaches the upper limit, if a first task requests to be written into the batch processing queue, the first task requesting to be written may wait at the server node to which the batch processing queue belongs until the task quantity carried by the batch processing queue falls below the threshold, that is, until there is a position in the batch processing queue to store the first task.
Further, if the memory of the server node to which the batch processing queue belongs is not enough to store the first task written later, the first task written later may be written into a remote queue or the first task may be persisted into a database and/or a disk corresponding to the server node to which the batch processing queue belongs.
It should be noted that, when a first task written later waits at the server node, its processing may time out due to an excessively long waiting time; when it is determined that the processing of the task has timed out, the HTTP connection may be released.
It should be further noted that, when the batch processing queue includes a plurality of batch processing sub-queues, and the number of first tasks stored in the batch processing sub-queue corresponding to a first task reaches the upper limit, the processing methods for the first task described in the above examples may be applied to that first task.
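Illustratively, the two handling strategies for a full batch processing queue — reject the write and release the connection, or wait for space with a timeout — can be sketched as follows (a minimal Python sketch using the standard `queue.Queue`; the `mode` parameter and timeout value are illustrative assumptions):

```python
import queue

def enqueue_first_task(batch_queue, task, mode="reject", timeout=1.0):
    """Try to write a first task into a bounded batch processing queue.
    'reject' fails fast (the HTTP connection would be released with an
    error); 'wait' blocks until space frees up or the wait times out,
    in which case the connection is also released."""
    try:
        if mode == "reject":
            batch_queue.put_nowait(task)
        else:   # "wait"
            batch_queue.put(task, timeout=timeout)
        return True
    except queue.Full:
        return False   # write failed or timed out -> release the HTTP connection

q = queue.Queue(maxsize=2)
assert enqueue_first_task(q, "t1")
assert enqueue_first_task(q, "t2")
assert not enqueue_first_task(q, "t3")                              # full, rejected
assert not enqueue_first_task(q, "t3", mode="wait", timeout=0.05)   # wait timed out
```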
In some embodiments of the invention, there is a limit to the number of second tasks that the fast processing queue can hold. When the number of second tasks stored in the fast processing queue reaches this upper limit, second tasks written later cannot be normally stored in the fast processing queue, and the HTTP connection cannot be normally released. At this time, after determining that the task requested to be processed by the task processing request is the second task in S202', the HTTP connection processing method 200 further includes:
s210', judging that the task quantity borne by the rapid processing queue reaches the threshold value of the task quantity borne by the rapid processing queue, and releasing the HTTP connection. Or waiting to write the second task into the fast processing queue when the task amount carried by the fast processing queue is smaller than the threshold value of the task amount carried by the fast processing queue.
In some embodiments, the processing method for the second task written after is the same as that for the first task written after when the first task stored in the batch processing queue reaches the upper limit of the number in S210, and details are not repeated here.
Based on the same inventive concept, the embodiment of the invention provides a processing device for HTTP connection. Fig. 3 is a schematic structural diagram of a processing apparatus for HTTP connection according to an embodiment of the present invention. As shown in fig. 3, the HTTP connection processing apparatus 300 includes:
the first receiving module 310 is configured to receive a task processing request sent by an initiator on the basis of establishing an HTTP connection with the initiator of the task processing request.
The batch writing module 320 is configured to determine that a task requested to be processed by the task processing request is a first task, and write the first task into the batch processing queue, where the first task is a task whose required processing delay sensitivity is not higher than a preset processing delay sensitivity threshold;
the first releasing module is used for releasing the HTTP connection.
In some embodiments of the present invention, the HTTP connection processing apparatus 300 further includes:
and the fast writing module is used for determining that the task requested to be processed by the task processing request is a second task and writing the second task into the fast processing queue, wherein the second task is a task of which the sensitivity of the required processing delay is higher than a preset processing delay sensitivity threshold.
And the second releasing module is used for releasing the HTTP connection.
In some embodiments of the present invention, the HTTP connection processing apparatus 300 further includes:
and the first sending module is used for sending the identification information of the first task and the identification information of the server node to which the batch processing queue belongs to the initiating end.
In some embodiments of the present invention, the HTTP connection processing apparatus 300 further includes:
and the second sending module is used for sending the identification information of the second task and the identification information of the server node to which the rapid processing queue belongs to the initiating end.
In some embodiments of the present invention, the HTTP connection processing apparatus 300 further includes:
a third sending module, configured to send the batch processing tasks in the batch processing queue to the batch processing thread when it is determined that the number of batch processing tasks in the batch processing queue reaches a number threshold, and/or a preset time period has elapsed since the batch processing tasks in the batch processing queue were last sent to the batch processing thread.
In some embodiments of the present invention, the HTTP connection processing apparatus 300 further includes:
and the fourth sending module is used for sending the second task to the fast processing thread.
In some embodiments of the present invention, the HTTP connection processing apparatus 300 further includes:
and the second receiving module is used for receiving the query request of the first task.
And the first analysis module is used for analyzing the identification information of the first task and the identification information of the server node to which the batch processing queue belongs from the received query request of the first task.
And the first acquisition module is used for acquiring the processing progress of the first task in the task processing progress table of the server node to which the batch processing queue belongs on the basis of the identification information of the server node to which the batch processing queue belongs and the identification information of the first task.
Wherein, the task processing schedule comprises: and mapping relation between the identification information of the tasks processed by the server node to which the batch queue belongs and the processing progress of the tasks processed by the server node to which the batch queue belongs, wherein the tasks processed by the server node to which the batch queue belongs comprise first tasks.
And the first feedback module is used for feeding back the processing progress of the first task to the initiating end of the query request.
In some embodiments of the present invention, the HTTP connection processing apparatus 300 further includes:
and the third receiving module is used for receiving the query request of the second task.
The second analysis module is used for analyzing the identification information of the second task and the identification information of the server node to which the rapid processing queue belongs from the received query request of the second task;
and the second obtaining module is used for obtaining the processing progress of the second task in the task processing progress table of the server node to which the rapid processing queue belongs on the basis of the identification information of the server node to which the rapid processing queue belongs and the identification information of the second task.
The task processing schedule comprises a mapping relation between identification information of tasks processed by the server node to which the rapid processing queue belongs and processing schedules of the tasks processed by the server node to which the rapid processing queue belongs, and the tasks processed by the server node to which the rapid processing queue belongs comprise second tasks;
and the second feedback module is used for feeding back the processing progress of the second task to the initiating end of the query request.
In some embodiments of the present invention, the HTTP connection processing apparatus 300 further includes:
a first judging module, configured to: release the HTTP connection when it is judged that the task quantity carried by the batch processing queue has reached the threshold of the task quantity carried by the batch processing queue; or, wait to write the first task into the batch processing queue when the task quantity carried by the batch processing queue falls below the threshold of the task quantity carried by the batch processing queue.
In some embodiments of the present invention, the HTTP connection processing apparatus 300 further includes:
a second judging module, configured to: release the HTTP connection when it is judged that the task quantity carried by the fast processing queue has reached the threshold of the task quantity carried by the fast processing queue; or, wait to write the second task into the fast processing queue when the task quantity carried by the fast processing queue falls below the threshold of the task quantity carried by the fast processing queue.
Other details of the HTTP connection processing apparatus according to the embodiment of the present invention are similar to the method according to the embodiment of the present invention described above with reference to fig. 2, and are not described again here.
Fig. 4 is a block diagram of an exemplary hardware architecture of an HTTP connected processing device in an embodiment of the present invention.
As shown in fig. 4, the HTTP-connected processing device 400 includes an input device 401, an input interface 402, a central processor 403, a memory 404, an output interface 405, and an output device 406. The input interface 402, the central processing unit 403, the memory 404, and the output interface 405 are connected to each other through a bus 410, and the input device 401 and the output device 406 are connected to the bus 410 through the input interface 402 and the output interface 405, respectively, and further connected to other components of the HTTP-connected processing device 400.
Specifically, the input device 401 receives input information from the outside and transmits the input information to the central processor 403 through the input interface 402; the central processor 403 processes the input information based on computer-executable instructions stored in the memory 404 to generate output information, stores the output information temporarily or permanently in the memory 404, and then transmits the output information to the output device 406 through the output interface 405; the output device 406 outputs the output information to the outside of the HTTP-connected processing device 400 for use by the user.
That is, the HTTP-connected processing device shown in fig. 4 may also be implemented to include: a memory storing computer-executable instructions; and a processor which, when executing computer executable instructions, may implement the methods and apparatus of the HTTP connected processing device described in connection with fig. 2-3.
In one embodiment, the HTTP connected processing device 400 shown in fig. 4 may be implemented as a device that may include: a memory for storing a program; and the processor is used for operating the program stored in the memory so as to execute the HTTP connection processing method of the embodiment of the invention.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

Claims (15)

1. A method for processing an HTTP connection, the method comprising:
on the basis of establishing a hypertext transfer protocol (HTTP) connection with an initiating terminal of the task processing request, receiving the task processing request sent by the initiating terminal;
determining that the task requested to be processed by the task processing request is a first task, and writing the first task into a batch processing queue, wherein the first task is a task whose required processing-delay sensitivity is not higher than a preset processing-delay sensitivity threshold;
and releasing the HTTP connection.
2. The method of claim 1, wherein after receiving the task processing request sent by the initiator, the method further comprises:
determining that the task requested to be processed by the task processing request is a second task, and writing the second task into a fast processing queue, wherein the second task is a task whose required processing-delay sensitivity is higher than a preset processing-delay sensitivity threshold;
and releasing the HTTP connection.
3. The method of claim 1, wherein after the writing the first task to a batch queue, the method further comprises:
and sending the identification information of the first task and the identification information of the server node to which the batch processing queue belongs to the initiating terminal.
4. The method of claim 2, wherein after writing the second task to a fast processing queue, the method further comprises:
and sending the identification information of the second task and the identification information of the server node to which the fast processing queue belongs to the initiating terminal.
5. The method of claim 1, wherein after the writing the first task to a batch queue, the method further comprises:
determining that the number of batch processing tasks in the batch processing queue reaches a number threshold, and/or that a preset time period has elapsed since batch processing tasks in the batch processing queue were last sent to the batch processing thread; and
sending the batch processing tasks in the batch processing queue to the batch processing thread.
6. The method of claim 2, wherein after the writing the second task to a fast processing queue, the method further comprises:
and sending the second task to the fast processing thread.
7. The method of claim 3, further comprising:
receiving a query request of the first task;
parsing the identification information of the first task and the identification information of the server node to which the batch processing queue belongs from the received query request of the first task;
acquiring the processing progress of the first task in a task processing schedule of the server node to which the batch processing queue belongs based on the identification information of the server node to which the batch processing queue belongs and the identification information of the first task, wherein the task processing schedule comprises: a mapping relationship between identification information of tasks processed by the server node to which the batch queue belongs and processing progress of tasks processed by the server node to which the batch queue belongs, the tasks processed by the server node to which the batch queue belongs including the first task;
and feeding back the processing progress of the first task to an initiating end of the query request.
8. The method of claim 4, further comprising:
receiving a query request of the second task;
parsing the identification information of the second task and the identification information of the server node to which the fast processing queue belongs from the received query request of the second task;
acquiring the processing progress of the second task in a task processing progress table of the server node to which the fast processing queue belongs based on the identification information of the server node to which the fast processing queue belongs and the identification information of the second task, wherein the task processing progress table comprises a mapping relation between the identification information of the task processed by the server node to which the fast processing queue belongs and the processing progress of the task processed by the server node to which the fast processing queue belongs, and the task processed by the server node to which the fast processing queue belongs comprises the second task;
and feeding back the processing progress of the second task to the initiating end of the query request.
9. The method of claim 1, wherein after determining that the task requested to be processed by the task processing request is a first task, the method further comprises:
judging that the task quantity carried by the batch processing queue reaches the threshold of the task quantity carried by the batch processing queue, and releasing the HTTP connection;
or,
waiting to write the first task into the batch processing queue when the task quantity carried by the batch processing queue is smaller than the threshold of the task quantity carried by the batch processing queue.
10. The method of claim 2, wherein after determining that the task requested to be processed by the task processing request is a second task, the method further comprises:
judging that the task quantity carried by the fast processing queue reaches the threshold of the task quantity carried by the fast processing queue, and releasing the HTTP connection;
or,
waiting to write the second task into the fast processing queue when the task quantity carried by the fast processing queue is smaller than the threshold of the task quantity carried by the fast processing queue.
11. An apparatus for processing an HTTP connection, the apparatus comprising:
a first receiving module, configured to receive a task processing request sent by an initiator on the basis of establishing an HTTP connection with the initiator of the task processing request;
a batch writing module, configured to determine that the task requested to be processed by the task processing request is a first task and write the first task into a batch processing queue, wherein the first task is a task whose required processing-delay sensitivity is not higher than a preset processing-delay sensitivity threshold;
a first release module, configured to release the HTTP connection.
12. The apparatus of claim 11, further comprising:
a fast writing module, configured to determine that the task requested to be processed by the task processing request is a second task and write the second task into a fast processing queue, wherein the second task is a task whose required processing-delay sensitivity is higher than a preset processing-delay sensitivity threshold;
and the second releasing module is used for releasing the HTTP connection.
13. The apparatus of claim 11, further comprising:
and the first sending module is used for sending the identification information of the first task and the identification information of the server node to which the batch processing queue belongs to the initiating end.
14. An apparatus for processing an HTTP connection, the apparatus comprising:
a memory for storing a program;
a processor for executing the program stored in the memory to perform the HTTP connection processing method of any one of claims 1 to 10.
15. A computer storage medium having computer program instructions stored thereon, which when executed by a processor, implement the method of handling HTTP connections as recited in any one of claims 1 to 10.
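The claimed flow — classify a task by its processing-delay sensitivity, write it to a batch or fast queue, release the HTTP connection immediately, flush the batch queue when a count threshold or timer fires (claim 5), and let the client poll progress using the returned task and node identifiers (claims 3–4 and 7–8) — can be sketched as follows. This is a minimal single-node illustration, not the patented implementation; the node identifier, threshold values, and progress states are hypothetical.

```python
import time
import uuid

NODE_ID = "node-1"           # hypothetical server-node identifier
SENSITIVITY_THRESHOLD = 0.5  # hypothetical processing-delay sensitivity threshold
BATCH_COUNT_THRESHOLD = 3    # hypothetical batch-size flush threshold
BATCH_FLUSH_INTERVAL = 5.0   # hypothetical flush period, in seconds

batch_queue = []             # delay-tolerant "first" tasks
fast_queue = []              # delay-sensitive "second" tasks
progress_table = {}          # task id -> processing progress (the schedule of claims 7-8)
_last_flush = time.monotonic()

def handle_request(task, delay_sensitivity):
    """Classify the task, enqueue it, then release the HTTP connection.

    Returns the task id and node id the client later uses to poll progress,
    mirroring claims 3-4.
    """
    task_id = str(uuid.uuid4())
    progress_table[task_id] = "queued"
    if delay_sensitivity > SENSITIVITY_THRESHOLD:
        fast_queue.append((task_id, task))   # claim 2: fast processing queue
    else:
        batch_queue.append((task_id, task))  # claim 1: batch processing queue
        maybe_flush_batch()                  # claim 5: count- or time-triggered flush
    # The HTTP connection would be released here; the response below carries
    # everything the client needs to poll asynchronously.
    return {"task_id": task_id, "node_id": NODE_ID}

def maybe_flush_batch():
    """Hand the batch queue to a (simulated) batch thread when full or stale."""
    global _last_flush
    if (len(batch_queue) >= BATCH_COUNT_THRESHOLD
            or time.monotonic() - _last_flush >= BATCH_FLUSH_INTERVAL):
        for task_id, _task in batch_queue:
            progress_table[task_id] = "processing"
        batch_queue.clear()
        _last_flush = time.monotonic()

def query_progress(node_id, task_id):
    """Claims 7-8: look up progress in this node's task processing schedule."""
    if node_id != NODE_ID:
        return "unknown node"
    return progress_table.get(task_id, "unknown task")
```

A client submitting three delay-tolerant tasks would see the third submission trigger a flush, after which all three report "processing", while a delay-sensitive task bypasses the batch queue entirely.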
CN201811086759.0A 2018-09-18 2018-09-18 HTTP connection processing method, device, equipment and medium Pending CN110912958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811086759.0A CN110912958A (en) 2018-09-18 2018-09-18 HTTP connection processing method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN110912958A true CN110912958A (en) 2020-03-24

Family

ID=69812735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811086759.0A Pending CN110912958A (en) 2018-09-18 2018-09-18 HTTP connection processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN110912958A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268328A (en) * 2021-05-26 2021-08-17 平安国际融资租赁有限公司 Batch processing method and device, computer equipment and storage medium
CN113760520A (en) * 2020-07-09 2021-12-07 西安京迅递供应链科技有限公司 Task processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1210409A (en) * 1997-08-28 1999-03-10 国际商业机器公司 Server-side asynchronous form management
US20060200566A1 (en) * 2005-03-07 2006-09-07 Ziebarth Wayne W Software proxy for securing web application business logic
CN107943802A (en) * 2016-10-12 2018-04-20 北京京东尚科信息技术有限公司 A kind of log analysis method and system
CN108055311A (en) * 2017-12-07 2018-05-18 畅捷通信息技术股份有限公司 HTTP Asynchronous Requests method, apparatus, server, terminal and storage medium

Similar Documents

Publication Publication Date Title
US7076781B2 (en) Resource reservation for large-scale job scheduling
CN111522641B (en) Task scheduling method, device, computer equipment and storage medium
CN113485822A (en) Memory management method, system, client, server and storage medium
WO2020238989A1 (en) Method and apparatus for scheduling task processing entity
CN115004673A (en) Message pushing method and device, electronic equipment and computer readable medium
CN110912958A (en) HTTP connection processing method, device, equipment and medium
CN113656176A (en) Cloud equipment distribution method, device, system, electronic equipment, medium and product
CN111586140A (en) Data interaction method and server
CN108429703B (en) DHCP client-side online method and device
CN113157465B (en) Message sending method and device based on pointer linked list
CN110955460B (en) Service process starting method and device, electronic equipment and storage medium
CN107045452B (en) Virtual machine scheduling method and device
CN111831408A (en) Asynchronous task processing method and device, electronic equipment and medium
CN110995817A (en) Request callback method and device and client equipment
CN113656178B (en) Data processing method, device, equipment and readable storage medium
CN116192849A (en) Heterogeneous accelerator card calculation method, device, equipment and medium
CN109905459A (en) A kind of data transmission method and device
CN111131078B (en) Message hashing method and device, FPGA module and processor module
CN111324438B (en) Request scheduling method and device, storage medium and electronic equipment
CN113268327A (en) Transaction request processing method and device and electronic equipment
CN113873036B (en) Communication method, device, server and storage medium
CN114466079B (en) Request processing method, device, proxy server and storage medium
CN113301136B (en) Service request processing method and device
CN113760472A (en) Method and device for scheduling push tasks
CN115037803B (en) Service calling method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200324