CN114915659B - Network request processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114915659B
CN114915659B (application CN202110178915.1A)
Authority
CN
China
Prior art keywords
network request
queue
persistence
byte sequence
sequence corresponding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110178915.1A
Other languages
Chinese (zh)
Other versions
CN114915659A (en)
Inventor
刘佳皓
宋立鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110178915.1A
Publication of CN114915659A
Application granted
Publication of CN114915659B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/126Character encoding

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Retry When Errors Occur (AREA)
  • Computer And Data Communications (AREA)

Abstract

The application provides a network request processing method and apparatus, an electronic device, and a computer-readable storage medium, involving cloud technology and big data processing. The method includes: obtaining a network request whose response has failed, and serializing the network request to obtain a byte sequence corresponding to the network request; storing the byte sequence corresponding to the network request into a persistence queue; reading at least one byte sequence corresponding to a network request from the persistence queue for deserialization; and retrying the at least one network request obtained after the deserialization. With the method and apparatus, network requests whose responses have failed can be processed safely and efficiently.

Description

Network request processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method and apparatus for processing a network request, an electronic device, and a computer readable storage medium.
Background
A network request is a common mode of front-end/back-end interaction in an application program. When the network is abnormal, or the number of network requests surges in a short time, the background server may fail to respond to network requests. To improve the probability that a network request succeeds under these conditions, a network retry mechanism needs to be preset, that is, a scheme for retrying a network request after its response fails.
In the related art, the number of network requests that can be retried is limited by the memory of the background server, and a failed network request occupies memory throughout the retry process, so there is a potential risk of memory overflow; there is also the problem that network request data in memory may be lost due to a service restart, power failure, and the like.
Disclosure of Invention
The embodiments of the application provide a network request processing method and apparatus, an electronic device, and a computer-readable storage medium, which can process network requests safely and efficiently.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a network request processing method, which comprises the following steps:
acquiring a network request whose response has failed, and serializing the network request to obtain a byte sequence corresponding to the network request;
storing the byte sequence corresponding to the network request into a persistence queue;
reading at least one byte sequence corresponding to a network request from the persistence queue for deserialization; and
retrying the at least one network request obtained after the deserialization.
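The four steps above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: `pickle` stands in for the serialization interface, a single append-only file stands in for the persistence queue, and all names are hypothetical.

```python
import os
import pickle
import tempfile

class PersistentQueue:
    """Minimal file-backed FIFO: each record is one pickled byte sequence."""
    def __init__(self, path):
        self.path = path

    def put(self, byte_seq):
        with open(self.path, "ab") as f:
            pickle.dump(byte_seq, f)  # appended records survive a restart

    def drain(self):
        items = []
        with open(self.path, "rb") as f:
            while True:
                try:
                    items.append(pickle.load(f))
                except EOFError:
                    break
        os.remove(self.path)
        return items

# Step 1: serialize the failed request's state into a byte sequence
failed_request = {"method": "POST", "url": "/api/send", "params": {"msg": "hi"}}
byte_seq = pickle.dumps(failed_request)

# Step 2: persist the byte sequence to the queue
q = PersistentQueue(os.path.join(tempfile.mkdtemp(), "retry.q"))
q.put(byte_seq)

# Steps 3-4: read back, deserialize, and hand off for retry
for raw in q.drain():
    request = pickle.loads(raw)
    # retry(request)  # re-send via the normal request path
```

Because the byte sequences live on disk rather than in memory, the retry backlog is not bounded by memory size and survives a process restart.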
The embodiment of the application provides a network request processing device, which comprises:
The acquisition module is used for acquiring a network request with failed response;
the serialization processing module is used for serializing the network request to obtain a byte sequence corresponding to the network request;
the storage module is used for storing the byte sequence corresponding to the network request into a persistence queue;
the deserialization processing module is used for reading at least one byte sequence corresponding to a network request from the persistence queue for deserialization;
and the retry module is used for retrying the at least one network request obtained after the deserialization.
In the above scheme, the apparatus further includes a receiving module configured to receive a serialization instruction, where the serialization instruction specifies the storage mode to be used after serialization; the serialization processing module is further configured to call a serialization interface function according to the storage mode to serialize the state information of the network request, obtaining a byte sequence that conforms to the storage mode; wherein the state information includes at least one of: the request mode, the request address, and the request parameters of the network request.
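For instance, serializing the state information named above (request mode, request address, request parameters) according to a storage mode might look like the following Python sketch; the `json` storage mode and all names here are assumptions, not the patent's interface.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RequestState:
    method: str   # request mode, e.g. "GET" / "POST"
    url: str      # request address
    params: dict  # request parameters

def serialize(state: RequestState, storage_mode: str) -> bytes:
    # The storage mode comes from the serialization instruction;
    # only "json" is implemented in this sketch.
    if storage_mode == "json":
        return json.dumps(asdict(state), sort_keys=True).encode("utf-8")
    raise ValueError(f"unsupported storage mode: {storage_mode}")

blob = serialize(RequestState("POST", "/api/msg", {"text": "hello"}), "json")
```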
In the above scheme, the storage module is further configured to store the byte sequence corresponding to a network request whose first response failed into a first persistence queue based on nonvolatile storage, where the byte sequences corresponding to different network requests are stored in the first persistence queue in first-in-first-out order; the apparatus further includes a setting module configured to set a first retry time for each network request in the first persistence queue, where the first retry times of different network requests increase in the chronological order in which their byte sequences were stored in the first persistence queue.
In the above solution, the obtaining module is further configured to obtain at least one network request whose response fails repeatedly when the network requests in the first persistence queue are executed; the serialization processing module is further configured to serialize the at least one network request whose response failed repeatedly; the storage module is further configured to store the resulting byte sequence corresponding to the at least one repeatedly failed network request into a second persistence queue; and the setting module is further configured to set a second retry time for each repeatedly failed network request in the second persistence queue, where the byte sequences corresponding to different network requests are stored in the second persistence queue in descending order of the second retry time.
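A minimal Python sketch of the two-queue bookkeeping described above; the delay constant, the exponential backoff for repeat failures, and all names are assumptions rather than values from the patent:

```python
import time
from collections import deque

FIRST_RETRY_DELAY = 5.0  # seconds; assumed backoff base

first_queue = deque()   # FIFO: byte sequences of first-failure requests
second_queue = []       # repeat-failure entries, kept sorted by retry time

def enqueue_first_failure(byte_seq, now):
    # Requests stored later get later first-retry times, so FIFO
    # order coincides with retry-time order.
    first_queue.append((now + FIRST_RETRY_DELAY, byte_seq))

def enqueue_repeat_failure(byte_seq, attempt, now):
    # The second retry time grows with the attempt count (assumed
    # exponential backoff); entries are kept in descending order of
    # retry time, as in the scheme above.
    second_queue.append((now + FIRST_RETRY_DELAY * (2 ** attempt), byte_seq))
    second_queue.sort(key=lambda e: e[0], reverse=True)

now = time.time()
enqueue_first_failure(b"req-A", now)
enqueue_first_failure(b"req-B", now + 1.0)  # stored later -> later retry time
enqueue_repeat_failure(b"req-C", 2, now)
enqueue_repeat_failure(b"req-D", 1, now)
```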
In the above scheme, the apparatus further includes a monitoring module configured to periodically monitor the number of byte sequences in the second persistence queue; the storage module is further configured to, when the number exceeds a first number threshold, stop storing byte sequences corresponding to newly repeatedly failed network requests into the second persistence queue until the number falls below a second number threshold, where the first number threshold is greater than or equal to the second number threshold.
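The two thresholds form a hysteresis band: admission stops once the queue grows past the first threshold and resumes only after it shrinks below the second, which prevents flapping around a single cutoff. A Python sketch (the threshold values and names are assumptions):

```python
HIGH_WATER = 1000  # first number threshold (assumed)
LOW_WATER = 800    # second number threshold, <= HIGH_WATER (assumed)

class SecondQueueGate:
    """Hysteresis gate: stop accepting above HIGH_WATER, resume below LOW_WATER."""
    def __init__(self):
        self.accepting = True

    def should_store(self, queue_len):
        if self.accepting and queue_len > HIGH_WATER:
            self.accepting = False          # queue too long: stop admitting
        elif not self.accepting and queue_len < LOW_WATER:
            self.accepting = True           # drained enough: admit again
        return self.accepting

gate = SecondQueueGate()
```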
In the above solution, when the persistence queue includes a first persistence queue for storing the byte sequences corresponding to network requests whose first response failed, the monitoring module is further configured to monitor the first persistence queue so as to read from it the byte sequence corresponding to at least one network request whose first retry time has been reached; the deserialization processing module is further configured to deserialize the byte sequence of the at least one network request whose first retry time has been reached.
In the above scheme, when the persistence queue includes a first persistence queue for storing the byte sequences of network requests whose first response failed and a second persistence queue for storing the byte sequences of repeatedly failed network requests, the deserialization processing module is further configured to read the byte sequence of a repeatedly failed network request from the head of the second persistence queue and deserialize it; and, when the second persistence queue is empty, to read the byte sequence of a first-failure network request from the first persistence queue and deserialize it.
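This read priority (repeat failures first, first failures only when the second queue is empty) can be sketched in Python as follows; the names are assumptions and the byte sequences are pickled stand-ins:

```python
import pickle
from collections import deque

def next_request(first_queue, second_queue):
    """Prefer repeatedly failed requests; fall back to first-failure
    requests only when the second queue is empty."""
    if second_queue:
        return pickle.loads(second_queue.popleft())
    if first_queue:
        return pickle.loads(first_queue.popleft())
    return None  # both queues drained

first_q = deque([pickle.dumps({"url": "/a", "failures": 1})])
second_q = deque([pickle.dumps({"url": "/b", "failures": 3})])
```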
In the above solution, when the persistence queue includes a first persistence queue for storing the byte sequences of network requests whose first response failed and a second persistence queue for storing the byte sequences of repeatedly failed network requests, the deserialization processing module is further configured to allocate a plurality of threads, each of which alternately performs the following: reading the byte sequence of a first-failure network request from the first persistence queue and deserializing it; and reading the byte sequence of a repeatedly failed network request from the second persistence queue and deserializing it.
In the above scheme, when the persistence queue includes a first persistence queue for storing the byte sequences of network requests whose first response failed and a second persistence queue for storing the byte sequences of repeatedly failed network requests, the deserialization processing module is further configured to allocate a corresponding number of threads to each queue according to the weights of the first persistence queue and the second persistence queue; to read, through the threads allocated to the first persistence queue, the byte sequences of first-failure network requests from the first persistence queue and deserialize them; and to read, through the threads allocated to the second persistence queue, the byte sequences of repeatedly failed network requests from the second persistence queue and deserialize them.
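Weighted thread allocation can be sketched as a simple proportional split of the pool. The clamping policy (at least one thread per queue) and all names are assumptions, not the patent's:

```python
def allocate_threads(total, first_weight, second_weight):
    """Split a thread pool between the two queues proportionally by weight."""
    first = round(total * first_weight / (first_weight + second_weight))
    first = max(1, min(total - 1, first))  # leave at least one thread per queue
    return first, total - first

# e.g. weight the second (repeat-failure) queue three times higher
n_first, n_second = allocate_threads(8, 1, 3)
```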
In the above solution, the retry module is further configured to send the at least one network request obtained by deserialization to a channel, so that a container node in a container cluster reads the network request from the channel and re-executes it; the apparatus further includes a deletion module configured to delete any byte sequence from the persistence queue when the retry count of the corresponding network request exceeds a retry count threshold, or when the network request receives a successful response.
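The deletion rule above (drop an entry once it succeeds or exhausts its retries) could be sketched as follows; the threshold value and all names are assumptions:

```python
MAX_RETRIES = 5  # retry count threshold (assumed)

def settle(queue_entries, results):
    """Drop entries that succeeded or exhausted their retries; keep the rest."""
    keep = []
    for entry in queue_entries:
        succeeded = results.get(entry["id"], False)
        if succeeded or entry["retries"] >= MAX_RETRIES:
            continue  # deleted from the persistence queue
        keep.append(entry)
    return keep

entries = [
    {"id": 1, "retries": 2},  # still failing, below threshold -> keep
    {"id": 2, "retries": 5},  # retries exhausted -> drop
    {"id": 3, "retries": 1},  # responded successfully -> drop
]
remaining = settle(entries, {3: True})
```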
In the above scheme, the storage module is further configured to store received unresponded network requests into a non-persistent queue created on volatile memory; the retry module is configured to, when the number of byte sequences of failed network requests in the persistence queue is greater than a third number threshold, execute the failed network requests read from the persistence queue and obtained by deserialization; and, when that number is less than or equal to the third number threshold, to allocate a plurality of threads to the persistence queue and the non-persistent queue respectively, so that the unresponded network requests read from the non-persistent queue and the failed network requests read from the persistence queue and obtained by deserialization are executed concurrently.
In the above solution, the retry module is further configured to read unresponded network requests from the non-persistent queue through the threads allocated to it and execute them; and to read the byte sequences of failed network requests from the persistence queue through the threads allocated to it, deserialize them, and execute the resulting failed network requests.
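The switch between draining only the persistence queue and draining both queues concurrently can be sketched as a scheduling decision; the third threshold value and the even split are assumptions:

```python
THIRD_THRESHOLD = 500  # assumed backlog threshold for the persistence queue

def plan_execution(persistent_len, pool_size):
    """When the persistent backlog is large, devote the whole pool to it;
    otherwise split the threads so both queues drain concurrently."""
    if persistent_len > THIRD_THRESHOLD:
        return {"persistent": pool_size, "non_persistent": 0}
    half = pool_size // 2
    return {"persistent": half, "non_persistent": pool_size - half}
```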
The embodiment of the application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the network request processing method provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium, which stores executable instructions for implementing the network request processing method provided by the embodiment of the application when the executable instructions are executed by a processor.
The embodiment of the application has the following beneficial effects:
Network requests whose responses have failed are serialized and stored persistently, and are deserialized to restore the network requests when a retry is needed. Because persistent storage is not limited by memory size, a massive number of network requests can be stored, throughput is improved, and network requests can be retried more efficiently; data safety is ensured even in unexpected situations such as a service restart or power failure.
Drawings
FIG. 1 is a schematic architecture diagram of a network request processing system 100 according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a retry assembly 400 according to an embodiment of the present application;
fig. 3 is a flow chart of a network request processing method provided in an embodiment of the present application;
fig. 4 is a flow chart of a network request processing method provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of an overall architecture of a network request processing system according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a retry component according to an embodiment of the present application;
fig. 7 is a schematic diagram of a network request processing method according to an embodiment of the present application;
fig. 8 is a flow chart of a network request processing method provided in an embodiment of the present application;
fig. 9 is a schematic diagram of message retry amounts in a retry procedure according to an embodiment of the present application;
FIG. 10 is a time-consuming schematic diagram during a retry provided by an embodiment of the present application;
FIG. 11 is a schematic view of disk usage during a retry provided in an embodiment of the present application;
fig. 12 is a schematic diagram of memory usage during retry according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third", and the like are used merely to distinguish similar objects and do not imply a specific ordering; it should be understood that, where permitted, "first", "second", and "third" may be interchanged in a specific order or sequence so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before further describing embodiments of the present application in detail, the terms and expressions that are referred to in the embodiments of the present application are described, and are suitable for the following explanation.
1) Message Queue (MQ): a container used to hold messages while they are in transit. One end can continuously write messages into the message queue, while the other end can read or subscribe to the messages in it. For example, the Cloud Message Queue (CMQ) is a distributed message queue service that provides a reliable message-based asynchronous communication mechanism: it can store, reliably and efficiently, messages sent and received between different applications (or different components of the same application) deployed in a distributed manner, and prevents message loss. CMQ also supports concurrent reading and writing by multiple processes without interference between sending and receiving, so not all applications or components need to be running at all times.
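The write-one-end/read-other-end behavior can be illustrated in-process with Python's thread-safe `queue.Queue`, used here as a stand-in for a distributed service such as CMQ (this is not CMQ's API):

```python
import queue
import threading

mq = queue.Queue()  # in-process stand-in for a message queue service

def producer():
    for i in range(3):
        mq.put(f"request-{i}")   # one end writes continuously

received = []
def consumer():
    for _ in range(3):
        received.append(mq.get())  # the other end reads independently
        mq.task_done()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

With a single producer and a single consumer, messages arrive in the order they were written, and neither side needs to run at the same moment as long as the queue holds the backlog.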
2) Persistent queue: a message queue created on nonvolatile storage, for example a disk-based message queue. Network request data stored in a persistence queue is not lost when the service restarts or power is lost.
3) Non-persistent queue: a message queue created on volatile memory, for example an in-memory message queue. Network request data stored in a non-persistent queue is lost when the service restarts or power is lost.
4) Serialization: the process of converting the state information of an object into a form that can be stored or transmitted. During serialization, an object writes its current state to temporary or persistent storage. For example, Java serialization converts a Java object into a byte sequence; its most important role is to guarantee the integrity and transferability of an object when transferring and saving it, converting the object into an ordered byte stream that can be transmitted over a network or saved in a local file.
5) Deserialization: the process of reading a byte sequence from a storage area and recreating an object. For example, Java deserialization restores a byte sequence into a Java object; its most important role is to reconstruct the object from the object state and description information stored in the byte sequence.
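The patent's example is Java serialization; the same round trip can be illustrated with Python's `pickle`, used here as an analogous stand-in (not the Java API):

```python
import pickle

class NetworkRequest:
    def __init__(self, method, url, params):
        self.method, self.url, self.params = method, url, params

    def __eq__(self, other):
        return vars(self) == vars(other)

original = NetworkRequest("GET", "/api/feed", {"page": 2})
byte_seq = pickle.dumps(original)   # serialization: object -> byte sequence
restored = pickle.loads(byte_seq)   # deserialization: byte sequence -> object
```

The restored object is a new instance rebuilt entirely from the state stored in the byte sequence, which is exactly what allows a failed request to outlive the process that created it.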
A network request is a common mode of front-end/back-end interaction in an application program. For example, during a chat, an instant messaging client generates many network requests sent to the server, such as requests carrying the content of a chat session message, or requests sent because the client needs some feedback response from the server.
However, network request response failure may occur when an anomaly occurs in the network or the number of network requests suddenly increases in a short time. In order to improve the probability of success of network requests when the network is abnormal or the number of network requests suddenly increases, a network retry mechanism, that is, a scheme of performing network request retry after network request response failure, needs to be preset.
In the related art, after a network request's response fails, the failed request is not released from memory; instead it is retried directly in memory and released only after a successful response. That is, in the scheme provided by the related art, the number of network requests that can be retried is limited by the memory size, failed requests occupy memory throughout the retry process, and there is a potential risk of memory overflow; because memory is volatile, network request data may also be lost due to a service restart, power failure, and the like.
In view of this, embodiments of the present application provide a network request processing method, apparatus, electronic device, and computer readable storage medium, which can increase the number of network requests that can be retried on the one hand, and avoid the problem of network request data loss caused by service restart, power outage, and the like on the other hand. An exemplary application of the network request processing method provided by the embodiment of the present application is described below, where the network request processing method provided by the embodiment of the present application may be implemented by various electronic devices, for example, may be implemented by a server or a server cluster.
Referring to fig. 1, fig. 1 is a schematic architecture diagram of a network request processing system 100 according to an embodiment of the present application, including: the background server 200, message queue 300, retry component 400, processing logic server 500, network 600, terminal device 700-1, and terminal device 700-2 are each described below.
The background server 200 is a background server of the client 710-1 and the client 710-2, and is configured to receive a plurality of network requests sent by different users through the terminal device 700-1 and the terminal device 700-2, and send the received plurality of network requests to the message queue 300, so as to implement service decoupling.
Message queue 300 may be a server or server cluster for storing the network requests forwarded by background server 200, so that processing logic server 500 can read them from message queue 300 and consume them.
The retry component 400 may be a server or a cluster of servers, configured to obtain a network request with a failed response, and perform serialization processing on the obtained network request to obtain a byte sequence corresponding to the network request; next, the retry component 400 stores the byte sequence corresponding to the network request to a persistence queue; then, the retry component 400 monitors the persistent queue, and when the retry time is reached, reads at least one byte sequence corresponding to the network request from the persistent queue to perform deserialization processing; finally, the retry component 400 sends the at least one network request obtained by the deserialization process to the processing logic server 500, so that the processing logic server 500 retries the received network request.
The processing logic server 500 is configured to read unresponded network requests from the message queue 300 and consume them; the processing logic server 500 is also configured to obtain failed network requests from the retry component 400, perform the retry operation, and then return the response result for each network request to the corresponding terminal device.
The network 600 is used to connect the background server 200, the processing logic server 500, the terminal device 700-1 and the terminal device 700-2, and the network 600 may be a wide area network or a local area network, or a combination of both.
The terminal device 700-1 and the terminal device 700-2 are terminal devices associated with a user, and the terminal device 700-1 is taken as an example, the terminal device 700-1 is provided with a client 710-1 running thereon, and the client 710-1 may be various types of clients, such as an instant messaging client, a news client, a video playing client, and the like. A user may send various types of network requests to the background server 200 through the client 710-1, such as when the client 710-1 is an instant messaging client, the user may send a network request carrying chat session page content to the background server 200 through the client 710-1 to cause the background server 200 to send the received network request into the message queue 300.
In some embodiments, the background server 200, the message queue 300, the retry component 400, and the processing logic server 500 may each be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal device 700-1 and the terminal device 700-2 may be, but are not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal devices 700-1 and 700-2 may be directly or indirectly connected to the background server 200 and the processing logic server 500 through wired or wireless communication, which is not limited in the embodiments of the application.
The structure of the retry assembly 400 shown in fig. 1 is described below. Referring to fig. 2, fig. 2 is a schematic structural diagram of a retry component 400 provided in an embodiment of the present application, taking the retry component 400 as an example of a server, the retry component 400 shown in fig. 2 includes: at least one processor 410, a memory 440, at least one network interface 420. The various components in retry component 400 are coupled together by bus system 430. It is understood that bus system 430 is used to enable connected communications between these components. The bus system 430 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various buses are labeled in fig. 2 as bus system 430.
The processor 410 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Memory 440 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 440 optionally includes one or more storage devices physically located remote from processor 410.
Memory 440 includes volatile memory or nonvolatile memory, and may include both. The nonvolatile memory may be a read-only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory). The memory 440 described in the embodiments of the application is intended to comprise any suitable type of memory.
In some embodiments, memory 440 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
an operating system 441, including system programs such as a framework layer, a core library layer, and a driver layer, for implementing various basic system services and handling hardware-related tasks;
a network communication module 442, for reaching other computing devices via one or more (wired or wireless) network interfaces 420; exemplary network interfaces 420 include Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB, Universal Serial Bus), and the like;
in some embodiments, the network request processing apparatus provided in the embodiments of the present application may be implemented in a software manner, and fig. 2 shows a network request processing apparatus 443 stored in a memory 440, which may be software in the form of a program and a plug-in, and includes the following software modules: the acquisition module 4431, the serialization processing module 4432, the storage module 4433, the deserialization processing module 4434, the retry module 4435, the reception module 4436, the setting module 4437, the listening module 4438, and the deletion module 4439 are logically, and thus can be arbitrarily combined or further split according to the functions implemented. The functions of the respective modules will be described hereinafter. It should be noted that, for convenience of description, all the functional modules are shown in fig. 2 at one time, but in practical application, embodiments including only the acquisition module 4431, the serialization processing module 4432, the storage module 4433, the anti-serialization processing module 4434, and the retry module 4435 are not excluded.
In other embodiments, the network request processing apparatus provided in the embodiments of the present application may be implemented in hardware. By way of example, the apparatus provided in the embodiments of the present application may be a processor in the form of a hardware decoding processor that is programmed to perform the network request processing method provided in the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
The network request processing method provided in the embodiment of the present application will be specifically described below with reference to the accompanying drawings.
Referring to fig. 3, fig. 3 is a flowchart of a network request processing method according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 3. It should be noted that, the execution subject of steps S101 to S104 shown in fig. 3 may be the retry component 400 shown in fig. 1.
In step S101, a network request with a failed response is obtained, and a serialization process is performed on the network request, so as to obtain a byte sequence corresponding to the network request.
In some embodiments, the types of network requests may include hypertext transfer protocol (HTTP, hypertext Transfer Protocol) requests, simple mail transfer protocol (SMTP, simple Mail Transfer Protocol) requests, file transfer protocol (FTP, file Transfer Protocol) requests, and the like.
Taking a video playing client as an example, when user A clicks a video to be watched in the human-computer interaction interface of the video playing client, a background process of the video playing client obtains a network request for playing the video and sends the network request to a background server of the video playing client. Assume that, in the same time period, users B to F also click on the same video on their corresponding terminal devices; the background server of the video playing client will then receive multiple network requests within a short time.
For example, taking an instant messaging client as an example, in a process of chatting with other users through the instant messaging client, the instant messaging client typically sends a plurality of network requests to a server, for example, sends a network request carrying chat session message content to a background server of the instant messaging client, where the chat session message content may be various types of information such as text, documents, pictures or videos. When multiple users are chatting at the same time, a background server of the instant messaging client receives a large number of network requests in a short time.
The network request may be responded to successfully by the processing logic server 500, or its response may fail. For example, when the background server of an instant messaging client receives a large number of network requests in a short time, the network request sent by user A may be responded to successfully while the network request sent by user B fails; the network request processing method provided in the embodiment of the present application mainly processes the network requests whose responses failed.
For example, the retry component may serialize the network request whose response failed to obtain the byte sequence corresponding to the network request in the following manner: receiving a serialization instruction, where the serialization instruction specifies a post-serialization storage format (such as a binary format, an eXtensible Markup Language (XML) format, or a JavaScript Object Notation (JSON) format); and calling a serialization interface function according to the storage format to serialize the state information of the network request into a byte sequence conforming to that format. The state information of the network request includes at least one of: the request method, the request address, and the request parameters of the network request.
For example, taking the JSON format as an example, after obtaining a network request whose response failed, the retry component invokes the corresponding serialization interface function to serialize the state information of that request into a byte sequence in JSON format. (For comparison, native object serialization involves first creating an object output stream that wraps an output stream of another type, such as a file output stream, and then writing the object via the writeObject() method of the object output stream.)
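The serialization described here can be sketched as follows. This is a minimal illustration in Python, not the patent's implementation; the field names (`method`, `url`, `params`) are assumptions for the example.

```python
import json

def serialize_request(method, url, params):
    # Serialize the state information of a failed network request
    # (request method, request address, request parameters) into a
    # JSON-format byte sequence, as in step S101.
    state = {"method": method, "url": url, "params": params}
    return json.dumps(state, sort_keys=True).encode("utf-8")

def deserialize_request(byte_seq):
    # Inverse operation, used later when reading from the persistence queue.
    return json.loads(byte_seq.decode("utf-8"))

seq = serialize_request("POST", "/v1/messages", {"chat_id": 42, "text": "hi"})
restored = deserialize_request(seq)
```

The byte sequence round-trips losslessly, which is what allows the request to be reconstructed and retried after a service restart.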
In step S102, a byte sequence corresponding to the network request is stored in the persistence queue.
In some embodiments, when the persistence queue includes a first persistence queue for storing byte sequences corresponding to network requests whose first response failed, the retry component may store the byte sequence corresponding to the network request into the persistence queue as follows: storing the byte sequence corresponding to the network request whose first response failed into a first persistence queue created based on nonvolatile memory, where the byte sequences corresponding to different network requests are stored in the first persistence queue in first-in-first-out order; and then setting, in the first persistence queue, a first retry time for each network request whose first response failed. The first retry time may take the form of a timestamp; a timestamp is data generated using digital signature technology, where the signed object includes information such as the original file information, the signature parameters, and the signature time, and a timestamp system generates and manages timestamps by digitally signing the signed object to prove that the original file existed before the signing. The first retry times of different network requests increase monotonically (for example, as an arithmetic progression, as a geometric progression, or by adding a fixed duration to the enqueue time) in the order in which the byte sequences corresponding to the different network requests are stored into the first persistence queue.
For example, taking the manner of adding a fixed duration to the enqueue time as an example, assume that the retry component obtains 3 different network requests whose first responses failed, namely network request A, network request B, and network request C. After each of them is serialized to obtain its corresponding byte sequence, the retry component stores those byte sequences into a first persistence queue created based on nonvolatile memory such as a disk or a solid state drive, and sets a corresponding first retry time for each of network request A, network request B, and network request C. Assume that the byte sequence corresponding to network request A joins the first persistence queue at time T1, that of network request B at time T2, and that of network request C at time T3, where T1 < T2 < T3, and assume that the fixed duration set by the user is 2 seconds. Then the first retry time corresponding to network request A is T1 + 2, that of network request B is T2 + 2, and that of network request C is T3 + 2, so that the multiple network requests whose first responses failed can be read from the first persistence queue and executed in first-in-first-out order.
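The enqueue-time-plus-fixed-duration scheme can be sketched with an in-memory stand-in for the disk-backed queue (the real queue is persisted to nonvolatile memory; the class and method names here are assumptions for illustration):

```python
import collections
import time

FIXED_DELAY = 2.0  # fixed duration (seconds) added to the enqueue time

class FirstPersistenceQueue:
    # In-memory stand-in for the disk-backed FIFO queue: each entry pairs
    # a byte sequence with a first retry time of enqueue time + FIXED_DELAY.
    def __init__(self):
        self._entries = collections.deque()

    def put(self, byte_seq, now=None):
        t = time.time() if now is None else now
        self._entries.append((byte_seq, t + FIXED_DELAY))

    def due(self, now):
        # Pop, in first-in-first-out order, every byte sequence whose
        # first retry time has arrived.
        ready = []
        while self._entries and self._entries[0][1] <= now:
            ready.append(self._entries.popleft()[0])
        return ready

q = FirstPersistenceQueue()
q.put(b"request A", now=1.0)  # first retry time 3.0
q.put(b"request B", now=1.5)  # first retry time 3.5
q.put(b"request C", now=2.0)  # first retry time 4.0
```

Because retry times increase monotonically with enqueue order, reading the due entries from the head preserves the first-in-first-out property described above.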
In other embodiments, when the persistence queues include a first persistence queue for storing a byte sequence corresponding to a network request for which a first response fails and a second persistence queue for storing a byte sequence corresponding to a network request for which a repeated response fails, step S102 shown in fig. 3 may be implemented through steps S1021 to S1023 shown in fig. 4, which will be described in connection with the steps shown in fig. 4.
In step S1021, while executing the network requests read from the first persistence queue, at least one network request whose repeated response failed is obtained.
In step S1022, the at least one network request whose repeated response failed is serialized, and the byte sequence corresponding to each such network request is stored into the second persistence queue.
In some embodiments, the retry component reads, from the first persistence queue, the byte sequence corresponding to a network request whose first response failed, deserializes the read byte sequence, and retries the network request obtained by the deserialization; any network request whose response fails again is selected out and serialized, so that the byte sequence corresponding to the network request whose repeated response failed is added to the second persistence queue for a degraded retry.
For example, assume the byte sequences of network requests whose first responses failed that the retry component reads from the disk-based first persistence queue are those of network requests A through D. The retry component deserializes these byte sequences to obtain network requests A through D, and then sends them into a channel so that container nodes in a container cluster can read network requests A through D from the channel and retry them (a container cluster is an implementation of the server based on virtualization technology; for example, the processing logic server 500 shown in fig. 1 may be virtualized as a container cluster to improve the portability of retrying the failed network requests read from the channel). If network request B fails to respond again during the retry, the retry component serializes network request B and adds the corresponding byte sequence to a second persistence queue created based on nonvolatile memory such as a disk or a solid state drive, so as to perform a degraded retry of network request B.
In step S1023, a second retry time is set in the second persistence queue for each network request whose repeated response failed.
In some embodiments, the byte sequences corresponding to the different network requests may be stored in the second persistence queue in ascending order of the second retry time. For example, the second persistence queue may be a message queue based on a key-value database, with the second retry time as the key and the network request whose repeated response failed as the value, so that the byte sequences corresponding to the network requests whose repeated responses failed are kept sorted by retry time in the second persistence queue; that is, the network request with the shortest retry time is placed at the head of the second persistence queue.
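A minimal sketch of the key-ordered second queue, using a heap keyed by the second retry time so that the shortest retry time sits at the head (the patent describes a key-value-database-backed message queue; this in-memory version, with assumed class and method names, only illustrates the ordering):

```python
import heapq

class SecondPersistenceQueue:
    # Entries are keyed by the second retry time, so the network request
    # with the shortest retry time is always at the head of the queue.
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserving insertion order

    def put(self, retry_time, byte_seq):
        heapq.heappush(self._heap, (retry_time, self._counter, byte_seq))
        self._counter += 1

    def head(self):
        return self._heap[0][2] if self._heap else None

    def pop(self):
        return heapq.heappop(self._heap)[2]

q2 = SecondPersistenceQueue()
q2.put(3.0, b"add group member")  # lower urgency: 3-second retry
q2.put(1.0, b"chat message")      # higher urgency: 1-second retry
```

Even though the group-membership request was enqueued first, the chat-session request with the shorter retry time is read first, matching the per-service-type retry times in the examples below.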
For example, the second retry time may also be set by using a time stamp, where the second retry time of each network request that fails to respond repeatedly included in the second persistence queue may be a unified value preset by the user or the server, for example, the second retry time of each network request that fails to respond repeatedly is 1 second; of course, the corresponding preset values may be set according to different types of services, that is, the preset values of the corresponding second retry times are different for the network requests with failed repeated responses corresponding to the different types of services. For example, when the service type is a network request carrying a chat session page message, the preset value of the corresponding second retry time is 1 second; and when the service type is a network request for adding new group members, the preset value of the corresponding second retry time is 3 seconds.
For example, the second retry time may also be set according to the real-time degree or the emergency degree of the service corresponding to the network request with failed repeated response, for example, for the network request with higher real-time degree or emergency degree, the corresponding second retry time may be set to be shorter (for example, 1 second) so as to process as soon as possible; for network requests with lower real-time or emergency, the corresponding second retry time may be set longer (e.g., 3 seconds) to prioritize network requests with higher real-time.
For example, the second retry time may be set according to the user account level of the network request, and for a high-level user (i.e., a user whose user account level exceeds the level threshold), the retry time corresponding to the network request sent by the user is shorter, so as to perform processing preferentially; for the low-level user (i.e., the user whose user account level is lower than the level threshold), the retry time corresponding to the network request sent by the low-level user is longer, so that the server processes the network request sent by the low-level user after processing the network request sent by the high-level user.
It should be noted that, the first retry time mentioned in the embodiment of the present application is for the first persistent queue, that is, the retry time of each network request that fails to respond first included in the first persistent queue is called a first retry time, so as to be distinguished from the second retry time of each network request that fails to respond repeatedly included in the second persistent queue.
In other embodiments, the retry component may also periodically monitor the number of byte sequences, corresponding to network requests whose repeated responses failed, stored in the second persistence queue; when the number of byte sequences stored in the second persistence queue exceeds a first number threshold, byte sequences corresponding to new network requests whose repeated responses failed are no longer stored into the second persistence queue until the number of byte sequences stored in the second persistence queue falls below a second number threshold, where the first number threshold is greater than or equal to the second number threshold.
For example, to avoid that the number of byte sequences corresponding to network requests with failed repeated responses stored in the second persistent queue is too large to affect performance, the number of byte sequences corresponding to network requests with failed repeated responses stored in the second persistent queue needs to be controlled. For example, the maximum number of byte sequences corresponding to the network requests with repeated response failures stored in the second persistence queue may be set to be 1000, and when the number of byte sequences corresponding to the network requests with repeated response failures stored in the second persistence queue reaches 1000, the byte sequences corresponding to the new network requests with repeated response failures are no longer stored in the second persistence queue until the number of byte sequences stored in the second persistence queue is less than 800, so that the problem of performance degradation caused by excessive number of byte sequences corresponding to the network requests with repeated response failures backlogged in the second persistence queue can be avoided.
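The two-threshold admission control described above is a hysteresis scheme: admission stops at the upper threshold and resumes only once the backlog has drained below the lower one. A sketch with the 1000/800 figures from the example (the class name is an assumption):

```python
class BackpressureGate:
    # Hysteresis control over the second queue's size: stop admitting new
    # byte sequences at the first (upper) threshold; resume only once the
    # backlog drops below the second (lower) threshold.
    def __init__(self, high=1000, low=800):
        assert high >= low
        self.high, self.low = high, low
        self._blocked = False

    def admits(self, queue_len):
        if self._blocked and queue_len < self.low:
            self._blocked = False          # backlog drained: reopen
        elif not self._blocked and queue_len >= self.high:
            self._blocked = True           # backlog too large: close
        return not self._blocked

gate = BackpressureGate()
```

The gap between the two thresholds prevents the gate from flapping open and closed when the backlog hovers near a single limit.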
In step S103, a byte sequence corresponding to at least one network request is read from the persistence queue to perform a deserialization process.
In some embodiments, when the persistence queue includes a first persistence queue for storing a byte sequence corresponding to a network request that fails to respond for the first time, the retry component may implement the reading of the byte sequence corresponding to the at least one network request from the persistence queue for deserialization processing by: monitoring operation is conducted on the first persistence queue so as to read a byte sequence corresponding to at least one network request reaching the first retry time from the first persistence queue; then, a deserialization interface function is invoked to deserialize the byte sequence of the at least one network request that reaches the first retry time, and delete the read and deserialized byte sequence in the first persistence queue.
For example, a first persistence queue for storing network requests whose first response failed may be created on the disk and listened to; when it is detected that a network request in the first persistence queue has reached its first retry time, the byte sequence corresponding to at least one such network request is read from the first persistence queue. The retry component then invokes the deserialization interface function to deserialize the read byte sequences into the corresponding network requests and sends them into the channel, to be read from the channel by container nodes in the container cluster and retried. When a network request fails to respond again, the network request whose repeated response failed is serialized and the resulting byte sequence is stored into a second persistence queue created based on the disk; when a network request is responded to successfully, the corresponding byte sequence is deleted from the first persistence queue. Reading and deleting together in this way avoids the performance degradation caused by excessive disk usage.
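One monitoring pass over the first persistence queue — read the due entries, deserialize, retry, delete on success, and hand failures back for the second queue — can be sketched as follows (the helper name, the plain-list queue representation, and the `retry_fn` callback are assumptions for illustration):

```python
import json

def drain_first_queue(entries, now, retry_fn):
    # One monitoring pass over the first persistence queue. `entries` is a
    # FIFO list of (byte_sequence, first_retry_time) pairs. Due entries are
    # read and deleted in the same pass; byte sequences whose retry fails
    # again are returned so the caller can move them to the second queue.
    failed_again = []
    while entries and entries[0][1] <= now:
        byte_seq, _ = entries.pop(0)              # read and delete together
        request = json.loads(byte_seq.decode())   # deserialization
        if not retry_fn(request):                 # response failed again
            failed_again.append(byte_seq)
    return failed_again

queue = [(json.dumps({"id": 1}).encode(), 3.0),
         (json.dumps({"id": 2}).encode(), 3.5),
         (json.dumps({"id": 3}).encode(), 9.0)]
# Pretend the request with id 2 fails to respond again on retry.
moved = drain_first_queue(queue, now=4.0, retry_fn=lambda r: r["id"] != 2)
```

Requests 1 and 2 are due at time 4.0 and are removed from the first queue; only request 2, whose retry failed again, is handed over for the degraded retry, while request 3 stays queued until its retry time arrives.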
In other embodiments, when the persistence queue includes a first persistence queue for storing byte sequences corresponding to network requests whose first response failed and a second persistence queue for storing byte sequences of network requests whose repeated response failed, the retry component may also read the byte sequence corresponding to at least one network request from the persistence queue for deserialization as follows: reading, from the head of the second persistence queue, the byte sequence corresponding to a network request whose repeated response failed (i.e., the byte sequence at the head of the second persistence queue, which corresponds to the network request with the shortest retry time), deserializing it, retrying the network request obtained by the deserialization, and deleting the read byte sequence from the second persistence queue when the retry count of that network request exceeds a retry-count threshold or the request is responded to successfully; and, when the second persistence queue is empty, reading from the first persistence queue the byte sequence corresponding to a network request whose first response failed, deserializing it, and deleting the read byte sequence from the first persistence queue when that network request is responded to successfully. When the network request whose first response failed fails to respond again, the network request is serialized and the byte sequence obtained by the serialization is stored into the second persistence queue.
For example, since the byte sequence corresponding to the network request stored in the second persistence queue is a byte sequence corresponding to the network request with failed repeated response, and the corresponding emergency degree is higher than that of the network request corresponding to the byte sequence stored in the first persistence queue, the byte sequence corresponding to the network request with failed repeated response stored in the second persistence queue can be preferentially read, the read byte sequence is subjected to deserialization, and then the network request with failed repeated response obtained through the deserialization is sent to the channel, so that the container nodes in the container cluster can read and retry the read network request with failed repeated response from the channel. When all the network requests corresponding to the byte sequences stored in the second persistence queue are processed (i.e. when the second persistence queue is empty), the retry component reads the byte sequence corresponding to the network request with the first response failure from the first persistence queue, performs deserialization processing on the read byte sequence, and then sends the network request with the first response failure obtained through deserialization processing to the channel, so that a container node in the container cluster can read and retry the read network request with the first response failure from the channel.
It should be noted that, in practical application, the byte sequences corresponding to the multiple network requests with repeated response failures stored in the second persistent queue may also be ordered in the same ordering manner as the byte sequences corresponding to the multiple network requests with first response failures stored in the first persistent queue, that is, the byte sequences are also ordered in a first-in-first-out manner, which is not limited in this embodiment of the present application specifically.
In other embodiments, when the persistence queue includes a first persistence queue for storing a byte sequence corresponding to a network request that fails in a first response and a second persistence queue for storing a byte sequence corresponding to a network request that fails in a repeated response, the retry component may perform the deserialization process by reading at least one byte sequence corresponding to the network request from the persistence queue as described above by: a plurality of threads are allocated, and the following processing is alternately executed by each thread: reading a byte sequence corresponding to the network request with the first response failure from the first persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with the first response failure; and reading the byte sequence corresponding to the network request with repeated response failure from the second persistence queue, performing deserialization processing on the read byte sequence corresponding to the network request with repeated response failure, and deleting the read byte sequence corresponding to the network request with repeated response failure from the second persistence queue when the retry number of the network request with repeated response failure exceeds a retry number threshold or is in successful response.
For example, for the above case, the number of the allocated multiple threads may be fixed or dynamically increased or decreased according to the load of the service request, for example, when the number of byte sequences to be processed stored in the first persistent queue and the second persistent queue is small, a small number of threads may be allocated to alternately execute the first persistent queue and the second persistent queue by each thread; and when the number of the byte sequences to be processed stored in the first persistent queue and the second persistent queue is larger, a larger number of threads can be allocated correspondingly, and the first persistent queue and the second persistent queue are executed alternately through each thread. The above-mentioned alternate execution process may be performed alternately according to the number of times, for example, taking the thread a as an example, where the thread a executes the first persistent queue for the first time, then executes the second persistent queue for the second time, executes the first persistent queue again for the third time, and so on. Of course, the above alternate execution may be divided according to a time period, for example, in a time period T1, all threads execute the first persistent queue, and in a time period T2, all threads execute the second persistent queue.
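The count-based alternation for a single thread can be sketched as follows (the function name and the deque representation are illustrative; a real worker would deserialize and retry each byte sequence rather than merely collect it):

```python
from collections import deque

def alternate_drain(first_q, second_q, steps):
    # One thread alternating between the two persistence queues by count:
    # round 1 serves the first queue, round 2 the second, and so on,
    # skipping a queue that happens to be empty on its turn.
    processed = []
    for step in range(steps):
        q = first_q if step % 2 == 0 else second_q
        if q:
            processed.append(q.popleft())
    return processed

first_q = deque([b"F1", b"F2"])    # first-response failures
second_q = deque([b"S1", b"S2"])   # repeated-response failures
order = alternate_drain(first_q, second_q, steps=4)
```

Alternating by count interleaves the two queues so neither backlog is starved; the time-period variant described above would instead switch all threads between queues on a schedule.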
In other embodiments, when the persistence queue includes a first persistence queue for storing a byte sequence corresponding to a network request that fails in a first response and a second persistence queue for storing a byte sequence corresponding to a network request that fails in a repeated response, the retry component may perform the deserialization process by reading at least one byte sequence corresponding to the network request from the persistence queue as described above by: distributing corresponding number of threads for the first persistent queue and the second persistent queue according to the weights of the first persistent queue and the second persistent queue; executing a byte sequence corresponding to the network request with the first response failure from the first persistence queue through a thread distributed for the first persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with the first response failure; and executing the byte sequence corresponding to the network request with failed repeated response read from the second persistent queue through the thread distributed for the second persistent queue, and performing deserialization processing on the read byte sequence corresponding to the network request with failed repeated response.
For example, for the above case, the total number of allocable threads may be fixed, and the number of threads to which each persistent queue can be allocated is related to the number of byte sequences to be processed stored in the persistent queue, for example, assuming that the total number of allocable threads is 10, when the number of byte sequences corresponding to network requests for which the first response fails stored in the first persistent queue is 20, and when the number of byte sequences corresponding to network requests for which the repeated response fails stored in the second persistent queue is 30, the number of threads allocated for the first persistent queue is 4; the number of threads allocated for the second persistent queue is 6. Of course, the allocation rule of the thread may be related to the processing time, that is, the more the network request corresponding to the byte sequence to be processed in the persistence queue needs to be processed, the more the number of threads correspondingly allocated. For example, assuming that the total number of threads that can be allocated is still 10, the total time required to process the byte sequence corresponding to the network request that fails in the first response stored in the first persistent queue is 10 minutes, and the total time required to process the byte sequence corresponding to the network request that fails in the repeated response stored in the second persistent queue is 20 minutes, the number of threads allocated to the first persistent queue is 3, and the number of threads allocated to the second persistent queue is 7.
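The backlog-proportional split of a fixed thread pool can be sketched as below, reproducing the 20/30 → 4/6 example (the rounding rule is an assumption; the text does not specify how fractional shares are resolved):

```python
def allocate_threads(total, first_backlog, second_backlog):
    # Split a fixed pool of threads between the first and second
    # persistence queues in proportion to their backlogs. The first
    # queue's share is rounded; the second queue takes the remainder.
    backlog = first_backlog + second_backlog
    if backlog == 0:
        return total // 2, total - total // 2
    first = round(total * first_backlog / backlog)
    return first, total - first
```

With 10 threads, a backlog of 20 in the first queue and 30 in the second yields the 4/6 split from the example; the time-weighted variant would substitute estimated processing times for the backlog counts.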
In step S104, at least one network request obtained by the deserialization processing is retried.
In some embodiments, the retry component may retry the at least one network request obtained by the deserialization as follows: sending the at least one network request obtained by the deserialization into the channel, so that container nodes in the container cluster can read the network requests from the channel and re-execute them.
In other embodiments, the retry component can also store received unresponsive network requests (e.g., network requests most recently sent by the user) to a non-persistent queue, wherein the non-persistent queue is a message queue created based on volatile memory (e.g., a message queue created based on memory); when the number of byte sequences corresponding to the network requests with failed responses included in the persistence queue is larger than a third number threshold, preferentially reading the byte sequences corresponding to the network requests with failed responses from the persistence queue, and executing the network requests with failed responses obtained by deserializing the byte sequences; and when the number of the network requests with failed responses included in the persistence queue is smaller than or equal to a third number threshold, a plurality of threads are respectively distributed to the persistence queue and the non-persistence queue so as to synchronously execute the network requests which are read from the non-persistence queue and are not responded and the network requests with failed responses read from the persistence queue and obtained after deserialization processing.
For example, the retry component may implement the above-described synchronous execution of an unresponsive network request read from the non-persistent queue and a failed response network request read from the persistent queue and after deserialization processing by: reading the non-responded to network request from the non-persistent queue through a plurality of threads allocated for the non-persistent queue (e.g., a message queue created based on memory), and executing the read non-responded to network request; and reading the byte sequence corresponding to the network request with failed response from the persistent queue through a plurality of threads distributed to the persistent queue (such as a message queue created based on a disk or a hard disk), performing deserialization processing on the read byte sequence corresponding to the network request with failed response, and executing the network request with failed response obtained after the deserialization processing.
For example, a corresponding number of threads may be allocated according to the number of byte sequences corresponding to failed network requests stored in the persistence queue and the number of unresponded network requests stored in the non-persistence queue. For example, assuming the total number of allocatable threads is 10, when the number of unresponded network requests stored in the non-persistence queue is 10 and the number of byte sequences corresponding to failed network requests stored in the persistence queue is 20, 3 threads may be allocated to the non-persistence queue and the remaining 7 threads to the persistence queue, so that unresponded network requests are read and executed from the non-persistence queue (e.g., a message queue created based on memory) by its 3 threads, while the 7 threads allocated to the persistence queue (e.g., a message queue created based on a disk) read the byte sequences corresponding to failed network requests, deserialize them, and re-execute the failed network requests obtained by the deserialization. Coordinating unresponded network requests with failed ones in this way ensures, on the one hand, that the network requests most recently sent by users are processed in time and, on the other hand, that failed network requests do not wait excessively long, improving user experience.
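The coordination rule — drain only the persistence queue when its backlog exceeds the third number threshold, otherwise split threads across both queues — can be sketched as follows (the threshold value, pool size, and rounding are assumptions for illustration):

```python
def coordinate(persistent_backlog, non_persistent_backlog,
               total_threads=10, third_threshold=15):
    # Returns (threads for the non-persistent queue,
    #          threads for the persistence queue).
    # Above the third threshold, all threads drain the persistence queue;
    # otherwise threads are split in proportion to the two backlogs.
    if persistent_backlog > third_threshold:
        return 0, total_threads
    backlog = persistent_backlog + non_persistent_backlog
    if backlog == 0:
        return 0, 0
    non_persistent = round(total_threads * non_persistent_backlog / backlog)
    return non_persistent, total_threads - non_persistent
```

A backlog of 20 failed requests forces the persistence-only mode, while below the threshold a 14/7 split of backlogs reproduces the 3-versus-7 thread division from the example above.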
According to the network request processing method described above, a response-failed network request is serialized, the resulting byte sequence is stored in the persistence queue, and when a retry is due the corresponding byte sequence is read from the persistence queue, deserialized, and retried. The number of retried network requests is therefore no longer limited by the memory size and can be greatly increased; in addition, byte sequences stored in the persistence queue are not lost upon service restart, power failure, and the like, ensuring the security of network request data.
In the following, an example application of the network request processing method provided in the embodiment of the present application in an actual application scenario is described by taking a service related to an instant messaging client as an example.
A background program needs to communicate with other programs over a network, and during communication, request responses can fail due to network jitter, a sudden surge in the number of network requests (hereinafter also simply referred to as requests), and the like. Taking an instant messaging client as an example, during development of an instant messaging service the message queue CMQ is often required for logical decoupling of features such as group interaction identification and new group member levels. However, a sudden surge in request volume, high load on the container master, and CMQ server jitter can all cause a large number of message deliveries to fail within a short time. For example, during the New Year's Eve and Spring Festival period, the message volume received by the background program of an instant messaging client (e.g., QQ) surged to more than 10 times its usual level, causing large batches of CMQ message delivery requests to time out.
In view of the above technical problems, the solution provided by the related art is generally to put a failed request directly into a while loop and retry it continuously at intervals, using the variables and functions still resident in memory after the failure; the corresponding pseudocode is as follows:
do
    retry_request()
    wait_for_backoff_interval()
    retry_count += 1
while retry_count < max_configured_retries
end
It can be seen that, in the related art, a failed request is not released from memory after the failure; instead it is retried directly in memory and is not released until the request succeeds or the maximum number of retries is reached. To ensure a reasonable probability of retry success, a backoff retry scheme is adopted (e.g., exponential backoff or fixed-interval backoff), so the failed request occupies memory throughout the entire retry process, creating a potential Out of Memory problem. Moreover, if the program exits during this time, the failed request is released as well, causing request data loss. That is, the schemes provided by the related art all retry in memory and are limited by memory capacity: a large number of requests failing within a short time cannot all be held in memory for retry, i.e., the number of retriable requests is limited by the memory size, and there is a latent risk of memory overflow. In addition, memory contents are lost on power failure, so if the service is restarted or powered off abnormally, all request data in the retry queue is lost.
In view of this, an embodiment of the present application provides a network request processing method that serializes a failed request to disk and, at retry time, deserializes it back into a memory object before retrying. The number of retriable requests is thereby freed from the limited memory size and depends only on the disk size, so a large number of requests failing within a short time can be accommodated, greatly increasing the number of retriable requests. In addition, the request data is persisted on disk, so there is no data loss caused by abnormal service restarts, power failures, and the like.
The following specifically describes a network request processing method provided in the embodiment of the present application.
For example, referring to fig. 5, fig. 5 is a schematic overall architecture of the network request processing system provided in an embodiment of the present application. As shown in fig. 5, after the message background of the instant messaging client receives a message sent by a user (e.g., various requests sent through the instant messaging client (QQ, WeChat, etc.), the request content including adding new members, group interaction, and so on), the received message is transferred to the group background, and the group background then sends it to CMQ so that the processing logic module can read and consume the message from CMQ. When a message response fails due to reasons such as a surge in message volume (e.g., message delivery times out), the failed message is stored in the retry component so that the retry component can perform backoff retries on it (for example, the failed message may be staged on disk to avoid losing the message copy, and finally a retry thread delivers it to the designated queue). After executing the read message, the processing logic module may also feed the processing result back to the user.
In addition, to improve the universality of the retry component, the network request processing method provided in this embodiment is not limited to a CMQ-specific retry scheme but implements a general-purpose one: the retry component is integrated into a CMQ-capable client (trpc-go cmq client) based on the trpc-go framework (an internally open-sourced remote procedure call development framework), and messages are sent and received through that client (fig. 5 shows only the retry component integrated in the trpc-go cmq client; the client itself is not shown).
The architecture of the retry component shown in fig. 5 is described in detail below.
For example, referring to fig. 6, fig. 6 is a schematic architecture diagram of the retry component provided in an embodiment of the present application. As shown in fig. 6, the retry component mainly includes a Disk Queue (Disk_Queue, corresponding to the first persistence queue described above) and a Sorted Queue (Sorted_Queue, corresponding to the second persistence queue described above). The Disk_Queue is a disk queue based on Dque (an embedded queue implemented on disk) and stores requests that failed their first response. The Sorted_Queue is a persistent queue implemented on Boltdb (an embedded Key-Value database: linking it into the application code allows efficient data access through the application programming interface (API) that Boltdb provides, and Boltdb also supports fully serializable ACID transactions, letting the application handle complex operations more simply). The Sorted_Queue is ordered by next retry time and is used to implement the backoff retry algorithm: after a request read from the Disk_Queue fails its response again, it is added to the Sorted_Queue for backoff retry. The Disk_Queue and the Sorted_Queue are described in detail below.
Dque rolls its stored content over files of a fixed size, with two pointers, Head and Tail, pointing to the oldest and newest files respectively. Message writes and deletes are performed sequentially, so throughput is extremely high; its functionality is simple, being merely a first-in-first-out persistent queue.
Boltdb is an embedded Key-Value database implemented in Golang (also called Go, a statically and strongly typed, compiled language with memory safety, garbage collection, structural typing, and concurrency support) and supports transactions. Keys are ordered according to a specified ordering; using this ordered property, the next retry time serves as the Key and the failed request corresponding to that retry time serves as the Value, which implements the backoff retry algorithm. The retry time may be set by the user; for example, suppose the user configures a 2-second retry: after a request's response fails, the failed request is put into Boltdb and taken out 2 seconds later for retry. If it fails again, the retry interval doubles to 4 seconds, and so on, until the retry succeeds or the maximum number of retries is reached.
It can be seen that the retry component could implement request retry using the Boltdb plug-in alone, but both writes and deletes in Boltdb involve random disk I/O (each enqueue and dequeue is accompanied by a random disk read/write), and disk seek time is generally on the order of milliseconds (ms), so Boltdb write and delete operations also take on the order of ms. Dque writes and deletes, in contrast, are file append operations and take only on the order of microseconds (us). In other words, retrying requests via Boltdb is more time-consuming than via Dque (i.e., retrying response-failed network requests through the second persistence queue is more time-consuming than through the first persistence queue). Boltdb offers high flexibility, since requests can be ordered by retry time to implement the backoff retry algorithm, but its throughput is relatively low compared with Dque; therefore, if only the Boltdb plug-in were used, the performance of the retry component could suffer when the number of failed requests to be processed is too large.
In view of this, in the embodiment of the present application, both the Disk_Queue and the Sorted_Queue are provided in the retry component. The Disk_Queue is implemented on Dque, so it has extremely high performance but a single function: it is only a first-in-first-out queue. In an actual production environment, failed requests need backoff retries, and requests that entered the queue first do not necessarily need to leave it first. Therefore, the network request processing method provided in this embodiment adds the Sorted_Queue to the retry component to implement backoff: the Sorted_Queue may be ordered by Key, for example using the next retry time of a queue element as the Key. Each time an element (i.e., a failed request) dequeued for retry fails again, its retry time is updated according to the configured retry policy (e.g., exponential backoff with intervals of 1 second, 2 seconds, 4 seconds, etc., or fixed-interval backoff with, say, a constant 2 seconds) and the element is returned to the Sorted_Queue. This implements the backoff policy while combining the high throughput of the Dque queue with the high flexibility of the Boltdb queue, so response-failed requests are processed efficiently.
For example, referring to fig. 7, fig. 7 is a schematic diagram of the network request processing method provided by an embodiment of the present application. As shown in fig. 7, DiskSQ is a queue class that encapsulates Boltdb: its Peek() function returns the first element in the queue without deleting it, its Pop() function returns the first element and deletes it, and its Add() function enqueues a new element. StoreData is the defined storage class; the business party's data is serialized into it, and its other fields are metadata (data describing data, chiefly information about data attributes, used to support functions such as indicating storage location, historical data, resource searching, and file recording) that control when a failed request is dequeued. Specifically, the metadata in StoreData mainly includes: the number of times each failed request has been dequeued (DequeCount), the time of initial enqueue (EnqueueTs), the next retry time (NextVisibleTs), the unique identifier (ID) of the stored data, and the specific retry data (Data). Listener is a listening class that watches Dque and DiskSQ for data whose retry time has arrived and sends it to a channel for consumption by a worker. Retry is the main class; each retry instance corresponds to one instance object and uses the types above. The main class includes a Config member variable storing the configuration of the retry component, including the maximum number of retries, the number of retry threads, the backoff retry policy (exponential backoff or fixed-interval backoff), and the like.
DqueConfig is the configuration of the Dque queue. The Dque queue is the initial-failure queue, i.e., it stores requests that have failed once; it has higher performance but a single function. After a request read from the Dque queue fails its response again, the re-failed request data is stored into DiskSQ, where elements are ordered by their visible time (i.e., the retry time of the failed request), and the element with the smallest visible time sits at the head of the DiskSQ queue.
For example, when a request sent by a user fails its response, the failed request is first put into the Dque queue and given a retry timestamp. When the Listener class detects that the failed request has reached its retry timestamp, it reads the request from the Dque queue and hands it to a worker goroutine for retry; if the response fails again, the request is put into the sorting queue implemented on Boltdb (i.e., the Sorted_Queue) to control backoff retries. The Listener class is responsible for listening to both the Dque and DiskSQ queues, continuously checking the number of elements in each, fetching the head message (i.e., a failed request), and submitting it to a worker goroutine for retry.
In addition, to prevent an excessive number of elements in the sorting queue from affecting performance, the size of the sorting queue needs to be controlled. When the size of the sorting queue exceeds a specified size, elements in the Dque queue are no longer monitored until the sorting queue shrinks below the specified size.
The workflow of the Listener class in fig. 7 is described in detail below.
For example, referring to fig. 8, fig. 8 is a flowchart of the network request processing method provided in an embodiment of the present application. As shown in fig. 8, the listener is implemented as a single goroutine and decoupled through a channel; multiple workers read requests whose retry time has arrived from the channel, which improves code maintainability. The specific flow is as follows: the listener first checks whether the sorting queue is empty. When it is empty, the listener fetches messages from the disk queue (i.e., Disk_Queue) and distributes them to workers for consumption. When it is not empty, the listener further checks whether the element at the head of the sorting queue has reached its retry time; if so, the message is taken from the head of the sorting queue and distributed to a worker for consumption. If the retry time has not arrived, the listener checks whether the sorting queue has reached its maximum length (i.e., whether it exceeds the specified size); when it has not, the listener fetches messages from the disk queue, and when it has, the listener simply continues listening.
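The listener's single polling step described above can be sketched as follows; the queue interface, the toy in-memory queues, and all names are illustrative assumptions standing in for the Dque- and Boltdb-backed queues:

```go
package main

import "fmt"

// queue abstracts the two queues the listener watches.
type queue interface {
	Len() int
	PeekDue(now int64) (msg string, ok bool) // head element if its retry time arrived
	Pop() string
}

// listenOnce is one polling step of the flowchart: prefer due elements in
// the sorting queue, fall back to the disk queue, and stop draining the
// disk queue while the sorting queue is at capacity.
func listenOnce(sorted, disk queue, maxSortedLen int, now int64) (msg string, ok bool) {
	if sorted.Len() > 0 {
		if _, due := sorted.PeekDue(now); due {
			return sorted.Pop(), true // head of sorting queue reached retry time
		}
		if sorted.Len() >= maxSortedLen {
			return "", false // sorting queue full: stop draining the disk queue
		}
	}
	if disk.Len() > 0 {
		return disk.Pop(), true
	}
	return "", false // nothing ready; caller keeps listening
}

// fifo is a toy in-memory stand-in for both queues.
type fifo struct {
	items []string
	dueAt int64 // retry time of the head element
}

func (f *fifo) Len() int { return len(f.items) }
func (f *fifo) PeekDue(now int64) (string, bool) {
	if len(f.items) == 0 || now < f.dueAt {
		return "", false
	}
	return f.items[0], true
}
func (f *fifo) Pop() string { m := f.items[0]; f.items = f.items[1:]; return m }

func main() {
	sorted := &fifo{items: []string{"retry-a"}, dueAt: 100}
	disk := &fifo{items: []string{"fresh-b"}}
	m, _ := listenOnce(sorted, disk, 10, 50) // sorted head not yet due -> disk queue
	fmt.Println(m)
	m, _ = listenOnce(sorted, disk, 10, 150) // now it is due -> sorting queue
	fmt.Println(m)
}
```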
In other embodiments, when newly sent user requests arrive while failed requests are being processed, the failed requests and the new requests may be coordinated as follows: check whether the number of failed requests currently awaiting execution exceeds a number threshold (for example, a cap of 1000). If it exceeds 1000, the failed requests stored in the Disk_Queue and the Sorted_Queue are executed preferentially until the backlog drops below 1000, after which the user's newly sent requests are processed synchronously with the failed requests. That is, when the number of failed requests to be processed is less than 1000, the user's latest requests are processed alongside them; when it is greater than 1000, the failed requests are processed first.
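The coordination rule above reduces to a simple threshold check on the retry backlog; a minimal sketch, assuming the threshold value 1000 from the example and an illustrative function name:

```go
package main

import "fmt"

// shouldInterleave reports whether newly arrived user requests may be
// processed alongside the retries: only once the backlog of failed
// requests has dropped to or below the threshold.
func shouldInterleave(pendingFailed, threshold int) bool {
	return pendingFailed <= threshold
}

func main() {
	fmt.Println(shouldInterleave(1500, 1000)) // drain failed requests first
	fmt.Println(shouldInterleave(800, 1000))  // handle new requests too
}
```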
According to the network request processing method described above, failed requests are serialized onto disk and deserialized back into memory objects at retry time, so the number of retriable requests is not limited by the memory size and depends only on disk space, accommodating large-scale failures within a short time. In addition, the request data stored on disk is persistent, so there is no concern about data loss caused by abnormal service restarts, power failures, or the like.
The beneficial effects of the network request processing method provided in the embodiment of the present application will be further described below with reference to experimental data.
A stress test and a performance test were performed on the network request processing method provided in the embodiment of the present application, simulating a scenario in which a large number of CMQ message deliveries fail within a short time. During the retry operation, the ID of each message was put into a Redis set, and the size of the Redis set, the time consumed by the retry process, and the load on memory and disk were observed.
The test simulated 5 million failed data items (i.e., 5 million failed requests) in a loop, each identified by an integer ID; the retry process (a callback) placed the 5 million IDs into the Redis set. The test was run periodically, once every 30 minutes.
For example, referring to fig. 9, fig. 9 is a schematic diagram of the message retry volume during the retry process according to an embodiment of the present application. As shown in fig. 9, the final message retry volume is 5 million, indicating that no messages were lost.
For example, referring to fig. 10, fig. 10 is a schematic diagram of the time consumed by the retry process provided in the embodiment of the present application. As shown in fig. 10, 7 workers collectively retried the 5 million failed requests in a total of 17 minutes, most of which was network time; the delay introduced by the retry component provided in the embodiment of the present application is negligible.
For example, referring to fig. 11, fig. 11 is a schematic diagram of disk usage during the retry process provided in the embodiment of the present application. As shown in fig. 11, throughout the retry process failed requests are retried and then deleted, the disk usage does not surge, and the used disk space is reclaimed in time.
For example, referring to fig. 12, fig. 12 is a schematic diagram of memory usage during the retry process provided in the embodiment of the present application. As shown in fig. 12, since disk storage is used throughout the retry process, memory usage does not rise, which greatly saves memory while also ensuring the security of the request data.
Continuing with the exemplary structure in which the network request processing apparatus 443 provided by embodiments of the present application is implemented as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the network request processing apparatus 443 of the memory 440 may include: an obtaining module 4431, a serialization processing module 4432, a storage module 4433, a deserialization processing module 4434, a retry module 4435, a receiving module 4436, a setting module 4437, a listening module 4438, and a deletion module 4439.
The obtaining module 4431 is configured to obtain a network request whose response failed; the serialization processing module 4432 is configured to serialize the network request to obtain a byte sequence corresponding to the network request; the storage module 4433 is configured to store the byte sequence corresponding to the network request into a persistence queue; the deserialization processing module 4434 is configured to read the byte sequence corresponding to at least one network request from the persistence queue and deserialize it; the retry module 4435 is configured to retry the at least one network request obtained by the deserialization.
In some embodiments, the network request processing apparatus 443 further includes a receiving module 4436 configured to receive a serialization processing instruction, where the serialization processing instruction includes a storage manner after the serialization processing; the serialization processing module 4432 is further configured to call a serialization interface function according to the storage mode, so as to perform serialization processing on the status information of the network request, and obtain a byte sequence that accords with the storage mode; wherein the status information includes at least one of: request mode, request address and request parameter of network request.
In some embodiments, the storage module 4433 is further configured to store a byte sequence corresponding to a network request that fails its first response into a first persistence queue based on nonvolatile memory, where byte sequences corresponding to different network requests are stored in the first persistence queue in first-in-first-out order; the network request processing apparatus 443 further includes a setting module 4437 configured to set a first retry time for each network request in the first persistence queue, wherein the first retry times of different network requests increase sequentially, the sequence being the chronological order in which the byte sequences corresponding to the different network requests were stored into the first persistence queue.
In some embodiments, the obtaining module 4431 is further configured to obtain at least one network request that fails its response repeatedly when network requests read from the first persistence queue are executed; the serialization processing module 4432 is further configured to serialize the at least one repeatedly failed network request; the storage module 4433 is further configured to store the resulting byte sequence corresponding to the at least one repeatedly failed network request into the second persistence queue; the setting module 4437 is further configured to set a second retry time for each repeatedly failed network request in the second persistence queue; wherein the byte sequences corresponding to different network requests are stored in the second persistence queue ordered by the second retry time.
In some embodiments, the network request processing apparatus 443 further includes a snoop module 4438 for periodically snooping the number of byte sequences in the second persistent queue; the storage module 4433 is further configured to, when the number exceeds the first number threshold, not store a byte sequence corresponding to the network request for which the new repeated response fails to the second persistence queue any more until the number is less than the second number threshold; wherein the first number threshold is greater than or equal to the second number threshold.
In some embodiments, when the persistence queue includes a first persistence queue for storing a byte sequence corresponding to a network request that fails to respond for the first time, the snoop module 4438 is further configured to perform a snoop operation on the first persistence queue to read from the first persistence queue the byte sequence corresponding to the at least one network request that reaches the first retry time; the deserialization module 4434 is further configured to deserialize the byte sequence of the at least one network request reaching the first retry time.
In some embodiments, when the persistence queue includes a first persistence queue for storing a byte sequence corresponding to the network request that fails in the first response and a second persistence queue for storing a byte sequence corresponding to the network request that fails in the repeated response, the deserialization processing module 4434 is further configured to read the byte sequence corresponding to the network request that fails in the repeated response from a header of the second persistence queue, and deserialize the read byte sequence corresponding to the network request that fails in the repeated response; and when the second persistence queue is empty, reading the byte sequence corresponding to the network request with the first response failure from the first persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with the first response failure.
In some embodiments, when the persistence queues include a first persistence queue for storing a byte sequence corresponding to a network request that failed in a first response and a second persistence queue for storing a byte sequence corresponding to a network request that failed in a repeated response, the deserialization processing module 4434 is further configured to allocate a plurality of threads, by each thread alternately performing the following: reading a byte sequence corresponding to the network request with the first response failure from the first persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with the first response failure; and reading the byte sequence corresponding to the network request with failed repeated response from the second persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with failed repeated response.
In some embodiments, when the persistence queues include a first persistence queue for storing a byte sequence corresponding to a network request that failed in a first response and a second persistence queue for storing a byte sequence corresponding to a network request that failed in a repeated response, the deserialization processing module 4434 is further configured to allocate a respective number of threads to the first persistence queue and the second persistence queue according to weights of the first persistence queue and the second persistence queue; reading a byte sequence corresponding to the network request with the first response failure from the first persistence queue through a thread distributed for the first persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with the first response failure; and reading a byte sequence corresponding to the network request with failed repeated response from the second persistence queue through the thread distributed for the second persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with failed repeated response.
In some embodiments, the retry module 4435 is further configured to send the at least one network request obtained after the deserialization process into the channel, so that the container nodes in the container cluster can read and re-execute the read network request from the channel; the network request processing apparatus 443 further includes a deletion module 4439 configured to delete any byte sequence from the persistence queue when the retry number of the network request corresponding to any byte sequence stored in the persistence queue exceeds a retry number threshold or the network request successfully responds.
In some embodiments, the storage module 4433 is further configured to store the received unresponsive network request to a non-persistent queue created based on volatile memory; a retry module 4435, configured to execute a network request with failed response, which is read from the persistence queue and is obtained through deserialization processing, when the number of byte sequences corresponding to the network request with failed response included in the persistence queue is greater than a third number threshold; and the method is used for respectively distributing a plurality of threads for the persistent queue and the non-persistent queue to synchronously execute the unresponsive network requests read from the non-persistent queue and the response-failed network requests read from the persistent queue and obtained through deserialization when the number of response-failed network requests included in the persistent queue is smaller than or equal to a third number threshold.
In some embodiments, the retry module 4435 is further configured to read the unresponsive network request from the non-persistent queue by a plurality of threads allocated for the non-persistent queue and execute the read unresponsive network request; and the method is used for reading the byte sequence corresponding to the network request with failed response from the persistent queue through a plurality of threads distributed for the persistent queue, performing deserialization processing on the read byte sequence corresponding to the network request with failed response, and executing the network request with failed response obtained after the deserialization processing.
It should be noted that, the description of the apparatus in the embodiment of the present application is similar to the description of the embodiment of the method described above, and has similar beneficial effects as the embodiment of the method, so that a detailed description is omitted. The technical details of the network request processing apparatus provided in the embodiments of the present application may be understood from the description of any one of fig. 3 or fig. 4.
Embodiments of the present application provide a computer program product or computer program that includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the network request processing method described in the embodiments of the present application.
The embodiments of the present application provide a computer-readable storage medium storing executable instructions that, when executed by a processor, cause the processor to perform a method provided by the embodiments of the present application, for example, the network request processing method shown in fig. 3 or fig. 4.
In some embodiments, the computer-readable storage medium may be an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM; it may also be any of various devices including one of the above memories or any combination thereof.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
In summary, in the embodiments of the present application, a response-failed network request is serialized, the byte sequence obtained by the serialization is stored in a persistence queue, and the byte sequence corresponding to a network request to be retried is later read from the persistence queue, deserialized, and retried. In this way, the number of retried network requests is not limited by the memory size and can be greatly increased; in addition, the byte sequences stored in the persistence queue are not lost upon service restart, power failure, or the like, which ensures the security of the network request data.
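The pipeline summarized above can be sketched end to end as follows. This is a hedged illustration only: JSON is used as one possible serialization of the request's state information (request mode, address, and parameters), and a `collections.deque` stands in for the message queue created on nonvolatile memory.

```python
import json
from collections import deque

# Stand-in for the persistence queue; a real one would live on nonvolatile storage.
persistence_queue = deque()


def serialize(state):
    """Serialize the request's state information into a byte sequence."""
    return json.dumps(state).encode("utf-8")


def deserialize(byte_sequence):
    """Recover the request's state information from a byte sequence."""
    return json.loads(byte_sequence.decode("utf-8"))


def on_response_failed(state):
    # Persist the byte sequence rather than holding the request object in memory,
    # so the retry backlog is bounded by storage, not RAM, and survives restarts.
    persistence_queue.append(serialize(state))


def retry_pending(execute):
    # Read the byte sequences back, deserialize each one, and retry the request.
    while persistence_queue:
        execute(deserialize(persistence_queue.popleft()))
```

For example, a failed request could be recorded with `on_response_failed({"mode": "GET", "address": ..., "params": ...})` and later replayed by passing a request-executing callback to `retry_pending`.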
The foregoing describes merely exemplary embodiments of the present application and is not intended to limit the scope of protection of the present application. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and scope of the present application shall fall within the scope of protection of the present application.

Claims (15)

1. A method for processing a network request, the method comprising:
acquiring a network request with failed response;
receiving a serialization processing instruction, wherein the serialization processing instruction comprises a storage mode after serialization processing;
calling a serialization interface function according to the storage mode to perform serialization processing on the state information of the network request, so as to obtain a byte sequence conforming to the storage mode;
storing a byte sequence corresponding to the network request into a persistence queue, wherein the persistence queue is a message queue created based on a nonvolatile memory;
reading at least one byte sequence corresponding to the network request from the persistence queue to perform deserialization processing;
and sending the at least one network request obtained through the deserialization processing to a channel, so that a container node in a container cluster reads the network request from the channel and re-executes it.
2. The method of claim 1, wherein the status information comprises at least one of: the request mode, the request address and the request parameter of the network request.
3. The method of claim 1, wherein storing the byte sequence corresponding to the network request to a persistence queue comprises:
storing a byte sequence corresponding to a network request whose response fails for the first time in a first persistence queue based on a nonvolatile memory, wherein byte sequences corresponding to different network requests are stored in the first persistence queue in a first-in first-out order;
and setting first retry times for the network requests in the first persistence queue, wherein the first retry times of different network requests increase in an order, the order being the order of the times at which the byte sequences corresponding to the different network requests are stored in the first persistence queue.
4. The method of claim 1, wherein storing the byte sequence corresponding to the network request to a persistence queue comprises:
acquiring, during execution of the network requests in a first persistence queue, at least one network request whose response fails repeatedly;
serializing at least one network request with failed repeated response, and storing the obtained byte sequence corresponding to the at least one network request with failed repeated response into a second persistence queue;
setting a second retry time of the repeated response failed network request in the second persistence queue;
and storing byte sequences corresponding to different network requests in the second persistence queue in descending order of the second retry time.
5. The method according to claim 4, wherein the method further comprises:
periodically monitoring the number of byte sequences in the second persistence queue;
when the number exceeds a first number threshold, ceasing to store byte sequences corresponding to new repeated-response-failed network requests in the second persistence queue until the number is smaller than a second number threshold;
wherein the first number threshold is greater than or equal to the second number threshold.
6. The method of claim 1, wherein, when the persistence queue includes a first persistence queue for storing a byte sequence corresponding to a network request whose response fails for the first time, the reading at least one byte sequence corresponding to a network request from the persistence queue for deserialization processing includes:
performing a monitoring operation on the first persistence queue, so as to read, from the first persistence queue, a byte sequence corresponding to at least one network request whose first retry time is reached;
and performing deserialization processing on the byte sequence of the at least one network request whose first retry time is reached.
7. The method of claim 1, wherein when the persistence queues include a first persistence queue for storing a byte sequence corresponding to a network request for which a first response fails and a second persistence queue for storing a byte sequence corresponding to a network request for which a repeated response fails, the reading at least one byte sequence corresponding to a network request from the persistence queues for performing a deserialization process includes:
reading a byte sequence corresponding to the network request with failed repeated response from the head of the second persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with failed repeated response;
and when the second persistence queue is empty, reading a byte sequence corresponding to the network request with the first response failure from the first persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with the first response failure.
8. The method of claim 1, wherein when the persistence queues include a first persistence queue for storing a byte sequence corresponding to a network request for which a first response fails and a second persistence queue for storing a byte sequence corresponding to a network request for which a repeated response fails, the reading at least one byte sequence corresponding to a network request from the persistence queues for performing a deserialization process includes:
a plurality of threads are allocated, and the following processing is alternately executed through each thread:
reading a byte sequence corresponding to the network request with the first response failure from the first persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with the first response failure;
and reading a byte sequence corresponding to the network request with failed repeated response from the second persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with failed repeated response.
9. The method of claim 1, wherein when the persistence queues include a first persistence queue for storing a byte sequence corresponding to a network request for which a first response fails and a second persistence queue for storing a byte sequence corresponding to a network request for which a repeated response fails, the reading at least one byte sequence corresponding to a network request from the persistence queues for performing a deserialization process includes:
allocating corresponding numbers of threads to the first persistence queue and the second persistence queue according to weights of the first persistence queue and the second persistence queue;
reading, through the threads allocated to the first persistence queue, a byte sequence corresponding to the network request with the first response failure from the first persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with the first response failure;
and reading, through the threads allocated to the second persistence queue, a byte sequence corresponding to the network request with repeated response failure from the second persistence queue, and performing deserialization processing on the read byte sequence corresponding to the network request with repeated response failure.
10. The method according to claim 1, wherein the method further comprises:
and deleting any byte sequence from the persistence queue when the number of retries of the network request corresponding to that byte sequence exceeds a retry number threshold, or when the network request is successfully responded to.
11. The method according to claim 1, wherein the method further comprises:
storing a received unresponded network request in a non-persistent queue, the non-persistent queue being created based on a volatile memory;
when the number of byte sequences corresponding to response-failed network requests included in the persistence queue is greater than a third number threshold, executing the response-failed network requests that are read from the persistence queue and obtained through deserialization processing;
and when the number of response-failed network requests included in the persistence queue is less than or equal to the third number threshold, allocating a plurality of threads to each of the persistence queue and the non-persistent queue, so as to synchronously execute the unresponded network requests read from the non-persistent queue and the response-failed network requests read from the persistence queue and obtained through deserialization processing.
12. The method of claim 11, wherein the synchronously executing the unresponded network requests read from the non-persistent queue and the response-failed network requests read from the persistence queue and obtained through deserialization processing includes:
reading the unresponded network requests from the non-persistent queue through the plurality of threads allocated to the non-persistent queue, and executing the read unresponded network requests;
and reading byte sequences corresponding to the response-failed network requests from the persistence queue through the plurality of threads allocated to the persistence queue, performing deserialization processing on the read byte sequences, and executing the response-failed network requests obtained after the deserialization processing.
13. A network request processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring a network request with failed response;
the receiving module is used for receiving the serialization processing instruction, wherein the serialization processing instruction comprises a storage mode after serialization processing;
the serialization processing module is used for calling a serialization interface function according to the storage mode to perform serialization processing on the state information of the network request, so as to obtain a byte sequence conforming to the storage mode;
The storage module is used for storing the byte sequence corresponding to the network request into a persistence queue, wherein the persistence queue is a message queue created based on a nonvolatile memory;
the deserialization processing module is used for reading at least one byte sequence corresponding to the network request from the persistence queue so as to perform deserialization processing;
and the retry module is used for sending the at least one network request obtained after the deserialization processing to a channel, so that a container node in a container cluster reads the network request from the channel and re-executes it.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the network request processing method of any one of claims 1 to 12 when executing executable instructions stored in said memory.
15. A computer-readable storage medium storing executable instructions which, when executed, are configured to implement the network request processing method of any one of claims 1 to 12.
CN202110178915.1A 2021-02-09 2021-02-09 Network request processing method and device, electronic equipment and storage medium Active CN114915659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110178915.1A CN114915659B (en) 2021-02-09 2021-02-09 Network request processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114915659A CN114915659A (en) 2022-08-16
CN114915659B true CN114915659B (en) 2024-03-26

Family

ID=82761717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110178915.1A Active CN114915659B (en) 2021-02-09 2021-02-09 Network request processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114915659B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102473165A (en) * 2009-08-18 2012-05-23 VeriSign Inc Method and system for intelligent routing of requests over EPP
CN104660708A (en) * 2015-03-13 2015-05-27 Huang Qingyu HTTP (Hyper Text Transfer Protocol) based mobile application message forwarding method and system
CN107391269A (en) * 2016-03-28 2017-11-24 Alibaba Group Holding Ltd Method and apparatus for processing messages through a persistence queue
CN109428861A (en) * 2017-08-29 2019-03-05 Alibaba Group Holding Ltd Network communication method and equipment
CN109474688A (en) * 2018-11-27 2019-03-15 Beijing Microlive Vision Technology Co Ltd Sending method, device, equipment and medium for instant messaging network request messages
CN111104232A (en) * 2019-11-09 2020-05-05 Suzhou Inspur Intelligent Technology Co Ltd Method, device and medium for accelerating message writing of a message queue
US10673971B1 * 2015-06-17 2020-06-02 Amazon Technologies, Inc. Cross-partition messaging using distributed queues
CN111479334A (en) * 2020-03-20 2020-07-31 Ping An International Smart City Technology Co Ltd Network request retry method, device and terminal equipment
CN111770030A (en) * 2019-05-17 2020-10-13 Beijing Jingdong Shangke Information Technology Co Ltd Message persistence processing method, device and storage medium
WO2020215558A1 (en) * 2019-04-26 2020-10-29 Ping An Technology (Shenzhen) Co Ltd Data storage method, data query method, apparatus and device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a persistent message queue based on generic messages; Guo Shengxing; Wang Jing; Liao Jianxin; Journal of Beijing Technology and Business University (Natural Science Edition) (Issue 01); full text *

Also Published As

Publication number Publication date
CN114915659A (en) 2022-08-16

Similar Documents

Publication Publication Date Title
CN109582466B (en) Timed task execution method, distributed server cluster and electronic equipment
US11403152B2 (en) Task orchestration method and system
US8799906B2 (en) Processing a batched unit of work
US20160275123A1 (en) Pipeline execution of multiple map-reduce jobs
CN108712457B (en) Method and device for adjusting dynamic load of back-end server based on Nginx reverse proxy
CN113452774B (en) Message pushing method, device, equipment and storage medium
US8627327B2 (en) Thread classification suspension
US20190138375A1 (en) Optimization of message oriented middleware monitoring in heterogenenous computing environments
CN111897633A (en) Task processing method and device
CN113032099B (en) Cloud computing node, file management method and device
CN111930706B (en) Remote call-based distributed network file storage system and method
CN114416200A (en) System and method for monitoring, acquiring, configuring and dynamically managing and loading configuration of declarative cloud platform
US9298765B2 (en) Apparatus and method for handling partially inconsistent states among members of a cluster in an erratic storage network
CN110740145A (en) Message consumption method, device, storage medium and electronic equipment
JP4634058B2 (en) Real-time remote backup system and backup method thereof
CN114915659B (en) Network request processing method and device, electronic equipment and storage medium
CN112948096A (en) Batch scheduling method, device and equipment
CN113760522A (en) Task processing method and device
US20070067488A1 (en) System and method for transferring data
CN110825536A (en) Communication method and device between tasks in embedded real-time operating system
CN110929126A (en) Distributed crawler scheduling method based on remote procedure call
CN108121580B (en) Method and device for realizing application program notification service
CN114237891A (en) Resource scheduling method and device, electronic equipment and storage medium
CN113918364A (en) Redis-based lightweight message queue processing method and device
CN110288309B (en) Data interaction method, device, system, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant