CN110290217B - Data request processing method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN110290217B
Authority
CN
China
Prior art keywords
data
message queue
data request
target data
request
Prior art date
Legal status
Active
Application number
CN201910586036.5A
Other languages
Chinese (zh)
Other versions
CN110290217A (en)
Inventor
周罗武
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910586036.5A priority Critical patent/CN110290217B/en
Publication of CN110290217A publication Critical patent/CN110290217A/en
Application granted granted Critical
Publication of CN110290217B publication Critical patent/CN110290217B/en

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/6275 — Traffic control in data switching networks; queue scheduling characterised by scheduling criteria for service slots or service orders, based on priority
    • H04L 67/568 — Provisioning of proxy services; storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/61 — Scheduling or organising the servicing of application requests taking into account QoS or priority requirements
    • H04L 67/62 — Establishing a time schedule for servicing the requests

Landscapes

  • Engineering & Computer Science
  • Computer Networks & Wireless Communication
  • Signal Processing
  • Data Exchanges In Wide-Area Networks
  • Information Transfer Between Computers

Abstract

The application discloses a data request processing method and device, a storage medium, and an electronic device. The method includes: receiving a target data request; writing the target data request into a first message queue, where the first message queue contains data requests to be sent to a data source server, the data requests in the first message queue are taken out of the queue once per sending period and sent to the data source server, and the same number of data requests is taken out in each sending period; and, when a target sending period is reached, taking a plurality of data requests including the target data request out of the first message queue and sending them to the data source server, so as to notify the data source server to acquire the target data requested by those data requests.

Description

Data request processing method and device, storage medium and electronic device
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for processing a data request, a storage medium, and an electronic apparatus.
Background
In some scenarios, server data is difficult to obtain and access is rate-limited: for example, only a number of queries below a specified threshold may be made per second, while the request volume on the application side is very large and far exceeds the query frequency limit. As a result, when the user base grows or during prolonged traffic peaks, data queries fail, data calls time out, and the data provider's service may even be dragged down. This seriously affects the user experience and traffic growth, as well as the reliability and stability of the data provider's service.
The prior-art solution is to relieve the pressure on the data server by adding a cache server. However, data in the cache server cannot be guaranteed to be up to date, and when the cache server does not hold the requested data, a large number of requests still flow to the data server, so the underlying problem is not solved.
No effective solution to the above problems has yet been proposed.
Disclosure of Invention
The embodiment of the application provides a data request processing method and device, a storage medium and an electronic device, so as to reduce traffic pressure on a data server during a traffic peak.
According to an aspect of an embodiment of the present application, there is provided a method for processing a data request, including: receiving a target data request; writing the target data request into a first message queue, wherein the first message queue comprises data requests to be sent to the data source server, the data requests in the first message queue are set to be taken out of the first message queue according to sending periods and sent to the data source server, and the number of the data requests taken out of the first message queue in each sending period is the same; and when a target sending period is reached, taking out a plurality of data requests including the target data request from the first message queue, and sending the plurality of data requests including the target data request to the data source server so as to inform the data source server of acquiring target data requested by the plurality of data requests.
According to another aspect of the embodiments of the present application, there is also provided a device for processing a data request, including: a receiving module configured to receive a target data request; a write module configured to write the target data request into a first message queue, where the first message queue includes data requests to be sent to the data source server, the data requests in the first message queue are set to be taken out from the first message queue according to sending cycles and sent to the data source server, and the number of the data requests taken out from the first message queue in each sending cycle is the same; a fetching module configured to fetch a plurality of data requests including the target data request from the first message queue when a target transmission period is reached; the sending module is configured to send a plurality of data requests including the target data request to the data source server so as to inform the data source server to acquire the target data requested by the plurality of data requests.
Optionally, the apparatus further comprises: a first acquisition module configured to acquire a first data request, where the first data request is used for requesting first data from the data source server; a first discarding module configured to discard the first data request when the number of data requests to be sent to the data source server included in the first message queue is greater than a predetermined threshold; a second write module configured to write the first data request into a second message queue, where the data requests included in the second message queue are set to be written back into the first message queue if the number of data requests included in the first message queue is less than the predetermined threshold and/or the first message queue is in an idle state.
Optionally, the apparatus further comprises: the cache module is configured to store the discarded first data request on a cache server and keep the first data request for a specified time; a write back module configured to write the discarded first data request back to the first message queue if the number of data requests included in the first message queue is less than the predetermined threshold and/or the first message queue is in an idle state.
Optionally, the first writing module includes: a first writing unit configured to write the target data request in a head-of-queue position in the first message queue if the priority of the target data request is higher than a predetermined threshold, and write the target data request in an end-of-queue position in the first message queue on the intermediate server if the priority of the target data request is lower than the predetermined threshold; a second writing unit configured to write the target data request to a head-of-queue position in the first message queue if the priority of the target data request is higher than the priority of a data request located at the head-of-queue position in the first message queue, and write the target data request to an end-of-queue position in the first message queue if the priority of the target data request is lower than the priority of a data request located at the end-of-queue position in the first message queue.
Optionally, the apparatus further comprises: a second obtaining module configured to obtain a second data request, where the second data request is used to request the target data from the data source server; a second discard module configured to discard the second data request.
Optionally, the apparatus further comprises: the second sending module is configured to send the target data request to the data source server through a proxy server after sending the target data request to the data source server and after the data source server obtains target data requested by the target data request; a third obtaining module configured to obtain the target data received by the proxy server from the data source server.
According to another aspect of the embodiments of the present application, there is also provided a storage medium, in which a computer program is stored, where the computer program is configured to execute the processing method of the data request when running.
According to another aspect of the embodiments of the present application, there is also provided an electronic apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the method for processing the data request through the computer program.
In the embodiment of the application, the target data request is received on the intermediate server and written into a first message queue, where the first message queue contains data requests to be sent to a data source server, the data requests in the first message queue are taken out of the queue once per sending period and sent to the data source server, and the same number of data requests is taken out in each sending period. Sending data requests at a fixed rate and in fixed quantities controls the request traffic to a certain extent: the number of requests sent to the data source server is neither so small that resources are wasted nor so large that server pressure becomes excessive. When a target sending period is reached, a plurality of data requests including the target data request are taken out of the first message queue and sent to the data source server to notify it to acquire the target data requested by those requests. Because the data source server fetches the data in advance according to the received data requests, the validity and real-time performance of the data are guaranteed, a caching effect is achieved, and the traffic pressure on the data server during traffic peaks is further reduced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a diagram illustrating a hardware environment for an alternative method for processing a data request according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative method of processing a data request according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application environment of an alternative data request processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative battle-performance query interface for users of a certain community according to an embodiment of the application;
FIG. 5 is a block diagram of an alternative data request processing apparatus according to an embodiment of the present application;
FIG. 6 is a block diagram of an alternative data request processing system according to an embodiment of the present application;
FIG. 7 is an interaction flow diagram of an alternative data request processing method according to an embodiment of the application;
FIG. 8 is an interaction flow diagram within the alternative intermediate server of FIG. 7 according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the prior art, game data is difficult to obtain in some scenarios and access is rate-limited: for example, only a number of queries below a specified threshold may be made per second, while the request volume on the application side far exceeds this limit. Consequently, when the user base grows or during prolonged business peaks, data queries fail, game data calls time out, and the game data provider's service may even be dragged down. This seriously affects the user experience and traffic growth, as well as the reliability and stability of the data provider's services.
At present, the industry generally adds database caching to reduce interaction with the data source: a frequency-limiting module serving as a cache database is placed between the logic server and the data source server. The main flow is as follows: a user initiates a query request at the client; after receiving the request, the logic server first queries the cache database for the data; if the data exists, it is returned directly to the client, and if not, the query request is forwarded to the data source server; the data source server returns the result to the frequency-limiting server, which returns it to the logic server, which in turn returns it to the client.
The above architecture has the following drawbacks: the number of client requests is not controlled, so when client requests increase, the pressure on the logic server and the data source server becomes excessive and the data server fails to pull data. Although adding a cache reduces interaction with the data source server to a certain extent, it compromises the real-time accuracy of the data.
In order to solve the above technical problem, an embodiment of the present application provides a method for processing a data request. Fig. 1 is a schematic diagram of a hardware environment of an optional data request processing method according to an embodiment of the present application, and as shown in fig. 1, the data request processing method mainly includes the following steps:
step S102, the user equipment 102 sends a target data request to the network 110;
step S104, the network 110 forwards the target data request to the server 112;
step S106, the server 112 acquires a data result according to the target data request;
step S108, the server 112 returns the target data result to the network 110;
in step S110, the network 110 feeds back the target data result to the user equipment 102.
The user device 102 may include, but is not limited to, the memory 104, the processor 106, and the display 108 internally, and the server 112 may include, but is not limited to, the database 114 and the processing engine 116 internally.
Fig. 2 is a flowchart of an alternative data request processing method according to an embodiment of the present application. As shown in fig. 2, the method includes:
step S201, receiving a target data request;
step S203, writing the target data request into a first message queue, wherein the first message queue comprises data requests to be sent to a data source server, the data requests in the first message queue are set to be taken out from the first message queue according to sending periods and sent to the data source server, and the number of the data requests taken out from the first message queue in each sending period is the same;
step S205, when the target sending period is reached, a plurality of data requests including the target data request are taken out from the first message queue, and the plurality of data requests including the target data request are sent to the data source server, so as to notify the data source server to obtain the target data requested by the plurality of data requests.
Optionally, in this embodiment, the data request processing method may be applied in a hardware environment formed by the client 302, the intermediate server 304, and the data source server 306 shown in fig. 3. The execution subject of each step shown in fig. 2 may be, but is not limited to, the intermediate server 304. Optionally, the intermediate server may comprise a plurality of servers, and the first message queue may be located on one of them. The intermediate server may be arranged between the logic server and the data source server, or between the client and the data source server. As shown in fig. 3, the intermediate server 304 receives a target data request sent by the client 302 and writes it into a first message queue, where the first message queue contains data requests to be sent to the data source server, the data requests in the first message queue are taken out of the queue once per sending period and sent to the data source server, and the same number of data requests is taken out in each sending period. When the target sending period is reached, a plurality of data requests including the target data request are taken out of the first message queue and sent to the data source server 306 to notify it to acquire the target data requested by those requests; the client 302 then requests the target data from the data source server 306.
Alternatively, the above data processing method is not limited to be applied to the scene of game data acquisition, and may be applied to any other server application scene in which the request volume is concurrent during the business peak, such as shopping, game, instant communication, etc.
Alternatively, the target data request may be, but not limited to, data involved in a game process, such as account login, game battle performance query, query of hero characters, acquisition of game background, query of character skins, game forum information, chat content, event records, data of killing, attack assistance and death times displayed in a game picture, score data of a game, data of broadcasting operation in the game process, and the like. Fig. 4 is a schematic diagram of a battle performance information query interface of a certain community of users according to an embodiment of the present application.
Alternatively, the first message queue may be a message queue composed of data requests. The data requests in the queue are fetched according to a preset sending period, and the number fetched each time is a preset number M. M may be fixed, or may vary with the sending period: for example, M is 100 in the first sending period, 200 in the second, and 100 again in the third, alternating in this way. Of course, these numbers are merely illustrative; in practice M may be set according to the load-bearing capacity of the server and the application scenario.
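As a minimal sketch of the timed, fixed-quantity dequeue described above (in Python; the class name, method names, and batch sizes are illustrative, not from the patent), with M alternating between two values from one sending period to the next:

```python
import collections

class FirstMessageQueue:
    """Illustrative sketch of the first message queue: requests are
    drained in batches of M, one batch per sending period, where M
    may vary from period to period (here it simply alternates)."""

    def __init__(self, batch_sizes=(100, 200)):
        self._queue = collections.deque()
        self._batch_sizes = batch_sizes
        self._period = 0

    def write(self, request):
        """Append a data request at the tail of the queue."""
        self._queue.append(request)

    def drain_for_period(self):
        """Take out at most M requests for the current sending period;
        the caller would then send the batch to the data source server."""
        m = self._batch_sizes[self._period % len(self._batch_sizes)]
        self._period += 1
        count = min(m, len(self._queue))
        return [self._queue.popleft() for _ in range(count)]
```

With `batch_sizes=(100, 200)` this reproduces the example in the text: 100 requests are taken out in the first period, 200 in the second, 100 in the third, and so on.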
Optionally, in this embodiment of the application, multiple data requests including a target data request are sent to the data source server, and the data source server may request data from the game server according to the received data requests, and then store the acquired data in a local database of the data source server, where the acquired data includes the target data. The subsequent client can directly send the target data request to the data source server to obtain the corresponding target data, so that multiple interactions between the game server and the front end are avoided, and the pressure of the game server is reduced. And because the data acquired in the data source server is acquired in real time according to the target data request, the validity and real-time performance of the data can be ensured, the data is prevented from being updated, but the data stored in the data source server is delayed.
As an optional scheme, after writing the target data request into the first message queue, the method further includes:
s1, acquiring a first data request, wherein the first data request is used for requesting first data from a data source server;
s2, in case that the number of data requests to be sent to the data source server included in the first message queue is greater than a predetermined threshold, discarding the first data request, or writing the first data request into a second message queue, wherein the data requests included in the second message queue are set to be written back into the first message queue in case that the number of data requests included in the first message queue is less than the predetermined threshold and/or the first message queue is in an idle state.
Optionally, the acquired first data request may be, but is not limited to, data involved in the game process, such as account login, game battle performance inquiry, inquiry of hero characters, acquisition of game background, inquiry of character skins, game forum information, chat content, event records, data of killing, attack assistance and death times displayed in game pictures, score data of one game, data of broadcasting operations during the game process (three-killing, super-god, wonderful and the like), and the like.
Optionally, the maximum number of data requests in the first message queue, that is, the predetermined threshold, is fixed; beyond it, subsequent data requests can no longer be written into the first message queue. This too is a form of request flow control.
Optionally, when the number of requests in the first message queue exceeds the preset threshold, subsequent data requests may be handled in one of two ways: they are either discarded directly or written into the second message queue. The second message queue may also have a maximum capacity, and when it is full as well, subsequent data requests are discarded. For example, the preset threshold of the first message queue may be, but is not limited to, 100: once 100 data requests have been written into the first message queue, the 101st request may be discarded directly or written into the second message queue. If the maximum threshold of the second message queue is also 100, then when it too is full, the 201st request is discarded.
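The overflow policy and the worked example above (thresholds of 100 for both queues, with the 201st request dropped) can be sketched as follows; the constants and function names are illustrative assumptions:

```python
from collections import deque

FIRST_MAX = 100    # predetermined threshold of the first message queue
SECOND_MAX = 100   # maximum threshold of the second message queue

first_queue = deque()
second_queue = deque()
dropped = []

def submit(request):
    """Write into the first queue while it is below its threshold;
    overflow into the second queue; discard once both are full."""
    if len(first_queue) < FIRST_MAX:
        first_queue.append(request)
    elif len(second_queue) < SECOND_MAX:
        second_queue.append(request)
    else:
        dropped.append(request)

# 201 requests: 100 land in the first queue, 100 in the second,
# and the 201st is discarded.
for i in range(1, 202):
    submit(i)
```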
Optionally, when the first message queue works normally, the second message queue exists as a standby queue, and does not interact with other servers directly. A portion of the data requests in the second message queue may be written back to the first message queue when the number of data requests in the first message queue is less than a predetermined threshold (e.g., 50), or may be written back to the first message queue when the first message queue is idle.
Alternatively, the second message queue may work in place of the first message queue when the first message queue fails. The data requests in the first message queue can be written into the second message queue, or the data requests in the first message queue are discarded due to faults, and a specified number of data requests are directly taken out from the second message queue and sent to the data source server.
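A failover of this kind, with the standby queue serving batches directly when the first queue is down, might be sketched as follows (the failure is modelled crudely as the first queue being `None`; that modelling choice is ours, not the patent's):

```python
from collections import deque

def drain_with_failover(first_queue, second_queue, batch_size):
    """Take a batch from the first queue, or from the standby second
    queue when the first queue has failed (modelled here as None)."""
    source = second_queue if first_queue is None else first_queue
    batch = []
    while source and len(batch) < batch_size:
        batch.append(source.popleft())
    return batch
```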
As an optional scheme, after discarding the first data request, the method further includes:
s1, storing the discarded first data request on a cache server and keeping the first data request for a specified time;
s2, writing the discarded first data request back to the first message queue in case the number of data requests included in the first message queue is less than a predetermined threshold and/or the first message queue is in an idle state.
Optionally, the cache server may be one of the intermediate servers, or may exist independently of the intermediate server. The data requests discarded on the first message queue or the second message queue can be temporarily stored on the cache server and reserved for a specified time, so that the problem of poor user experience caused by directly discarding important requests is avoided. In this embodiment, the data request on the cache server may be written back to the first message queue when the number of data requests included in the first message queue is less than the predetermined threshold and/or the first message queue is in an idle state, and may also be written back to a standby message queue, for example, the second message queue, when the first message queue fails.
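The cache-and-write-back behaviour above (a retention window on the cache server, plus write-back while the first queue is below a threshold) can be sketched as follows; the retention time, watermark value, and function names are illustrative assumptions:

```python
from collections import deque

RETENTION_SECONDS = 60.0   # "specified time" a discarded request is kept
LOW_WATERMARK = 50         # write back while the first queue is below this

cache = []                 # (timestamp, request) pairs on the cache server
first_queue = deque()

def cache_discarded(request, now):
    """Temporarily keep a discarded request on the cache server."""
    cache.append((now, request))

def write_back(now):
    """Expire stale entries, then move fresh ones back into the first
    message queue while it is below the low watermark."""
    cache[:] = [(t, r) for t, r in cache if now - t <= RETENTION_SECONDS]
    while cache and len(first_queue) < LOW_WATERMARK:
        _, request = cache.pop(0)
        first_queue.append(request)
```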
As an alternative, writing the target data request into the first message queue may be implemented by:
s1, writing the target data request into the head position of the first message queue under the condition that the priority of the target data request is higher than the preset threshold value, and writing the target data request into the tail position of the first message queue under the condition that the priority of the target data request is lower than the preset threshold value; or
S2, writing the target data request to the head position of the first message queue if the priority of the target data request is higher than the priority of the data request at the head position of the first message queue, and writing the target data request to the tail position of the first message queue if the priority of the target data request is lower than the priority of the data request at the tail position of the first message queue.
It should be noted that writing the target data request into the first message queue in the above manner uses the priority ordering of data requests to determine how important each request is, and hence the order in which different requests are processed. By assigning priorities, relatively important data requests can be processed first, preventing them from being discarded or delayed. Different priority rules can be set for different application scenarios. For example, in a game scenario, user A requests to log in to the game, user B requests the previous battle-performance ranking, and user C requests to view the chat log of a dialog box. One possible priority assignment is: user A highest, set to 5; user B lowest, set to 1; user C in between, set to 3.
Alternatively, thresholds for priority may be set, for example an upper threshold and a lower threshold. The target data request is written to the head position of the first message queue if its priority is higher than a preset upper threshold, and to the tail position if its priority is lower than a preset lower threshold. For example, if the priority of user A's request is higher than the upper threshold 4, it is written to the head of the first message queue; if the priority of user B's request is lower than the lower threshold 2, it is written to the tail. Alternatively, a single preset threshold of 3 may be set: user A's request, with priority above 3, is written to the head, and user B's request, with priority below 3, is written to the tail. User C's request may be written in the normal order, for example at the current last position of the first message queue, or at a middle position or any position between the head and the tail according to its priority; this is not limited in the embodiments of the application.
Alternatively, if a data request with priority 1 is received and the data request at the tail of the current first message queue has priority 2, that is, the priority of the newly received data request is lower than the priority of the request at the tail of the current message queue, the newly received data request with priority 1 may be written to the tail of the queue. Conversely, if a data request with priority 5 is received and the data request at the head of the current first message queue has priority 4, that is, the priority of the newly received data request is higher than the priority of the request at the head of the current message queue, the newly received data request with priority 5 may be written to the head of the queue.
When data requests are subsequently taken out of the message queue in each preset period, they are taken from the head of the queue each time; once the specified number has been taken out, the extracted data requests are sent to the data source server. This guarantees that data requests with high priority are processed first.
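Under the assumptions above (priorities 1 to 5, upper threshold 4, lower threshold 2), the head-or-tail placement rule can be sketched as follows. The field names and threshold values here are illustrative, not mandated by this embodiment:

```python
from collections import deque

UPPER_THRESHOLD = 4   # illustrative value from the example above
LOWER_THRESHOLD = 2   # illustrative value from the example above

def write_request(queue: deque, request: dict) -> None:
    """Write a data request to the first message queue by priority.

    Requests above the upper threshold go to the head of the queue;
    in this simple sketch, requests below the lower threshold and
    mid-priority requests are both appended at the tail in arrival order.
    """
    priority = request["priority"]        # assumed numeric field, 1 = lowest
    if priority > UPPER_THRESHOLD:
        queue.appendleft(request)         # head of queue: taken out first
    else:
        queue.append(request)             # tail / normal arrival order

# Users C (chat log, 3), B (ranking, 1) and A (login, 5) from the example:
q = deque()
for user, prio in (("C", 3), ("B", 1), ("A", 5)):
    write_request(q, {"user": user, "priority": prio})
# resulting order: A at the head, then C, then B
```

Because requests are always taken from the head each sending period, user A's login request is forwarded to the data source server before the lower-priority requests.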
In an optional aspect, before writing the target data request into the first message queue, the method further includes:
s1, acquiring a second data request, wherein the second data request is used for requesting target data from the data source server;
and S2, discarding the second data request.
It should be noted that the second data request asks for exactly the same data as the target data request, namely the target data. To save resources, the received data requests are filtered before being written to the first message queue, that is, merged and deduplicated: repeated data requests are merged into one, so that when a data request duplicating the target data request is subsequently received, it is discarded directly rather than written into the first message queue. This effectively reduces the server resources consumed by large numbers of repeated data requests.
For example, during a competition, user A requests the current game battle performance. After the corresponding data has been acquired from the data source server, and before the competition ends, user A requests the current game battle performance again. On receiving user A's data request again, the intermediate server determines that the same request has been made before and that the data corresponding to the request has not been updated, so the intermediate server may directly discard the repeated request. In this case the logic server can answer directly: when user A requests the game battle performance data for the first time, the logic server stores the corresponding data locally, and when the same data is requested again, the logic server can return the stored data to user A without routing the repeated request through the intermediate server and the data source server again.
Optionally, the merging and deduplication may be performed by a scaling server, and the scaling server may be one of the intermediate servers, which is not limited in this embodiment of the present application.
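A minimal sketch of this merge-and-deduplicate step, assuming each request carries a hypothetical `data_key` field identifying the data it asks for (the field name and key format are illustrative):

```python
from collections import deque

def enqueue_if_new(queue: deque, pending_keys: set, request: dict) -> bool:
    """Merge duplicate data requests before the first message queue:
    only the first request for a given data key is enqueued; any later
    request for the same, not-yet-updated data is discarded directly."""
    key = request["data_key"]     # hypothetical field, e.g. "battle_perf:user_a"
    if key in pending_keys:
        return False              # duplicate: drop without enqueueing
    pending_keys.add(key)
    queue.append(request)
    return True

queue, pending = deque(), set()
enqueue_if_new(queue, pending, {"data_key": "battle_perf:user_a"})  # enqueued
enqueue_if_new(queue, pending, {"data_key": "battle_perf:user_a"})  # discarded
```

In practice a key would be removed from `pending_keys` once the corresponding data is returned or updated, so that fresh requests for changed data pass through again.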
In an optional aspect, after sending the target data request to the data source server, the method further includes:
s1, after the target data request has been sent to the data source server and the data source server has acquired the target data requested by the target data request, sending the target data request to the data source server through the proxy server;
and S2, acquiring the target data received by the proxy server from the data source server.
In an optional scheme, the client interacts directly with neither the intermediate server nor the data source server. A logic server is arranged between the client and the intermediate server, and a logic server and a proxy server are arranged between the client and the data source server.
The client of the user sends the target data request to the logic server, and the logic server forwards the data request to the intermediate server and sends the data request to the data source server through the intermediate server, so that the data source server obtains the corresponding target data. After the data source server acquires the target data, the logic server queries the target data from the data source server through the proxy server, then the proxy server returns the acquired target data to the logic server, and the logic server returns the target data to the client.
By the method, after the data source server acquires the target data, the target data request is sent to the proxy server, and the proxy server requests the data source server to acquire the data according to the target data request, so that the target data can be ensured to be updated in real time.
In an alternative scheme, the logical server may be made aware of when to send the target data request to the proxy server by:
s1, after sending the target data request to the data source server, receiving a notification message, wherein the notification message is used for indicating that the data source server has acquired the target data requested by the target data request;
and S2, responding to the notification message and sending the target data request to the data source server through the proxy server.
Alternatively, the notification message may be carried on any communication message between the servers. After the logic server receives the notification message sent by the intermediate server and thereby learns that the data source server has acquired the target data, the logic server sends the target data request to the proxy server. Alternatively, after the intermediate server sends the data request to the data source server, the intermediate server may send the notification message to the logic server directly, without waiting for feedback from the data source server, to notify the logic server that it may send the data request to the proxy server.
Alternatively, a clock signal may be set on the logic server, and the target data request is sent to the proxy server a predetermined time after it has been sent to the intermediate server. For example, the logic server sends the target data request to the proxy server 1 s after sending it to the intermediate server.
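The clock-signal variant can be sketched with a timer. Here `send_to_proxy` is a hypothetical stand-in for the call that asks the proxy server to fetch the target data from the data source server:

```python
import threading

fetched = []  # records requests forwarded to the proxy (stand-in for an RPC)

def send_to_proxy(request: dict) -> None:
    """Hypothetical stand-in for asking the proxy server to query the
    data source server for the target data."""
    fetched.append(request)

def schedule_proxy_fetch(request: dict,
                         delay_seconds: float = 1.0) -> threading.Timer:
    """Send the target data request to the proxy server a predetermined
    time after it was handed to the intermediate server (clock-signal
    variant described above)."""
    timer = threading.Timer(delay_seconds, send_to_proxy, args=(request,))
    timer.start()
    return timer
```

For the 1 s example above, the logic server would call `schedule_proxy_fetch(request, delay_seconds=1.0)` immediately after forwarding the request to the intermediate server.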
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to another aspect of the embodiments of the present application, there is also provided a data request processing apparatus for implementing the data request processing method, as shown in fig. 5, the apparatus includes:
a receiving module 50 configured to receive a target data request;
a first writing module 52, configured to write the target data request into a first message queue, where the first message queue includes data requests to be sent to the data source server, the data requests in the first message queue are set to be taken out from the first message queue according to sending cycles and sent to the data source server, and the number of the data requests taken out from the first message queue in each sending cycle is the same;
a fetching module 54 configured to fetch a plurality of data requests including the target data request from the first message queue upon reaching the target sending period;
the first sending module 56 is configured to send a plurality of data requests including the target data request to the data source server to notify the data source server to obtain the target data requested by the plurality of data requests.
Optionally, the apparatus further comprises: the data source server comprises a first acquisition module, a second acquisition module and a first display module, wherein the first acquisition module is configured to acquire a first data request, and the first data request is used for requesting first data from the data source server; a first discarding module configured to discard the first data request when the number of data requests to be sent to the data source server included in the first message queue is greater than a predetermined threshold; a second write module configured to write the first data request into a second message queue, wherein the data requests included in the second message queue are set to be written back into the first message queue if the number of data requests included in the first message queue is less than a predetermined threshold and/or the first message queue is in an idle state.
Optionally, the apparatus further comprises: the cache module is configured to store the discarded first data request on a cache server and keep the first data request for a specified time; a write back module configured to write the discarded first data requests back to the first message queue if the number of data requests included in the first message queue is less than a predetermined threshold and/or the first message queue is in an idle state.
Optionally, the first writing module 52 includes: a first writing unit configured to write the target data request to a head-of-queue position in the first message queue if the priority of the target data request is higher than a predetermined threshold, and write the target data request to an end-of-queue position in the first message queue if the priority of the target data request is lower than the predetermined threshold; and a second writing unit configured to write the target data request to the head-of-queue position in the first message queue if the priority of the target data request is higher than that of the data request at the head-of-queue position in the first message queue, and write the target data request to the end-of-queue position in the first message queue if the priority of the target data request is lower than that of the data request at the end-of-queue position in the first message queue.
Optionally, the apparatus further comprises: the second acquisition module is configured to acquire a second data request, wherein the second data request is used for requesting target data from the data source server; a second discard module configured to discard the second data request.
Optionally, the apparatus further comprises: the second sending module is configured to send the target data request to the data source server through the proxy server after sending the target data request to the data source server and after the data source server acquires target data requested by the target data request; and the third acquisition module is configured to acquire the target data received by the proxy server from the data source server.
Optionally, the second sending module includes: the receiving unit is configured to receive a notification message after sending the target data request to the data source server, wherein the notification message is used for indicating that the data source server has acquired the target data requested by the target data request; and the sending unit is configured to respond to the notification message and send the target data request to the data source server through the proxy server.
According to another embodiment of the present application, there is also provided a system for processing a data request, configured to perform any of the above method embodiments. Fig. 6 is a block diagram of a structure of an optional data request processing system according to an embodiment of the present application, and fig. 7 is an interaction flowchart of a data request processing method according to an embodiment of the present application, and as shown in fig. 6 and fig. 7, the system includes:
a logic server 62 configured to obtain the target data request sent by the client 60 and send the target data request to an intermediate server 64;
the intermediate server 64 is configured to send the target data request to the data source server 66. The intermediate server 64 includes a scaling server 640, a message queue 642 and a flow control server 644: the scaling server 640 merges and deduplicates the received data requests and sends them to the message queue 642; the message queue 642 orders the data requests by priority and discards data requests exceeding a preset threshold number; and the flow control server 644 takes a specified number of data requests from the head of the message queue in each time period and sends them to the data source server 66;
a data source server 66 configured to obtain target data requested by the target data request from a game data server 68;
the logical server 62 is further configured to send the target data request to the proxy server 70 after the intermediate server 64 sends the target data request to the data source server 66;
a proxy server 70 configured to send a target data request to the data source server 66, receive target data returned from the data source server 66, and send the received target data to the logic server 62;
the logical server 62 is also configured to send the target data to the client 60.
Fig. 8 is an interaction flowchart of the internal processing of the intermediate server shown in fig. 7 according to an embodiment of the present application. As shown in fig. 8, the intermediate server 64 includes a scaling server 640, a message queue 642 and a flow control server 644: the scaling server 640 merges and deduplicates the received data requests and sends them to the message queue 642; the message queue 642 orders the data requests by priority and discards data requests exceeding the preset threshold number; and the flow control server 644 takes a specified number of data requests from the head of the message queue in each time period and sends them to the data source server 66.
Because the data requests are processed layer by layer by the scaling server, the message queue and the flow control server, the data source server is protected from crashes caused by sudden surges in traffic, and the data requests that the data source server passes on to the game data server do not put excessive pressure on the game data server. This effectively solves the problem that, when the user volume grows or at service peaks, the game data is called too frequently and the game data server comes under excessive pressure.
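The two queue-facing stages (drop-on-overflow in the message queue, fixed-size batches in the flow control server) can be sketched together. The constants below are illustrative assumptions, not values fixed by this embodiment:

```python
from collections import deque

MAX_QUEUE_LEN = 1000   # preset threshold: requests beyond this are discarded
BATCH_SIZE = 50        # fixed number taken out in each sending period

def admit(queue: deque, request: dict) -> bool:
    """Message-queue stage: enqueue the request, or drop it once the
    queue already holds the preset threshold number of requests."""
    if len(queue) >= MAX_QUEUE_LEN:
        return False               # over threshold: discard
    queue.append(request)
    return True

def drain_one_period(queue: deque, batch_size: int = BATCH_SIZE) -> list:
    """Flow-control stage: take at most batch_size requests from the
    head of the queue for forwarding to the data source server."""
    batch = []
    while queue and len(batch) < batch_size:
        batch.append(queue.popleft())
    return batch
```

In each sending period the flow control server would call `drain_one_period` and forward the batch, so the data source server never receives more than `BATCH_SIZE` requests per period, however bursty the incoming traffic is.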
According to another aspect of the embodiment of the present invention, there is also provided an electronic device for implementing the above data request processing method, where the electronic device may be applied, but is not limited, to the server 112 shown in fig. 1. As shown in fig. 9, the electronic device comprises a memory 902 and a processor 904, the memory 902 having a computer program stored therein, the processor 904 being arranged to perform the steps of any of the above-described method embodiments by means of the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, receiving the target data request on the intermediate server;
s2, writing the target data request into a first message queue on the intermediate server, wherein the first message queue comprises the data request to be sent to the data source server, the data request in the first message queue is set to be taken out from the first message queue according to sending periods and sent to the data source server, and the number of the data requests taken out from the first message queue in each sending period is the same;
and S3, when the target sending period is reached, taking out a plurality of data requests including the target data request from the first message queue, and sending the plurality of data requests including the target data request to the data source server so as to inform the data source server to acquire the target data requested by the plurality of data requests.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 9 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like; fig. 9 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 9, or have a different configuration from that shown in fig. 9.
The memory 902 may be configured to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for processing a data request in the embodiment of the present invention, and the processor 904 executes various functional applications and data processing by running the software programs and modules stored in the memory 902, that is, implementing the method for processing a data request. The memory 902 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 902 may further include memory located remotely from the processor 904, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 902 may be, but is not limited to, specifically configured to store data requested by the target data request. As an example, as shown in fig. 9, the memory 902 may include, but is not limited to, a receiving module 50, a first writing module 52, a fetching module 54, and a first sending module 56 in the data request processing apparatus. In addition, the data request processing apparatus may further include, but is not limited to, other module units in the data request processing apparatus, which is not described in this example again.
Optionally, the transmitting device 906 is used for receiving or sending data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 906 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices to communicate with the internet or a local area Network. In one example, the transmission device 906 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 908 for displaying various media files; and a connection bus 910 for connecting the respective module parts in the above-described electronic apparatus.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, receiving the target data request on the intermediate server;
s2, writing the target data request into a first message queue on the intermediate server, wherein the first message queue comprises the data request to be sent to the data source server, the data request in the first message queue is set to be taken out from the first message queue according to sending periods and sent to the data source server, and the number of the data requests taken out from the first message queue in each sending period is the same;
and S3, when the target sending period is reached, taking out a plurality of data requests including the target data request from the first message queue, and sending the plurality of data requests including the target data request to the data source server so as to inform the data source server to acquire the target data requested by the plurality of data requests.
Optionally, the storage medium is further configured to store a computer program for executing the steps included in the method in the foregoing embodiment, which is not described in detail in this embodiment.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the method described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (12)

1. A method for processing a data request, comprising:
receiving a target data request;
writing the target data request into a first message queue under the condition that the number of data requests to be sent to a data source server, which are included in the first message queue, is less than a predetermined threshold, wherein the first message queue includes the data requests to be sent to the data source server, the data requests in the first message queue are set to be taken out of the first message queue according to sending cycles and sent to the data source server, and the number of the data requests taken out of the first message queue in each sending cycle is the same;
when a target sending period is reached, taking out a plurality of data requests including the target data request from the first message queue, and sending the plurality of data requests including the target data request to the data source server so as to inform the data source server of acquiring target data requested by the plurality of data requests;
receiving a notification message, wherein the notification message is used for indicating that the data source server has acquired the target data requested by the target data request;
responding to the notification message, and sending the target data request to the data source server through a proxy server;
and acquiring the target data received by the proxy server from the data source server.
2. The method of claim 1, wherein after writing the target data request to the first message queue, the method further comprises:
acquiring a first data request, wherein the first data request is used for requesting first data from the data source server;
the method comprises the steps of discarding a first data request when the number of data requests to be sent to the data source server included in the first message queue is larger than a preset threshold value, or writing the first data request into a second message queue, wherein the data requests included in the second message queue are set to be written back into the first message queue when the number of data requests included in the first message queue is smaller than the preset threshold value and/or the first message queue is in an idle state.
3. The method of claim 2, wherein after discarding the first data request, the method further comprises:
storing the discarded first data request on a cache server and keeping the first data request for a specified time;
writing the discarded first data request back into the first message queue in a case that the number of data requests included in the first message queue is less than the predetermined threshold and/or the first message queue is in an idle state.
4. The method of claim 1, wherein writing the target data request to a first message queue comprises:
writing the target data request to a head-of-line position in the first message queue if the priority of the target data request is higher than a predetermined threshold, and writing the target data request to a tail-of-line position in the first message queue if the priority of the target data request is lower than the predetermined threshold; or
writing the target data request to the head-of-queue position in the first message queue under the condition that the priority of the target data request is higher than that of the data request at the head-of-queue position in the first message queue, and writing the target data request to the end-of-queue position in the first message queue under the condition that the priority of the target data request is lower than that of the data request at the end-of-queue position in the first message queue.
5. The method of claim 1, wherein prior to writing the target data request to the first message queue, the method further comprises:
acquiring a second data request, wherein the second data request is used for requesting the target data from the data source server;
discarding the second data request.
6. An apparatus for processing a data request, comprising:
a receiving module configured to receive a target data request;
a first write module, configured to write the target data request into a first message queue when a number of data requests to be sent to a data source server included in the first message queue is smaller than a predetermined threshold, where the first message queue includes the data requests to be sent to the data source server, the data requests in the first message queue are set to be taken out from the first message queue and sent to the data source server according to sending cycles, and the number of data requests taken out from the first message queue in each sending cycle is the same;
a fetching module configured to fetch a plurality of data requests including the target data request from the first message queue when a target transmission period is reached;
a first sending module, configured to send a plurality of data requests including the target data request to the data source server to notify the data source server to obtain target data requested by the plurality of data requests;
the second sending module is configured to send the target data request to the data source server through a proxy server after sending the target data request to the data source server and after the data source server obtains target data requested by the target data request;
a third obtaining module configured to obtain the target data received by the proxy server from the data source server;
the second sending module further comprises:
a receiving unit, configured to receive a notification message after sending the target data request to the data source server, where the notification message is used to indicate that the data source server has acquired the target data requested by the target data request;
and the sending unit is configured to respond to the notification message and send the target data request to the data source server through the proxy server.
7. The apparatus of claim 6, further comprising:
the data source server comprises a first acquisition module, a second acquisition module and a first display module, wherein the first acquisition module is configured to acquire a first data request, and the first data request is used for requesting first data from the data source server;
a first discarding module configured to discard the first data request when the number of data requests to be sent to the data source server included in the first message queue is greater than a predetermined threshold;
a second write module configured to write the first data request into a second message queue, wherein the data requests included in the second message queue are set to be written back into the first message queue if the number of data requests included in the first message queue is less than the predetermined threshold and/or the first message queue is in an idle state.
8. The apparatus of claim 7, further comprising:
a cache module, configured to store the discarded first data request on a cache server and keep it there for a specified time;
a write back module configured to write the discarded first data request back to the first message queue if the number of data requests included in the first message queue is less than the predetermined threshold and/or the first message queue is in an idle state.
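Claims 7 and 8 describe an overflow path: a request rejected by the full first queue is held in a second queue (or on a cache server) and written back once the first queue has room again. A minimal sketch under those assumptions (class and method names are hypothetical):

```python
from collections import deque

class OverflowBuffer:
    """Sketch of claims 7/8: requests rejected by the full first
    queue wait in a second queue and are written back later."""

    def __init__(self, threshold):
        self.first_queue = deque()
        self.second_queue = deque()
        self.threshold = threshold

    def submit(self, request):
        if len(self.first_queue) < self.threshold:
            self.first_queue.append(request)
        else:
            # Discarded from the first queue's perspective,
            # but retained in the second message queue.
            self.second_queue.append(request)

    def write_back(self):
        # Move buffered requests back while the first queue
        # is below the predetermined threshold.
        while self.second_queue and len(self.first_queue) < self.threshold:
            self.first_queue.append(self.second_queue.popleft())
```

`write_back` would be invoked when the first queue drains below the threshold or goes idle, so overflow requests are delayed rather than lost.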
9. The apparatus of claim 6, wherein the first write module comprises:
a first writing unit configured to write the target data request in a head-of-queue position in the first message queue if the priority of the target data request is higher than a predetermined threshold, and write the target data request in an end-of-queue position in the first message queue if the priority of the target data request is lower than the predetermined threshold;
a second writing unit configured to write the target data request to a head-of-queue position in the first message queue if the priority of the target data request is higher than the priority of a data request located at the head-of-queue position in the first message queue, and write the target data request to an end-of-queue position in the first message queue if the priority of the target data request is lower than the priority of a data request located at the end-of-queue position in the first message queue.
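Claim 9's first writing unit compares a request's priority against a threshold to choose between the head and the tail of the first message queue. A hedged sketch (the function name and tuple layout are illustrative only):

```python
from collections import deque

def insert_by_priority(queue, request, priority, priority_threshold):
    """Sketch of claim 9: high-priority requests enter at the
    head-of-queue position, others at the end-of-queue position."""
    if priority > priority_threshold:
        queue.appendleft((priority, request))  # head-of-queue position
    else:
        queue.append((priority, request))      # end-of-queue position
```

The second writing unit's variant compares against the priorities of the requests currently at the head and tail instead of a fixed threshold, but the head/tail placement is the same.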
10. The apparatus of claim 6, further comprising:
a second obtaining module configured to obtain a second data request, where the second data request is used to request the target data from the data source server;
a second discard module configured to discard the second data request.
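Claim 10 amounts to request deduplication: a second request for target data that is already being fetched is discarded outright. A minimal sketch (all names hypothetical; the claim does not specify how duplicates are detected):

```python
class Deduplicator:
    """Sketch of claim 10: discard a later request for data
    that has already been requested."""

    def __init__(self):
        self.in_flight = set()  # keys of data already requested

    def accept(self, data_key):
        if data_key in self.in_flight:
            return False  # duplicate request: discard it
        self.in_flight.add(data_key)
        return True
```

In practice the `in_flight` entry would be cleared once the target data has been delivered, so later refresh requests are not suppressed forever.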
11. A storage medium storing a program, wherein the program when executed by a processor performs the method of any of claims 1 to 5.
12. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 5 by means of the computer program.
CN201910586036.5A 2019-07-01 2019-07-01 Data request processing method and device, storage medium and electronic device Active CN110290217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910586036.5A CN110290217B (en) 2019-07-01 2019-07-01 Data request processing method and device, storage medium and electronic device


Publications (2)

Publication Number Publication Date
CN110290217A CN110290217A (en) 2019-09-27
CN110290217B true CN110290217B (en) 2022-04-26

Family

ID=68021577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910586036.5A Active CN110290217B (en) 2019-07-01 2019-07-01 Data request processing method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN110290217B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111190745B (en) * 2019-11-05 2024-01-30 腾讯科技(深圳)有限公司 Data processing method, device and computer readable storage medium
CN110990444A (en) * 2019-11-27 2020-04-10 中诚信征信有限公司 Data query method and device
CN111324536A (en) * 2020-02-19 2020-06-23 香港乐蜜有限公司 Pressure testing method and device, electronic equipment and storage medium
CN111445157A (en) * 2020-03-31 2020-07-24 深圳前海微众银行股份有限公司 Service data management method, device, equipment and storage medium
CN111488366B (en) * 2020-04-09 2023-08-01 百度在线网络技术(北京)有限公司 Relational database updating method, relational database updating device, relational database updating equipment and storage medium
CN111580993B (en) * 2020-05-11 2024-05-17 广州虎牙信息科技有限公司 Data processing method and device, electronic equipment and storage medium
CN111614549B (en) * 2020-05-21 2022-05-31 腾讯科技(深圳)有限公司 Interaction processing method and device, computer equipment and storage medium
CN111949424A (en) * 2020-09-18 2020-11-17 成都精灵云科技有限公司 Method for realizing queue for processing declarative events
CN112347107A (en) * 2020-11-11 2021-02-09 Oppo(重庆)智能科技有限公司 Data persistence method, mobile terminal and computer-readable storage medium
CN112699391B (en) * 2020-12-31 2023-06-06 青岛海尔科技有限公司 Target data sending method and privacy computing platform
CN114760357A (en) * 2022-03-23 2022-07-15 北京字节跳动网络技术有限公司 Request processing method and device, computer equipment and storage medium
CN115190173B (en) * 2022-07-08 2024-02-23 迈普通信技术股份有限公司 Network communication method, device, equipment and storage medium
CN116233053A (en) * 2022-12-05 2023-06-06 中国联合网络通信集团有限公司 Method, device and storage medium for sending service request message
CN116757796B (en) * 2023-08-22 2024-01-23 深圳硬之城信息技术有限公司 Shopping request response method based on nginx and related device
CN117234998B (en) * 2023-09-12 2024-06-07 中科驭数(北京)科技有限公司 Multi-host data access method and system
CN117687763B (en) * 2024-02-03 2024-04-09 成都医星科技有限公司 High concurrency data weak priority processing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102457906A (en) * 2010-10-26 2012-05-16 中国移动通信集团河南有限公司 Load balancing control method and system of message queues
CN104601675A (en) * 2014-12-29 2015-05-06 小米科技有限责任公司 Server load balancing method and device
CN106603703A (en) * 2016-12-29 2017-04-26 北京奇艺世纪科技有限公司 Back-to-source node determination method and apparatus
CN107645386A (en) * 2017-09-25 2018-01-30 网宿科技股份有限公司 A kind of method and apparatus for obtaining data resource
CN109600415A (en) * 2018-10-23 2019-04-09 平安科技(深圳)有限公司 The method, apparatus and computer equipment of target data are obtained from multiple source servers

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8972526B2 (en) * 2012-10-17 2015-03-03 Wal-Mart Stores, Inc. HTTP parallel processing router
CN106470169A (en) * 2015-08-19 2017-03-01 阿里巴巴集团控股有限公司 A kind of service request method of adjustment and equipment
CN107317763B (en) * 2017-06-30 2021-04-30 郑州云海信息技术有限公司 Flow control method and device between client and metadata server
CN107767236A (en) * 2017-11-14 2018-03-06 北京小度信息科技有限公司 A kind of order method for pushing, device, server and computer-readable recording medium


Similar Documents

Publication Publication Date Title
CN110290217B (en) Data request processing method and device, storage medium and electronic device
CN111030936B (en) Current-limiting control method and device for network access and computer-readable storage medium
CN109462631B (en) Data processing method, data processing device, storage medium and electronic device
CN109600437B (en) Downloading method of streaming media resource and cache server
CN107172171B (en) Service request processing method and device and computer readable storage medium
CN110659151B (en) Data verification method and device and storage medium
WO2019041738A1 (en) Client resource obtaining method and apparatus, terminal device, and storage medium
CN114039875B (en) Data acquisition method, device and system based on eBPF technology
US20170155712A1 (en) Method and device for updating cache data
CN108924043A (en) System monitoring method, gateway communication, gateway apparatus, service processing equipment
CN111930305A (en) Data storage method and device, storage medium and electronic device
CN105450513A (en) Method for filing mail attachments, and cloud storage server
CN113377507A (en) Task processing method, device, equipment and computer readable storage medium
WO2023125380A1 (en) Data management method and corresponding apparatus
CN101388863A (en) Implementing method and system for WAP gateway extraction service
CN110545453B (en) Content distribution method, device and system of content distribution network
CN110598085B (en) Information query method for terminal and terminal
CN104346228A (en) Application program sharing method and terminal
CN110324366B (en) Data processing method, device and system
CN112073747A (en) Streaming media data preview method, user end equipment and relay server
CN112688980A (en) Resource distribution method and device, and computer equipment
CN114466032B (en) CDN system merging and source returning method, device and storage medium
CN117729260A (en) Request sending method, client, electronic device and storage medium
CN116319305A (en) Routing configuration issuing method and device of virtual machine, storage medium and electronic equipment
CN116634481A (en) Method, device, equipment and storage medium for determining performance index

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant