CN113590314A - Network request data processing method and system - Google Patents

Info

Publication number
CN113590314A
Authority
CN
China
Prior art keywords
data
processing
server
processed
task processing
Prior art date
Legal status
Pending
Application number
CN202110790757.5A
Other languages
Chinese (zh)
Inventor
汪云爱
Current Assignee
Shanghai Yitan Network Technology Co ltd
Original Assignee
Shanghai Yitan Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Yitan Network Technology Co., Ltd.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/52: Program synchronisation; mutual exclusion, e.g. by means of semaphores
    • G06F 9/524: Deadlock detection or avoidance
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/604: Tools and structures for managing or administering access control systems
    • G06F 2221/00: Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/21: Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements
    • G06F 2221/2141: Access rights, e.g. capability lists, access control lists, access tables, access matrices

Abstract

The application relates to a network request data processing method and system. In the method, a scheduling server sends a scheduling request, carrying a network request data identifier, to the task processing servers; at least one task processing server competes for the right to process the data to be processed corresponding to that identifier, so that different task processing servers obtain processing rights for data corresponding to different identifiers; when a task processing server wins the processing right for the data corresponding to an identifier, it acquires that data; and the task processing server processes the data to obtain a processing result. The method ensures that resources are fully utilized.

Description

Network request data processing method and system
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method and a system for processing network request data.
Background
With the rapid development of the internet, a server must handle a large number of network requests while interacting with clients. To cope with this load, the server side is typically deployed as a primary server plus a standby server: ordinarily the primary server serves external requests, and the standby server provides service only when the primary server fails.
However, in this primary/standby scheme the standby server is woken up only when the primary server fails to execute a task, so the standby server's resources are wasted the rest of the time.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method and system for processing network request data, which improve resource utilization.
A method of network request data processing, the method comprising:
the scheduling server sends a scheduling request to the task processing server, wherein the scheduling request carries a network request data identifier;
at least one task processing server competes for the processing permission of the data to be processed corresponding to the network request data identifier, so that different task processing servers obtain the processing permission of the data to be processed corresponding to different network request data identifiers;
when the task processing server obtains the processing right of the data to be processed corresponding to the network request data identifier, the task processing server obtains the data to be processed corresponding to the network request data identifier;
and the task processing server processes the data to be processed to obtain a processing result.
In one embodiment, the method further comprises:
when a task processing server fails while processing the data to be processed, the scheduling server resends the scheduling request corresponding to the network request data identifier;
the task processing server which does not have faults competes for the processing authority of the data to be processed corresponding to the network request data identification;
when the task processing server which does not have a fault competes to obtain the processing right of the data to be processed corresponding to the network request data identifier, the task processing server which does not have the fault obtains the data to be processed corresponding to the network request data identifier;
and the task processing server which does not have faults processes the acquired data to be processed to obtain a processing result.
In one embodiment, the method further comprises:
the newly added task processing server registers to the scheduling server;
the scheduling server sends a scheduling request to the task processing server, and the scheduling request comprises the following steps:
and the scheduling server sends scheduling requests to all the task processing servers, wherein all the task processing servers comprise newly added task processing servers.
In one embodiment, the step of the task processing server competing for the processing right of the to-be-processed data corresponding to the network request data identifier includes:
the task processing server competes for a processing lock of the data to be processed corresponding to the network request data identifier;
after the task processing server processes the data to be processed to obtain a processing result, the method comprises the following steps:
and the task processing server releases the processing lock.
In one embodiment, after the task processing server contends for the processing lock of the to-be-processed data corresponding to the network request data identifier, the method further includes:
the task processing server acquires the effective time of the processing lock;
the data storage module judges whether the effective time of the processing lock is expired;
and when the valid time of the processing lock expires, the data storage module releases the processing lock.
In one embodiment, the scheduling period is greater than the sum of the effective time of the processing lock and the task processing time; the scheduling server sends a scheduling request to the task processing server, and the scheduling request comprises the following steps:
and the scheduling server sends scheduling requests to all the task processing servers.
In one embodiment, the method further comprises:
a gateway receives a matching request sent by a terminal, wherein the matching request carries a network request data identifier;
the gateway forwards the matching request to a request processing server;
the request processing server acquires data to be processed according to the matching request and stores the data to be processed into a data storage module queue corresponding to the corresponding network request data identifier;
the task processing server obtains the data to be processed corresponding to the network request data identifier, and the method comprises the following steps:
and the task processing server reads all the data to be processed in the data storage module queue corresponding to the network request data identifier from the data storage module.
In one embodiment, after the task processing server processes the data to be processed to obtain a processing result, the method includes:
and the task processing server asynchronously sends the processing result to a result processing server, and the result processing server is used for sending the processing result to a corresponding terminal and storing the processing result.
In one embodiment, the data storage module is a non-relational database, and the result storage module that stores the processing result is a relational database.
A network request data processing system, the system comprising a scheduling server and at least one task processing server;
the scheduling server is used for sending a scheduling request to the task processing server, wherein the scheduling request carries a network request data identifier;
the task processing servers are used for competing the processing permission of the data to be processed corresponding to the network request data identification, so that different task processing servers obtain the processing permission of the data to be processed corresponding to different network request data identifications; when the processing right of the data to be processed corresponding to the network request data identification is obtained through competition, the data to be processed corresponding to the network request data identification is obtained; and processing the data to be processed to obtain a processing result.
According to the network request data processing method and system, at least one task processing server competes for the processing permission of the to-be-processed data corresponding to the network request data identifier, and different task processing servers obtain the processing permission of the to-be-processed data corresponding to different network request data identifiers, so that each task processing server can be used, and the full utilization of resources is guaranteed.
Drawings
FIG. 1 is a block diagram of a network request data processing system in accordance with one embodiment;
FIG. 2 is a flow diagram illustrating a method for processing network request data in one embodiment;
FIG. 3 is a flowchart illustrating a method for processing network request data according to one embodiment;
FIG. 4 is a timing diagram of a method for network requested data processing in one embodiment;
FIG. 5 is a diagram illustrating storage of data to be processed, according to one embodiment;
FIG. 6 is a schematic diagram of a processing lock, in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The network request data processing method provided by the application can be applied to the application environment shown in fig. 1. The terminal 101 communicates with the gateway 102 through a network, the gateway 102 communicates with the request processing server 103 and the result processing server 104, the request processing server 103 further communicates with the information storage server and the data storage module 105, the scheduling server 106 communicates with the data storage module 105 and at least one task processing server 107, and the result processing server 104 further communicates with the result storage module 108.
The terminal 101 may send a matching request to the gateway 102, which forwards it to the request processing server 103; the request processing server 103 acquires the data to be processed from the information storage server and stores it in the data storage module 105. The scheduling server 106 monitors the data in the data storage module 105 and starts a scheduling task when data is present; it may also start scheduling tasks on a timer. In either case, the scheduling server 106 sends a scheduling request to the task processing servers 107, so that at least one task processing server competes for the right to process the data corresponding to the network request data identifier, and different task processing servers obtain processing rights for data corresponding to different identifiers. The task processing server that obtains the processing right acquires the corresponding data to be processed and processes it to obtain a processing result. In this way every task processing server can be used, which ensures full utilization of resources.
The terminal 101 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device. The request processing server 103, the result processing server 104, the scheduling server 106, and the task processing server 107 may each be implemented as an independent server or as a server cluster formed by a plurality of servers. The data storage module 105 is a non-relational database, which may be any of Redis, HBase, Memcached, MongoDB, Couchbase, LevelDB, Tair, or the like. The result storage module 108 is a relational database, which may be any of MySQL, Oracle, DB2, SQL Server, or the like. The processing right may be implemented by a lock, such as a Redis lock, a ZooKeeper distributed lock, or a MySQL optimistic lock.
In one embodiment, as shown in fig. 2, there is provided a network request data processing method, including the steps of:
s202: and the scheduling server sends a scheduling request to the task processing server, wherein the scheduling request carries the network request data identifier.
Specifically, the scheduling request may be sent by the scheduling server on a timer, or sent when the scheduling server detects that the amount of data to be processed in the data storage module meets a certain threshold. The network request data identifier is a group identifier assigned after the data to be processed is grouped; taking a team-forming scenario in a game as an example, the network request data identifier may be a game identifier. Note that one scheduling request may carry at least one network request data identifier; optionally, when the scheduling request is triggered on a timer, the number of identifiers it carries may equal the number of games.
S204: at least one task processing server competes for the processing permission of the data to be processed corresponding to the network request data identifier, so that different task processing servers obtain the processing permission of the data to be processed corresponding to different network request data identifiers.
Specifically, the processing permission means that the task processing server can process the to-be-processed data corresponding to the network request data identifier, and other task processing servers cannot process the to-be-processed data corresponding to the network request data identifier during the period that the task processing server processes the to-be-processed data.
Optionally, the processing permission may be implemented in a data lock manner, for example, in a data lock manner of the data storage module, and the at least one task processing server competes for the data lock of the to-be-processed data corresponding to the network request data identifier, so as to obtain the processing permission of the to-be-processed data corresponding to the network request data identifier, and then process the to-be-processed data.
The task processing servers are used to process the data to be processed. Each time the scheduling server sends a scheduling request to the task processing server cluster, the servers in the cluster begin to compete for the processing rights of the data corresponding to the network request data identifiers; in each round, a task processing server can obtain the processing right for the data corresponding to at most one identifier. Optionally, the number of task processing servers participating in each round of competition matches the number of network request data identifiers in the scheduling request, i.e. it is greater than or equal to, and preferably equal to, the number of identifiers, so that every task processing server subsequently processes data and the resources of the cluster are fully utilized.
To contend for the processing right of the data to be processed corresponding to a network request data identifier, a task processing server first executes SETNX; a return value of 1 means the contention succeeded. After winning, the server must execute EXPIRE to attach an expiration time to the lock, so that the lock is released automatically on timeout; this prevents an exception during execution from leaving the lock held forever so that no other task processing server can acquire it. In addition, after the server holding the lock finishes processing the data to be processed, it must execute DEL to release the lock promptly.
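The SETNX, EXPIRE, and DEL steps described above can be sketched as follows. This is a minimal illustration, not the patented implementation: a plain dictionary stands in for the data storage module (a real deployment would issue the corresponding Redis commands over the network), and all class, method, and identifier names are hypothetical.

```python
import time

class LockStore:
    """Stand-in for the data storage module (e.g. Redis)."""
    def __init__(self):
        self._locks = {}  # identifier -> (owner, expires_at)

    def setnx(self, identifier, owner, ttl_seconds):
        """Atomically take the lock if it is free or expired.
        Returns 1 on success, 0 on failure, mirroring SETNX semantics."""
        now = time.monotonic()
        current = self._locks.get(identifier)
        if current is not None and current[1] > now:
            return 0  # another task processing server holds the lock
        # SETNX followed by EXPIRE: attach an expiration time so a
        # crashed holder cannot block other servers forever.
        self._locks[identifier] = (owner, now + ttl_seconds)
        return 1

    def delete(self, identifier, owner):
        """DEL: release the lock promptly after processing finishes."""
        current = self._locks.get(identifier)
        if current is not None and current[0] == owner:
            del self._locks[identifier]

store = LockStore()
won = store.setnx("game-1", "server-A", ttl_seconds=0.2)   # contention succeeds
lost = store.setnx("game-1", "server-B", ttl_seconds=0.2)  # already held
store.delete("game-1", "server-A")                          # release after processing
free = store.setnx("game-1", "server-B", ttl_seconds=0.2)  # lock is available again
```

Note that a production Redis lock would set the value and expiration in a single atomic command to avoid a crash between SETNX and EXPIRE; the sketch combines them for brevity.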
Further, it should be noted that contention for the processing right can arise in three cases. First, when data to be processed corresponding to a new network request data identifier appears, the task processing servers in the cluster compete for the processing right of that data. Second, when a task processing server goes offline due to a fault, other task processing servers acquire the processing right of the data corresponding to the relevant identifier at the next task scheduling and re-acquire the corresponding data, so that no data is lost. Third, when a new task processing server is added, it competes with the other task processing servers for the processing rights of the data corresponding to the network request data identifiers.
S206: and when the task processing server obtains the processing right of the data to be processed corresponding to the network request data identifier, the task processing server obtains the data to be processed corresponding to the network request data identifier.
Specifically, when a task processing server obtains the processing right of the data to be processed corresponding to the network request data identifier, it acquires that data. Note that once each task processing server has obtained a processing right, the corresponding data to be processed can be acquired and processed in parallel across servers.
S208: and the task processing server processes the data to be processed to obtain a processing result.
Specifically, after the task processing server obtains the data to be processed, the data to be processed is processed.
Still taking team formation in a game as an example: each task processing server that acquires the lock for a game obtains the user data of the groups waiting to be formed for that game, and then matches teams according to the user data and the business rules to obtain the formation result.
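A minimal sketch of such matching follows. The actual business rule is not disclosed in the text, so a fixed team size is used purely as an illustration; the function name and parameters are hypothetical.

```python
def match_teams(pending_users, team_size):
    """Group users waiting for a game into teams of `team_size`.
    Users who do not fill a complete team are left over for the
    next scheduling round."""
    full = len(pending_users) - len(pending_users) % team_size
    teams = [pending_users[i:i + team_size] for i in range(0, full, team_size)]
    leftover = pending_users[full:]
    return teams, leftover

# Seven users waiting, teams of three: two full teams, one user left over.
teams, leftover = match_teams(["u1", "u2", "u3", "u4", "u5", "u6", "u7"], team_size=3)
```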
According to the network request data processing method, at least one task processing server competes for the processing permission of the to-be-processed data corresponding to the network request data identification, and different task processing servers obtain the processing permission of the to-be-processed data corresponding to different network request data identifications, so that each task processing server can be used, and the full utilization of resources is guaranteed.
In one embodiment, the network request data processing method further includes: when the task processing server fails in processing the data to be processed, the scheduling server resends the scheduling request corresponding to the network request data identifier; the task processing server which does not have faults competes for the processing authority of the data to be processed corresponding to the network request data identification; when the task processing server which does not have the fault competes for obtaining the processing right of the data to be processed corresponding to the network request data identifier, the task processing server which does not have the fault obtains the data to be processed corresponding to the network request data identifier; and the task processing server which does not have the fault processes the acquired data to be processed to obtain a processing result.
Specifically, this embodiment addresses the scenario in which a task processing server fails. To avoid losing data because of the fault, distributed processing is adopted: when one task processing server fails, the data to be processed is processed again after the scheduling server resends the scheduling request corresponding to the network request data identifier. Because the faulty task processing server never processed the data, the data to be processed in the data storage module is unchanged. Had the task processing server not failed, the data in the data storage module would have been modified accordingly after processing, for example by tagging the processed data with an identifier, removing it, or moving it to another location for storage.
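The keep-until-success behavior described above can be sketched as follows (all names are hypothetical): an item is removed from the queue only after it has been processed successfully, so a server fault leaves the unprocessed items in the data storage module for the next scheduling round.

```python
def process_identifier(queue, handler):
    """Process all pending items for one network request data identifier.
    Items are removed from `queue` only after `handler` succeeds, so if
    the task processing server fails mid-way, the remaining items are
    still present for the next scheduling round."""
    results = []
    while queue:
        item = queue[0]      # peek; do not remove yet
        results.append(handler(item))
        queue.pop(0)         # mark as processed only on success
    return results

queue = ["req-1", "req-2", "req-3"]

def flaky(item):
    # Simulated server fault while handling the third item.
    if item == "req-3":
        raise RuntimeError("server fault")
    return item.upper()

try:
    process_identifier(queue, flaky)
except RuntimeError:
    pass
# "req-3" survives in the queue for the next scheduling round.
```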
In this embodiment, since the task processing server fails, the data to be processed is not changed, and therefore, in the next scheduling, the task processing server that has not failed competes for obtaining the processing permission of the data to be processed corresponding to the network request data identifier, and continues to obtain and process the corresponding data to be processed according to the processing permission.
In the embodiment, by means of distributed processing and introduction of the processing permission, when the task processing server is abnormal, the data to be processed can still be processed by other task processing servers, so that the accuracy of data processing is ensured.
In one embodiment, the network request data processing method further includes: the newly added task processing server registers with the scheduling server. The scheduling server sends a scheduling request to the task processing server, and the scheduling request comprises the following steps: and the scheduling server sends scheduling requests to all the task processing servers, wherein all the task processing servers comprise the newly added task processing server.
Specifically, in the present embodiment, a scenario of adding a new task processing server is provided, and when a new task processing server is added, the new task processing server registers in the scheduling server, so that the scheduling server sends a scheduling request to all task processing servers, so that the new task processing server also participates in processing of data to be processed, and a specific processing manner may be referred to above, which is not described herein again.
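The registration flow can be sketched with a simple registry (class and method names are hypothetical, not from the patent): a newly added task processing server registers with the scheduling server, and every subsequent scheduling request is broadcast to all registered servers, the new one included.

```python
class SchedulingServer:
    """Minimal registry sketch: task processing servers register, and
    scheduling requests are broadcast to all registered servers."""
    def __init__(self):
        self.servers = []

    def register(self, server_name):
        if server_name not in self.servers:
            self.servers.append(server_name)

    def send_scheduling_request(self, identifier):
        # Returns the (recipient, identifier) pairs; a real scheduler
        # would deliver the request over the network instead.
        return [(server, identifier) for server in self.servers]

scheduler = SchedulingServer()
scheduler.register("server-A")
scheduler.register("server-B")
scheduler.register("server-C")  # newly added server joins seamlessly
sent = scheduler.send_scheduling_request("game-1")
```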
In the above embodiment, a new task processing server may be added to solve the problem of system upgrade iteration.
In one embodiment, the competing of the processing authority of the to-be-processed data corresponding to the network request data identifier by the task processing server includes: the task processing server competes for a processing lock of the data to be processed corresponding to the network request data identifier; after the task processing server processes the data to be processed to obtain a processing result, the method comprises the following steps: the task processing server releases the processing lock.
Specifically, in this embodiment, the processing right is a processing lock, the task processing server competes for the processing lock of the to-be-processed data corresponding to the network request data identifier, then processes the to-be-processed data according to the processing right of the to-be-processed data acquired by the processing lock, releases the processing lock after the processing is completed, and removes the processed to-be-processed data or identifies the processed to-be-processed data.
In this way the processing lock is released, and when the next scheduling request is handled, the task processing server can again compete for the processing lock of the data to be processed corresponding to the network request data identifier, thereby realizing cyclic processing.
In one embodiment, after the task processing server contends for the processing lock of the data to be processed corresponding to the network request data identifier, the method further includes: the task processing server acquires the effective time of a processing lock; the data storage module judges whether the effective time of the processing lock is expired; and when the valid time of the processing lock expires, the data storage module releases the processing lock.
Specifically, in this embodiment the valid time of the processing lock is set so that, if a task processing server fails while holding the lock, the lock is not held forever and subsequent requests can still be handled: when the valid time expires, the processing lock is released, so that subsequent requests are processed normally.
Wherein setting the validity time of the processing lock may be setting a revocation time according to the validity time of the processing lock, such that when the revocation time is reached, the data storage module releases the processing lock.
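The revocation-time mechanism can be sketched as follows (an illustration with hypothetical names, not the patented implementation): the revocation time is fixed when the lock is acquired, and the data storage module treats any lock past that time as released even if the holder never called release.

```python
import time

class ProcessingLock:
    """Sketch of a processing lock with a revocation time derived from
    the lock's valid time at acquisition. The data storage module can
    free any lock whose revocation time has passed."""
    def __init__(self, holder, valid_seconds):
        self.holder = holder
        self.revoke_at = time.monotonic() + valid_seconds

    def expired(self):
        return time.monotonic() >= self.revoke_at

lock = ProcessingLock("server-A", valid_seconds=0.05)
held_initially = not lock.expired()
time.sleep(0.06)             # the holder has faulted and never released
released = lock.expired()    # the storage module can now free the lock
```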
In one embodiment, the scheduling period is greater than the sum of the effective time of the processing lock and the task processing time; the scheduling server sends a scheduling request to the task processing server, and the scheduling request comprises the following steps: the scheduling server sends scheduling requests to all the task processing servers.
To make full use of the task processing servers, the scheduling period is greater than the sum of the valid time of the processing lock and the task processing time. For example, if the maximum time a task processing server may hold the processing lock is 200 ms, the data to be processed is handled within 100 ms, and the scheduling period is 300 ms, then all task processing servers are idle at each scheduling moment; the scheduling server can therefore send scheduling requests to all task processing servers, which compete for the processing lock and perform the subsequent processing.
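As a quick check of this invariant (the numbers below are purely illustrative and the function name is hypothetical):

```python
def is_valid_schedule(period_ms, lock_valid_ms, processing_ms):
    """The scheduling period must exceed the sum of the processing
    lock's valid time and the task processing time, so that every
    task processing server is idle again before the next request."""
    return period_ms > lock_valid_ms + processing_ms

ok = is_valid_schedule(period_ms=350, lock_valid_ms=200, processing_ms=100)
too_tight = is_valid_schedule(period_ms=250, lock_valid_ms=200, processing_ms=100)
```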
In one embodiment, the network request data processing method further includes: a gateway receives a matching request sent by a terminal, wherein the matching request carries a network request data identifier; the gateway forwards the matching request to a request processing server; the request processing server acquires data to be processed according to the matching request and stores the data to be processed into a data storage module queue corresponding to the corresponding network request data identifier; the task processing server acquires data to be processed corresponding to the network request data identifier, and the method comprises the following steps: and reading the data to be processed in the data storage module queue corresponding to the network request data identification from the data storage module.
Specifically, in this embodiment the terminal sends a matching request to the gateway, and the gateway forwards it to the corresponding request processing server, which obtains the data to be processed from another data server and stores it in the data storage module queue corresponding to the relevant network request data identifier. When the task processing server acquires the data to be processed, it reads from the data storage module the data in the queue corresponding to the network request data identifier, so that matching can be performed. In other embodiments, a certain amount of data to be processed may be read according to the processing capacity or the generation time of the data, or all the data in the queue may be read.
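The per-identifier queues can be sketched as below. This is a minimal in-memory stand-in for the data storage module (a real system might use Redis lists keyed by the identifier); all names are hypothetical.

```python
from collections import defaultdict, deque

class DataStorageModule:
    """Per-identifier queues: the request processing server enqueues
    data under its network request data identifier, and the task
    processing server that holds the lock drains that queue."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def enqueue(self, identifier, item):
        self.queues[identifier].append(item)

    def read_all(self, identifier):
        """Read all data to be processed for one identifier, emptying
        the queue so the same data is not processed twice."""
        queue = self.queues[identifier]
        items = list(queue)
        queue.clear()
        return items

store = DataStorageModule()
store.enqueue("game-1", {"user": "u1"})
store.enqueue("game-1", {"user": "u2"})
store.enqueue("game-2", {"user": "u3"})
batch = store.read_all("game-1")   # only game-1's queue is drained
```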
In one embodiment, after the data to be processed is processed to obtain a processing result, the method includes: asynchronously sending the processing result to a result processing server, which sends the processing result to the corresponding terminal and stores it.
Specifically, so that result delivery does not block the task processing server, the task processing server sends the processing result to the result processing server asynchronously after obtaining it. The result processing server then, on the one hand, sends the processing result to the corresponding terminal and, on the other hand, stores it; the two steps can be performed in parallel.
Optionally, the data storage module is a non-relational database, and the processing result is stored in a relational database.
Because a relational database is built on the relational data model, it is used to store the processing result, while the data to be processed is stored in a non-relational database: batch insertion, query and deletion in the non-relational database take roughly one tenth of the time they take in a relational database. Thus, to balance high efficiency and data safety, all data to be processed is kept in the non-relational database, where it is fetched and deleted during matching, while the processing result is written to the relational database, triggered asynchronously through a distributed message queue (mq), so the write does not affect the execution efficiency of data processing.
To help those skilled in the art fully understand the present application, refer to fig. 3 and fig. 4, where fig. 3 is a flowchart of the network request data processing method in another embodiment and fig. 4 is a timing diagram of the method in one embodiment. A game team-forming scene is taken as an example; the scene involves a terminal, a gateway, a request processing server, a data storage module, a scheduling server, a task processing server, a result storage module, and the like.
The terminal sends a matching request to the gateway; for example, the player terminal sends the matching request to the game gateway, and the gateway forwards it to the corresponding request processing server after receiving it. The game gateway is a service module that forwards matching requests, maintains client login information and heartbeats, and implements functions such as load balancing and rate limiting.
The request processing server receives the matching request and obtains the data to be processed from other information storage servers; for example, it obtains from other services the data needed for processing, such as the user's gender, rank segment and membership level, and then stores the acquired data to be processed in the data storage module. Specifically, refer to fig. 5, a schematic diagram of the storage of data to be processed in one embodiment.
The data storage module can use a non-relational database to reduce the time spent on retrieval, query, deletion and insertion.
The scheduling server initiates a scheduling request which, as above, may be triggered on a timer or by monitoring the data storage module for data to be processed. The scheduling server sends the scheduling request to the task processing servers, which then compete for processing locks. The processing locks are distributed locks: one task processing server may win the processing locks of several games, but the processing lock of one game can be won by only one task processing server. Specifically, refer to fig. 6, a schematic diagram of the processing locks in one embodiment, where one game corresponds to one processing lock and each processing lock can be acquired by only one task processing server.
In a game team-forming scene, several objects that meet the conditions are selected from a matching pool and formed into a team: multiple users initiate matching requests, and after the matching computation the team-forming result is returned to the users. A distributed lock is a mutual-exclusion mechanism that controls access to a shared resource across the machines of a distributed cluster. In this embodiment, the distributed lock is realized by combining the SETNX, EXPIRE and DELETE operations on a single key in Redis. Specifically, the task processing server first executes SETNX; a return value of 1 means the lock competition succeeded. After winning the competition, the task processing server executes EXPIRE to attach an expiration time to the lock, so that the lock is released automatically on timeout and an exception during execution cannot leave the lock permanently unreleased and uncontendable by other task processing servers. Finally, after the server holding the lock finishes processing the data to be processed, it executes DELETE to release the lock promptly.
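The SETNX/EXPIRE/DELETE combination above can be sketched as follows. This dict-based stand-in only models the command semantics for illustration; a real deployment would issue the commands against Redis, and all class, key, and server names here are our own assumptions, not from the patent.

```python
import time

class LockStore:
    """Models SETNX / EXPIRE / DELETE semantics on single keys."""
    def __init__(self):
        self._keys = {}  # lock name -> (owner, expiry timestamp or None)

    def setnx(self, name: str, owner: str) -> int:
        """Returns 1 if the key was set (lock won), 0 if it already exists."""
        self._evict_expired(name)
        if name in self._keys:
            return 0
        self._keys[name] = (owner, None)
        return 1

    def expire(self, name: str, ttl_s: float) -> None:
        """Attach an expiration so a crashed holder cannot block others."""
        owner, _ = self._keys[name]
        self._keys[name] = (owner, time.monotonic() + ttl_s)

    def delete(self, name: str) -> None:
        """Release the lock promptly once processing finishes."""
        self._keys.pop(name, None)

    def _evict_expired(self, name: str) -> None:
        entry = self._keys.get(name)
        if entry and entry[1] is not None and entry[1] <= time.monotonic():
            del self._keys[name]

locks = LockStore()
assert locks.setnx("lock:game_a", "server-1") == 1  # server-1 wins the lock
locks.expire("lock:game_a", 0.2)                    # 200 ms validity
assert locks.setnx("lock:game_a", "server-2") == 0  # server-2 loses
locks.delete("lock:game_a")                         # released after processing
assert locks.setnx("lock:game_a", "server-2") == 1  # contendable again
```

Note that issuing SETNX and EXPIRE as two separate commands, as described, leaves a small window if the holder crashes between them; the sketch mirrors the text rather than hardening against that case.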
After acquiring the processing lock of a certain game, the task processing server fetches that game's queue from the data storage module, stores the corresponding data in its own memory, and processes the data to be processed according to the business rules to obtain a processing result. After the processing result is obtained, the task processing server deletes the successfully processed data from the game's queue and releases the corresponding processing lock.
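One scheduling round as described above can be sketched as follows: the server that wins a game's lock drains that game's queue into local memory, forms teams by a business rule, deletes what it matched, and releases the lock. The function name, the plain-dict lock, and the team size of two are illustrative assumptions only.

```python
def run_round(server_id, game_id, queues, held_locks, team_size=2):
    """One round: contend for the game's lock, match, delete, release."""
    if game_id in held_locks:          # another server already holds the lock
        return []
    held_locks[game_id] = server_id    # win the processing lock
    try:
        pending = list(queues.get(game_id, []))       # copy into local memory
        full = len(pending) - len(pending) % team_size
        teams = [pending[i:i + team_size] for i in range(0, full, team_size)]
        matched = [p for team in teams for p in team]
        # delete successfully processed data; unmatched entries stay queued
        queues[game_id] = [p for p in pending if p not in matched]
        return teams
    finally:
        del held_locks[game_id]        # release the processing lock

queues = {"game_a": ["u1", "u2", "u3", "u4", "u5"]}
teams = run_round("server-1", "game_a", queues, {})
assert teams == [["u1", "u2"], ["u3", "u4"]]
assert queues["game_a"] == ["u5"]     # unmatched player stays queued
```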
It should be noted that there are three specific situations when the task processing server processes the data to be processed:
When the task processing servers need to support a new game, they compete in the data storage module for the processing lock of the new game; the lock is named after the game identifier, i.e. the network request data identifier, and the task processing server that wins the new processing lock handles the corresponding data to be processed.
When a task processing server becomes abnormal, the gateway takes it offline; another normal task processing server then acquires the processing lock previously held by the abnormal server and pulls the remaining data to be processed from the data storage module for processing.
When a new task processing server is added, the scheduling server schedules it to compete with the other task processing servers for the processing locks; if the new task processing server wins a processing lock, it processes the corresponding data to be processed.
After the task processing server obtains the processing result, it sends the result to the result processing server asynchronously, for example through a message queue (mq).
The result processing server, on the one hand, sends the processing result to the gateway so that the gateway forwards it to the corresponding terminal and, on the other hand, stores the processing result in the result storage module; preferably the two steps are performed in parallel.
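The asynchronous hand-off above can be sketched with a thread and an in-process queue standing in for the message queue ("mq"): the task processing server only enqueues the result and returns immediately, while delivery and persistence happen on the result-processing side. All names are illustrative, and the two result-side steps are shown sequentially for brevity although the text says they can run in parallel.

```python
import queue
import threading

results_mq = queue.Queue()     # stand-in for the distributed message queue
delivered, stored = [], []

def result_worker():
    """Result processing server: deliver to the terminal and persist."""
    while True:
        result = results_mq.get()
        if result is None:                       # shutdown sentinel
            break
        delivered.append(("terminal", result))   # forward via the gateway
        stored.append(result)                    # write to result storage
        results_mq.task_done()

worker = threading.Thread(target=result_worker)
worker.start()
results_mq.put({"team": ["u1", "u2"]})  # task server enqueues and moves on
results_mq.put(None)
worker.join()
assert stored == [{"team": ["u1", "u2"]}]
```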
And after the task processing server successfully processes the data to be processed and obtains a processing result, removing the successfully processed data to be processed from the data to be processed, and releasing a processing lock.
The result storage module may be implemented with a relational database and stores the final processing result, such as the team-forming result in the game.
In the above embodiments, the task processing servers are not divided into active and standby roles: different task processing servers process different games, so no resources are wasted. When a task processing server becomes abnormal, the other task processing servers obtain the corresponding game's processing lock in the next scheduling round and continue with the previously pending data, while the abnormal server goes offline; no data is lost and no manual intervention is needed. Likewise, during a program upgrade, the other task processing servers obtain the processing lock in the next scheduling round and continue processing the pending data, so the service is not interrupted and no data is lost.
It should be understood that although the steps in the flowcharts of fig. 2, 3 and 4 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 2, 3 and 4 may comprise multiple sub-steps or stages, which need not be completed at the same time and may be performed at different moments; nor need they be performed sequentially, but may be performed in turn or alternately with other steps or with sub-steps of other steps.
In one embodiment, as shown in FIG. 1, a network request data processing system is provided, comprising: a scheduling server 106 and at least one task processing server 107;
the scheduling server 106 is configured to send a scheduling request to the task processing server 107, where the scheduling request carries a network request data identifier;
the task processing server 107 is configured to compete for processing permissions of the to-be-processed data corresponding to the network request data identifier, so that different task processing servers 107 obtain processing permissions of the to-be-processed data corresponding to different network request data identifiers; when the competition obtains the processing right of the data to be processed corresponding to the network request data identifier, acquiring the data to be processed corresponding to the network request data identifier; and processing the data to be processed to obtain a processing result.
In one embodiment, the scheduling server 106 is further configured to resend the scheduling request corresponding to the network request data identifier when the task processing server 107 fails while processing the data to be processed.
The task processing servers 107 that have not failed compete for the processing right of the data to be processed corresponding to the network request data identifier. When a non-failed task processing server 107 wins the processing right of the data to be processed corresponding to the network request data identifier, it obtains that data to be processed and processes it to obtain a processing result.
In one embodiment, the system further includes a newly added task processing server 107, where the newly added task processing server 107 is configured to register with the scheduling server 106;
the scheduling server 106 is further configured to send a scheduling request to all task processing servers 107, where all task processing servers 107 include the newly added task processing server 107.
In one embodiment, the task processing server 107 is configured to compete for a processing lock of the to-be-processed data corresponding to the network request data identifier; and releasing the processing lock after processing the data to be processed to obtain a processing result.
In one embodiment, the task processing server 107 is further configured to obtain the valid time of the processing lock after competing for the processing lock of the data to be processed corresponding to the network request data identifier;
the data storage module 105 is configured to determine whether the valid time of the processing lock is expired; and when the valid time of the processing lock expires, releasing the processing lock.
In one embodiment, the scheduling period is greater than the sum of the effective time of the processing lock and the task processing time; the scheduling server 106 is configured to send scheduling requests to all the task processing servers 107.
In one embodiment, the system further includes a gateway 102 and a request processing server 103, where the gateway 102 is configured to receive a matching request sent by the terminal 101, and the matching request carries a network request data identifier; the matching request is forwarded to the request processing server 103.
The request processing server 103 is configured to obtain data to be processed according to the matching request, and store the data to be processed in the data storage module 105 queue corresponding to the corresponding network request data identifier.
The task processing server 107 is configured to read all the to-be-processed data in the queue of the data storage module 105 corresponding to the network request data identifier from the data storage module 105.
In one embodiment, the task processing server 107 is further configured to asynchronously send the processing result to the result processing server 104 after processing the data to be processed to obtain the processing result, and the result processing server 104 is configured to send the processing result to the corresponding terminal 101 and store the processing result in the result storage module 108.
In one embodiment, the data storage module 105 is a non-relational database, and the processing result is stored in a relational database.
For specific limitations of the network request data processing system, reference may be made to the limitations of the network request data processing method above, which are not repeated here. Each module in the network request data processing system described above may be implemented wholly or partly in software, hardware, or a combination thereof. The modules may be embedded, in hardware form, in or independent of a processor in a computer device, or stored, in software form, in the memory of the computer device, so that the processor can invoke them and perform the operations corresponding to each module.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical storage. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application; their description is relatively specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for processing network request data, the method comprising:
the scheduling server sends a scheduling request to the task processing server, wherein the scheduling request carries a network request data identifier;
at least one task processing server competes for the processing permission of the data to be processed corresponding to the network request data identifier, so that different task processing servers obtain the processing permission of the data to be processed corresponding to different network request data identifiers;
when the task processing server obtains the processing right of the data to be processed corresponding to the network request data identifier, the task processing server obtains the data to be processed corresponding to the network request data identifier;
and the task processing server processes the data to be processed to obtain a processing result.
2. The method of claim 1, further comprising:
when the task processing server fails in processing the data to be processed, the scheduling server resends a scheduling request corresponding to the network request data identifier;
the task processing server which does not have faults competes for the processing authority of the data to be processed corresponding to the network request data identification;
when the task processing server which does not have a fault competes to obtain the processing right of the data to be processed corresponding to the network request data identifier, the task processing server which does not have the fault obtains the data to be processed corresponding to the network request data identifier;
and the task processing server which does not have faults processes the acquired data to be processed to obtain a processing result.
3. The method of claim 1, further comprising:
the newly added task processing server registers to the scheduling server;
the scheduling server sends a scheduling request to the task processing server, and the scheduling request comprises the following steps:
and the scheduling server sends scheduling requests to all the task processing servers, wherein all the task processing servers comprise newly added task processing servers.
4. The method according to any one of claims 1 to 3, wherein the task processing server contends for the processing right of the data to be processed corresponding to the network request data identifier, including:
the task processing server competes for a processing lock of the data to be processed corresponding to the network request data identifier;
after the task processing server processes the data to be processed to obtain a processing result, the method comprises the following steps:
and the task processing server releases the processing lock.
5. The method according to claim 4, wherein after the task processing server contends for the processing lock of the data to be processed corresponding to the network request data identifier, the method further comprises:
the task processing server acquires the effective time of the processing lock;
the data storage module judges whether the effective time of the processing lock is expired;
and when the valid time of the processing lock expires, the data storage module releases the processing lock.
6. The method of claim 5, wherein the scheduling period is greater than a sum of an active time of the processing lock and a task processing time; the scheduling server sends a scheduling request to the task processing server, and the scheduling request comprises the following steps:
and the scheduling server sends scheduling requests to all the task processing servers.
7. The method of claim 1, further comprising:
a gateway receives a matching request sent by a terminal, wherein the matching request carries a network request data identifier;
the gateway forwards the matching request to a request processing server;
the request processing server acquires data to be processed according to the matching request and stores the data to be processed into a data storage module queue corresponding to the corresponding network request data identifier;
the task processing server obtains the data to be processed corresponding to the network request data identifier, and the method comprises the following steps:
and the task processing server reads all the data to be processed in the data storage module queue corresponding to the network request data identifier from the data storage module.
8. The method according to claim 7, wherein after the task processing server processes the data to be processed to obtain a processing result, the method comprises:
and the task processing server asynchronously sends the processing result to a result processing server, and the result processing server is used for sending the processing result to a corresponding terminal and storing the processing result.
9. The method of claim 8, wherein the data storage module is a non-relational database and the processing result is stored in a relational database.
10. A network request data processing system, characterized in that the system comprises a scheduling server and at least one task processing server;
the scheduling server is used for sending a scheduling request to the task processing server, wherein the scheduling request carries a network request data identifier;
the task processing servers are used for competing the processing permission of the data to be processed corresponding to the network request data identification, so that different task processing servers obtain the processing permission of the data to be processed corresponding to different network request data identifications; when the processing right of the data to be processed corresponding to the network request data identification is obtained through competition, the data to be processed corresponding to the network request data identification is obtained; and processing the data to be processed to obtain a processing result.
CN202110790757.5A 2021-07-13 2021-07-13 Network request data processing method and system Pending CN113590314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110790757.5A CN113590314A (en) 2021-07-13 2021-07-13 Network request data processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110790757.5A CN113590314A (en) 2021-07-13 2021-07-13 Network request data processing method and system

Publications (1)

Publication Number Publication Date
CN113590314A true CN113590314A (en) 2021-11-02

Family

ID=78247209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110790757.5A Pending CN113590314A (en) 2021-07-13 2021-07-13 Network request data processing method and system

Country Status (1)

Country Link
CN (1) CN113590314A (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070250849A1 (en) * 2006-04-07 2007-10-25 Advance A/S Method and device for media quiz
CN101317163A (en) * 2005-11-30 2008-12-03 国际商业机器公司 Non-stop transaction processing system
CN103092682A (en) * 2011-10-28 2013-05-08 浙江大华技术股份有限公司 Asynchronous network application program processing method
CN105260238A (en) * 2015-10-13 2016-01-20 珠海许继芝电网自动化有限公司 Multi-process performance improvement deployment method
CN105765547A (en) * 2013-10-25 2016-07-13 超威半导体公司 Method and apparatus for performing a bus lock and translation lookaside buffer invalidation
CN106874094A (en) * 2017-02-17 2017-06-20 广州爱九游信息技术有限公司 timed task processing method, device and computing device
CN108132830A (en) * 2016-12-01 2018-06-08 北京金山云网络技术有限公司 A kind of method for scheduling task, apparatus and system
CN109558223A (en) * 2018-10-11 2019-04-02 珠海许继芝电网自动化有限公司 A kind of multi-process promotes workflow dispositions method and system
CN109885624A (en) * 2019-01-23 2019-06-14 金蝶软件(中国)有限公司 Data processing method, device, computer equipment and storage medium
CN110990200A (en) * 2019-11-26 2020-04-10 苏宁云计算有限公司 Flow switching method and device based on multi-activity data center
CN111541762A (en) * 2020-04-20 2020-08-14 广州酷狗计算机科技有限公司 Data processing method, management server, device and storage medium
CN111782360A (en) * 2020-06-28 2020-10-16 中国工商银行股份有限公司 Distributed task scheduling method and device
CN111835809A (en) * 2019-09-23 2020-10-27 北京嘀嘀无限科技发展有限公司 Work order message distribution method, work order message distribution device, server and storage medium
CN112148447A (en) * 2020-09-22 2020-12-29 京东数字科技控股股份有限公司 Task processing method and system based on risk control and electronic equipment
CN112861346A (en) * 2021-02-07 2021-05-28 北京润尼尔网络科技有限公司 Data processing system, method and electronic equipment


Similar Documents

Publication Publication Date Title
US11159649B2 (en) Systems and methods of rate limiting for a representational state transfer (REST) application programming interface (API)
CN108762931A (en) Method for scheduling task, server based on distributed scheduling system and storage medium
US11729007B2 (en) Methods and apparatus to manage timing in a blockchain network
CN111447102B (en) SDN network device access method and device, computer device and storage medium
CN109918187B (en) Task scheduling method, device, equipment and storage medium
CN111711697A (en) Message pushing method, device, equipment and storage medium
CN107423942B (en) Service transfer method and device
CN111818117A (en) Data updating method and device, storage medium and electronic equipment
CN110673933A (en) ZooKeeper-based distributed asynchronous queue implementation method, device, equipment and medium
CN111698126B (en) Information monitoring method, system and computer readable storage medium
CN111541762B (en) Data processing method, management server, device and storage medium
CN111385294B (en) Data processing method, system, computer device and storage medium
US8359601B2 (en) Data processing method, cluster system, and data processing program
CN109766317B (en) File deletion method, device, equipment and storage medium
CN113301390B (en) Data processing method, device and server for calling virtual resources
CN111835809B (en) Work order message distribution method, work order message distribution device, server and storage medium
CN112448883A (en) Message pushing method and device, computer equipment and storage medium
CN113590314A (en) Network request data processing method and system
CN116701020A (en) Message delay processing method, device, equipment, medium and program product
CN115563160A (en) Data processing method, data processing device, computer equipment and computer readable storage medium
CN112671636A (en) Group message pushing method and device, computer equipment and storage medium
CN111953621A (en) Data transmission method and device, computer equipment and storage medium
CN107704557B (en) Processing method and device for operating mutually exclusive data, computer equipment and storage medium
CN114780217B (en) Task scheduling method, device, computer equipment and medium
CN111163117A (en) Zookeeper-based peer-to-peer scheduling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211102