CN111314434B - Request processing method and server - Google Patents

Request processing method and server

Info

Publication number
CN111314434B
CN111314434B (application CN202010064448.5A)
Authority
CN
China
Prior art keywords
user request
request data
data
user
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010064448.5A
Other languages
Chinese (zh)
Other versions
CN111314434A (en)
Inventor
刘东阳
高传集
于沈课
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Cloud Information Technology Co Ltd
Original Assignee
Inspur Cloud Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Cloud Information Technology Co Ltd filed Critical Inspur Cloud Information Technology Co Ltd
Priority to CN202010064448.5A priority Critical patent/CN111314434B/en
Publication of CN111314434A publication Critical patent/CN111314434A/en
Application granted granted Critical
Publication of CN111314434B publication Critical patent/CN111314434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/60 Software deployment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system

Abstract

The invention provides a request processing method and a server. The method comprises the following steps: receiving at least one user request from at least one client; for each received user request, acquiring the request path corresponding to the user request, and combining the user request and the request path into user request data; storing each piece of user request data into a preset buffer space; reading at least two pieces of user request data from the buffer space in sequence, and determining at least one data packet according to the request path included in the user request data; for each data packet, distributing each piece of user request data included in the data packet to a processor group according to the target request path carried by the user request data in the data packet; and for each processor group, processing, by at least one processor included in the processor group, each piece of user request data included in the data packet distributed to the processor group. The scheme can reduce the cost of processing highly concurrent requests.

Description

Request processing method and server
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a request processing method and a server.
Background
Internet technology is advancing rapidly, and internet applications bring convenience to every aspect of our lives. At the same time, people's dependence on internet applications poses enormous challenges to those applications. With the explosive growth of internet data, handling highly concurrent requests is often a bottleneck, and a key difficulty, for developers and designers.
At present, the processing of highly concurrent requests is generally implemented by combining components and services in a distributed manner. This requires deploying a variety of components as part of the service: requests are split and distributed to different nodes through a reverse proxy server, and the service application is decomposed into microservices, so that requests are shunted and the pressure on the application is reduced.
As the above description shows, the prior-art approach of combining components and services in a distributed manner requires large-scale deployment of components, which makes processing highly concurrent requests costly.
Disclosure of Invention
Embodiments of the invention provide a request processing method and a server, which can reduce the cost of processing highly concurrent requests.
In a first aspect, the present invention provides a request processing method, applied to a server, including:
receiving at least one user request from at least one client;
for each received user request, acquiring a request path corresponding to the user request, and combining the user request with the request path to obtain user request data;
respectively storing the user request data into a preset buffer space;
reading at least two user request data from the buffer space in sequence, and determining at least one data packet according to the request path included by the user request data, wherein the same data packet includes at least two user request data carrying the same request path, and the request paths carried by the user request data in different data packets are different;
for each data packet, according to a target request path carried by each user request data in the data packet, distributing each user request data included in the data packet to a processor group subscribing the target request path;
for each of the processor groups, processing, by at least one processor included in the processor group, each of the user request data included in the data packet distributed to the processor group.
Preferably,
the storing each user request data to a preset buffer space respectively includes:
s0: storing each of the user request data into a buffer, wherein the buffer space comprises the buffer and the memory;
s1: judging whether the time length for backing up the buffer zone for the last time reaches a preset backup period, if so, executing S2, otherwise, executing S3;
s2: backing up at least one user request data stored in the buffer area to the memory to form a backup file containing at least one user request data, and executing S1;
s3: and detecting whether the data capacity of the user request data stored in the buffer memory reaches a preset capacity threshold, if so, executing S2, otherwise, executing S1.
Preferably,
the sequentially reading at least two user request data from the buffer space includes:
when at least one backup file is stored in the memory, reading first the backup file that was generated earliest, according to the generation time of each backup file.
Preferably,
after the S0, further comprising:
determining at least two partition marks;
adding one of the partition marks to each piece of user request data, so that the difference between the numbers of user request data bearing different partition marks is smaller than a preset number threshold;
for each processor group, processing, by at least one processor included in the processor group, each user request data included in the data packet distributed to the processor group, including:
and for each piece of user request data in each processor group, distributing the user request data, according to its partition mark, to the processor responsible for the corresponding partition for processing.
Preferably,
after the S0, further comprising:
adding a position mark to each piece of user request data, wherein different pieces of user request data have different position marks;
detecting whether the processing of the user request data is stopped;
if the processing of the user request data has stopped, recording the position mark of the last processed user request data, so that the next time processing of user request data starts, it begins from the user request data immediately following the recorded position mark.
In a second aspect, the present invention provides a server, including:
a receiving module for receiving at least one user request from at least one client;
a first processing module, configured to, for each user request received by the receiving module, obtain a request path corresponding to the user request, and combine the user request and the request path to obtain user request data;
the storage module is used for respectively storing the user request data acquired by the first processing module into a preset buffer space;
a first determining module, configured to read at least two pieces of user request data stored by the storage module from the buffer space in sequence, and determine at least one data packet according to the request path included in the user request data, where the same data packet includes at least two pieces of user request data carrying the same request path, and the request paths carried by the user request data in different data packets are different;
a distribution module, configured to, for each data packet determined by the first determination module, distribute, according to a target request path carried by each user request data in the data packet, each user request data included in the data packet to a processor group subscribed to the target request path;
and a second processing module, configured to, for each processor group, process, by using at least one processor included in the processor group, each user request data included in the data packet distributed to the processor group by the distribution module.
Preferably,
the storage module is used for executing:
s0: storing each of the user request data into a buffer, wherein the buffer space comprises the buffer and the memory;
s1: judging whether the time length for backing up the buffer zone for the last time reaches a preset backup period, if so, executing S2, otherwise, executing S3;
s2: backing up at least one user request data stored in the buffer area to the memory to form a backup file including at least one user request data, and executing S1;
s3: and detecting whether the data capacity of the user request data stored in the buffer memory reaches a preset capacity threshold, if so, executing S2, otherwise, executing S1.
Preferably,
the first determining module, when performing reading at least two user request data stored by the storing module from the buffer space in sequence, is configured to:
and when at least one backup file is stored in the memory, reading first the backup file that was generated earliest, according to the generation time of each backup file.
Preferably,
further comprising:
a second determining module for determining at least two partition markers;
the partition marking module is used for adding one partition mark determined by the second determining module to each piece of user request data respectively, so that the difference between the numbers of the user request data added with different partition marks is smaller than a preset number threshold;
the second processing module is configured to perform:
and for each user request data in each processor group, distributing each user request data to the processor responsible for the corresponding partition for processing according to the partition mark corresponding to the user request data.
Preferably,
further comprising:
a position marking module, configured to add a position mark to each piece of user request data stored in the buffer space by the storage module, wherein different pieces of user request data have different position marks;
the detection module is used for detecting whether the processing of the user request data is stopped or not;
a recording module, configured to record the position mark of the last processed user request data if the detection module detects that processing of user request data has stopped, so that the next time processing starts, it begins from the user request data immediately following the recorded position mark.
In order to handle highly concurrent application requests, embodiments of the invention provide a request processing method and a server: the acquired user request data is first stored in a preset buffer space and then read from the buffer space in sequence. Each processor group comprises a plurality of processors, and each processor performs request processing according to the request-path grouping of the user request data. Since every user request sent by a client and received by the server carries a corresponding request path, the user request and its request path can be combined into user request data, and the combined user request data can be grouped by request path, so that the same data packet contains at least two pieces of user request data carrying the same request path; the processors then process the user request data of the groups they are responsible for. In this way, the server side avoids being overloaded when user requests are highly concurrent without deploying a large number of application components, which reduces the cost of processing highly concurrent requests.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a request processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another request processing method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a server according to an embodiment of the present invention;
fig. 4 is a schematic diagram of another server according to an embodiment of the present invention;
fig. 5 is a schematic diagram of another server according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments will be described below with reference to the drawings. Obviously, the described embodiments are some, but not all, embodiments of the present invention; all other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a request processing method, which may include the following steps:
step 101: receiving at least one user request from at least one client;
step 102: aiming at each received user request, acquiring a request path corresponding to the user request, and combining the user request and the request path to obtain user request data;
step 103: respectively storing the data requested by each user into a preset buffer space;
step 104: reading at least two user request data from the buffer space in sequence, and determining at least one data packet according to a request path included by the user request data, wherein the same data packet includes at least two user request data carrying the same request path, and the request paths carried by the user request data in different data packets are different;
step 105: aiming at each data packet, according to a target request path carried by each user request data in the data packet, each user request data included in the data packet is distributed to a processor group subscribed with the target request path;
step 106: for each processor group, processing, by at least one processor included in the processor group, respective user request data included in data packets distributed to the processor group.
In the embodiment of the present invention, in order to handle highly concurrent application requests, the acquired user request data is first stored in a preset buffer space, and the user request data is then read from the buffer space in sequence. Each processor group comprises a plurality of processors, and each processor performs request processing according to the request-path grouping of the user request data. Because every user request sent by a client and received by the server carries a corresponding request path, the user request and its request path can be combined into user request data, and the combined user request data can be grouped by request path, so that the same data packet contains at least two pieces of user request data carrying the same request path; each processor then processes the user request data of the group it is responsible for. In this way the server avoids excessive load when user requests are highly concurrent without deploying a large number of application components, thereby reducing the cost that would otherwise be wasted on such deployments under high concurrency.
In order to detect the state of the buffer area to determine whether a backup operation of user request data is required, in an embodiment of the present invention, the step of storing each user request data in a preset buffer space in the above embodiment may be specifically implemented by the following method:
s0: storing each user request data into a buffer area, wherein the buffer space comprises the buffer area and a memory;
s1: judging whether the time length from the last backup of the buffer zone reaches a preset backup period, if so, executing S2, otherwise, executing S3;
s2: backing up the at least one user request data stored in the buffer area to the memory to form a backup file including the at least one user request data, and performing S1;
s3: and detecting whether the data capacity of the user request data stored in the buffer reaches a preset capacity threshold, if so, executing S2, otherwise, executing S1.
In the embodiment of the present invention, in order to respond to highly concurrent application requests, the acquired user request data is first stored in the buffer of the buffer space. When the data volume of the user request data stored in the buffer reaches a preset capacity threshold, the buffered user request data is backed up to the memory to form a backup file containing at least one piece of user request data, so that user requests can still be processed in time under high concurrency. Likewise, when the backup period of the buffer is reached, the server backs up the buffer's user request data to the memory. Whether a backup is needed can therefore be judged from either the backup period of the buffer or the data volume of the user request data stored in the buffer.
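A minimal sketch of the dual backup trigger described above (backup period elapsed, or capacity threshold reached). The class layout and the in-memory list standing in for backup files on disk are assumptions for illustration, not the patent's implementation.

```python
import time

class Buffer:
    """Sketch of S0-S3: back up when the backup period elapses
    or the buffered data reaches a capacity threshold."""

    def __init__(self, backup_period_s, capacity_threshold):
        self.backup_period_s = backup_period_s
        self.capacity_threshold = capacity_threshold
        self.items = []
        self.backups = []          # stands in for backup files in the "memory"
        self.last_backup = time.monotonic()

    def store(self, item):
        self.items.append(item)    # S0: store into the buffer
        self._maybe_backup()

    def _maybe_backup(self):
        # S1: has the backup period elapsed since the last backup?
        period_elapsed = (time.monotonic() - self.last_backup
                          >= self.backup_period_s)
        # S3: has the buffered data volume reached the threshold?
        capacity_hit = len(self.items) >= self.capacity_threshold
        if period_elapsed or capacity_hit:
            # S2: back the buffer up as one file and clear it.
            self.backups.append(list(self.items))
            self.items.clear()
            self.last_backup = time.monotonic()

buf = Buffer(backup_period_s=3600, capacity_threshold=3)
for i in range(4):
    buf.store(f"req{i}")
```

With a threshold of 3, the fourth store leaves one item buffered and one backup file of three items.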
In order to read user request data so as to process a user request in time, in an embodiment of the present invention, the step of reading at least two user request data from the buffer space in sequence in the above embodiment may be specifically implemented by the following manner:
when at least one backup file is stored in the memory, reading the backup file generated firstly according to the generation time of the backup file.
In the embodiment of the invention, since user request data reaches the memory only by being backed up from the buffer, and is read from the memory, the read operation can be performed once at least one backup file is stored in the memory. The backup file generated first is read first, according to the generation times of the backup files, so that highly concurrent requests are processed in time.
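The read order described above (oldest backup file first, by generation time) can be sketched as below; representing each backup file as a (generation_time, contents) tuple is an assumed layout for illustration.

```python
def read_earliest_backup(backup_files):
    # Pick the backup file generated first by its generation time,
    # so older buffered requests are processed before newer ones.
    if not backup_files:
        return None
    return min(backup_files, key=lambda f: f[0])[1]

# Generation times are deliberately out of order in the list.
files = [(30, ["req-c"]), (10, ["req-a"]), (20, ["req-b"])]
earliest = read_earliest_backup(files)
```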
In order to facilitate the processor to process different user request data and split the user request, in an embodiment of the present invention, after step S0, the foregoing embodiment further includes the following steps:
determining at least two partition marks;
adding one of the partition marks to each piece of user request data, so that the difference between the numbers of user request data bearing different partition marks is smaller than a preset number threshold;
for each processor group, processing, by at least one processor included in the processor group, each user request data included in a data packet distributed to the processor group, including:
and aiming at each user request data in each processor group, distributing each user request data to a processor which is responsible for a corresponding partition for processing according to a partition mark corresponding to the user request data.
In the embodiment of the invention, under highly concurrent application requests, the same group of user request data in the buffer needs to be partitioned so as to distribute the user requests and reduce the pressure that highly concurrent requests place on the server. At least two partition marks are determined, and one partition mark is added to each piece of user request data, so that the difference between the numbers of user request data bearing different partition marks is smaller than a preset number threshold; that is, the user request data is divided evenly among the partitions, and the user requests are thereby distributed.
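One simple way to keep the per-partition counts within a small threshold of each other, as required above, is round-robin assignment. This is an illustrative choice: the patent requires only the balance condition, not a concrete assignment rule, and the function name is an assumption.

```python
def add_partition_marks(user_request_data, n_partitions):
    # Round-robin assignment keeps the per-partition counts within 1
    # of each other, satisfying the "difference smaller than a preset
    # number threshold" condition for any threshold >= 2.
    for i, item in enumerate(user_request_data):
        item["partition"] = i % n_partitions
    return user_request_data

data = [{"request": f"req{i}"} for i in range(7)]
add_partition_marks(data, n_partitions=3)
counts = [sum(1 for d in data if d["partition"] == p) for p in range(3)]
```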
In order to avoid repeated processing or missing processing on the user request, in an embodiment of the present invention, after step S0, the above embodiment further includes the following steps:
adding a position mark to each piece of user request data, wherein different pieces of user request data have different position marks;
detecting whether the processing of the user request data is stopped;
if the processing of the user request data has stopped, the position mark of the last processed user request data is recorded, so that the next time processing starts, it begins from the user request data immediately following the recorded position mark.
In the embodiment of the invention, detecting whether the processing of user request data has stopped determines whether the next piece of user request data can be processed. A position mark is added to each piece of user request data, and the position mark of the last processed piece is recorded, so that when processing next starts, the user request data immediately following the recorded position mark is taken as the starting position; this avoids processing a user request twice or missing one.
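The position-mark bookkeeping described above can be sketched as follows; the class and method names, and the "pos" field, are assumptions for illustration.

```python
class PositionTracker:
    """Record the position mark of the last processed item so that
    processing can resume just after it, avoiding duplicates and gaps."""

    def __init__(self):
        self.last_processed = -1   # no item processed yet

    def process(self, items):
        for item in items:
            if item["pos"] <= self.last_processed:
                continue           # already handled: skip to avoid re-processing
            # ... handle the user request here ...
            self.last_processed = item["pos"]

items = [{"pos": i, "request": f"req{i}"} for i in range(5)]
tracker = PositionTracker()
tracker.process(items[:3])         # processing stops after pos 2
resumed = [i for i in items if i["pos"] > tracker.last_processed]
tracker.process(items)             # restart: pos 0-2 are skipped
```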
At present, the processing of highly concurrent user requests is generally implemented by combining components and services in a distributed manner. This approach requires deploying a variety of components as part of the service: user requests are split and distributed to different nodes through a reverse proxy server such as Nginx, the service application is decomposed into microservices, or the application is placed into containers using a virtualization technology such as Kubernetes. These measures rely on large-scale component and application deployment to shunt user requests and reduce the pressure on the application. For some highly concurrent requests this scheme is well designed and uses hardware resources efficiently, and it works well for special high-concurrency scenarios such as e-commerce flash sales or Spring Festival travel-rush ticket grabbing. Such scenarios usually require the application to respond to the user's request in time: the user needs to know at once whether the flash-sale purchase or ticket grab succeeded. For other scenarios that do not require timely feedback, however, such as a merchant user reserving goods or initiating an audit process, running a large-scale component deployment under high concurrency wastes a certain amount of resources.
For such scenarios, high-load requests can be handled in a publish-subscribe manner: user request data is placed into a buffer, and the buffered user request data is distributed to processor groups for processing. This application-software design pattern greatly reduces the application's dependence on load-balancing components. In this way a single application node can bear highly concurrent requests, which effectively reduces the number of cluster nodes the application needs, improves the resource utilization of a single node, removes the need to deploy large clusters to distribute user requests, and reduces deployment, operation and maintenance costs.
As shown in fig. 2, in order to more clearly illustrate the technical solution and advantages of the present invention, the following detailed description of the request processing method provided by the present invention may specifically include the following steps:
step 201: receiving at least one user request from at least one client;
step 202: aiming at each received user request, acquiring a request path corresponding to the user request, and combining the user request and the request path to acquire user request data;
specifically, after receiving at least one user request sent by at least one client, a server can intercept all requests through a request interceptor, and after the request interception, because each user request carries a corresponding request path, the server can perform grouping according to the request path corresponding to each user request to generate user request data, then convert the user request data into a uniform format and compress the user request data, and the compressed user request data carries grouping information thereof and can be distributed into a buffer in batches through a message queue to distribute high-concurrency user requests.
Step 203: storing each user request data into a buffer area, wherein the buffer space comprises the buffer area and a memory;
step 204: determining at least two partition marks;
step 205: adding a partition mark for each user request data respectively, so that the difference of the number of the user request data added with different partition marks is smaller than a preset number threshold;
step 206: adding a position mark for each user request data, wherein the position marks of different user data are different;
step 207: detecting whether the processing of the user request data is stopped;
step 208: if the processing of the user request data has been stopped, the position mark of the last processed user request data is recorded to process the user request data next to the recorded position mark as the start position when the processing of the user request data is started next time.
Specifically, the coordinator may monitor user-request-data insertion operations on the buffer, partition the same group of user request data in the buffer into n partitions, and mark each partition. Meanwhile, the position of each piece of user request data can be marked; the coordinator, running as an independent thread, is responsible for recording the position mark of the last processed user request data.
Step 209: judging whether the time elapsed since the buffer was last backed up reaches the preset backup period; if so, executing step 210; otherwise, executing step 211.
Step 210: backing up the at least one piece of user request data stored in the buffer to the memory to form a backup file including the at least one piece of user request data, and executing step 209.
Step 211: detecting whether the data volume of the user request data stored in the buffer reaches the preset capacity threshold; if so, executing step 210; otherwise, executing step 209.
Specifically, the buffer is mainly responsible for caching the user request data. This component is an area in memory whose size is settable. If the data volume of the user request data reaches the preset capacity threshold, for example 100 pieces, the backup device is triggered and the user request data is stored to storage, for example a hard disk. The buffer is also recycled once per backup period, at which point all the user request data in the area is stored to the hard disk.
Step 212: when at least one backup file is stored in the memory, reading first the backup file that was generated earliest according to the generation times of the backup files, and determining at least one data packet according to the request path included in the user request data, wherein the same data packet comprises at least two pieces of user request data carrying the same request path, and the request paths carried by the user request data in different data packets are different;
specifically, the backup file can be read by a restorer, the component restores user request data from the hard disk according to the requirement of the coordinator, groups the user request data according to a request path, finally delivers the user request data to the distributor, and feeds back the user request data processing completion to the coordinator, so as to record the position mark of the last processed user request data, and when the user request data is processed next time, processes the next user request data of the recorded position mark as the initial position.
Step 213: for each data packet, distributing, according to the target request path carried by each piece of user request data in the data packet, each piece of user request data included in the data packet to the processor group subscribing to the target request path;
specifically, the distributor delivers the user request data, grouped by request path, to the processor groups that subscribe to the corresponding groups.
Step 214: for each user request data in each processor group, distributing each user request data, according to its partition mark, to the processor responsible for the corresponding partition for processing.
Specifically, each processor group is composed of a plurality of processors. A processor group subscribes, by request group, to the user request data published by the interceptor. After receiving the data of its subscribed group, the processor group distributes the data to different processors according to the partition marks. The number of processors in a processor group should be less than or equal to the number of partitions, n.
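The subscribe-and-dispatch flow of steps 213–214 can be sketched as follows. It is an illustrative sketch: the patent does not specify how a processor is picked when a group has fewer processors than partitions, so `partition % num_processors` is assumed here, and the class names are invented.

```python
class ProcessorGroup:
    """A group of processors subscribed to one request path; user request
    data is routed to a processor by its partition mark (sketch)."""

    def __init__(self, name, num_processors, num_partitions):
        # The processor count must not exceed the partition count n.
        assert num_processors <= num_partitions
        self.name = name
        self.num_partitions = num_partitions
        self.processors = [[] for _ in range(num_processors)]

    def dispatch(self, req_data):
        # Assumption: a processor may own several partitions, selected
        # by taking the partition mark modulo the processor count.
        idx = req_data["partition"] % len(self.processors)
        self.processors[idx].append(req_data)

class Distributor:
    """Routes whole data packets to the group subscribing to that path."""

    def __init__(self):
        self.subscriptions = {}        # request path -> ProcessorGroup

    def subscribe(self, path, group):
        self.subscriptions[path] = group

    def distribute(self, path, packet):
        group = self.subscriptions[path]
        for req_data in packet:
            group.dispatch(req_data)
```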
When the application is shut down, the coordinator stores the position of the last processed user request data on the hard disk; when the application starts next time, the coordinator reads the recorded position mark and resumes processing of the user request data from that position.
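The coordinator's checkpointing described above can be sketched like this; the JSON checkpoint file layout and method names are assumptions made for illustration only.

```python
import json
import os

class Coordinator:
    """Persists the position mark of the last processed user request data
    on shutdown so processing resumes from the next record on restart
    (sketch; file format assumed)."""

    def __init__(self, checkpoint_file):
        self.checkpoint_file = checkpoint_file
        self.offset = -1                       # -1 means nothing processed yet
        if os.path.exists(checkpoint_file):   # restart: reuse recorded mark
            with open(checkpoint_file) as f:
                self.offset = json.load(f)["offset"]

    def mark_processed(self, offset):
        self.offset = offset

    def shutdown(self):
        # Store the last position on the hard disk when the application closes.
        with open(self.checkpoint_file, "w") as f:
            json.dump({"offset": self.offset}, f)

    def next_start(self):
        # Processing continues from the record after the recorded mark.
        return self.offset + 1
```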
In one embodiment of the invention, the client sends a user request request0. The request interceptor intercepts request0 and, according to its request path url1, assigns it to request group url1. The interceptor converts the user request into a specified format req0, compresses it into compressReq0, and attaches the grouping information url1 to form the user request data reqData0, which is published to the buffer.
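The interceptor's convert–compress–wrap step can be sketched as below. zlib and the dictionary layout are stand-in assumptions; the patent specifies neither the serialization format nor the compression codec.

```python
import zlib

def intercept(request_body: bytes, request_path: str, seq: int) -> dict:
    """Sketch of the request interceptor: compress the formatted user
    request and attach the request path as grouping information."""
    compressed = zlib.compress(request_body)   # compressReq<seq>
    return {
        "id": f"reqData{seq}",
        "path": request_path,                  # grouping information, e.g. 'url1'
        "payload": compressed,
    }

def unpack(req_data: dict) -> bytes:
    # The processor later extracts and decompresses the payload before
    # handling it according to the service content.
    return zlib.decompress(req_data["payload"])
```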
After the buffer receives the user request data reqData0, the coordinator detects the event that new user request data has been added to the buffer, obtains reqData0, assigns it the partition mark partition2, and records its position mark offset-1000.
When the user request data stored in the buffer reaches a preset capacity threshold (for example, 100 records) or the buffer backup period (for example, 30 minutes) elapses, the backup action of the backup device is triggered and the user request data reqData0 is stored on the hard disk in backup file block-100.
After receiving feedback from the restorer that the previous batch of user request data has been processed, the coordinator starts reading data from block-100, where the user request data reqData0 is located. The coordinator sends a processing instruction to the restorer; the restorer reads block-100 and parses the data from the specified offset. After parsing reaches reqData0, it obtains the group url1 of reqData0, places reqData0 into the data packet url1, and sends url1 to the distributor.
The distributor receives the data packet url1. Since handlerGroup1 subscribes to url1, the data packet url1 is distributed to handlerGroup1.
After the processor group handlerGroup1 obtains the data packet url1, it hands the data in the group to different processors according to their partitions. The user request data reqData0 is handed to the processor handler2 according to its partition mark partition2; handler2 receives reqData0, extracts compressReq0, decompresses it, and processes it according to the service content.
As shown in fig. 3, an embodiment of the present invention provides a server, including:
a receiving module 301, configured to receive at least one user request from at least one client;
a first processing module 302, configured to, for each user request received by the receiving module 301, obtain a request path corresponding to the user request, and combine the user request and the request path to obtain user request data;
a storage module 303, configured to store each user request data obtained by the first processing module 302 into a preset buffer space;
a first determining module 304, configured to read at least two user request data stored by the storage module from the buffer space in sequence, and determine at least one data packet according to a request path included in the user request data, where the same data packet includes at least two user request data carrying the same request path, and request paths carried by the user request data in different data packets are different;
a distributing module 305, configured to, for each data packet determined by the first determining module 304, distribute, according to a target request path carried by each user request data in the data packet, each user request data included in the data packet to a processor group subscribing to the target request path;
a second processing module 306, configured to, for each processor group, process, by using at least one processor included in the processor group, each user request data included in the data packet distributed to the processor group by the distribution module 305.
In the embodiment of the present invention, to handle highly concurrent application requests, the acquired user request data is first stored in the preset buffer space through the storage module and then sequentially read from the buffer space through the first determining module. Each user request sent by a client carries a corresponding request path and is received by the server through the receiving module; the first processing module combines each user request with its request path into user request data. The user request data is then grouped by request path, such that the same data packet includes at least two user request data carrying the same request path. The distribution module distributes each data packet to the processor group subscribing to the target request path, and the second processing module processes each user request data using the processors of the responsible group, each processor group comprising a plurality of processors. In this way, excessive server-side load under highly concurrent user requests is avoided without deploying a large number of application components, which reduces the cost of handling high-concurrency requests.
In an embodiment of the present invention, the storage module 303 is configured to perform:
S0: storing each user request data into a buffer area, wherein the buffer space comprises the buffer area and a memory;
S1: judging whether the time elapsed since the last backup of the buffer area reaches a preset backup period; if so, executing S2, otherwise, executing S3;
S2: backing up the at least one user request data stored in the buffer area to the memory to form a backup file including the at least one user request data, and performing S1;
S3: detecting whether the data capacity of the user request data stored in the buffer area reaches a preset capacity threshold; if so, executing S2, otherwise, executing S1.
In an embodiment of the present invention, the first determining module 304, when sequentially reading at least two user request data stored by the storage module from the buffer space, is configured to:
when at least one backup file is stored in the memory, reading the backup file generated firstly according to the generation time of the backup file.
Based on the server shown in fig. 3, as shown in fig. 4, in an embodiment of the present invention, the server further includes:
a second determining module 307 for determining at least two partition marks;
a partition marking module 308, configured to add a partition mark determined by the second determining module 307 to each user request data, respectively, so that a difference between the numbers of user request data to which different partition marks are added is smaller than a preset number threshold;
a second processing module 306, configured to perform:
for each user request data in each processor group, distributing each user request data, according to its partition mark, to the processor responsible for the corresponding partition for processing.
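One simple way to satisfy the balance condition of the partition marking module (the difference between counts of user request data bearing different partition marks stays below a preset threshold) is round-robin marking. The sketch below assumes integer partition marks and dict-shaped request records; the patent does not mandate this scheme.

```python
import itertools
from collections import Counter

def assign_partitions(request_data_list, num_partitions):
    """Round-robin partition marking: cycling through the marks keeps the
    per-partition counts within one of each other, which satisfies any
    preset number threshold of at least 2 (sketch)."""
    marks = itertools.cycle(range(num_partitions))
    for req_data, mark in zip(request_data_list, marks):
        req_data["partition"] = mark
    # Return the resulting per-partition counts for inspection.
    return Counter(r["partition"] for r in request_data_list)
```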
Based on the server shown in fig. 3, as shown in fig. 5, in an embodiment of the present invention, the server further includes:
a location marking module 309, configured to add a location mark to each user request data stored in the buffer space by the storage module, where the location marks of different user request data are different;
a detecting module 310, configured to detect whether processing of user request data has stopped;
a recording module 311, configured to, if the detection module detects that processing of the user request data has stopped, record the position mark of the last processed user request data, so that when processing of the user request data is started next time, the user request data following the recorded position mark is processed as the starting position.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the server. In other embodiments of the invention, the server may include more or fewer components than shown, or combine certain components, or split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Because the information interaction, execution process, and other contents between the units in the device are based on the same concept as the method embodiment of the present invention, specific contents may refer to the description in the method embodiment of the present invention, and are not described herein again.
The embodiment of the invention also provides a server, which comprises: at least one memory and at least one processor;
the at least one memory to store a machine readable program;
the at least one processor is configured to invoke the machine-readable program to perform the request processing method in any embodiment of the present invention.
Embodiments of the present invention further provide a computer-readable medium, where computer instructions are stored on the computer-readable medium, and when executed by a processor, the computer instructions cause the processor to execute the request processing method in any embodiment of the present invention. Specifically, a system or an apparatus equipped with a storage medium on which software program codes that realize the functions of any of the above-described embodiments are stored may be provided, and a computer (or a CPU or MPU) of the system or the apparatus is caused to read out and execute the program codes stored in the storage medium.
In this case, the program code itself read from the storage medium can realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code constitute a part of the present invention.
Examples of the storage medium for supplying the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer over a communications network.
Further, it should be clear that the functions of any one of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform a part or all of the actual operations based on instructions of the program code.
Further, it is to be understood that the program code read out from the storage medium is written to a memory provided in an expansion board inserted into the computer or to a memory provided in an expansion unit connected to the computer, and then causes a CPU or the like mounted on the expansion board or the expansion unit to perform part or all of the actual operations based on instructions of the program code, thereby realizing the functions of any of the above-described embodiments.
The embodiments of the invention have at least the following beneficial effects:
1. In an embodiment of the present invention, to handle highly concurrent application requests, the acquired user request data is first stored in a preset buffer space and then sequentially read from the buffer space. Each user request sent by a client carries a corresponding request path, so the server can combine each user request with its request path into user request data and group the user request data by request path, such that the same data packet includes at least two user request data carrying the same request path; each processor then processes the user request data of the group it is responsible for, each processor group comprising a plurality of processors. In this way, excessive server-side load under highly concurrent user requests is avoided without deploying a large number of application components, which reduces the cost of handling high-concurrency requests.
2. In an embodiment of the present invention, to handle high-concurrency application requests, the acquired user request data is first stored in the buffer area of the buffer space. When the data capacity of the user request data stored in the buffer area reaches a preset capacity threshold, the user request data is backed up to the memory to form a backup file including at least one user request data, so that user requests can be processed in time under high concurrency. Likewise, when the backup period of the buffer area is reached, the server backs up the user request data of the buffer area to the memory. Whether the user request data needs to be backed up can therefore be judged either by the backup period of the buffer area or by the data capacity of the user request data stored in the buffer area.
3. In the embodiment of the invention, because user request data can only reach the memory by being backed up from the buffer area, and is read from the memory, the reading operation can be performed whenever at least one backup file is stored in the memory. According to the generation times of the backup files, the earliest generated backup file is read first, so that high-concurrency requests are processed in time.
It should be noted that not all steps and modules in the above flows and system structure diagrams are necessary, and some steps or modules may be omitted according to actual needs. The execution order of the steps is not fixed and can be adjusted as required. The system structure described in the above embodiments may be a physical structure or a logical structure, that is, some modules may be implemented by the same physical entity, or some modules may be implemented by a plurality of physical entities, or some components in a plurality of independent devices may be implemented together.
In the above embodiments, the hardware unit may be implemented mechanically or electrically. For example, a hardware element may comprise permanently dedicated circuitry or logic (such as a dedicated processor, FPGA or ASIC) to perform the corresponding operations. The hardware elements may also comprise programmable logic or circuitry, such as a general purpose processor or other programmable processor, that may be temporarily configured by software to perform the corresponding operations. The specific implementation (mechanical, or dedicated permanent, or temporarily set) may be determined based on cost and time considerations.
While the invention has been shown and described in detail in the drawings and in the preferred embodiments, it is not intended to limit the invention to the embodiments disclosed. It will be apparent to those skilled in the art that the technical means of the various embodiments described above may be combined to obtain further embodiments of the invention, which are also within the scope of the invention.

Claims (10)

1. A request processing method, applied to a server, the method comprising:
receiving at least one user request from at least one client;
aiming at each received user request, acquiring a request path corresponding to the user request, and combining the user request and the request path to acquire user request data;
respectively storing each user request data to a preset buffer space;
reading at least two user request data from the buffer space in sequence, and determining at least one data packet according to the request path included by the user request data, wherein the same data packet includes at least two user request data carrying the same request path, and the request paths carried by the user request data in different data packets are different;
for each data packet, according to a target request path carried by each user request data in the data packet, distributing each user request data included in the data packet to a processor group subscribing the target request path;
and for each processor group, processing each user request data included in the data packet distributed to the processor group by using at least one processor included in the processor group.
2. The method according to claim 1, wherein the separately storing each user request data in a predetermined buffer space comprises:
s0: storing each of the user request data into a buffer, wherein the buffer space comprises the buffer and a memory;
s1: judging whether the time length from the last backup of the buffer zone reaches a preset backup period, if so, executing S2, otherwise, executing S3;
s2: backing up at least one user request data stored in the buffer to the memory to form a backup file including at least one user request data, and performing S1;
s3: detecting whether the data capacity of the user request data stored in the buffer reaches a preset capacity threshold, if so, executing S2, otherwise, executing S1.
3. The method of claim 2, wherein the sequentially reading at least two of the user request data from the buffer space comprises:
and when at least one backup file is stored in the memory, reading the first generated backup file according to the generation time of the backup file.
4. The method of claim 2,
after the S0, further comprising:
determining at least two partition marks;
adding one partition mark for each user request data respectively, so that the difference of the number of the user request data added with different partition marks is smaller than a preset number threshold;
for each processor group, processing, by at least one processor included in the processor group, each user request data included in the data packet distributed to the processor group, including:
and for each user request data in each processor group, distributing each user request data to the processor responsible for the corresponding partition for processing according to the partition mark corresponding to the user request data.
5. The method according to any one of claims 2 to 4, further comprising, after the S0:
adding a position mark for each user request data, wherein the position marks of different user request data are different;
detecting whether the processing of the user request data is stopped;
if the processing of the user request data has been stopped, recording the position mark of the last processed user request data, so as to process the user request data next to the recorded position mark as a start position when the processing of the user request data is started next time.
6. A server, comprising:
a receiving module for receiving at least one user request from at least one client;
a first processing module, configured to, for each user request received by the receiving module, obtain a request path corresponding to the user request, and combine the user request and the request path to obtain user request data;
the storage module is used for respectively storing the user request data acquired by the first processing module into a preset buffer space;
a first determining module, configured to read at least two pieces of user request data stored by the storage module from the buffer space in sequence, and determine at least one data packet according to the request path included in the user request data, where the same data packet includes at least two pieces of user request data carrying the same request path, and the request paths carried by the user request data in different data packets are different;
a distribution module, configured to, for each data packet determined by the first determination module, distribute, according to a target request path carried by each user request data in the data packet, each user request data included in the data packet to a processor group subscribed to the target request path;
a second processing module, configured to, for each processor group, utilize at least one processor included in the processor group to process each user request data included in the data packet distributed to the processor group by the distribution module.
7. The server of claim 6,
the storage module is used for executing:
s0: storing each of the user request data into a buffer, wherein the buffer space comprises the buffer and a memory;
s1: judging whether the time length for backing up the buffer zone for the last time reaches a preset backup period, if so, executing S2, otherwise, executing S3;
s2: backing up at least one of the user requested data stored in the buffer area to the memory to form a backup file including at least one of the user requested data, and performing S1;
s3: detecting whether the data capacity of the user request data stored in the buffer reaches a preset capacity threshold, if so, executing S2, otherwise, executing S1.
8. The server of claim 7,
the first determining module, when performing reading at least two user request data stored by the storing module from the buffer space in sequence, is configured to:
and when at least one backup file is stored in the memory, reading the first generated backup file according to the generation time of the backup file.
9. The server according to claim 7, further comprising:
a second determining module for determining at least two partition marks;
the partition marking module is used for adding one partition mark determined by the second determining module to each piece of user request data respectively, so that the difference between the numbers of the user request data added with different partition marks is smaller than a preset number threshold;
the second processing module is configured to perform:
and for each user request data in each processor group, distributing each user request data to the processor responsible for the corresponding partition for processing according to the partition mark corresponding to the user request data.
10. The server according to any of the claims 6 to 9,
further comprising:
a location marking module, configured to add a location mark to each user request data stored in the buffer space by the storage module, where the location mark is different for different user request data;
the detection module is used for detecting whether the processing of the user request data is stopped or not;
a recording module, configured to record the position mark of the last processed user request data if the detection module detects that processing of the user request data has stopped, so as to process, when processing of the user request data is started next time, the user request data next to the recorded position mark as a start position.
CN202010064448.5A 2020-01-20 2020-01-20 Request processing method and server Active CN111314434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010064448.5A CN111314434B (en) 2020-01-20 2020-01-20 Request processing method and server


Publications (2)

Publication Number Publication Date
CN111314434A CN111314434A (en) 2020-06-19
CN111314434B true CN111314434B (en) 2022-08-19

Family

ID=71148363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010064448.5A Active CN111314434B (en) 2020-01-20 2020-01-20 Request processing method and server

Country Status (1)

Country Link
CN (1) CN111314434B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565337B (en) * 2020-11-06 2022-09-30 北京奇艺世纪科技有限公司 Request transmission method, server, client, system and electronic equipment
CN113472875A (en) * 2021-06-28 2021-10-01 深信服科技股份有限公司 Connection multiplexing method and device, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104219286A (en) * 2014-08-13 2014-12-17 腾讯科技(深圳)有限公司 Method and device for processing stream media, client, CDN (content delivery network) node server and terminal
CN106537924A (en) * 2014-07-16 2017-03-22 爱播股份有限公司 Operating method of client and server for streaming service
CN107241305A (en) * 2016-12-28 2017-10-10 神州灵云(北京)科技有限公司 A kind of network protocol analysis system and its analysis method based on polycaryon processor
CN109743348A (en) * 2018-11-27 2019-05-10 无锡天脉聚源传媒科技有限公司 A kind of data transfer request response method, system, device and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220802

Address after: 250100 S01 Building, Inspur Science Park, No. 1036 Inspur Road, Jinan High-tech Zone, Shandong Province

Applicant after: Inspur cloud Information Technology Co.,Ltd.

Address before: Floor S06, Inspur Science Park, No. 1036, Inspur Road, hi tech Zone, Jinan City, Shandong Province

Applicant before: SHANDONG HUIMAO ELECTRONIC PORT Co.,Ltd.

GR01 Patent grant