CN116048819A - High concurrency data storage method and system - Google Patents
- Publication number
- CN116048819A (application number CN202310325936.0A)
- Authority
- CN
- China
- Prior art keywords
- processing
- request
- message queue
- requests
- immediate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates to the field of data processing, in particular to a high concurrency data storage method and system. A high concurrency data storage system comprises an access request monitoring module, a service analysis module, a request distribution module, a data processing module, an access request classification module, a business analysis message queue storage module and a pending message queue storage module. By setting a maximum access request threshold, the invention detects high concurrency in the incoming access request data and classifies the concurrent access requests: immediate processing requests, which are of higher importance and need immediate processing, are sent to a data processing server for storage and preferential processing, while delayed processing requests, which are of low importance and need no immediate response, are deferred. This reduces the pressure on the server under high concurrency, reduces processing disorder, and achieves orderly and efficient processing.
Description
Technical Field
The invention relates to the field of data processing, in particular to a high concurrency data storage method and system.
Background
With the rapid development of the internet, people increasingly depend on fast and efficient network services. As the internet has become ubiquitous, more and more internet service providers face environments with high concurrency and large volumes of data, which create problems for data analysis and storage. When a website receives a large number of requests in a short time, the site is under high concurrency; if the number of requests exceeds its processing capacity, the site may stop processing, process requests out of order, or slow down, adversely affecting the users who sent the requests.
Disclosure of Invention
The invention provides a high concurrency data storage method and system. By setting a maximum access request threshold, the method detects high concurrency in the incoming access request data and classifies the concurrent access requests: immediate processing requests, which are of higher importance and need immediate processing, are sent to a data processing server for storage and preferential processing, while delayed processing requests, which are of low importance and need no immediate response, are deferred. This reduces the pressure on the server under high concurrency, reduces processing disorder, and achieves orderly and efficient processing.
A method of high concurrency data storage, comprising the steps of:
S1: continuously monitoring the acquired access requests and counting the access requests in each fixed time period; comparing the number of access requests in the fixed time period with the maximum access request threshold, and entering S2 if the number is greater than the threshold; if the number of access requests in the fixed time period is not greater than the maximum access request threshold, inputting the access requests through the service analysis layer and the load balancing server to the local message queue in the corresponding data processing server for storage;
S2: inputting the access requests one by one into the deep learning classifier V1, which classifies them into immediate processing requests and delayed processing requests; adding the immediate processing requests to the business analysis message queue Q1 and entering S3; adding the delayed processing requests to the pending message queue Q2 and entering S4;
S3: inputting the immediate processing requests in the business analysis message queue Q1 through the service analysis layer and the load balancing server to the local message queues in the corresponding data processing servers for storage, and adjusting the space size of each local message queue according to the number of immediate processing requests input to it;
S4: continuously monitoring the acquired access requests and counting them per fixed time period; when the number of access requests in a fixed time period is not greater than the maximum access request threshold, extracting the delayed processing requests one by one from the pending message queue Q2, randomly inserting them among the access requests acquired in that time period, and inputting the access requests through the service analysis layer and the load balancing server to the local message queues in the corresponding data processing servers for storage; otherwise, performing no operation.
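The flow of steps S1 to S4 above can be sketched as follows. This is a minimal illustration only: the threshold value, the request fields, and the rule standing in for the deep learning classifier V1 are assumptions made for the sketch, not details from the patent.

```python
from collections import deque

MAX_REQUESTS = 5          # assumed maximum access request threshold per window
business_queue = deque()  # Q1: immediate processing requests (S3)
pending_queue = deque()   # Q2: delayed processing requests (S4)
local_queue = deque()     # stand-in for a per-server local message queue

def classify(request):
    """Trivial stand-in for the deep learning classifier V1: requests whose
    type suggests a latency-sensitive action are treated as immediate."""
    return "immediate" if request["type"] in {"login", "search", "pay"} else "delayed"

def dispatch(window_requests):
    """One monitoring window (S1): store directly under normal load,
    otherwise split the window into the Q1 and Q2 queues (S2)."""
    if len(window_requests) <= MAX_REQUESTS:
        local_queue.extend(window_requests)  # normal load: store directly
        return
    for req in window_requests:              # high concurrency: classify
        if classify(req) == "immediate":
            business_queue.append(req)       # to Q1, consumed in S3
        else:
            pending_queue.append(req)        # to Q2, drained in S4
```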
In a preferred aspect, in step S3, the specific steps for adjusting the space size of the local message queue are as follows:
T2: judging whether "K0 − G ≤ Y" holds, where Y is the remaining space threshold, K0 is the initial space size of the local message queue, and G is the number of immediate processing requests inside the local message queue; if "K0 − G ≤ Y" holds, entering T3; if it does not hold, entering T4;
T3: expanding the space size of the corresponding local message queue from K0 to K0 + K2, where K2 is the expansion size, and entering T4;
T4: after a preset time interval t, acquiring the number G of immediate processing requests inside the local message queue, and returning to T2.
As a preferred aspect, the method further comprises reordering the immediate processing requests input to the local message queue, with the following steps:
H1: acquiring the business analysis message queue Q1 and an immediate processing request in it, denoted a;
H2: traversing the immediate processing requests stored inside the processing complexity filter layer and computing the similarity S_j between a and each immediate processing request stored inside the filter layer, j = 1, 2, …, m, where m is the number of immediate processing requests stored inside the filter layer;
H3: traversing all similarities S_j and judging whether any "S_j ≥ X" holds, where X is the similarity threshold; if some "S_j ≥ X" holds, entering H4; if none holds, entering H5;
H4: storing the immediate processing request a in the lag processing queue Q3, deleting a from the business analysis message queue Q1, and entering H5;
H5: judging whether all immediate processing requests in the business analysis message queue Q1 have been selected; if not, acquiring the next immediate processing request in Q1, denoting it a, and returning to H2; if all have been selected, entering H6;
H6: uniformly inserting the immediate processing requests in the lag processing queue Q3 into the business analysis message queue Q1, and inputting the immediate processing requests in the resulting queue Q1 through the service analysis layer and the load balancing server to the local message queues in the corresponding data processing servers for storage.
As a preferred aspect, generating the processing complexity filter layer in step H2 comprises the following steps:
M1: when a data processing server processes an immediate processing request, acquiring the processing time t1 of that immediate processing request;
M2: judging whether "t1 > Y1" holds, where Y1 is the processing time threshold; if "t1 > Y1" holds, entering M3; if "t1 > Y1" does not hold, entering M4;
M3: copying the corresponding immediate processing request and adding the copy to the processing complexity filter layer, which is initially empty, and entering M4.
A high concurrency data storage system comprising:
the access request monitoring module is used for monitoring the acquired access requests and judging the relation between the number of access requests in a fixed time period and the maximum access request threshold;
the service analysis module is used for storing the deep learning classifier V2 and analyzing and classifying the access request;
the request distribution module is used for storing the load balancing server and distributing the access request according to the result of the service analysis module;
the data processing module is used for storing the data processing server, processing the corresponding access request, and arranging a local message queue in the data processing server, wherein the local message queue is used for storing the access request;
the access request classification module is used for storing the deep learning classifier V1 and dividing the access requests into immediate processing requests and delayed processing requests;
the business analysis message queue storage module is used for storing the immediate processing request;
and the pending message queue storage module is used for storing the delayed processing requests.
As a preferred aspect, further comprising:
the immediate processing request quantity acquisition module is used for acquiring the quantity of immediate processing requests in the local message queue;
the immediate processing request quantity judging module is used for judging whether the quantity of the immediate processing requests input to the data processing server is close to the initial space size of the local message queue in the data processing server or not;
the local message queue space expansion module is used for expanding the size of the local message queue space.
As a preferred aspect, further comprising:
the processing complexity filter layer management module is used for generating and storing a processing complexity filter layer;
an immediate processing request reordering module for reordering immediate processing requests input to the local message queue.
The invention has the following advantages:
1. By setting a maximum access request threshold, the invention detects high concurrency in the incoming access request data and classifies the concurrent access requests: immediate processing requests, which are of higher importance and need immediate processing, are sent to a data processing server for storage and preferential processing, while delayed processing requests, which are of low importance and need no immediate response, are deferred. This reduces the pressure on the server under high concurrency, reduces processing disorder, and achieves orderly and efficient processing.
2. The invention monitors the number of immediate processing requests input to each data processing server. If this number approaches the initial space size of the local message queue inside the server, the space of the corresponding local message queue is expanded, avoiding the loss of surplus immediate processing requests for lack of a defined storage location.
3. By monitoring the processing time of immediate processing requests, the invention extracts requests whose processing time is excessively long, screens out similar long-running immediate processing requests, and distributes them at intervals within the business analysis message queue. This prevents all data processing servers from simultaneously handling long-running immediate processing requests, which would stall the website's network services and degrade the user experience.
Drawings
FIG. 1 is a schematic diagram of a high concurrency data storage system according to an embodiment of the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the technical solution of the present invention, the technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Example 1
A high concurrency data storage method specifically comprises the following steps:
S1: continuously monitor the acquired access requests and count the access requests in each fixed time period; the fixed time period defaults to 1 min. Compare the number of access requests in the fixed time period with the maximum access request threshold preset by the user, which is determined by server performance and network conditions. If the number of access requests in the fixed time period exceeds this threshold, the server is experiencing high concurrency; enter S2. Otherwise, the number of access requests being processed is within the server's acceptable range, and the access requests are input through the service analysis layer and the load balancing server to the local message queue in the corresponding data processing server for storage.
A deep learning classifier V2 is arranged inside the service analysis layer. The classifier V2 classifies access requests into different request types according to the text information features of each request, and requests passing through the service analysis layer are marked with their request type. Classifying the access requests facilitates subsequent processing and makes the specific content of each service clearer, which reduces the amount of judgment over large volumes of data in later processing and improves data processing efficiency; S3 is then entered. For example, when processing HTTP requests, the request types include GET, POST, HEAD, PUT, DELETE, CONNECT, TRACE and OPTIONS requests: a GET request asks the server for specified content; a POST request creates a new entity resource on the server; a HEAD request asks the server for the headers of specified content; a PUT request updates content on the server; a DELETE request deletes specified content on the server; a CONNECT request establishes a tunnel to the server; a TRACE request performs a loop-back test of what the server received; and an OPTIONS request detects the request types the server supports.
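As an illustration of shunting by request type, the sketch below maps an HTTP method to a dedicated data processing server, as the load balancing server does in the description above. The server names and the mapping itself are hypothetical, invented for the example.

```python
# Hypothetical assignment of request types to data processing servers;
# in the patent this mapping is configured by the user per request type.
SERVER_BY_METHOD = {
    "GET": "server-read",
    "POST": "server-create",
    "PUT": "server-update",
    "DELETE": "server-delete",
}

def route(method):
    """Return the data processing server responsible for a request type;
    unknown methods fall back to a shared default server."""
    return SERVER_BY_METHOD.get(method.upper(), "server-default")
```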
The access requests are sent to the load balancing server, which shunts each access request to the corresponding data processing server according to its request type. The data processing servers are configured by the user per request type, so that each data processing server processes the access requests, or immediate processing requests, of its corresponding request type.
A local message queue is arranged inside each data processing server. When access requests of different request types are shunted to their corresponding data processing servers, access requests of the same request type are added to the local message queue in the corresponding server for storage; the access requests are stored in the local message queue in queue form. The initial space size of the local message queue is K0, where K0 indicates the number of access requests the local message queue can store; K0 is set by the user according to the processing capacity of the corresponding data processing server. Because access requests arrive continuously, temporarily storing them in the local message queue avoids a backlog caused by requests not being processed in time, which could lose access requests, cause network access errors, and harm the user experience.
The data processing server obtains access requests from its internal local message queue and processes them accordingly. For a GET request, for example, the server retrieves the corresponding content according to the request and then feeds the obtained content back to the corresponding client according to the client information in the GET request.
S2: the access requests are input one by one into the deep learning classifier V1, which classifies them into immediate processing requests and delayed processing requests according to their text information features. Immediate processing requests are access requests of higher importance that need to be processed immediately, such as user login, user retrieval and user payment; delayed processing requests are access requests of low importance that need no immediate response, such as uploading user information or browsing state. Immediate processing requests are added to the business analysis message queue Q1 and the method enters S3; delayed processing requests are added to the pending message queue Q2 and the method enters S4.
The text information features are obtained by learning word vectors from the request content in the access request with word2vec; the deep learning model adopted in the deep learning classifier V1 is an MLP.
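The patent's classifier V1 learns word2vec features fed to an MLP; the self-contained sketch below replaces both with a keyword lookup so the splitting rule of S2 can be shown end to end. The keyword sets are illustrative assumptions, not from the patent.

```python
# Stand-in for the deep learning classifier V1: instead of word2vec
# vectors fed to an MLP, score the request text against keyword sets.
IMMEDIATE_HINTS = {"login", "pay", "payment", "search", "retrieval"}

def classify_request_text(text):
    """Return 'immediate' for latency-sensitive actions (login, retrieval,
    payment), 'delayed' otherwise (e.g. uploading browsing state)."""
    tokens = set(text.lower().split())
    return "immediate" if tokens & IMMEDIATE_HINTS else "delayed"
```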
S3: acquire the business analysis message queue Q1 and send the immediate processing requests in Q1 to the service analysis layer. The deep learning classifier V2 inside the service analysis layer classifies the immediate processing requests into different request types according to their text information features and marks each request passing through the layer with its type. The immediate processing requests are then sent to the load balancing server, which shunts each one to the corresponding data processing server according to its request type, where it is stored in the local message queue. The data processing servers are configured by the user per request type, so that each server processes the immediate processing requests of its corresponding type; each data processing server obtains immediate processing requests from its internal local message queue and processes them accordingly.
S4: acquire the pending message queue Q2. Continuously monitor the acquired access requests and count them per fixed time period; the fixed time period defaults to 1 min. Compare the number of access requests in the fixed time period with the maximum access request threshold preset by the user. When this number does not exceed the threshold, the number of access requests being processed by the server is within its acceptable range: extract the delayed processing requests one by one from Q2 and insert them randomly among the access requests acquired in that fixed time period. In particular, if Q2 holds enough delayed processing requests during insertion, the access requests of the fixed time period can be directly topped up to the maximum access request threshold. The access requests are then input through the service analysis layer and the load balancing server to the local message queues in the corresponding data processing servers for storage; otherwise, no operation is performed.
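Step S4's top-up behaviour can be sketched as follows; the threshold value is an assumption, and a simple append replaces the random insertion described above.

```python
from collections import deque

MAX_REQUESTS = 5  # assumed maximum access request threshold per window

def backfill(window_requests, pending_queue):
    """S4 sketch: when a window is under the threshold, top it up with
    delayed requests drained from the pending queue Q2, never exceeding
    the maximum access request threshold."""
    if len(window_requests) > MAX_REQUESTS:
        return list(window_requests)      # high concurrency: do nothing here
    batch = list(window_requests)
    while pending_queue and len(batch) < MAX_REQUESTS:
        batch.append(pending_queue.popleft())
    return batch
```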
By setting a maximum access request threshold, the invention detects high concurrency in the incoming access request data and classifies the concurrent access requests: immediate processing requests, which are of higher importance and need immediate processing, are sent to a data processing server for storage and preferential processing, while delayed processing requests, which are of low importance and need no immediate response, are deferred. This reduces the pressure on the server under high concurrency, reduces processing disorder, and achieves orderly and efficient processing.
When high concurrency occurs, the number of immediate processing requests delivered to a data processing server may exceed the initial space size K0 of the local message queue inside that server, and the surplus immediate processing requests may be lost for lack of a defined storage location. The space size of the local message queue inside the data processing server therefore needs to be adjusted, as follows:
T2: judge whether "K0 − G ≤ Y" holds, where Y is the remaining space threshold, set by the user according to the processing capacity of the data processing server; the stronger the processing capacity of the data processing server, the smaller the corresponding remaining space threshold Y. If "K0 − G ≤ Y" holds, the immediate processing requests in the local message queue of the current data processing server are approaching the initial space size K0, the space of the corresponding local message queue needs to be expanded, and T3 is entered; if it does not hold, the immediate processing requests are not yet approaching K0, and T4 is entered.
T3: expand the space size of the corresponding local message queue from K0 to K0 + K2, where the expansion size K2 is determined from the actual numbers of immediate processing requests during the high concurrency periods the corresponding data processing server has experienced historically; enter T4.
T4: after a preset time interval t, acquire the number G of immediate processing requests inside the local message queue; the preset time t is determined by the user according to the network transmission rate to the server. Return to T2.
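Steps T2 and T3 amount to the following check; the threshold and expansion size are illustrative values standing in for the user-configured Y and the history-derived K2.

```python
REMAINING_SPACE_THRESHOLD = 2  # Y: assumed remaining space threshold
EXPANSION_SIZE = 4             # K2: assumed expansion size

def check_and_expand(capacity, used):
    """T2: if the free space of the local message queue has dropped to the
    threshold or below, T3: grow the capacity by the expansion size."""
    if capacity - used <= REMAINING_SPACE_THRESHOLD:
        return capacity + EXPANSION_SIZE
    return capacity  # enough headroom; T4 polls again after the interval
```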
The invention monitors the number of immediate processing requests input to each data processing server. If this number approaches the initial space size of the local message queue inside the server, the space of the corresponding local message queue is expanded, avoiding the loss of surplus immediate processing requests for lack of a defined storage location.
When the system classifies an access request as an immediate processing request, of higher importance and needing immediate processing, that request may still take a long time to process. If all data processing servers are busy processing long-running immediate processing requests, the website's network services may stall and the user experience suffers. Therefore, before long-running immediate processing requests are sent to the data processing servers for storage, the immediate processing requests input to the local message queue are reordered, with the following specific steps:
H1: acquire the business analysis message queue Q1 and an immediate processing request in it, denoted a.
H2: traverse the immediate processing requests stored inside the processing complexity filter layer and, using the TF-IDF algorithm, compute the similarity S_j between a and each immediate processing request stored inside the filter layer, j = 1, 2, …, m, where m is the number of immediate processing requests stored inside the filter layer.
H3: traverse all similarities S_j and judge whether any "S_j ≥ X" holds, where X is the similarity threshold. If some "S_j ≥ X" holds, the processing time of the immediate processing request a can also be expected to be long; enter H4. If none holds, the processing time of a is in the normal range; enter H5.
H4: store the immediate processing request a in the lag processing queue Q3, delete a from the business analysis message queue Q1, and enter H5.
H5: judge whether all immediate processing requests in the business analysis message queue Q1 have been selected. If not, acquire the next immediate processing request in Q1, denote it a, and return to H2; if all have been selected, enter H6.
H6: uniformly insert the immediate processing requests in the lag processing queue Q3 into the business analysis message queue Q1, and input the immediate processing requests in the resulting queue Q1 through the service analysis layer and the load balancing server to the local message queues in the corresponding data processing servers for storage.
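Steps H1 to H6 can be sketched as below. Token-count cosine similarity stands in for the TF-IDF similarity of step H2, and the similarity threshold X and the even-spacing rule are assumptions made for the sketch.

```python
import math
from collections import Counter

SIMILARITY_THRESHOLD = 0.8  # X: assumed similarity threshold

def cosine(a, b):
    """Cosine similarity of two token-count vectors (a stand-in for the
    TF-IDF similarity computed in step H2)."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def reorder(queue_q1, filter_layer):
    """H3-H4: move requests similar to a known slow request into the lag
    queue Q3; H6: spread them evenly back through the rest of Q1."""
    kept, lagged = [], []
    for req in queue_q1:
        if any(cosine(req, slow) >= SIMILARITY_THRESHOLD for slow in filter_layer):
            lagged.append(req)
        else:
            kept.append(req)
    if not lagged:
        return kept
    result, step, li = [], max(1, len(kept) // len(lagged)), 0
    for i, req in enumerate(kept):
        result.append(req)
        if (i + 1) % step == 0 and li < len(lagged):
            result.append(lagged[li])  # uniform insertion of lagged requests
            li += 1
    result.extend(lagged[li:])         # any leftovers go to the tail
    return result
```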
The generating of the processing complexity filter layer in the step H2 comprises the following steps:
M1: when a data processing server processes an immediate processing request, acquire the processing time t1 of that immediate processing request.
M2: judge whether "t1 > Y1" holds, where Y1 is the processing time threshold, defaulting to 5 s and settable by the user. If "t1 > Y1" holds, the processing time of the corresponding immediate processing request is excessively long; enter M3. If it does not hold, the processing time of the corresponding immediate processing request is within the normal range; enter M4.
M3: copy the corresponding immediate processing request and add the copy to the processing complexity filter layer, which is set up in advance and is initially empty; enter M4.
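Steps M1 to M3 reduce to the following bookkeeping; the 5 s default threshold is taken from the description above, while the request text is an invented placeholder.

```python
PROCESSING_TIME_THRESHOLD = 5.0  # Y1: processing time threshold, default 5 s

processing_complexity_filter = []  # the filter layer is initially empty (M3)

def record_processing(request_text, elapsed_seconds):
    """M1-M3 sketch: after a server finishes an immediate processing
    request, copy it into the filter layer when its processing time
    exceeded the threshold."""
    if elapsed_seconds > PROCESSING_TIME_THRESHOLD:        # M2
        processing_complexity_filter.append(request_text)  # M3
```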
When the interior of the processing complexity filter layer is not empty, the immediate processing requests are filtered against it, following steps H1 to H6 above.
the invention extracts the immediate processing request with overlong processing time by monitoring the processing time of the immediate processing request, screens out similar immediate processing requests with overlong processing time, distributes the immediate processing request with overlong processing time in the service analysis message queue at intervals, and avoids stagnation of network service of a website caused by that all data processing servers process the immediate processing request with long processing time, thereby causing bad user experience.
Example 2
A high concurrency data storage system, as shown in figure 1, comprising:
the access request monitoring module is used for monitoring the acquired access requests and judging the relation between the number of access requests in a fixed time period and the maximum access request threshold preset by the user;
the service analysis module is used for storing the deep learning classifier V2 and analyzing and classifying the access request;
the request distribution module is used for storing the load balancing server and distributing the access request according to the result of the service analysis module;
the data processing module is used for storing the data processing server, processing the corresponding access request and arranging a local message queue in the data processing server;
the access request classification module is used for storing the deep learning classifier V1 and dividing the access requests into immediate processing requests and delayed processing requests;
the business analysis message queue storage module is used for storing the immediate processing request;
and the pending message queue storage module is used for storing the delayed processing requests.
In order to implement adjustment of the size of the local message queue in the data processing server, as shown in fig. 1, the system further includes:
the immediate processing request quantity acquisition module is used for acquiring the quantity of immediate processing requests in the local message queue;
the immediate processing request quantity judging module is used for judging whether the quantity of the immediate processing requests input to the data processing server is close to the initial space size of the local message queue in the data processing server or not;
the local message queue space expansion module is used for expanding the space size of the local message queue.
in order to implement the adjustment of the immediate processing request with long processing time, as shown in fig. 1, the system further includes:
the processing complexity filter layer management module is used for generating and storing a processing complexity filter layer;
an immediate processing request reordering module for reordering immediate processing requests input to the local message queue.
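The modules recited above can be illustrated with a minimal Python sketch. This is not part of the patent disclosure: class and method names are hypothetical, and the two deep-learning classifiers are replaced by a stub predicate.

```python
from collections import deque

class HighConcurrencySystem:
    """Sketch of the claimed modules wired together (hypothetical API)."""

    def __init__(self, n_servers, is_immediate):
        # access request classification module: stub standing in for classifier V1
        self.is_immediate = is_immediate
        self.q1 = deque()  # business analysis message queue storage module
        self.q2 = deque()  # pending message queue storage module
        # data processing module: one local message queue per data processing server
        self.servers = [deque() for _ in range(n_servers)]

    def classify(self, request):
        """Split an access request into an immediate or a delayed processing request."""
        (self.q1 if self.is_immediate(request) else self.q2).append(request)

    def distribute(self):
        """Request distribution module: drain Q1 round-robin across the servers."""
        i = 0
        while self.q1:
            self.servers[i % len(self.servers)].append(self.q1.popleft())
            i += 1
```

A round-robin choice stands in here for the load balancing server; any balancing policy could be substituted.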
It will be understood that modifications and variations will be apparent to those skilled in the art from the foregoing description, and it is intended that all such modifications and variations be included within the scope of the following claims. Parts of the specification not described in detail belong to the prior art known to those skilled in the art.
Claims (7)
1. A method of high concurrency data storage, comprising the steps of:
S1: continuously monitor the acquired access requests and count the number of access requests in a fixed time period; compare this number with the maximum access request threshold, and if it is greater than the threshold, proceed to S2; if it is not greater than the threshold, input the access requests, through the business analysis layer and the load balancing server, to the local message queues in the corresponding data processing servers for storage;
S2: input the access requests one by one into the deep learning classifier V1, which classifies them into immediate processing requests and delayed processing requests; add the immediate processing requests to the business analysis message queue Q1 and proceed to S3; add the delayed processing requests to the pending message queue Q2 and proceed to S4;
S3: input the immediate processing requests in the business analysis message queue Q1, through the business analysis layer and the load balancing server, to the local message queues in the corresponding data processing servers for storage; during this process, adjust the space size of each local message queue according to the number of immediate processing requests input to it;
S4: continue monitoring the acquired access requests and count them over successive fixed time periods; when the number of access requests in a fixed time period is not greater than the maximum access request threshold, extract the delayed processing requests one by one from the pending message queue Q2, insert them at random positions among the access requests acquired in that time period, and input the result, through the business analysis layer and the load balancing server, to the local message queues in the corresponding data processing servers for storage; otherwise, perform no operation.
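The S1-S4 flow of claim 1 can be sketched in Python (not part of the patent disclosure; all names are illustrative, classifier V1 is stubbed with a predicate, the random insertion of S4 is simplified to a deterministic append, and round-robin stands in for the load balancing server):

```python
from collections import deque

MAX_REQUESTS_PER_WINDOW = 3  # hypothetical maximum access-request threshold

def dispatch_window(requests, is_immediate, servers, pending):
    """S1: compare the window's request count against the threshold.
    Busy window -> S2/S3: classify, defer delayed requests to the pending
    queue Q2, and load-balance the immediate ones across the servers.
    Quiet window -> S4: drain Q2 back in alongside the new requests."""
    if len(requests) > MAX_REQUESTS_PER_WINDOW:
        batch = [r for r in requests if is_immediate(r)]        # queue Q1
        pending.extend(r for r in requests if not is_immediate(r))
    else:
        batch = list(requests)
        while pending:                       # S4, simplified: deterministic
            batch.append(pending.popleft())  # append instead of random insertion
    for i, r in enumerate(batch):            # stand-in for the load balancer
        servers[i % len(servers)].append(r)
```

In a busy window only the immediate requests reach the servers; the deferred ones ride along in a later quiet window.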
2. The method for storing high concurrency data according to claim 1, wherein in step S3, the specific step of adjusting the size of the local message queue space includes:
T2: judgment'Whether or not it is true, wherein->For the remaining space threshold, ++>Initial space size for local message queue, if it is +.>"true, enter T3; if it is->"not true, enter T4;
T3: expand the space size of the corresponding local message queue from its initial size d to d + e, where e is the expansion space size, and proceed to T4;
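The queue-size adjustment of steps T2-T3 amounts to a single threshold check, sketched here with hypothetical parameter names (b: queued immediate requests, c: remaining-space threshold, d: current capacity, e: expansion size):

```python
def adjusted_capacity(capacity, n_immediate, remaining_threshold, expand_by):
    """T2: expand when the free space left after queuing the immediate
    requests would drop below the remaining-space threshold."""
    if capacity - n_immediate < remaining_threshold:  # T2: d - b < c
        return capacity + expand_by                   # T3: d -> d + e
    return capacity                                   # T4: size unchanged
```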
3. A method of high concurrency data storage according to claim 2, further comprising reordering immediate processing requests input to the local message queues, comprising the steps of:
H1: acquire the business analysis message queue Q1, take an immediate processing request from it, and denote it a;
H2: traverse the immediate processing requests stored within the processing complexity filter layer and compute the similarity s_j between the immediate processing request a and each immediate processing request stored inside the filter layer, j = 1, 2, ..., n, where n is the number of immediate processing requests stored in the filter layer;
H3: traverse all similarities s_j and judge whether any case of "s_j &gt; s0" holds, where s0 is the similarity threshold; if such a case exists, proceed to H4; if no such case exists, proceed to H5;
H4: store the immediate processing request a into the lag processing queue Q3, delete a from the business analysis message queue Q1, and proceed to H5;
H5: judge whether every immediate processing request in the business analysis message queue Q1 has been selected; if not, take the next immediate processing request from Q1, denote it a, and return to H2; if all immediate processing requests in Q1 have been selected, proceed to H6;
H6: uniformly insert the immediate processing requests in the lag processing queue Q3 into the business analysis message queue Q1, and input the immediate processing requests in the resulting business analysis message queue, through the business analysis layer and the load balancing server, to the local message queues in the corresponding data processing servers for storage.
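Steps H1-H6 can be sketched as a screen-and-respace pass (illustrative Python only: a toy similarity function stands in for the unspecified similarity measure, and "uniform insertion" is approximated by fixed-stride interleaving):

```python
def reorder_immediate(q1, filter_layer, similar, threshold):
    """H1-H5: move requests similar to known slow requests into a lag
    queue; H6: spread them evenly back through the remaining queue."""
    lag, kept = [], []
    for req in q1:                          # H2-H5: screen each request
        if any(similar(req, slow) > threshold for slow in filter_layer):
            lag.append(req)                 # H4: defer a likely-slow request
        else:
            kept.append(req)
    if not lag:
        return kept
    out, step = [], max(1, len(kept) // len(lag))
    for i, req in enumerate(kept):          # H6: interleave at a fixed stride
        out.append(req)
        if (i + 1) % step == 0 and lag:
            out.append(lag.pop(0))
    out.extend(lag)                         # any leftovers go at the end
    return out
```

The effect is that no data processing server receives a burst of consecutive long-running requests.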
4. A method of high concurrency data storage according to claim 3, wherein the generation of the process complexity filter layer in step H2 comprises the steps of:
m1: when a data processing server processes an immediate processing request, a processing time for processing the immediate processing request is acquired;
M2: judge whether "t ≤ t0" holds, where t is the processing time acquired in M1 and t0 is the processing time threshold; if "t ≤ t0" does not hold, proceed to M3; if it holds, proceed to M4;
M3: copy the corresponding immediate processing request, add the copy to the processing complexity filter layer, and proceed to M4, wherein the processing complexity filter layer is initially empty;
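The filter-layer maintenance of steps M1-M3 reduces to a threshold test on the observed processing time. A hypothetical Python sketch (names are illustrative, not from the patent):

```python
def record_processing_time(filter_layer, request, elapsed, threshold):
    """M1-M3: after a data processing server finishes an immediate request,
    copy it into the processing complexity filter layer if its processing
    time exceeded the threshold (the layer starts empty)."""
    if elapsed > threshold:           # M2: "t <= t0" does not hold -> M3
        filter_layer.append(request)  # M3: remember this slow request
    return filter_layer               # M4: done
```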
5. A high concurrency data storage system, comprising:
the access request monitoring module is used for monitoring the acquired access requests and judging whether the number of access requests in a fixed time period exceeds the maximum access request threshold;
the service analysis module is used for storing the deep learning classifier V2 and analyzing and classifying the access request;
the request distribution module is used for storing the load balancing server and distributing the access request according to the result of the service analysis module;
the data processing module is used for storing the data processing server, processing the corresponding access request, and arranging a local message queue in the data processing server, wherein the local message queue is used for storing the access request;
the access request classification module is used for storing the deep learning classifier V1 and dividing the access requests into immediate processing requests and delayed processing requests;
the business analysis message queue storage module is used for storing the immediate processing request;
and the pending message queue storage module is used for storing the delayed processing requests.
6. The high concurrency data storage system of claim 5, further comprising:
the immediate processing request quantity acquisition module is used for acquiring the quantity of immediate processing requests in the local message queue;
the immediate processing request quantity judging module is used for judging whether the quantity of the immediate processing requests input to the data processing server is close to the initial space size of the local message queue in the data processing server or not;
the local message queue space expansion module is used for expanding the size of the local message queue space.
7. The high concurrency data storage system of claim 6, further comprising:
the processing complexity filter layer management module is used for generating and storing a processing complexity filter layer;
an immediate processing request reordering module for reordering immediate processing requests input to the local message queue.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310325936.0A CN116048819A (en) | 2023-03-30 | 2023-03-30 | High concurrency data storage method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116048819A true CN116048819A (en) | 2023-05-02 |
Family
ID=86129898
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||