CN110704110B - Method and device for improving response speed of system under high concurrency condition - Google Patents

Method and device for improving response speed of system under high concurrency condition

Info

Publication number
CN110704110B
Authority
CN
China
Prior art keywords
data
target data
cache
updating
thread
Prior art date
Legal status
Active
Application number
CN201910940925.7A
Other languages
Chinese (zh)
Other versions
CN110704110A (en)
Inventor
吴李烜
阚宝铎
张栋
李国涛
Current Assignee
Inspur Software Co Ltd
Original Assignee
Inspur Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Inspur Software Co Ltd
Priority to CN201910940925.7A
Publication of CN110704110A
Application granted
Publication of CN110704110B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 - Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3824 - Operand accessing
    • G06F9/383 - Operand prefetching

Abstract

The invention provides a method and a device for improving the response speed of a system under high concurrency conditions. The method comprises: dividing data in a database in advance into data with high timeliness requirements and data with low timeliness requirements; acquiring a current concurrent request; and judging whether the target data requested by the current concurrent request has a high timeliness requirement. If so, the cache is queried for the target data, and if the target data exists it is returned while a first update thread is triggered to update the data in the cache. If not, the cache is queried for the target data and it is judged whether the target data is invalid; if invalid, a second update thread is triggered to update the data in the cache according to the data in the database, otherwise the target data is returned. The method and the device can improve the response speed of the system under high concurrency conditions.

Description

Method and device for improving response speed of system under high concurrency condition
Technical Field
The invention relates to the technical field of monitoring, and in particular to a method and a device for improving the response speed of a system under high concurrency conditions.
Background
With the development of business, high concurrency situations occur more and more often. This places considerable demands on system design, and in particular the response speed of the system under high concurrency needs to be improved.
In the prior art, when a concurrent request is received, the cache is updated first, and only after the cache update is completed is the cache queried for the data required by the request. That is, the requesting user keeps waiting for a response until the cache update is completed, so the response speed of the system is slow.
Therefore, a method is needed to improve the response speed of the system in high concurrency situations.
Disclosure of Invention
The invention provides a method and a device which can improve the response speed of a system under high concurrency conditions.
An embodiment of the invention provides a method for improving the response speed of a system under high concurrency conditions, which comprises the following steps:
dividing data in a database in advance into data with high timeliness requirements and data with low timeliness requirements; the method further comprises:
A1: acquiring a current concurrent request;
A2: judging whether the target data requested by the current concurrent request is data with a high timeliness requirement; if so, executing A3 and A4 at the same time, otherwise, executing A5;
A3: querying whether the target data exists in the cache, and if so, returning the target data;
A4: triggering a first update thread, and updating the data in the cache according to the data in the database by using the first update thread;
A5: querying whether the target data exists in the cache, and if so, executing A6;
A6: judging whether the target data is invalid; if so, triggering a second update thread and updating the data in the cache according to the data in the database by using the second update thread, otherwise, returning the target data.
Preferably, the method further comprises:
in A3, when the target data does not exist in the cache, returning a first query failure result;
in A5, when the target data does not exist in the cache, returning a second query failure result;
in A6, when the target data is invalid, returning a third query failure result.
Preferably, after the data in the cache is updated according to the data in the database by the first update thread, the first update thread is ended;
and after the data in the cache is updated according to the data in the database by the second update thread, the second update thread is ended.
Preferably, the determining whether the target data is invalid includes:
and determining the expiration time of the data currently stored in the cache, judging whether the expiration time exceeds a preset time threshold, if so, determining that the target data is invalid, otherwise, determining that the target data is not invalid.
Preferably, before a1, the method further comprises:
adding at least one concurrent request sent by a user into an asynchronous queue;
the acquiring the current concurrent request comprises:
acquiring the current concurrent request from the asynchronous queue;
the returning the target data comprises:
and sending the target data to the asynchronous queue.
An embodiment of the invention also provides a device for improving the response speed of a system under high concurrency conditions, comprising: a dividing unit, an obtaining unit and a response unit;
the dividing unit is used for dividing the data in the database into data with high timeliness requirements and data with low timeliness requirements;
the obtaining unit is configured to obtain a current concurrent request and send the current concurrent request to the response unit;
the response unit is used for executing:
A2: judging whether the target data requested by the current concurrent request is data with a high timeliness requirement; if so, executing A3 and A4 at the same time, otherwise, executing A5;
A3: querying whether the target data exists in the cache, and if so, returning the target data;
A4: triggering a first update thread, and updating the data in the cache according to the data in the database by using the first update thread;
A5: querying whether the target data exists in the cache, and if so, executing A6;
A6: judging whether the target data is invalid; if so, triggering a second update thread and updating the data in the cache according to the data in the database by using the second update thread, otherwise, returning the target data.
Preferably, the response unit is further configured to perform:
in A3, when the target data does not exist in the cache, returning a first query failure result;
in A5, when the target data does not exist in the cache, returning a second query failure result;
in A6, when the target data is invalid, returning a third query failure result.
Preferably, the response unit is further configured to end the first update thread after the data in the cache is updated according to the data in the database by the first update thread,
and to end the second update thread after the data in the cache is updated according to the data in the database by the second update thread.
Preferably, when judging whether the target data is invalid, the response unit is specifically configured to determine the expiration time of the data currently stored in the cache and judge whether the expiration time exceeds a preset time threshold; if so, the target data is determined to be invalid, otherwise the target data is determined not to be invalid.
Preferably, the apparatus further comprises: an adding unit.
The adding unit is used for adding at least one concurrent request sent by the user into the asynchronous queue.
The obtaining unit is configured to obtain the current concurrent request from the asynchronous queue.
The response unit is specifically configured to send the target data to the asynchronous queue when the target data is returned.
An embodiment of the invention provides a method for improving the response speed of a system under high concurrency conditions. If the requested data has a high timeliness requirement, the cache is queried for the target data; if the target data exists, it is returned and a new thread is triggered to update the data in the cache. If the requested data has a low timeliness requirement, the cache is queried for the target data and it is judged whether the target data has expired; if so, a new thread is triggered to update the data in the cache, otherwise the target data is returned. With this method, when the system receives a concurrent request under high concurrency conditions, a new thread can be triggered to update the cache without affecting the query of the main thread. Unlike the existing scheme, there is no need to wait for the cache update to complete before returning a result, so the user does not spend a long time waiting for the query result, and the response speed of the system under high concurrency conditions is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of a method for improving the response speed of a system under high concurrency conditions according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for improving the response speed of a system under high concurrency conditions according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus for improving the response speed of a system under high concurrency conditions according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention, and based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative efforts belong to the scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a method for improving the response speed of a system under high concurrency conditions, comprising:
Step 101: dividing data in a database in advance into data with high timeliness requirements and data with low timeliness requirements;
Step 102: acquiring a current concurrent request;
Step 103: judging whether the target data requested by the current concurrent request is data with a high timeliness requirement; if so, executing step 104 and step 105, otherwise, executing step 106;
Step 104: querying whether the target data exists in the cache, and if so, returning the target data;
Step 105: triggering a first update thread, and updating the data in the cache according to the data in the database by using the first update thread;
Step 106: querying whether the target data exists in the cache, and if so, executing step 107;
Step 107: judging whether the target data is invalid; if so, executing step 108, otherwise, executing step 109;
Step 108: triggering a second update thread, and updating the data in the cache according to the data in the database by using the second update thread;
Step 109: returning the target data.
An embodiment of the invention provides a method for improving the response speed of a system under high concurrency conditions. If the requested data has a high timeliness requirement, the cache is queried for the target data; if the target data exists, it is returned and a new thread is triggered to update the data in the cache. If the requested data has a low timeliness requirement, the cache is queried for the target data and it is judged whether the target data has expired; if so, a new thread is triggered to update the data in the cache, otherwise the target data is returned. With this method, when the system receives a concurrent request under high concurrency conditions, a new thread can be triggered to update the cache without affecting the query of the main thread. Unlike the existing scheme, there is no need to wait for the cache update to complete before returning a result, so the user does not spend a long time waiting for the query result, and the response speed of the system under high concurrency conditions is improved.
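For illustration only, the following is a minimal Java sketch of the flow of steps 102 to 109, assuming hypothetical Cache and Database interfaces, an executor standing in for the first and second update threads, and a caller that has already performed the timeliness classification of step 103. It is one possible reading of the method, not the patented implementation.

```java
import java.util.Optional;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch of steps 102-109. Cache and Database are assumed interfaces,
// and the short-lived "update threads" are modelled as tasks on an executor.
public class ConcurrentQueryHandler {

    public interface Cache {
        Optional<String> get(String key);      // query the cache
        void put(String key, String value);    // refresh a cache entry
        boolean isExpired(String key);         // staleness check used in step 107
    }

    public interface Database {
        String load(String key);               // authoritative data source
    }

    private final Cache cache;
    private final Database database;
    private final ExecutorService updatePool = Executors.newCachedThreadPool();

    public ConcurrentQueryHandler(Cache cache, Database database) {
        this.cache = cache;
        this.database = database;
    }

    // Handles one concurrent request; highTimeliness is the result of step 103.
    public String handle(String key, boolean highTimeliness) {
        Optional<String> cached = cache.get(key);
        if (highTimeliness) {
            // Steps 104 + 105: return the cached value at once and refresh the
            // cache in a separate "first update thread".
            updatePool.submit(() -> cache.put(key, database.load(key)));
            return cached.orElse("query failure");   // first query failure
        }
        // Steps 106-109: data with a low timeliness requirement.
        if (cached.isEmpty()) {
            return "query failure";                   // second query failure
        }
        if (cache.isExpired(key)) {
            // Step 108: trigger the "second update thread" and fail fast.
            updatePool.submit(() -> cache.put(key, database.load(key)));
            return "query failure";                   // third query failure
        }
        return cached.get();                          // step 109: return target data
    }
}
```

In this sketch, returning "query failure" immediately mirrors the first, second and third query failure results described in the embodiments below.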
In one embodiment of the invention, the method further comprises:
in step 104, when the query finds that the target data does not exist in the cache, returning a first query failure result;
further comprising:
in step 106, when the query finds that the target data does not exist in the cache, returning a second query failure result;
further comprising:
in step 107, when the target data is invalid, returning a third query failure result.
In the embodiment of the invention, if the target data cannot be queried, a query failure result is returned immediately; no thread is tied up waiting for the cache to be updated before a result is output, which improves the response speed of the system.
In an embodiment of the present invention, after the data in the cache is updated according to the data in the database by the first update thread, the method further comprises:
ending the first update thread when the update is completed;
and after the data in the cache is updated according to the data in the database by the second update thread, the method further comprises:
ending the second update thread when the update is completed.
Because an update thread is ended immediately after its update completes, it does not keep occupying system resources and slowing the system down, the query of the main thread is not affected, and the stability of the system is maintained.
In one embodiment of the present invention, determining whether the target data is invalid includes:
determining the expiration time of the data currently stored in the cache;
and judging whether the expiration time exceeds a preset time threshold, if so, determining that the target data is invalid, and otherwise, determining that the target data is not invalid.
Wherein, judging whether the target data is invalid comprises:
determining the expiration time of the data currently stored in the cache through a caching algorithm, such as first-in first-out (FIFO), least recently used (LRU) or least frequently used (LFU) page replacement; and judging whether the expiration time exceeds a preset time threshold, if so, determining that the target data is invalid, otherwise, determining that the target data is not invalid.
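One way to read this check is to record when each entry was written and treat it as invalid once its age exceeds the preset threshold. The sketch below makes that assumption; the class and method names are illustrative, not taken from the patent.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the staleness check: an entry is considered invalid once it has
// been in the cache longer than a preset time threshold.
public class ExpiryChecker {

    private final Map<String, Instant> writeTimes = new ConcurrentHashMap<>();
    private final Duration threshold;   // the preset time threshold

    public ExpiryChecker(Duration threshold) {
        this.threshold = threshold;
    }

    // Called whenever an entry is written or refreshed in the cache.
    public void recordWrite(String key) {
        writeTimes.put(key, Instant.now());
    }

    // Returns true when the target data should be treated as invalid.
    public boolean isInvalid(String key) {
        Instant writtenAt = writeTimes.get(key);
        if (writtenAt == null) {
            return true;   // never cached: treat as invalid
        }
        Duration age = Duration.between(writtenAt, Instant.now());
        return age.compareTo(threshold) > 0;   // exceeds the preset threshold
    }
}
```

The caching algorithm mentioned above (FIFO, LRU or LFU) would govern which entries are kept in the cache; this check only governs whether a kept entry is still considered valid.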
In an embodiment of the present invention, before step 102, further comprising:
adding at least one concurrent request sent by a user into an asynchronous queue;
acquiring a current concurrent request, comprising:
acquiring the current concurrent request from the asynchronous queue;
returning the target data, including:
and sending the target data to an asynchronous queue.
The server side adds the concurrent requests to the asynchronous queue and starts a separate thread to process the requests in the queue, which achieves the asynchronous effect; finally, the target data is sent to the asynchronous queue so that the user can clearly obtain the query result.
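A minimal sketch of this asynchronous handling follows, assuming a blocking request queue, a response queue and a single worker thread; the names and the single-worker setup are illustrative assumptions, not taken from the patent.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Function;

// Sketch of the asynchronous queue: the server side enqueues concurrent
// requests, a separate worker thread processes them, and results (the target
// data or "query failure") are delivered back through a response queue.
public class AsyncRequestQueue {

    private final BlockingQueue<String> requests = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> responses = new LinkedBlockingQueue<>();

    // Adds a concurrent request sent by a user to the asynchronous queue.
    public void submit(String request) throws InterruptedException {
        requests.put(request);
    }

    // The user obtains the query result from here.
    public String takeResponse() throws InterruptedException {
        return responses.take();
    }

    // Starts a separate thread that processes requests in the queue.
    public void startWorker(Function<String, String> handler) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String request = requests.take();          // current concurrent request
                    responses.put(handler.apply(request));     // target data or failure result
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();            // stop when interrupted
            }
        });
        worker.setDaemon(true);
        worker.start();
    }
}
```

A single worker is shown for simplicity; the separate thread described above could equally be a pool of workers.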
As shown in fig. 2, an embodiment of the present invention provides a method for improving the response speed of a system under high concurrency conditions, comprising:
Step 201: initializing a database, dividing the data in the database in advance into data with high timeliness requirements and data with low timeliness requirements, and synchronizing the data that needs to be cached into the cache.
In the embodiment of the invention, the database is initialized, the data in the database is divided into data with high timeliness requirements and data with low timeliness requirements, and the data that needs to be cached is then synchronized into the cache according to a specific caching algorithm, such as first-in first-out (FIFO), least recently used (LRU) or least frequently used (LFU) page replacement.
For example, the real-time access amount of a webpage is data with a high timeliness requirement, while the access amount of the webpage on the previous day is data with a low timeliness requirement.
Step 202: adding at least one concurrent request sent by a user to the asynchronous queue, and judging whether the target data requested by the current concurrent request is data with a high timeliness requirement; if so, executing step 203 and, at the same time, step 206, otherwise, executing step 207;
in one embodiment of the invention, the server adds the concurrent requests into the asynchronization, and starts a separate thread to process the requests in the asynchronous queue, thereby achieving the asynchronous effect.
For example, if the first concurrent request queries the real-time access amount of a certain webpage, step 203 is executed and step 206 is executed at the same time; if the second concurrent request queries the previous-day access amount of that webpage, step 207 is executed.
Step 203: querying whether the target data exists in the cache; if so, executing step 204, otherwise executing step 205.
Step 204: and returning the target data.
Specifically, if the real-time access amount of a certain webpage is found to be 10000 in the cache, the target data is returned.
Step 205: and returning the result that the first query fails.
Specifically, if the target data does not exist in the cache, "query failure" is returned.
Step 206: triggering a first updating thread, and updating the data in the cache according to the data in the database by using the first updating thread;
in an embodiment of the present invention, after updating the data in the cache according to the data in the database by using the first update thread, the method further includes: and when the updating is completed, ending the first updating thread.
Specifically, when the access amount of the webpage increases from 10000 to 11000, the data is updated by the first update thread, and the first update thread is then ended.
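As a toy illustration of this behaviour (not the patented code), the update thread below performs a single refresh and then terminates; the map stands in for the cache, and the new value 11000 would in practice be read from the database.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OneShotUpdateDemo {

    public static void main(String[] args) throws InterruptedException {
        Map<String, String> cache = new ConcurrentHashMap<>();
        cache.put("page:realtime-visits", "10000");

        // Hypothetical "first update thread": performs one refresh, after which
        // run() returns and the thread ends, so it does not keep occupying
        // system resources.
        Thread firstUpdateThread = new Thread(
                () -> cache.put("page:realtime-visits", "11000"));
        firstUpdateThread.start();

        firstUpdateThread.join();   // only so the demo prints after the refresh;
                                    // in the patent's flow the main thread does not wait
        System.out.println(cache.get("page:realtime-visits"));   // prints 11000
    }
}
```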
Step 207: and inquiring whether target data exists in the cache or not, if so, executing step 209, and if not, executing step 208.
Step 208: returning a result of the second query failure;
specifically, the access amount of a certain webpage queried in the cache is 10000, and if target data does not exist in the cache, a query failure is returned.
Step 209: and judging whether the target data is invalid, and if so, executing step 211. If not, go to step 210.
Wherein, judging whether the target data is invalid comprises:
determining the expiration time of the data currently stored in the cache through a caching algorithm, such as first-in first-out (FIFO), least recently used (LRU) or least frequently used (LFU) page replacement; and judging whether the expiration time exceeds a preset time threshold, if so, determining that the target data is invalid, otherwise, determining that the target data is not invalid.
Step 210: and returning the target data.
Specifically, the cache shows that the access amount of a certain webpage on September 10 is 10000 and that this data expires at 24:00 on September 11. If the query time is 23:00 on September 11, the preset time threshold is not exceeded, and the target data is returned.
Step 211: and triggering a second updating thread, updating the data in the cache according to the data in the database by using the second updating thread, and returning a result of the failure of the third query.
Specifically, if the query time is later than 24:00 on September 11, the preset time threshold is exceeded, "query failure" is returned, and the second update thread is triggered to update the September 11 access amount of the webpage according to the data in the database.
As shown in fig. 3, an embodiment of the present invention provides an apparatus for improving the response speed of a system under high concurrency conditions, comprising: a dividing unit 301, an obtaining unit 302 and a response unit 303.
The dividing unit 301 is configured to divide the data in the database into data with high timeliness requirements and data with low timeliness requirements;
the obtaining unit 302 is configured to obtain a current concurrent request and send the current concurrent request to the response unit;
the response unit 303 is configured to perform:
A2: judging whether the target data requested by the current concurrent request is data with a high timeliness requirement; if so, executing A3 and A4 at the same time, otherwise, executing A5;
A3: querying whether the target data exists in the cache, and if so, returning the target data;
A4: triggering a first update thread, and updating the data in the cache according to the data in the database by using the first update thread;
A5: querying whether the target data exists in the cache, and if so, executing A6;
A6: judging whether the target data is invalid; if so, triggering a second update thread and updating the data in the cache according to the data in the database by using the second update thread, otherwise, returning the target data.
In an embodiment of the present invention, the response unit 303 is further configured to:
in A3, when the target data does not exist in the cache, return a first query failure result;
in A5, when the target data does not exist in the cache, return a second query failure result;
in A6, when the target data is invalid, return a third query failure result.
In an embodiment of the present invention, the response unit 303 is further configured to end the first update thread after the data in the cache is updated according to the data in the database by the first update thread,
and to end the second update thread after the data in the cache is updated according to the data in the database by the second update thread.
In an embodiment of the present invention, when judging whether the target data is invalid, the response unit 303 is specifically configured to determine the expiration time of the data currently stored in the cache and judge whether the expiration time exceeds a preset time threshold; if so, the target data is determined to be invalid, otherwise the target data is determined not to be invalid.
In one embodiment of the present invention, the apparatus further comprises: an adding unit.
The adding unit is used for adding at least one concurrent request sent by a user into the asynchronous queue;
the obtaining unit is configured to obtain the current concurrent request from the asynchronous queue;
the response unit 303, when executing the returning of the target data, is configured to send the target data to the asynchronous queue.
Because the information interaction, execution process, and other contents between the units in the device are based on the same concept as the method embodiment of the present invention, specific contents may refer to the description in the method embodiment of the present invention, and are not described herein again.
According to the above scheme, the method and device for improving the response speed of a system under high concurrency conditions provided by the embodiments of the invention have at least the following beneficial effects:
1. An embodiment of the invention provides a method for improving the response speed of a system under high concurrency conditions. If the requested data has a high timeliness requirement, the cache is queried for the target data; if the target data exists, it is returned and a new thread is triggered to update the data in the cache. If the requested data has a low timeliness requirement, the cache is queried for the target data and it is judged whether the target data has expired; if so, a new thread is triggered to update the data in the cache, otherwise the target data is returned. With this method, when the system receives a concurrent request under high concurrency conditions, a new thread can be triggered to update the cache without affecting the query of the main thread; unlike the existing scheme, there is no need to wait for the cache update to complete before returning a result, so the user does not spend a long time waiting for the query result, and the response speed of the system under high concurrency conditions is improved.
2. In the embodiment of the invention, whether the target data required by a concurrent request has a high or a low timeliness requirement, a result can be returned in time without waiting for the cache to be updated, which greatly reduces the user's waiting time and improves the user experience.
3. In the embodiment of the invention, if the requested data has a high timeliness requirement, the cache is queried for the target data and, if it exists, it is returned while a new thread is triggered to update the data in the cache; if the requested data has a low timeliness requirement, the cache is queried for the target data and it is judged whether the target data has expired, and if so a new thread is triggered to update the data in the cache. The new thread keeps the data updated while queries proceed, so response speed and data validity are both taken into account and the reliability of the system is improved.
4. In the embodiment of the invention, the first update thread is ended after it updates the data in the cache according to the data in the database, and the second update thread is ended after it updates the data in the cache according to the data in the database. A new update thread is therefore ended in time after its update is finished, does not keep occupying system resources, and does not affect the query of the main thread, which improves the stability of the system.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other similar elements in a process, method, article, or apparatus that comprises the element.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (2)

1. A method for improving the response speed of a system under high concurrency conditions, characterized in that
data in a database is divided in advance into data with high timeliness requirements and data with low timeliness requirements, and the method further comprises the following steps:
A1: acquiring a current concurrent request;
A2: judging whether the target data requested by the current concurrent request is data with a high timeliness requirement; if so, executing A3 and A4 at the same time, otherwise, executing A5;
A3: querying whether the target data exists in the cache, and if so, returning the target data;
A4: triggering a first update thread, and updating the data in the cache according to the data in the database by using the first update thread;
A5: querying whether the target data exists in the cache, and if so, executing A6;
A6: judging whether the target data is invalid; if so, triggering a second update thread and updating the data in the cache according to the data in the database by using the second update thread, otherwise, returning the target data;
further comprising:
in A3, when the target data does not exist in the cache, returning a first query failure result;
further comprising:
in A5, when the target data does not exist in the cache, returning a second query failure result;
further comprising:
in A6, when the target data is invalid, returning a third query failure result;
after the updating the data in the cache according to the data in the database by using the first update thread, further comprising:
when the updating is completed, ending the first updating thread;
after the updating the data in the cache according to the data in the database by using the second update thread, further comprising:
when the updating is completed, ending the second updating thread;
the judging whether the target data is invalid includes:
determining the expiration time of the data currently stored in the cache;
judging whether the expiration time exceeds a preset time threshold, if so, determining that the target data is invalid, otherwise, determining that the target data is not invalid;
further comprising before a 1:
adding at least one concurrent request sent by a user into an asynchronous queue;
the acquiring the current concurrent request comprises:
acquiring the current concurrent request from the asynchronous queue;
the returning the target data comprises:
and sending the target data to the asynchronous queue.
2. An apparatus for improving the response speed of a system under high concurrency conditions, comprising:
a dividing unit, an obtaining unit and a response unit;
the dividing unit is used for dividing the data in the database into data with high timeliness requirements and data with low timeliness requirements;
the obtaining unit is configured to obtain a current concurrent request and send the current concurrent request to the response unit;
the response unit is used for executing:
A2: judging whether the target data requested by the current concurrent request is data with a high timeliness requirement; if so, executing A3 and A4 at the same time, otherwise, executing A5;
A3: querying whether the target data exists in the cache, and if so, returning the target data;
A4: triggering a first update thread, and updating the data in the cache according to the data in the database by using the first update thread;
A5: querying whether the target data exists in the cache, and if so, executing A6;
A6: judging whether the target data is invalid; if so, triggering a second update thread and updating the data in the cache according to the data in the database by using the second update thread, otherwise, returning the target data;
the response unit is further configured to perform:
in A3, when the target data does not exist in the cache, returning a first query failure result;
in A5, when the target data does not exist in the cache, returning a second query failure result;
in A6, when the target data is invalid, returning a third query failure result;
the response unit is further configured to end the first update thread after the data in the cache is updated according to the data in the database by the first update thread, and to end the second update thread after the data in the cache is updated according to the data in the database by the second update thread;
the response unit is specifically configured to, when judging whether the target data is invalid, determine the expiration time of the data currently stored in the cache and judge whether the expiration time exceeds a preset time threshold; if so, the target data is determined to be invalid, otherwise the target data is determined not to be invalid;
further comprising: an adding unit;
the adding unit is used for adding at least one concurrent request sent by a user into the asynchronous queue;
the obtaining unit is configured to obtain the current concurrent request from the asynchronous queue;
the response unit is specifically configured to send the target data to the asynchronous queue when the target data is returned.
CN201910940925.7A 2019-09-30 2019-09-30 Method and device for improving response speed of system under high concurrency condition Active CN110704110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910940925.7A CN110704110B (en) 2019-09-30 2019-09-30 Method and device for improving response speed of system under high concurrency condition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910940925.7A CN110704110B (en) 2019-09-30 2019-09-30 Method and device for improving response speed of system under high concurrency condition

Publications (2)

Publication Number Publication Date
CN110704110A CN110704110A (en) 2020-01-17
CN110704110B true CN110704110B (en) 2021-09-14

Family

ID=69197978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910940925.7A Active CN110704110B (en) 2019-09-30 2019-09-30 Method and device for improving response speed of system under high concurrency condition

Country Status (1)

Country Link
CN (1) CN110704110B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9009125B2 (en) * 2010-10-13 2015-04-14 International Business Machiness Corporation Creating and maintaining order of a log stream
CN104598563B (en) * 2015-01-08 2018-09-04 北京京东尚科信息技术有限公司 High concurrent date storage method and device
CN108415759A (en) * 2017-02-09 2018-08-17 阿里巴巴集团控股有限公司 Processing method, device and the electronic equipment of message
CN108628891A (en) * 2017-03-21 2018-10-09 北京京东尚科信息技术有限公司 Realize method, apparatus, electronic equipment and the readable storage medium storing program for executing of data buffer storage layer
CN107341054B (en) * 2017-06-29 2020-06-16 广州市百果园信息技术有限公司 Task execution method and device and computer readable storage medium
CN107958018B (en) * 2017-10-17 2021-06-11 北京百度网讯科技有限公司 Method and device for updating data in cache and computer readable medium
CN108595282A (en) * 2018-05-02 2018-09-28 广州市巨硅信息科技有限公司 A kind of implementation method of high concurrent message queue
CN109543080B (en) * 2018-12-04 2020-11-06 北京字节跳动网络技术有限公司 Cache data processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110704110A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
US9613122B2 (en) Providing eventual consistency for multi-shard transactions
EP1492027B1 (en) Registering for and retrieving database table change information that can be used to invalidate cache entries
US9350826B2 (en) Pre-fetching data
US20170366609A1 (en) Synchronizing document replication in distributed systems
US5963945A (en) Synchronization of a client and a server in a prefetching resource allocation system
US20030172236A1 (en) Methods and systems for distributed caching in presence of updates and in accordance with holding times
US8495166B2 (en) Optimized caching for large data requests
EP2210177A1 (en) Statistical applications in oltp environment
EP2541423A1 (en) Replacement policy for resource container
CN108111325B (en) Resource allocation method and device
EP4216061A1 (en) Transaction processing method, system, apparatus, device, storage medium, and program product
US20130060810A1 (en) Smart database caching
CN110191168A (en) Processing method, device, computer equipment and the storage medium of online business datum
CN112307119A (en) Data synchronization method, device, equipment and storage medium
CN114116613A (en) Metadata query method, equipment and storage medium based on distributed file system
CN102780603A (en) Web traffic control method and device
CN111221828A (en) Method and terminal for improving consistency of database data and cache data
US9928174B1 (en) Consistent caching
US11016937B2 (en) Updateable distributed file framework
CN113687781A (en) Method, device, equipment and medium for pulling up thermal data
CN110737392A (en) Method, apparatus and computer program product for managing addresses in a storage system
CN110704110B (en) Method and device for improving response speed of system under high concurrency condition
US9317432B2 (en) Methods and systems for consistently replicating data
CN113326146A (en) Message processing method and device, electronic equipment and storage medium
US10503752B2 (en) Delta replication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 271000 Langchao science and Technology Park, 527 Dongyue street, Tai'an City, Shandong Province

Applicant after: INSPUR SOFTWARE Co.,Ltd.

Address before: No. 1036, Langchao Road, High-tech Zone, Ji'nan, Shandong

Applicant before: INSPUR SOFTWARE Co.,Ltd.

GR01 Patent grant