CN109150929A - Data request processing method and apparatus in a high-concurrency scenario - Google Patents
Data request processing method and apparatus in a high-concurrency scenario
- Publication number
- CN109150929A (application CN201710451364.5A / CN201710451364A)
- Authority
- CN
- China
- Prior art keywords
- data
- high concurrent
- virtual machine
- caching
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45504—Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45583—Memory management, e.g. access or allocation
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
An embodiment of the present invention discloses a data request processing method and apparatus for a high-concurrency scenario, relating to the field of computer technology. The method of the embodiment includes: periodically synchronizing high-concurrency data to a first virtual machine cache; obtaining data from the first virtual machine cache according to a data request; and, if the first virtual machine cache does not contain data corresponding to the data request, obtaining the data corresponding to the data request from a first in-memory database. With the above method, the interface response time can be shortened under high-concurrency data request scenarios, database pressure can be reduced, and the impact of high-concurrency data requests on normal business requests can be lessened.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a data request processing method and apparatus for a high-concurrency scenario.
Background
With the explosive growth of Internet data, more and more application servers face high-concurrency data request scenarios. For example, after a breaking news event, an online forum may see the number of posts and replies surge within a short period; the hot topics and trending news of portal sites such as Weibo and Zhihu may likewise see access volume spike sharply; an e-commerce website running a holiday promotion or flash-sale event may experience a surge in traffic; and online trading platforms such as securities firms and banks may see transaction volume spike within a short time.

In the prior art, in order to reduce the pressure that high-concurrency data requests place on MySQL databases, more and more application service providers adopt NoSQL database technology. In the course of realizing the present invention, the inventors found that the prior art has at least the following problem: because normal data and high-concurrency data may be assigned to the same NoSQL database shard, fetching high-concurrency data and normal data from the same place under a high-concurrency scenario not only makes the interface response time too long and increases database pressure, but also impacts normal business requests.
Summary of the invention
In view of this, embodiments of the present invention provide a data request processing method and apparatus for a high-concurrency scenario, so as to shorten the interface response time under high-concurrency data request scenarios, reduce database pressure, and reduce the impact of high-concurrency data requests on normal business requests.

To achieve the above object, according to one aspect of an embodiment of the present invention, a data request processing method in a high-concurrency scenario is provided.

The data request processing method in a high-concurrency scenario of the embodiment of the present invention includes: periodically synchronizing high-concurrency data to a first virtual machine cache; obtaining data from the first virtual machine cache according to a data request; and, if the first virtual machine cache does not contain data corresponding to the data request, obtaining the data corresponding to the data request from a first in-memory database.
Optionally, before the periodic synchronization of high-concurrency data to the first virtual machine cache, the method further includes: periodically preparing the high-concurrency data.

Optionally, periodically preparing the high-concurrency data includes: periodically obtaining high-concurrency data from a relational database, and storing the obtained high-concurrency data in a second in-memory database in the form of key-value pairs; and the periodic synchronization of high-concurrency data to the first virtual machine cache includes: synchronizing the high-concurrency data in the second in-memory database to the first virtual machine cache by way of an interface call.

Optionally, periodically preparing the high-concurrency data includes: periodically obtaining high-concurrency data from a relational database, and storing the obtained high-concurrency data in a second virtual machine cache in the form of key-value pairs; and the periodic synchronization of high-concurrency data to the first virtual machine cache includes: synchronizing the high-concurrency data in the second virtual machine cache to the first virtual machine cache by way of an interface call.

Optionally, the first virtual machine cache includes a JVM cache.

Optionally, the first in-memory database includes a Redis in-memory database.
To achieve the above object, according to another aspect of an embodiment of the present invention, a data request processing apparatus in a high-concurrency scenario is provided.

The data request processing apparatus in a high-concurrency scenario of the embodiment of the present invention includes: a synchronization module, configured to periodically synchronize high-concurrency data to a first virtual machine cache; and an obtaining module, configured to obtain data from the first virtual machine cache according to a data request, and, if the first virtual machine cache does not contain data corresponding to the data request, to obtain the data corresponding to the data request from a first in-memory database.

Optionally, the apparatus further includes: a data preparation module, configured to periodically prepare the high-concurrency data.

Optionally, the data preparation module periodically preparing the high-concurrency data includes: periodically obtaining high-concurrency data from a relational database, and storing the obtained high-concurrency data in a second in-memory database in the form of key-value pairs; and the synchronization module periodically synchronizing high-concurrency data to the first virtual machine cache includes: synchronizing the high-concurrency data in the second in-memory database to the first virtual machine cache by way of an interface call.

Optionally, the data preparation module periodically preparing the high-concurrency data includes: periodically obtaining high-concurrency data from a relational database, and storing the obtained high-concurrency data in a second virtual machine cache in the form of key-value pairs; and the synchronization module periodically synchronizing high-concurrency data to the first virtual machine cache includes: synchronizing the high-concurrency data in the second virtual machine cache to the first virtual machine cache by way of an interface call.

Optionally, the first virtual machine cache includes a JVM cache.

Optionally, the first in-memory database includes a Redis in-memory database.
To achieve the above object, according to yet another aspect of an embodiment of the present invention, a server is provided.

The server of the embodiment of the present invention includes: one or more processors; and a storage device for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the data request processing method in a high-concurrency scenario of the embodiment of the present invention.

To achieve the above object, according to still another aspect of an embodiment of the present invention, a computer-readable medium is provided.

The computer-readable medium of the embodiment of the present invention has a computer program stored thereon, and the program, when executed by a processor, implements the data request processing method in a high-concurrency scenario of the embodiment of the present invention.
One of the above embodiments has the following advantage or beneficial effect: in the embodiments of the present invention, high-concurrency data is periodically synchronized to the first virtual machine cache, and when a data request is received, data is first obtained from the first virtual machine cache, falling back to the first in-memory database only when the data cannot be obtained from the cache. In this way, when a high-concurrency data request is received, the required data can be obtained directly from the first virtual machine cache, which not only shortens the interface response time and reduces database pressure, but also reduces the impact of high-concurrency data requests on normal business requests.

Further effects of the above optional implementations are explained below in conjunction with the specific embodiments.
Brief description of the drawings
The accompanying drawings are provided for a better understanding of the present invention and do not constitute an undue limitation on the present invention. In the drawings:

Fig. 1 is a schematic diagram of the main steps of the data request processing method in a high-concurrency scenario according to an embodiment of the present invention;

Fig. 2 is a schematic flowchart of the data request processing method in a high-concurrency scenario according to specific embodiment one;

Fig. 3 is a schematic diagram of the main modules of the data request processing apparatus in a high-concurrency scenario according to an embodiment of the present invention;

Fig. 4 is an exemplary system architecture diagram to which an embodiment of the present invention may be applied;

Fig. 5 is a structural schematic diagram of a computer system suitable for realizing the server of an embodiment of the present invention.
Specific embodiments

Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, including various details of the embodiments to aid understanding, which should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present invention. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.

It should be pointed out that, in the absence of conflict, the embodiments of the present invention and the features in the embodiments may be combined with each other.
Fig. 1 is a schematic diagram of the main steps of the data request processing method in a high-concurrency scenario according to an embodiment of the present invention. As shown in Fig. 1, the method mainly comprises the following steps:

Step S101: periodically synchronize high-concurrency data to a first virtual machine cache.

In an embodiment of the present invention, data is split in advance into high-concurrency data and normal data, and the high-concurrency data is periodically synchronized to the first virtual machine cache. That is, the first virtual machine cache stores only high-concurrency data and does not store normal data. Here, high-concurrency data refers to data whose access volume is relatively large at a given moment, while normal data refers to data with ordinary access volume. For example, the hot topics and trending news of portal sites such as Weibo and Zhihu may be treated as high-concurrency data and ordinary topics as normal data; on an e-commerce website, the product and merchant data participating in holiday promotions or flash-sale events may be treated as high-concurrency data, and the product and merchant data of non-promotional items as normal data.

Here, the first virtual machine cache refers to the native virtual machine cache on the servers hosting the functional module that processes data requests (high-concurrency data requests or normal data requests). The first virtual machine cache may preferably be a JVM (Java Virtual Machine) cache. The JVM is a virtual machine capable of running Java bytecode, generally realized as a stack-based machine. For example, for a Java functional module deployed on 300 servers, the first virtual machine cache refers to the JVM caches on those 300 servers, and the high-concurrency data may be synchronized into the JVM caches on the 300 servers every 5 minutes.
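The periodic synchronization of step S101 can be sketched in Java as follows. This is a minimal illustration rather than the patented implementation: the `HotDataSource` interface, the cache map, and all names are hypothetical stand-ins, and a `ScheduledExecutorService` plays the role of the 5-minute timer mentioned above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch of step S101: periodically copy high-concurrency data into an in-JVM cache. */
public class JvmCacheSync {

    /** The "first virtual machine cache": an in-process map holding hot data only. */
    static final Map<String, String> LOCAL_JVM_CACHE = new ConcurrentHashMap<>();

    /** Stand-in for the interface that supplies the prepared high-concurrency data. */
    interface HotDataSource {
        Map<String, String> fetchHotData();
    }

    /** One synchronization pass: replace the cache contents with a fresh hot-data snapshot. */
    static void syncOnce(HotDataSource source) {
        Map<String, String> snapshot = source.fetchHotData();
        LOCAL_JVM_CACHE.keySet().retainAll(snapshot.keySet()); // evict entries that are no longer hot
        LOCAL_JVM_CACHE.putAll(snapshot);
    }

    /** Run the synchronization every 5 minutes, matching the example in the text. */
    static ScheduledExecutorService startPeriodicSync(HotDataSource source) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> syncOnce(source), 0, 5, TimeUnit.MINUTES);
        return timer;
    }
}
```

In a real deployment each of the 300 application servers would run this timer against the interface exposed by the data preparation side; here a lambda can stand in for that interface.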
Step S102: obtain data from the first virtual machine cache according to a data request. If the first virtual machine cache contains data corresponding to the data request, proceed to step S103; otherwise, proceed to step S104.

Step S103: take the data corresponding to the data request out of the first virtual machine cache.

Step S104: obtain the data corresponding to the data request from a first in-memory database.

In an embodiment of the present invention, the first in-memory database stores both normal data and high-concurrency data. In a specific implementation, the first in-memory database may preferably be a Redis in-memory database. For example, in addition to the 300 servers on which a Java functional module is deployed, the first in-memory database may be deployed separately on another 16 servers. It will be appreciated that the above server counts and deployment scheme are merely exemplary. Without affecting the implementation of the present invention, those skilled in the art may modify the server counts and deployment scheme. For example, the Java functional module may be deployed on 200 servers and the first in-memory database on 10 servers; or the Java functional module may be deployed on 200 servers, with the first in-memory database deployed on 20 of those 200 servers.
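The cache-first read path of steps S102 to S104 can be sketched as follows. The sketch is illustrative only: `InMemoryDatabase` is a hypothetical stand-in for a Redis client (the patent does not prescribe a particular client library), and the local JVM cache is modeled as a plain concurrent map.

```java
import java.util.Map;

/** Sketch of steps S102-S104: read from the local JVM cache first,
 *  then fall back to the first in-memory database on a miss. */
public class CacheFirstReader {

    /** Stand-in for the first in-memory database (e.g. Redis); no client library is prescribed. */
    interface InMemoryDatabase {
        String get(String key);
    }

    private final Map<String, String> localJvmCache;
    private final InMemoryDatabase firstInMemoryDb;

    public CacheFirstReader(Map<String, String> localJvmCache, InMemoryDatabase firstInMemoryDb) {
        this.localJvmCache = localJvmCache;
        this.firstInMemoryDb = firstInMemoryDb;
    }

    /** S102: look in the JVM cache; S103: a hit returns directly; S104: a miss falls back to the database. */
    public String getData(String requestKey) {
        String cached = localJvmCache.get(requestKey); // only hot data lives here
        if (cached != null) {
            return cached;                             // served from the local cache
        }
        return firstInMemoryDb.get(requestKey);        // normal data, or a hot-data miss
    }
}
```

Because the hot data is served entirely in-process, only normal-data requests (and the rare hot-data miss) cross the network to the in-memory database.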
Further, in a preferred embodiment, in addition to the above steps, the data request processing method further includes the following steps: periodically preparing high-concurrency data based on a data preparation module; and periodically synchronizing the prepared high-concurrency data to the first virtual machine cache.

In one preferred embodiment, the above step of preparing high-concurrency data includes: the data preparation module periodically obtains high-concurrency data from a relational database, and stores the obtained high-concurrency data in a second in-memory database in the form of key-value pairs. For example, in addition to the 300 servers hosting the Java functional module, the data preparation module may be deployed on another 10 servers to obtain the high-concurrency data, and the second in-memory database may be deployed on yet another server to store the obtained high-concurrency data. The high-concurrency data in the second in-memory database may then be periodically synchronized into the first virtual machine cache by way of an interface call. In a specific implementation, the second in-memory database may preferably be a Redis in-memory database.

In another preferred embodiment, the above step of preparing high-concurrency data includes: the data preparation module periodically obtains high-concurrency data from a relational database, and stores the obtained high-concurrency data in the form of key-value pairs in a second virtual machine cache, i.e. the virtual machine cache on the servers where the data preparation module resides. For example, in addition to the 300 servers hosting the Java functional module, the data preparation module may be deployed on another 10 servers to obtain the high-concurrency data, and the obtained high-concurrency data may be stored in the virtual machine caches on those 10 servers (the servers hosting the data preparation module), i.e. in the second virtual machine cache. The high-concurrency data in the second virtual machine cache may then be periodically synchronized into the first virtual machine cache by way of an interface call.
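The preparation step described above, in which hot rows are read from a relational database and staged as key-value pairs, might be sketched as follows. All names are hypothetical: `RelationalSource` abstracts the SQL query (in practice JDBC), and `StagingStore` abstracts the second Redis database or second virtual machine cache.

```java
import java.util.List;

/** Sketch of the preparation step: pull hot rows from a relational database and
 *  stage them as key-value pairs in a second store (a second Redis instance or
 *  the preparation servers' own virtual machine cache). */
public class DataPreparationModule {

    /** One hot row from the relational database, e.g. a merchant freight rule. */
    record HotRow(String key, String value) {}

    /** Stand-in for the relational-database query (in practice, JDBC and SQL). */
    interface RelationalSource {
        List<HotRow> queryHotData();
    }

    /** Stand-in for the second store, written to in key-value form. */
    interface StagingStore {
        void put(String key, String value);
    }

    /** One preparation cycle: read the hot rows and write each out as a key-value pair. */
    static int prepareOnce(RelationalSource source, StagingStore staging) {
        List<HotRow> rows = source.queryHotData();
        for (HotRow row : rows) {
            staging.put(row.key(), row.value());
        }
        return rows.size(); // number of staged entries
    }
}
```

Running this on a small, separate fleet of servers is what isolates the relational database and its interfaces from the request-serving fleet.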
According to the embodiments of the present invention, high-concurrency data is periodically synchronized to the first virtual machine cache, and when a data request is received, data is first obtained from the first virtual machine cache (i.e. the native virtual machine cache), falling back to the first in-memory database only when the data cannot be obtained from the cache. In this way, when a high-concurrency data request is received, the required data can be obtained directly from the local virtual machine cache, which not only shortens the interface response time and reduces database pressure, but also reduces the impact of high-concurrency data requests on normal business requests. Further, by providing a data preparation module and running it in isolation on a small number of servers, the pressure on the relevant interfaces and databases during data acquisition can be reduced; moreover, even if an exception occurs in the relevant interfaces during data acquisition, the operation of the original Java functional module is not affected.
The implementation process of the technical solution of the present invention is described in detail below in conjunction with specific embodiments one and two.

Fig. 2 is a schematic flowchart of the data request processing method in a high-concurrency scenario according to specific embodiment one. In specific embodiment one, the freight calculation application of an e-commerce website is taken as an example to introduce the specific implementation process of the technical solution of the present invention. In this embodiment, the freight calculation rules of merchants participating in flash-sale events are treated as high-concurrency data, the freight calculation rules of merchants not participating in flash-sale events are treated as normal data, and a calculation center serves as the functional module that processes data requests. As shown in Fig. 2, the data request processing method in a high-concurrency scenario of specific embodiment one mainly includes the following flow:
Step one: the calculation center periodically synchronizes the high-concurrency data prepared by the data preparation module into its local JVM cache.

In this embodiment, the "local JVM cache" is a specific example of the "first virtual machine cache" shown in Fig. 1. The local JVM cache holds only high-concurrency data and does not store normal data. For example, the calculation center may synchronize the high-concurrency data prepared by the data preparation module into the local JVM cache every 10 minutes by way of an interface call.

Step two: the calculation center receives a settlement request from a settlement page.

In this embodiment, the settlement request is a specific example of the "data request" shown in Fig. 1. A settlement request may include information such as a merchant identifier, a product identifier, a delivery address, and an order amount. For example, one settlement request may include the following information: the merchant identifier is manufacturer A, the product identifier is dried fruit, the delivery address is Tibet, the order amount is 69 yuan, and the product weight is 1.5 kg. Another settlement request may include the following information: the merchant identifier is manufacturer B, the product identifier is clothing, the delivery address is Guangzhou, and the order amount is 399 yuan.
Step three: the calculation center obtains the merchant freight calculation rules from the local JVM cache according to the settlement request. If the local JVM cache contains freight calculation rules corresponding to the settlement request, the merchant freight calculation rules are taken out of the local JVM cache and the flow proceeds directly to step five; otherwise, the flow proceeds to step four.

In step three, the calculation center may look up the bound freight calculation rules according to the merchant identifier and/or the product identifier. For example, for merchant identifier manufacturer A and product identifier dried fruit, the freight calculation rules bound to the merchant identifier and product identifier may be: free shipping nationwide for orders of 99 yuan or more; free shipping nationwide for orders under 99 yuan with product weight within 1 kg; and a freight charge of 10 yuan for orders under 99 yuan with product weight above 1 kg. As another example, for merchant identifier manufacturer B, the freight calculation rules bound to the merchant identifier may be: free shipping nationwide for orders of 199 yuan or more; and a freight charge of 5 yuan for orders under 199 yuan.
Step four: the calculation center obtains the freight calculation rules corresponding to the settlement request from a first Redis in-memory database.

In this embodiment, the first Redis in-memory database is a specific example of the "first in-memory database" shown in Fig. 1. The first Redis in-memory database holds both high-concurrency data and normal data. If the data is successfully obtained from the first Redis in-memory database, the flow proceeds to step five.

Step five: the calculation center calculates the freight according to the freight calculation rules corresponding to the settlement request, and returns the freight calculation result to the settlement page.
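The freight rules quoted in step three can be expressed directly in code. The thresholds below reproduce the manufacturer A and manufacturer B examples from the text; the method names and class layout are illustrative, not part of the patent.

```java
/** Sketch of the freight rules used in embodiment one; thresholds follow the
 *  manufacturer A and manufacturer B examples given in the text. */
public class FreightCalculator {

    /** Manufacturer A: free shipping at 99 yuan or more, or under 1 kg; otherwise 10 yuan. */
    static int freightForManufacturerA(int orderAmountYuan, double weightKg) {
        if (orderAmountYuan >= 99) return 0; // nationwide free shipping
        if (weightKg <= 1.0) return 0;       // under 99 yuan but within 1 kg
        return 10;                           // under 99 yuan and over 1 kg
    }

    /** Manufacturer B: free shipping at 199 yuan or more; otherwise 5 yuan. */
    static int freightForManufacturerB(int orderAmountYuan) {
        return orderAmountYuan >= 199 ? 0 : 5;
    }
}
```

For the two example settlement requests above, these rules yield a freight charge of 10 yuan for manufacturer A (69 yuan, 1.5 kg) and free shipping for manufacturer B (399 yuan).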
Further, in addition to steps one to five, the data request processing method of specific embodiment one further includes: deploying the data preparation module on servers independent of the calculation center, and preparing the high-concurrency data by the data preparation module, specifically as follows. The data preparation module periodically obtains, by way of interface calls, the product identifiers of the flash-sale events of the current period and the previous period, and from these obtains the identifiers of the merchants participating in those flash-sale events; it then obtains, from a relational database according to the merchant identifiers and product identifiers, the freight calculation rules of the merchants participating in the flash sales; next, the obtained merchant freight calculation rules are stored in a second Redis in-memory database in the form of key-value pairs. In addition, some merchant freight calculation rules for flash-sale events may also be stored in the second Redis in-memory database by manual addition. For example, if flash-sale events take place at 12:00, 14:00, and 16:00, then at 13:00 the data preparation module may have prepared the merchant freight calculation rules corresponding to the 12:00 and 14:00 flash sales, and at 15:00 it may have prepared the merchant freight calculation rules corresponding to the 14:00 and 16:00 flash sales.
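The session-selection behavior in the 12:00/14:00/16:00 example, where the module keeps ready the rules for the most recent session and the next upcoming one, can be sketched as follows; the method name and list-based representation are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of which flash-sale sessions' freight rules the preparation module keeps
 *  prepared: the most recent session and the next upcoming one, as in the
 *  12:00/14:00/16:00 example in the text. */
public class SessionSelector {

    /** Given the scheduled session hours (ascending) and the current hour,
     *  return the latest session already started and the next one to come. */
    static List<Integer> sessionsToPrepare(List<Integer> sessionHours, int nowHour) {
        Integer previous = null;
        Integer upcoming = null;
        for (int hour : sessionHours) {
            if (hour <= nowHour) previous = hour;                    // latest past session wins
            if (hour > nowHour && upcoming == null) upcoming = hour; // first future session wins
        }
        List<Integer> result = new ArrayList<>();
        if (previous != null) result.add(previous);
        if (upcoming != null) result.add(upcoming);
        return result;
    }
}
```

With sessions at 12, 14, and 16 o'clock, this selects the 12:00 and 14:00 sessions at 13:00, and the 14:00 and 16:00 sessions at 15:00, matching the example.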
In specific embodiment one, the merchant freight calculation rules are first fetched from the local JVM cache, and only if the data is not obtained there are they fetched from the first Redis in-memory database, which reduces network consumption and shortens the interface response time. Further, by setting up a separate data preparation module to prepare the high-concurrency data, the requests to the relevant interfaces can be reduced; and by fetching high-concurrency data and normal data from different places, the impact on normal business requests is reduced.
Specific embodiment two: the hot news access application on a news media web server is taken as an example to introduce the specific implementation process of the technical solution of the present invention. In this specific embodiment, trending news whose access volume is expected to be relatively large is treated as high-concurrency data, and ordinary news whose access volume is expected to be relatively small is treated as normal data. The data request processing method in specific embodiment two mainly includes the following flow:
Step one: the news media web server periodically synchronizes high-concurrency data into its local JVM cache.

In this embodiment, the "local JVM cache" is a specific example of the "first virtual machine cache" shown in Fig. 1. The local JVM cache holds only trending news and does not store ordinary news. For example, the news media web server may synchronize the trending news prepared by the data preparation module into the local JVM cache every 5 minutes by way of an interface call.

Step two: the news media web server receives a news browsing request from a user. In this embodiment, the news browsing request is a specific example of the "data request" shown in Fig. 1.

Step three: the news media web server obtains the news from the local JVM cache according to the news browsing request. If the local JVM cache contains news corresponding to the news browsing request, the news is taken out of the local JVM cache and the flow proceeds directly to step five; if the local JVM cache does not contain news corresponding to the news browsing request, the flow proceeds to step four.

Step four: the news media web server obtains the news corresponding to the news browsing request from the first Redis in-memory database.

In this embodiment, the first Redis in-memory database is a specific example of the "first in-memory database" shown in Fig. 1. The first Redis in-memory database holds both high-concurrency data and normal data. If the data is successfully obtained from the first Redis in-memory database, the flow proceeds to step five.

Step five: the news media web server returns the news corresponding to the news browsing request to the user.
Further, the data request processing method in specific embodiment two also includes: deploying the data preparation module on servers independent of the news media web server, and preparing the high-concurrency data by the data preparation module, specifically as follows. The data preparation module periodically obtains trending news, and stores the obtained trending news in a second Redis in-memory database in the form of key-value pairs. For example, the data preparation module may obtain trending news every 5 minutes and store the obtained trending news into the second Redis in-memory database; and the news media web server may synchronize the trending news in the second Redis in-memory database into the local JVM cache every 5 minutes.
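The two periodic steps of embodiment two, staging trending news and copying it into the local cache, compose into a simple two-stage pipeline. The sketch below wires the two stages together with plain maps; the names `SECOND_REDIS` and `LOCAL_JVM_CACHE` are illustrative stand-ins, and in practice each stage would run on its own 5-minute timer.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of the embodiment-two pipeline: the preparation module stages trending
 *  news in a second store, and the web server copies that store into its local
 *  JVM cache. */
public class NewsCachePipeline {

    /** Stand-in for the second Redis in-memory database. */
    static final Map<String, String> SECOND_REDIS = new ConcurrentHashMap<>();

    /** The web server's local JVM cache (trending news only). */
    static final Map<String, String> LOCAL_JVM_CACHE = new ConcurrentHashMap<>();

    /** Preparation stage: stage a batch of trending news as key-value pairs. */
    static void prepareTrendingNews(Map<String, String> trendingNews) {
        SECOND_REDIS.putAll(trendingNews);
    }

    /** Synchronization stage: copy the staged trending news into the local JVM cache. */
    static void syncToLocalCache() {
        LOCAL_JVM_CACHE.putAll(SECOND_REDIS);
    }
}
```

Once both stages have run, a browsing request for trending news is answered from `LOCAL_JVM_CACHE` without any network hop.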
In specific embodiment two, the news is first fetched from the local JVM cache, and only if it is not obtained there is it fetched from the first Redis in-memory database, which reduces network consumption and shortens the interface response time. Further, by setting up a separate data preparation module to prepare the trending news, the requests to the relevant interfaces can be reduced; and by fetching high-concurrency data and normal data from different places, the impact on normal business requests is reduced.
Fig. 3 is a schematic diagram of the main modules of the data request processing apparatus under a high-concurrency scenario according to an embodiment of the present invention. As shown in Fig. 3, the data request processing apparatus 300 under a high-concurrency scenario mainly includes: a synchronization module 301 and an acquisition module 302.
The synchronization module 301 is configured to periodically synchronize high-concurrency data to a first virtual machine cache. The first virtual machine cache is preferably a JVM (Java Virtual Machine) cache. For example, for 300 servers on which a certain Java functional module is deployed, the synchronization module 301 may synchronize the high-concurrency data into the JVM caches on these 300 servers every 5 minutes.
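The periodic synchronization performed by module 301 can be sketched with a scheduled task, as below. This is a hedged, self-contained sketch: a map stands in for the first in-memory database, the 5-minute period merely echoes the example in the text, and none of these names appear in the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Synchronization-module sketch: a scheduled task that, at a fixed interval,
// refreshes the local JVM cache from the in-memory-database snapshot.
public class CacheSyncTask {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Map<String, String> redisStandIn = new ConcurrentHashMap<>();
    private final Map<String, String> jvmCache = new ConcurrentHashMap<>();

    public void putRedis(String key, String value) {
        redisStandIn.put(key, value);
    }

    /** One synchronization pass: copy the current snapshot into the JVM cache. */
    public void syncOnce() {
        jvmCache.putAll(redisStandIn);
    }

    /** Run syncOnce at a fixed period, e.g. periodMinutes = 5 as in the text. */
    public void start(long periodMinutes) {
        scheduler.scheduleAtFixedRate(this::syncOnce, 0, periodMinutes, TimeUnit.MINUTES);
    }

    public String localGet(String key) {
        return jvmCache.get(key);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

A real deployment would replace `redisStandIn` with a Redis client call and run one such task per application server, so all 300 JVM caches converge within one period.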
It should be pointed out that, in the embodiments of the present invention, data is divided in advance into high-concurrency data and normal data. High-concurrency data refers to data receiving a relatively large volume of accesses at a given moment, while normal data refers to data receiving an ordinary volume of accesses. For example, on an e-commerce website, commodity and merchant data participating in a holiday sale or flash-sale activity may be treated as high-concurrency data, while commodity and merchant data not participating in such promotions may be treated as normal data. It will be appreciated that, without affecting the implementation of the present invention, those skilled in the art may flexibly divide high-concurrency data and normal data according to the concrete application scenario. For example, trending topics on a microblog may be treated as high-concurrency data, and ordinary topics as normal data.
The acquisition module 302 is configured to obtain data from the first virtual machine cache according to a data request. If data corresponding to the data request exists in the first virtual machine cache, the acquisition module 302 obtains the data from the first virtual machine cache; otherwise, the acquisition module 302 obtains the data corresponding to the data request from a first in-memory database.
It should be pointed out that, in the embodiments of the present invention, the first in-memory database stores both normal data and high-concurrency data. The first in-memory database is preferably a Redis in-memory database. In one concrete deployment scheme, a certain Java functional module (such as a freight calculation module) is deployed on 300 servers, and the first in-memory database is deployed on 16 other servers. It will be appreciated that the above numbers of servers and deployment schemes are merely exemplary.
Further, the data request processing apparatus of the embodiment of the present invention also includes a data preparation module. The data preparation module is configured to periodically prepare high-concurrency data, and the synchronization module 301 is configured to periodically synchronize the high-concurrency data prepared by the data preparation module to the first virtual machine cache.
Specifically, in one embodiment, the data preparation module periodically obtains high-concurrency data from a relational database and stores the obtained high-concurrency data into a second in-memory database in the form of key-value pairs. The synchronization module 301 then synchronizes the high-concurrency data in the second in-memory database to the first virtual machine cache by means of interface calls. In one concrete deployment scheme, a certain Java functional module is deployed on 300 servers, the data preparation module is separately deployed on 10 other servers, and the second in-memory database is deployed on a further server. In concrete implementation, the second in-memory database is also preferably a Redis in-memory database. It will be appreciated that the above numbers of servers and deployment schemes are merely exemplary.
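The preparation step, flattening relational rows into key-value pairs before they are written to the second in-memory store, can be sketched as below. The `"sku:" + id` key scheme is an assumption for illustration; the patent says only that the data is stored "in the form of key-value pairs".

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Data-preparation sketch: turn (id, payload) rows pulled from a relational
// database into cacheable key-value pairs. Key naming is illustrative.
public class DataPreparation {
    /** row[0] = record id, row[1] = serialized payload for that record. */
    public static Map<String, String> toKeyValuePairs(String[][] rows) {
        Map<String, String> kv = new LinkedHashMap<>();
        for (String[] row : rows) {
            kv.put("sku:" + row[0], row[1]);
        }
        return kv;
    }
}
```

The resulting map is exactly the shape a key-value store expects, so the subsequent synchronization step becomes a bulk copy rather than a per-row transformation.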
In another embodiment, the data preparation module periodically obtains high-concurrency data from a relational database and stores the obtained high-concurrency data into a second virtual machine cache in the form of key-value pairs. The synchronization module 301 then synchronizes the high-concurrency data in the second virtual machine cache to the first virtual machine cache by means of interface calls. Here, the second virtual machine cache refers to the virtual machine cache on the servers where the data preparation module is located. For example, in one concrete deployment scheme, a certain Java functional module (such as the calculation center shown in Fig. 2) is deployed on 300 servers and the data preparation module is separately deployed on 10 other servers; the first virtual machine cache then refers to the virtual machine caches on the 300 servers where the Java functional module is deployed, and the second virtual machine cache refers to the virtual machine caches on the 10 servers where the data preparation module is deployed. It will be appreciated that the above numbers of servers and deployment schemes are merely exemplary.
According to the embodiments of the present invention, the synchronization module synchronizes high-concurrency data to the first virtual machine cache, and the acquisition module first obtains data from the first virtual machine cache, falling back to the first in-memory database only when the data cannot be obtained there. In this way, when a high-concurrency data request is received, the required data can be obtained directly from the local virtual machine cache, which not only shortens the interface response time and reduces pressure on the database, but also reduces the impact of high-concurrency data requests on normal business requests. Further, by providing a separate data preparation module, the pressure on the database when high-concurrency data is obtained and prepared can be further reduced.
Fig. 4 shows an exemplary system architecture 400 to which the data request processing method or the data request processing apparatus under a high-concurrency scenario of the embodiments of the present invention may be applied.
As shown in Fig. 4, the system architecture 400 may include terminal devices 401, 402 and 403, a network 404 and a server 405. The network 404 serves as the medium providing communication links between the terminal devices 401, 402, 403 and the server 405. The network 404 may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may use the terminal devices 401, 402, 403 to interact with the server 405 through the network 404 to receive or send messages and the like. Various client applications may be installed on the terminal devices 401, 402, 403, such as shopping applications, web browser applications, search applications, instant messaging tools, email clients and social platform software.
The terminal devices 401, 402, 403 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, laptop computers and desktop computers.
The server 405 may be a server providing various services, for example a back-end management server providing support for a shopping website browsed by the user with the terminal devices 401, 402, 403. The back-end management server may perform freight calculation according to a received checkout page request and feed the freight calculation result back to the terminal device.
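As a toy illustration of the freight-calculation example above, a back-end handler might map an order subtotal to a freight quote. The flat rate and free-shipping threshold below are invented for illustration and do not come from the patent.

```java
// Toy freight calculation: flat rate below an invented free-shipping
// threshold, free above it. All amounts are in cents.
public class FreightCalculator {
    static final long FREE_SHIPPING_THRESHOLD_CENTS = 9_900; // assumed value
    static final long FLAT_RATE_CENTS = 600;                 // assumed value

    /** Returns the freight charge in cents for an order subtotal in cents. */
    public static long freightFor(long subtotalCents) {
        return subtotalCents >= FREE_SHIPPING_THRESHOLD_CENTS ? 0 : FLAT_RATE_CENTS;
    }
}
```

Under load, the rates and rules such a module reads are exactly the kind of high-concurrency data the patent proposes to serve from the local JVM cache.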
It should be noted that the data request processing method under a high-concurrency scenario provided by the embodiments of the present invention is generally executed by the server 405; correspondingly, the data request processing apparatus under a high-concurrency scenario is generally arranged in the server 405.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 4 are merely schematic. Any number of terminal devices, networks and servers may be provided according to implementation needs.
Referring now to Fig. 5, a schematic structural diagram of a computer system 500 of a server suitable for implementing the embodiments of the present invention is shown. The server shown in Fig. 5 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. Various programs and data required for the operation of the system 500 are also stored in the RAM 503. The CPU 501, the ROM 502 and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse and the like; an output section 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A driver 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 510 as needed, so that a computer program read therefrom is installed into the storage section 508 as needed.
In particular, according to the disclosed embodiments of the present invention, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present invention includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above-described functions defined in the system of the present invention are executed.
It should be noted that the computer-readable medium shown in the present invention may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present invention, the computer-readable storage medium may be any tangible medium that contains or stores a program, which program may be used by or in connection with an instruction execution system, apparatus or device. In the present invention, the computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, which can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two successively represented blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented by software or by hardware. The described modules may also be arranged in a processor; for example, they may be described as: a processor including a synchronization module and an acquisition module. The names of these modules do not, under certain circumstances, constitute a limitation on the modules themselves; for example, the synchronization module may also be described as "a module for periodically synchronizing high-concurrency data to a first virtual machine cache".
As another aspect, the present invention also provides a computer-readable medium, which may be included in the device described in the above embodiments, or may exist separately without being assembled into the device. The above computer-readable medium carries one or more programs which, when executed by the device, cause the device to execute the following flow: periodically synchronizing high-concurrency data to a first virtual machine cache; obtaining data from the first virtual machine cache according to a data request; and if no data corresponding to the data request exists in the first virtual machine cache, obtaining the data corresponding to the data request from a first in-memory database.
The above specific embodiments do not constitute a limitation on the protection scope of the present invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may occur depending on design requirements and other factors. Any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (14)
1. A data request processing method under a high-concurrency scenario, characterized in that the method comprises:
periodically synchronizing high-concurrency data to a first virtual machine cache;
obtaining data from the first virtual machine cache according to a data request; and, if no data corresponding to the data request exists in the first virtual machine cache, obtaining data corresponding to the data request from a first in-memory database.
2. The method according to claim 1, characterized in that, before the periodically synchronizing high-concurrency data to a first virtual machine cache, the method further comprises:
periodically preparing the high-concurrency data.
3. The method according to claim 2, characterized in that the periodically preparing the high-concurrency data comprises: periodically obtaining high-concurrency data from a relational database, and storing the obtained high-concurrency data into a second in-memory database in the form of key-value pairs;
and the periodically synchronizing high-concurrency data to a first virtual machine cache comprises: synchronizing the high-concurrency data in the second in-memory database to the first virtual machine cache by means of interface calls.
4. The method according to claim 2, characterized in that the periodically preparing the high-concurrency data comprises: periodically obtaining high-concurrency data from a relational database, and storing the obtained high-concurrency data into a second virtual machine cache in the form of key-value pairs;
and the periodically synchronizing high-concurrency data to a first virtual machine cache comprises: synchronizing the high-concurrency data in the second virtual machine cache to the first virtual machine cache by means of interface calls.
5. The method according to claim 1, characterized in that the first virtual machine cache comprises: a JVM cache.
6. The method according to claim 1, characterized in that the first in-memory database comprises: a Redis in-memory database.
7. A data request processing apparatus under a high-concurrency scenario, characterized in that the apparatus comprises:
a synchronization module, configured to periodically synchronize high-concurrency data to a first virtual machine cache; and
an acquisition module, configured to obtain data from the first virtual machine cache according to a data request, and, if no data corresponding to the data request exists in the first virtual machine cache, obtain data corresponding to the data request from a first in-memory database.
8. The apparatus according to claim 7, characterized in that the apparatus further comprises:
a data preparation module, configured to periodically prepare the high-concurrency data.
9. The apparatus according to claim 8, characterized in that the data preparation module periodically preparing the high-concurrency data comprises: periodically obtaining high-concurrency data from a relational database, and storing the obtained high-concurrency data into a second in-memory database in the form of key-value pairs;
and the synchronization module synchronizing the high-concurrency data to the first virtual machine cache comprises: synchronizing the high-concurrency data in the second in-memory database to the first virtual machine cache by means of interface calls.
10. The apparatus according to claim 8, characterized in that the data preparation module periodically preparing the high-concurrency data comprises: periodically obtaining high-concurrency data from a relational database, and storing the obtained high-concurrency data into a second virtual machine cache in the form of key-value pairs;
and the synchronization module synchronizing the high-concurrency data to the first virtual machine cache comprises: synchronizing the high-concurrency data in the second virtual machine cache to the first virtual machine cache by means of interface calls.
11. The apparatus according to claim 7, characterized in that the first virtual machine cache comprises: a JVM cache.
12. The apparatus according to claim 7, characterized in that the first in-memory database comprises: a Redis in-memory database.
13. A server, characterized by comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1 to 6.
14. A computer-readable medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the method according to any one of claims 1 to 6 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710451364.5A CN109150929B (en) | 2017-06-15 | 2017-06-15 | Data request processing method and device under high concurrency scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710451364.5A CN109150929B (en) | 2017-06-15 | 2017-06-15 | Data request processing method and device under high concurrency scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109150929A true CN109150929A (en) | 2019-01-04 |
CN109150929B CN109150929B (en) | 2021-11-12 |
Family
ID=64829774
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710451364.5A Active CN109150929B (en) | 2017-06-15 | 2017-06-15 | Data request processing method and device under high concurrency scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109150929B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110032571A (en) * | 2019-04-18 | 2019-07-19 | 腾讯科技(深圳)有限公司 | Business flow processing method, apparatus, storage medium and calculating equipment |
CN111522836A (en) * | 2020-04-22 | 2020-08-11 | 杭州海康威视系统技术有限公司 | Data query method and device, electronic equipment and storage medium |
CN111522850A (en) * | 2020-04-23 | 2020-08-11 | 京东数字科技控股有限公司 | Data object storage and query method, device, equipment and storage medium |
CN112015745A (en) * | 2020-08-19 | 2020-12-01 | 北京达佳互联信息技术有限公司 | Data management method and device |
CN112667680A (en) * | 2020-12-17 | 2021-04-16 | 平安消费金融有限公司 | Method and device for querying data and computer equipment |
CN112925851A (en) * | 2021-02-26 | 2021-06-08 | 杭州网易再顾科技有限公司 | Single number processing method and device, electronic equipment and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100262687A1 (en) * | 2009-04-10 | 2010-10-14 | International Business Machines Corporation | Dynamic data partitioning for hot spot active data and other data |
CN102790784A (en) * | 2011-05-18 | 2012-11-21 | 阿里巴巴集团控股有限公司 | Distributed cache method and system and cache analyzing method and analyzing system |
CN102945251A (en) * | 2012-10-12 | 2013-02-27 | 浪潮电子信息产业股份有限公司 | Method for optimizing performance of disk database by memory database technology |
CN103246696A (en) * | 2013-03-21 | 2013-08-14 | 宁波公众信息产业有限公司 | High-concurrency database access method and method applied to multi-server system |
CN103268321A (en) * | 2013-04-19 | 2013-08-28 | 中国建设银行股份有限公司 | Data processing method and device for high concurrency transaction |
CN104598563A (en) * | 2015-01-08 | 2015-05-06 | 北京京东尚科信息技术有限公司 | High concurrency data storage method and device |
CN104750715A (en) * | 2013-12-27 | 2015-07-01 | 中国移动通信集团公司 | Data elimination method, device and system in caching system and related server equipment |
CN104866531A (en) * | 2015-04-27 | 2015-08-26 | 交通银行股份有限公司 | Method and system for quickly accessing information data of clients of banks |
CN106790422A (en) * | 2016-12-02 | 2017-05-31 | 北京锐安科技有限公司 | A kind of data buffer storage cluster and data retrieval method for WEB application |
-
2017
- 2017-06-15 CN CN201710451364.5A patent/CN109150929B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100262687A1 (en) * | 2009-04-10 | 2010-10-14 | International Business Machines Corporation | Dynamic data partitioning for hot spot active data and other data |
CN102790784A (en) * | 2011-05-18 | 2012-11-21 | 阿里巴巴集团控股有限公司 | Distributed cache method and system and cache analyzing method and analyzing system |
CN102945251A (en) * | 2012-10-12 | 2013-02-27 | 浪潮电子信息产业股份有限公司 | Method for optimizing performance of disk database by memory database technology |
CN103246696A (en) * | 2013-03-21 | 2013-08-14 | 宁波公众信息产业有限公司 | High-concurrency database access method and method applied to multi-server system |
CN103268321A (en) * | 2013-04-19 | 2013-08-28 | 中国建设银行股份有限公司 | Data processing method and device for high concurrency transaction |
CN104750715A (en) * | 2013-12-27 | 2015-07-01 | 中国移动通信集团公司 | Data elimination method, device and system in caching system and related server equipment |
CN104598563A (en) * | 2015-01-08 | 2015-05-06 | 北京京东尚科信息技术有限公司 | High concurrency data storage method and device |
CN104866531A (en) * | 2015-04-27 | 2015-08-26 | 交通银行股份有限公司 | Method and system for quickly accessing information data of clients of banks |
CN106790422A (en) * | 2016-12-02 | 2017-05-31 | 北京锐安科技有限公司 | A kind of data buffer storage cluster and data retrieval method for WEB application |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110032571A (en) * | 2019-04-18 | 2019-07-19 | 腾讯科技(深圳)有限公司 | Business flow processing method, apparatus, storage medium and calculating equipment |
CN110032571B (en) * | 2019-04-18 | 2023-04-18 | 腾讯科技(深圳)有限公司 | Business process processing method and device, storage medium and computing equipment |
CN111522836A (en) * | 2020-04-22 | 2020-08-11 | 杭州海康威视系统技术有限公司 | Data query method and device, electronic equipment and storage medium |
CN111522836B (en) * | 2020-04-22 | 2023-10-10 | 杭州海康威视系统技术有限公司 | Data query method and device, electronic equipment and storage medium |
CN111522850A (en) * | 2020-04-23 | 2020-08-11 | 京东数字科技控股有限公司 | Data object storage and query method, device, equipment and storage medium |
CN112015745A (en) * | 2020-08-19 | 2020-12-01 | 北京达佳互联信息技术有限公司 | Data management method and device |
CN112015745B (en) * | 2020-08-19 | 2024-05-17 | 北京达佳互联信息技术有限公司 | Data management method and device |
CN112667680A (en) * | 2020-12-17 | 2021-04-16 | 平安消费金融有限公司 | Method and device for querying data and computer equipment |
CN112925851A (en) * | 2021-02-26 | 2021-06-08 | 杭州网易再顾科技有限公司 | Single number processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109150929B (en) | 2021-11-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109150929A (en) | Data request processing method and apparatus under high concurrent scene | |
CN104966214B (en) | A kind of exchange method and device of electronic ticket | |
CN110019125A (en) | The method and apparatus of data base administration | |
CN107845011B (en) | Method and apparatus for handling order | |
CN107844324A (en) | Customer terminal webpage redirects treating method and apparatus | |
CN105812394A (en) | Novel application of cloud computing to cross-border electronic commerce | |
CN110473036A (en) | A kind of method and apparatus generating order number | |
CN110427304A (en) | O&M method, apparatus, electronic equipment and medium for banking system | |
CN108776692A (en) | Method and apparatus for handling information | |
CN110738477A (en) | reconciliation method, device, computer equipment and storage medium | |
CN110019258A (en) | The method and apparatus for handling order data | |
CN109961306A (en) | A kind of inventory allocation method and apparatus of article | |
CN110245030A (en) | A kind of data service providing method, device, medium and electronic equipment | |
CN110135770A (en) | The generation method and device of outbound scheme | |
CN110166507A (en) | More resource regulating methods and device | |
CN109961331A (en) | Page processing method and its system, computer system and readable storage medium storing program for executing | |
CN109583945A (en) | A kind of method and apparatus of advertising resource distribution | |
CN109754272A (en) | The charging method and system of the web advertisement | |
CN109753424A (en) | The method and apparatus of AB test | |
CN109218125A (en) | A kind of method and system of heartbeat data interaction | |
CN110347654A (en) | A kind of method and apparatus of online cluster features | |
CN109901892A (en) | A kind of method and apparatus of dynamic attribute verifying | |
CN108259575A (en) | Advertisement broadcast method, system, self-service device and Advertisement Server | |
US20230388120A1 (en) | Client-Side Device Bloom Filter Mapping | |
CN108564406A (en) | A kind of method and apparatus of excitation push |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |