Summary of the Invention
Embodiments of the present invention provide a data processing method, which can effectively relieve the performance pressure on the server and the database, and prevent the same target data from being obtained and repeatedly modified by different user terminals.
In a first aspect, an embodiment of the present invention provides a data processing method, the method including:
obtaining target data from a database, and storing the target data into a data cache pool of a cache server;
receiving an operation request from a user terminal for first target data;
granting an operating right to the data cache pool to the user terminal corresponding to the operation request, where the operating right cannot be granted to two or more user terminals at the same time; and
after the operation corresponding to the operation request has been performed, reclaiming the operating right.
It can be seen that, in the embodiments of the present invention, the target data is stored in the data cache pool of the cache server, and a user operates on the target data by accessing the data cache pool of the cache server, which avoids frequent access to the database and relieves the performance pressure on the database. In addition, an operating right is added to the data cache pool of the cache server: when a user terminal needs to operate on data in the data cache pool, it must first obtain the operating right, and the operating right cannot be obtained by two or more user terminals at the same time, which prevents the target data from being repeatedly operated on by different users at the same time. Therefore, the embodiments of the present invention can effectively improve the efficiency of operating on the target data.
Optionally, performing the operation corresponding to the operation request includes:
generating identification information, where the identification information includes information of the first target data and information of the user terminal, and feeding the identification information back to the database and the user terminal;
after the operating right is reclaimed, the method further includes:
receiving, from the user terminal, a processing result for the first target data, feeding the processing result back to the database, and deleting the identification information.
As can be seen that in embodiments of the present invention, after user terminal gets the operating right to target data, from
It is dynamic that an identification information is generated according to user terminal and target information, represent that the target information is occupied, it is impossible to by other
User terminal operates on it.And after the complete target information of user terminal processes, this identification information can be deleted.So as to
The user's terminal reacquires other target datas.Therefore the embodiment of the present invention can not only prevent target by the identification information
Data repeat to obtain and change by other users terminal, are also prevented from the user's terminal and repeat to obtain target data.
Optionally, the operating right to the data cache pool is a distributed lock;
granting the operating right to the data cache pool to the operation request includes:
in a case where no identification information corresponding to the user terminal exists, requesting a locking operation; and if the distributed lock is not occupied, granting the distributed lock to the user terminal corresponding to the operation request, and setting an expiration time of the distributed lock.
Optionally, granting the operating right to the data cache pool to the operation request includes:
in a case where no identification information corresponding to the user terminal exists, requesting a locking operation; if the distributed lock is occupied, obtaining the expiration time of the distributed lock; and if the current time is later than the expiration time, requesting a locking operation, granting the distributed lock to the user terminal corresponding to the operation request, and setting a new expiration time of the distributed lock.
It can be seen that, in the embodiments of the present invention, the operating right to the data cache pool is implemented by a distributed lock. If the distributed lock is not occupied, a user terminal can directly acquire the distributed lock and operate on the target data accordingly through the distributed lock. Alternatively, if the distributed lock is occupied, the user terminal requests to acquire the distributed lock after the distributed lock is released normally or is forcibly released upon expiration. In addition, after a user terminal acquires the distributed lock, an expiration time is set, and once the expiration time has passed, the distributed lock is forcibly released. This prevents a deadlock from forming when the distributed lock cannot be released because of a network fault or other reasons.
Further, after performing the operation corresponding to the operation request, the method further includes:
deleting the first target data from the data cache pool; and
reclaiming the operating right includes:
releasing the distributed lock.
Optionally, obtaining the target data from the database includes:
in a case where the quantity of target data in the data cache pool is zero, obtaining the target data from the database.
It can be seen that, in the embodiments of the present invention, after a user terminal has performed the corresponding operation on target data in the cache pool, the target data in the cache pool is automatically deleted, which prevents accumulated target data from occupying memory space.
In a second aspect, an embodiment of the present invention provides a data processing system, the system including a load balancing apparatus, an application server, and a cache server, where:
the load balancing apparatus is configured to receive an operation request from a user terminal for first target data, and forward the operation request to a first application server with the smallest number of current operation requests;
the application server is configured to receive the operation request forwarded by the load balancing apparatus, and send the operation request to the cache server; and
the cache server is configured to: obtain target data from a database, and store the target data into a data cache pool of the cache server; receive the operation request from the user terminal for the first target data; grant an operating right to the data cache pool to the user terminal corresponding to the operation request, where the operating right cannot be granted to multiple user terminals at the same time; perform the operation corresponding to the operation request; and reclaim the operating right.
In a third aspect, an embodiment of the present invention provides a server, the server including:
a data obtaining unit, configured to obtain target data from a database, and store the target data into a data cache pool of a cache server;
a receiving unit, configured to receive an operation request from a user terminal for first target data;
a granting unit, configured to grant an operating right to the data cache pool to the user terminal corresponding to the operation request, where the operating right cannot be granted to two or more user terminals at the same time;
an operation execution unit, configured to perform the operation corresponding to the operation request; and
a reclaiming unit, configured to reclaim the operating right after the operation execution unit has performed the operation corresponding to the operation request.
It can be seen that, in the embodiments of the present invention, the target data is stored in the data cache pool of the cache server, and a user operates on the target data by accessing the data cache pool of the cache server, which avoids frequent access to the database and relieves the performance pressure on the database. In addition, an operating right is added to the data cache pool of the cache server: when a user terminal needs to operate on data in the data cache pool, it must first obtain the operating right, and the operating right cannot be obtained by two or more user terminals at the same time, which prevents the target data from being repeatedly operated on by different users at the same time. Therefore, the embodiments of the present invention can effectively improve the efficiency of operating on the target data.
Optionally, the server further includes:
a generation unit, configured to generate identification information, where the identification information includes the information of the first target data and the information of the user terminal; and
a feedback unit, configured to feed the identification information back to the database and the user terminal;
the receiving unit is further configured to receive, from the user terminal, a processing result for the first target data; and
the feedback unit is further configured to feed the processing result back to the database and delete the identification information.
As can be seen that in embodiments of the present invention, after user terminal gets the operating right to target data, from
It is dynamic that an identification information is generated according to user terminal and target information, represent that the target information is occupied, it is impossible to by other
User terminal operates on it.And after the complete target information of user terminal processes, this identification information can be deleted.So as to
The user's terminal reacquires other target datas.Therefore the embodiment of the present invention can not only prevent target by the identification information
Data repeat to obtain and change by other users terminal, are also prevented from the user's terminal and repeat to obtain target data.
Optionally, the operating right to the data cache pool is a distributed lock;
the granting unit is configured to: in a case where no identification information corresponding to the user terminal exists, request a locking operation; and if the distributed lock is not occupied, grant the distributed lock to the user terminal corresponding to the operation request, and set an expiration time of the distributed lock.
Optionally, the granting unit is configured to: in a case where no identification information corresponding to the user terminal exists, request a locking operation; if the distributed lock is occupied, obtain the expiration time of the distributed lock; and if the current time is later than the expiration time, request a locking operation, grant the distributed lock to the user terminal corresponding to the operation request, and set a new expiration time of the distributed lock.
It can be seen that, in the embodiments of the present invention, the operating right to the data cache pool is implemented by a distributed lock. If the distributed lock is not occupied, a user terminal can directly acquire the distributed lock and operate on the target data accordingly through the distributed lock. Alternatively, if the distributed lock is occupied, the user terminal requests to acquire the distributed lock after the distributed lock is released normally or is forcibly released upon expiration. In addition, after a user terminal acquires the distributed lock, an expiration time is set, and once the expiration time has passed, the distributed lock is forcibly released. This prevents a deadlock from forming when the distributed lock cannot be released because of a network fault or other reasons.
Further, the server further includes:
a deletion unit, configured to delete the first target data from the data cache pool; and
the reclaiming unit is configured to release the distributed lock.
Optionally, the data obtaining unit is configured to obtain the target data from the database in a case where the quantity of target data in the data cache pool is zero.
It can be seen that, in the embodiments of the present invention, after a user terminal has performed the corresponding operation on target data in the cache pool, the target data in the cache pool is automatically deleted, which prevents accumulated target data from occupying memory space.
In a fourth aspect, an embodiment of the present invention provides another server, including a processor, a memory, and a communication module, where the memory is configured to store program code, and the processor is configured to call the program code to perform the method of the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium, the computer storage medium storing a computer program, the computer program including program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect.
In the embodiments of the present invention, the target data that user terminals need to access is stored in the data cache pool of the cache server. When a large number of user terminals issue operation requests, data is read directly from the data cache pool in the cache server without frequently accessing the database, which relieves the performance pressure on the database. In addition, an operating right is added to the data cache pool of the cache server: when a user terminal needs to operate on data in the data cache pool, it must first obtain the operating right, and the operating right cannot be obtained by two or more user terminals at the same time, which prevents the target data from being repeatedly operated on by different users at the same time. The embodiments of the present invention can effectively improve access efficiency, shorten data processing time, and reduce resource waste.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be understood that, when used in this specification and the appended claims, the terms "include" and "comprise" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof.
It should also be understood that the terminology used in the specification of the present invention is for the purpose of describing particular embodiments only and is not intended to limit the present invention. As used in the specification of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the specification of the present invention and the appended claims refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a data processing method according to an embodiment of the present invention. As shown in Fig. 1, the method may include the following steps.
101: Obtain target data from a database, and store the target data into a data cache pool of a cache server.
In the embodiments of the present invention, the cache server obtains, in batches according to service operation requests, the target data to be operated on from the database, and stores it in the data cache pool of the cache server, so that user terminals operate on the target data directly through the data cache pool of the cache server instead of operating on the target data by accessing the database.
Specifically, a redis cache may be used as the data cache pool. Redis is a key-value distributed memory system that supports multiple value types, including strings (string), linked lists (list), sets (set), and hashes (hash), and provides very high storage and read rates. The target data may be data that needs to be frequently read or operated on during an activity, such as the activity rules and prize information of a lottery. By storing the target data in the redis cache, user terminals can obtain the target data directly from the redis cache during the activity without accessing the database, which improves data processing efficiency.
102: Receive an operation request from a user terminal for first target data.
The user terminal may include various terminal devices such as a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a mobile internet device (Mobile Internet Device, MID), and a smart wearable device (such as a smartwatch or a smart band), which is not limited in the embodiments of the present invention.
In the embodiments of the present invention, the first target data is one item of the target data in the data cache pool. Specifically, when a user terminal participates in an activity such as a lottery, a flash sale, or ticket grabbing, the cache server receives an operation request from the client for the target data, where the operation request includes the information of the user terminal and the information of the first target data, and the corresponding first target data is then operated on according to the operation request.
103: Grant an operating right to the data cache pool to the user terminal corresponding to the operation request, where the operating right cannot be granted to two or more user terminals at the same time.
In the embodiments of the present invention, after the cache server receives the operation request from the user terminal, the operating right to the data cache pool is obtained according to the operation request. Specifically, after the cache server receives the operation request from the user terminal, it first determines whether the operating right to the data cache pool is occupied. If the operating right to the data cache pool is not occupied, the operating right to the data cache pool is granted to the user terminal corresponding to the operation request. If the operating right to the data cache pool is occupied by another user terminal, the operating right to the data cache pool is re-requested after the operating right is released or expires.
The operating right to the data cache pool may be a distributed lock (for example, a redis distributed lock), a connection, or the like, provided that the operating right cannot be obtained by two or more user terminals at the same time. Through the operating right, the target data is prevented from being obtained by multiple user terminals at the same time. It can be understood that, when multiple operation requests are received at the same time, the multiple operation requests obtain the operating right through a competition mechanism. Taking the acquisition of a redis distributed lock as an example, when there are multiple locking requests at the same time, the multiple locking requests compete for the distributed lock through the setnx command: 1 is returned if locking succeeds, and if locking fails, the requester waits for the lock to be released.
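The competition mechanism described above may be sketched minimally as follows (redis-py is assumed; the key name lockkey follows the redis example given later; this is an illustrative sketch rather than the definitive implementation of the embodiment):

import time
import redis

r = redis.Redis()   # connection to the cache server (assumed local)

def try_lock(timeout=30):
    # setnx succeeds for exactly one of the competing locking requests; the value
    # stored under lockkey is the time at which the lock will expire.
    return r.setnx("lockkey", time.time() + timeout)

def acquire_operating_right():
    # competing operation requests simply retry until one of them wins the lock
    while not try_lock():
        time.sleep(0.01)    # wait for the lock to be released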
For example, when a flash-sale activity is held online, the cache server receives a large number of operation requests from user terminals after the activity starts. If no operating right has been added to the data cache pool in the cache server, then when the operation requests from the user terminals reach the cache server, the corresponding operations are performed directly on the commodity data corresponding to the operation requests. In that case, because the number of operation requests is large, the same commodity data may be repeatedly operated on by different user terminals at the same time (for example, one commodity is purchased by multiple people). If an operating right exists for the data cache pool, a user terminal can operate on the commodity data in the data cache pool only after it has obtained the operating right, and the operating right can be obtained by only one user terminal at a time; therefore, the above problem does not occur once the operating right is added to the data cache pool.
104: After the operation corresponding to the operation request has been performed, reclaim the operating right.
In the embodiments of the present invention, after the user terminal obtains the operating right to the data cache pool, the operation corresponding to the operation request is performed. After the operation corresponding to the operation request has been performed, the cache server reclaims the operating right to the data cache pool, so that the user terminals corresponding to other operation requests can quickly obtain the operating right to the data cache pool.
It can be seen that, in the embodiments of the present invention, the target data is stored in the data cache pool of the cache server, and a user operates on the target data by accessing the data cache pool of the cache server, which avoids frequent access to the database and relieves the performance pressure on the database. In addition, an operating right is added to the data cache pool of the cache server: when a user terminal needs to operate on data in the data cache pool, it must first obtain the operating right, and the operating right cannot be obtained by two or more user terminals at the same time, which prevents the target data from being repeatedly operated on by different users at the same time. Therefore, the embodiments of the present invention can effectively improve the efficiency of operating on the target data.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of another data processing method according to an embodiment of the present invention. As shown in Fig. 2, the method may include the following steps.
201: In a case where the quantity of target data in the data cache pool is zero, obtain target data from the database, and store the target data into the data cache pool of the cache server.
In the embodiments of the present invention, when it is detected that the quantity of target data in the data cache pool is zero, the cache server obtains a preset quantity of target data from the database and stores the preset quantity of target data in the data cache pool of the cache server, so that user terminals perform the corresponding operations on the target data directly through the data cache pool of the cache server instead of operating on the target data by accessing the database.
Specifically, a redis cache may be used as the data cache pool. Redis is a key-value distributed memory system that supports multiple value types, including strings (string), linked lists (list), sets (set), and hashes (hash), and provides very high storage and read rates. The target data may be data that needs to be frequently read or operated on during an activity, such as the activity rules and prize information of a lottery. By storing the target data in redis, user terminals can obtain the target data directly from redis during the activity without accessing the database, which improves data processing efficiency.
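A minimal sketch of the replenishment check of step 201 is given below (it reuses the hypothetical target_data_pool key and takes a loader such as the load_target_data sketch shown earlier; again, an illustration rather than a definitive implementation):

import redis

r = redis.Redis()   # connection to the cache server (assumed local)

def ensure_pool_not_empty(load_target_data, preset_quantity=100):
    # Step 201: go back to the database only when the data cache pool has run dry.
    if r.llen("target_data_pool") == 0:
        load_target_data(preset_quantity)   # batch-load a preset quantity of target data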
202: Receive an operation request from a user terminal for first target data.
The user terminal may include various terminal devices such as a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a mobile internet device (Mobile Internet Device, MID), and a smart wearable device (such as a smartwatch or a smart band), which is not limited in the embodiments of the present invention.
In the embodiments of the present invention, the first target data is one item of the target data in the data cache pool. Specifically, when a user terminal participates in an activity such as a lottery, a flash sale, or ticket grabbing, the cache server receives an operation request from the client for the target data, where the operation request includes the information of the user terminal and the information of the first target data, and the corresponding first target data is then operated on according to the operation request. It can be understood that, after the cache server receives the operation request, if the target data corresponding to the operation request does not exist in the data cache pool, the cache server first obtains it from the database and then performs the operation corresponding to the operation request.
203: Grant the distributed lock of the data cache pool to the user terminal corresponding to the operation request.
As an optional implementation, granting the distributed lock of the data cache pool to the user terminal corresponding to the operation request may specifically include: in a case where no identification information corresponding to the user terminal exists, requesting a locking operation; and if the distributed lock is not occupied, granting the distributed lock to the user terminal corresponding to the operation request, and setting an expiration time of the distributed lock.
Alternatively, in a case where no identification information corresponding to the user terminal exists, requesting a locking operation; if the distributed lock is occupied, obtaining the expiration time of the distributed lock; and if the current time is later than the expiration time, requesting a locking operation, granting the distributed lock to the user terminal corresponding to the operation request, and setting a new expiration time of the distributed lock.
In the embodiments of the present invention, the operating right to the data cache pool may be a distributed lock. Specifically, when the cache server receives an operation request from a user terminal, a thread may be allocated to the request, and the data cache pool is then accessed through the allocated thread. It can be understood that, when all threads are occupied, subsequent operation requests obtain threads through a queuing mechanism. After an operation request has been allocated a thread, the cache server first checks whether identification information corresponding to the user terminal exists in the cache server. If no identification information corresponding to the user terminal exists, a locking operation is requested, and whether the distributed lock is occupied is then detected. If the distributed lock is not occupied, the distributed lock is granted to the user terminal corresponding to the operation request, the expiration time of the distributed lock is set, and the operation corresponding to the operation request is then performed. If the distributed lock is occupied, the expiration time of the distributed lock is obtained; after the current time exceeds the expiration time, a locking operation is requested, the distributed lock is granted to the user terminal corresponding to the operation request, a new expiration time of the distributed lock is set, and the operation corresponding to the operation request is then performed.
Taking the redis distributed lock as an example, the distributed lock is mainly implemented by the following commands:
(1) setnx(lockkey, expires)
where lockkey is the key of the lock and is shared by all threads, and expires is the expiration time of the lock. If setnx returns 1, the thread acquires the lock, and setnx sets the value of the key lockkey to the expiration time of the lock. If setnx returns 0, the lock has already been obtained by another thread, the process cannot enter the critical section, and it can only wait for the lock to be released or fail to acquire the lock.
(2) getset(lockkey, expires)
getset obtains the expiration time of the lock held by another process and sets the current (new) expiration time. Only one thread can obtain the old expiration time of another thread through the getset operation.
For example, suppose there are three threads A, B, and C, and the timeout is set to 30 seconds. Thread A has obtained the lock and set the expiration time to 9:30:30. Thread A has not finished within 30 seconds, and threads B and C wait and determine whether the current time is later than the expiration time of thread A. If A has timed out, B and C perform the getset operation at the same time. Because getset is executed atomically, thread B obtains A's expiration time of 9:30:30 through getset and sets a new expiration time of 9:31:00, whereas the expiration time obtained by thread C through getset is no longer 9:30:30 but the expiration time of 9:31:00 set by thread B. Thread C can therefore only wait until thread B has released the lock, or compete for the lock again at 9:31:00.
Specifically, after the cache server receives an operation request from a user terminal, it attempts to set the value of lockkey through setnx. If this succeeds (the lock does not currently exist), 1 is returned and the lock is obtained. If the lock already exists, the expiration time of the lock is obtained and compared with the current time; if the lock has expired, the lock is obtained through getset, and a new expiration time is set.
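The locking logic just described may be sketched as follows (redis-py is assumed; lockkey and the 30-second timeout follow the example above; this is an illustrative sketch of the setnx/getset pattern and not a definitive implementation of the embodiment):

import time
import redis

r = redis.Redis()          # connection to the cache server (assumed local)
LOCK_KEY = "lockkey"       # key of the lock, shared by all threads
LOCK_TTL = 30              # lock timeout in seconds, as in the example above

def acquire_lock():
    # Case 1: the lock does not exist yet -- setnx stores our expiration time and we win.
    expires = time.time() + LOCK_TTL
    if r.setnx(LOCK_KEY, expires):
        return True
    # Case 2: the lock exists -- compare its stored expiration time with the current time.
    current = r.get(LOCK_KEY)
    if current is not None and float(current) > time.time():
        return False                         # still held and not yet expired
    # Case 3: the lock has expired -- only the thread whose getset returns the old
    # (expired) value actually takes the lock over; the others see the new value.
    previous = r.getset(LOCK_KEY, time.time() + LOCK_TTL)
    return previous is None or float(previous) <= time.time()

def release_lock():
    r.delete(LOCK_KEY)      # reclaim the operating right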
It can be seen that, in the embodiments of the present invention, the operating right to the data cache pool is implemented by a distributed lock. If the distributed lock is not occupied, a user terminal can directly acquire the distributed lock and operate on the target data accordingly through the distributed lock. Alternatively, if the distributed lock is occupied, the user terminal requests to acquire the distributed lock after the distributed lock is released normally or is forcibly released upon expiration. In addition, after a user terminal acquires the distributed lock, an expiration time is set, and once the expiration time has passed, the distributed lock is forcibly released. This prevents a deadlock from forming when the distributed lock cannot be released because of a network fault or other reasons.
204: Generate identification information, where the identification information includes the information of the first target data and the information of the user terminal; feed the identification information back to the database and the user terminal; and release the distributed lock.
Specifically, after the cache server grants the operating right to the data cache pool to the user terminal corresponding to the operation request, the first target data is obtained, and identification information is then generated according to the first target data, where the identification information includes the information of the first target data and the information of the user terminal. The identification information is then fed back to the user terminal and the database, and the distributed lock is then released so that other user terminals are allowed to acquire the operating right.
For example, in a flash-sale activity, when a user terminal successfully grabs a commodity, the back-end cache server generates identification information according to the information of the commodity and the information of the user terminal, indicating that the commodity has been obtained by that user terminal, so that other flash-sale requests can no longer obtain the commodity.
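Step 204 may be sketched, purely for illustration, as follows (redis-py is assumed; target_data_pool, lockkey and the ident: key prefix are hypothetical names; feeding the identification information back to the database and the user terminal is application specific and is therefore only indicated by a comment):

import json
import time
import redis

r = redis.Redis()   # connection to the cache server (assumed local)

def perform_operation(user_id):
    # Pop one item of first target data from the cache pool; lpop also removes it
    # from the pool, which matches the optional deletion described below.
    item = r.lpop("target_data_pool")
    if item is None:
        r.delete("lockkey")                  # nothing left: release the distributed lock
        return None
    # Generate identification information from the commodity and the user terminal.
    identification = {"user": user_id, "data": json.loads(item), "ts": time.time()}
    r.set("ident:%s" % user_id, json.dumps(identification))
    # Feed the identification information back to the database and the user terminal here.
    r.delete("lockkey")                      # release the distributed lock
    return identification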
As can be seen that in embodiments of the present invention, after user terminal gets the operating right to target data, from
It is dynamic that an identification information is generated according to user terminal and target information, represent that the target information is occupied, it is impossible to by other
User terminal operates on it.So that the user's terminal reacquires other target datas.Therefore the embodiment of the present invention passes through
The identification information can prevent target data from repeating to obtain and change by other users terminal.
As an optional implementation, after the identification information is fed back to the user terminal and the database, the first target data in the data cache pool is deleted, and the distributed lock is then released.
205: Receive, from the user terminal, a processing result for the first target data, feed the processing result back to the database, and delete the identification information.
As an optional implementation, after the cache server feeds the identification information back to the user terminal and releases the distributed lock, it may receive, from the user terminal, a processing result for the target data, then feed the processing result back to the database, and delete the identification information corresponding to the user terminal, thereby indicating that the user terminal currently has no target data being processed and can obtain target data from the data cache pool again.
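Step 205 may likewise be sketched as follows (the processing_result table and the ident: key prefix are hypothetical and only illustrate the flow of feeding the result back to the database and deleting the identification information):

import redis
import sqlite3

r = redis.Redis()                       # connection to the cache server (assumed local)
db = sqlite3.connect("activity.db")     # hypothetical activity database

def handle_processing_result(user_id, result):
    # Persist the processing result reported by the user terminal in the database ...
    db.execute(
        "INSERT INTO processing_result (user_id, result) VALUES (?, ?)",
        (user_id, result),
    )
    db.commit()
    # ... and delete the identification information, so that the user terminal may
    # obtain target data from the data cache pool again.
    r.delete("ident:%s" % user_id)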
It can be seen that, in the embodiments of the present invention, after a user terminal has performed the corresponding operation on target data in the cache pool, the target data in the cache pool is automatically deleted, which prevents accumulated target data from occupying memory space.
Referring to Fig. 3, Fig. 3 is a schematic diagram of a network architecture of a data processing system according to an embodiment of the present invention. As shown in the figure, the data processing system includes a user terminal 301, a load balancing apparatus 302, an application server 303, a cache server 304, and a database 305, where:
the load balancing apparatus 302 is configured to receive an operation request from the user terminal 301 for first target data, and forward the operation request to a first application server 303 with the smallest number of current operation requests;
the application server 303 is configured to receive the operation request forwarded by the load balancing apparatus 302, and send the operation request to the cache server 304; and
the cache server 304 is configured to: obtain target data from the database 305, and store the target data into the data cache pool of the cache server 304; receive the operation request from the user terminal 301 for the first target data; grant the operating right to the data cache pool to the user terminal 301 corresponding to the operation request, where the operating right cannot be granted to multiple user terminals 301 at the same time; perform the operation corresponding to the operation request; and reclaim the operating right.
Specifically, the user terminal 301 sends an operation request for target data. The load balancing apparatus 302 receives the operation request from the user terminal 301 for the target data, and then forwards the operation request to the application server 303 with the smallest number of current requests. The application server 303 receives the operation request forwarded by the load balancing apparatus 302, and sends the operation request to the cache server 304. After the cache server 304 receives the operation request, it first checks whether identification information corresponding to the user terminal 301 exists. If it does not exist, the user terminal 301 currently has no data being processed and can obtain target data, and the operating right to the data cache pool is requested. If the operating right is not occupied, the operating right is obtained, the operation corresponding to the operation request is performed, and the operating right is then released. If the operating right is occupied, the cache server waits until the operating right is released or expires, and then re-requests the operating right. The operating right cannot be obtained by two or more user terminals 301 at the same time.
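As a simple illustration of the forwarding rule of the load balancing apparatus 302 (the AppServer type and its active_requests counter are assumptions introduced only for this sketch):

from dataclasses import dataclass

@dataclass
class AppServer:
    name: str
    active_requests: int = 0    # number of operation requests currently being handled

def pick_application_server(servers):
    # Forward the operation request to the application server that currently has
    # the smallest number of operation requests.
    return min(servers, key=lambda s: s.active_requests)

# usage sketch
servers = [AppServer("app-1", 12), AppServer("app-2", 3), AppServer("app-3", 7)]
target = pick_application_server(servers)   # selects "app-2"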
It can be seen that, in the embodiments of the present invention, when a large number of user terminals 301 issue operation requests, the large number of operation requests from the user terminals 301 are first balanced across multiple application servers 303 by the load balancing apparatus 302, and the cache server 304 is then accessed through the application servers 303. The load balancing apparatus 302 relieves the performance pressure on the application servers 303, and when one application server 303 fails, the other application servers 303 can continue to work. In addition, in the embodiments of the present invention, the target data is stored in the data cache pool of the cache server 304, and a user operates on the target data by accessing the data cache pool of the cache server 304, which avoids frequent access to the database 305 and relieves the performance pressure on the database 305. Furthermore, an operating right is added to the data cache pool of the cache server 304: when a user terminal 301 needs to operate on data in the data cache pool, it must first obtain the operating right, and the operating right cannot be obtained by two or more user terminals 301 at the same time, which prevents the target data from being repeatedly operated on by different users at the same time. Therefore, the embodiments of the present invention can effectively improve the efficiency of operating on the target data.
An embodiment of the present invention further provides a server, the server including units configured to perform any one of the foregoing methods. Specifically, referring to Fig. 4, Fig. 4 is a schematic block diagram of a server according to an embodiment of the present invention. The server of this embodiment includes: a data obtaining unit 410, a receiving unit 420, a granting unit 430, an operation execution unit 440, and a reclaiming unit 450.
The data obtaining unit 410 is configured to obtain target data from a database, and store the target data into a data cache pool of a cache server;
the receiving unit 420 is configured to receive an operation request from a user terminal for first target data;
the granting unit 430 is configured to grant an operating right to the data cache pool to the user terminal corresponding to the operation request, where the operating right cannot be granted to two or more user terminals at the same time;
the operation execution unit 440 is configured to perform the operation corresponding to the operation request; and
the reclaiming unit 450 is configured to reclaim the operating right after the operation execution unit 440 has performed the operation corresponding to the operation request.
It can be seen that, in the embodiments of the present invention, the target data is stored in the data cache pool of the cache server, and a user operates on the target data by accessing the data cache pool of the cache server, which avoids frequent access to the database and relieves the performance pressure on the database. In addition, an operating right is added to the data cache pool of the cache server: when a user terminal needs to operate on data in the data cache pool, it must first obtain the operating right, and the operating right cannot be obtained by two or more user terminals at the same time, which prevents the target data from being repeatedly operated on by different users at the same time. Therefore, the embodiments of the present invention can effectively improve the efficiency of operating on the target data.
Optionally, the server further includes:
a generation unit 441, configured to generate identification information, where the identification information includes the information of the first target data and the information of the user terminal; and
a feedback unit 442, configured to feed the identification information back to the database and the user terminal;
the receiving unit 420 is further configured to receive, from the user terminal, a processing result for the first target data; and
the feedback unit 442 is further configured to feed the processing result back to the database and delete the identification information.
As can be seen that in embodiments of the present invention, after user terminal gets the operating right to target data, from
It is dynamic that an identification information is generated according to user terminal and target information, represent that the target information is occupied, it is impossible to by other
User terminal operates on it.And after the complete target information of user terminal processes, delete this identification information.So as to this
User terminal reacquires other target datas.Therefore the embodiment of the present invention can not only prevent number of targets by the identification information
It repeats to obtain and change according to by other users terminal, is also prevented from the user's terminal and repeats to obtain target data.
Optionally, the operating right to the data cache pool is a distributed lock;
the granting unit 430 is configured to: in a case where no identification information corresponding to the user terminal exists, request a locking operation; and if the distributed lock is not occupied, grant the distributed lock to the user terminal corresponding to the operation request, and set an expiration time of the distributed lock.
Optionally, the granting unit 430 is configured to: in a case where no identification information corresponding to the user terminal exists, request a locking operation; if the distributed lock is occupied, obtain the expiration time of the distributed lock; and if the current time is later than the expiration time, request a locking operation, grant the distributed lock to the user terminal corresponding to the operation request, and set a new expiration time of the distributed lock.
It can be seen that, in the embodiments of the present invention, the operating right to the data cache pool is implemented by a distributed lock. If the distributed lock is not occupied, a user terminal can directly acquire the distributed lock and operate on the target data accordingly through the distributed lock. Alternatively, if the distributed lock is occupied, the user terminal requests to acquire the distributed lock after the distributed lock is released normally or is forcibly released upon expiration. In addition, after a user terminal acquires the distributed lock, an expiration time is set, and once the expiration time has passed, the distributed lock is forcibly released. This prevents a deadlock from forming when the distributed lock cannot be released because of a network fault or other reasons.
Further, the server further includes:
a deletion unit 460, configured to delete the first target data from the data cache pool; and
the reclaiming unit 450 is further configured to release the distributed lock.
Optionally, the data obtaining unit 410 is configured to obtain the target data from the database in a case where the quantity of target data in the data cache pool is zero.
It can be seen that, in the embodiments of the present invention, after a user terminal has performed the corresponding operation on target data in the cache pool, the target data in the cache pool is automatically deleted, which prevents accumulated target data from occupying memory space.
Referring to Fig. 5, Fig. 5 shows a device according to an embodiment of the present invention. The device may be a server. As shown in Fig. 5, the device includes one or more processors 501, one or more input devices 502, one or more output devices 503, and a memory 504. The processor 501, the input device 502, the output device 503, and the memory 504 are connected through a bus 505. The memory 504 is configured to store instructions, and the processor 501 is configured to execute the instructions stored in the memory 504.
When the device is used as a server, the processor 501 is configured to: obtain target data from a database, and store the target data into a data cache pool of a cache server; receive an operation request from a user terminal for first target data; grant an operating right to the data cache pool to the user terminal corresponding to the operation request, where the operating right cannot be granted to two or more user terminals at the same time; and after the operation corresponding to the operation request has been performed, reclaim the operating right.
It should be understood that, in the embodiments of the present invention, the processor 501 may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The input device 502 may include a touch pad, a fingerprint sensor (configured to collect fingerprint information of a user and direction information of the fingerprint), a microphone, and the like, and the output device 503 may include a display (such as an LCD), a speaker, and the like.
The memory 504 may include a read-only memory and a random access memory, and provides instructions and data to the processor 501. A part of the memory 504 may further include a non-volatile random access memory. For example, the memory 504 may further store information about the device type.
During specific implementation, the processor 501, the input device 502, and the output device 503 described in the embodiments of the present invention may perform the implementations described in the first embodiment, the third embodiment, and the fourth embodiment of the data processing method provided by the embodiments of the present invention, and may also perform the implementation of the server described in the embodiments of the present invention, and details are not described herein again.
During specific implementation, the processor 501, the input device 502, and the output device 503 described in the embodiments of the present invention may perform the implementations described in the first embodiment and the second embodiment of the data processing method provided by the embodiments of the present invention, and details are not described herein again.
Another embodiment of the present invention provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the following is implemented: obtaining target data from a database, and storing the target data into a data cache pool of a cache server; receiving an operation request from a user terminal for first target data; granting an operating right to the data cache pool to the user terminal corresponding to the operation request, where the operating right cannot be granted to two or more user terminals at the same time; and after the operation corresponding to the operation request has been performed, reclaiming the operating right.
The computer-readable storage medium may be an internal storage unit of the terminal of any of the foregoing embodiments, for example, a hard disk or a memory of the terminal. The computer-readable storage medium may also be an external storage device of the terminal, for example, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal. Further, the computer-readable storage medium may include both the internal storage unit of the terminal and the external storage device. The computer-readable storage medium is configured to store the computer program and other programs and data required by the terminal. The computer-readable storage medium may also be configured to temporarily store data that has been output or is to be output.
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present invention. The server may vary greatly depending on its configuration or performance, and may include one or more central processing units (central processing units, CPU) 622 (for example, one or more processors), a memory 632, and one or more storage media 630 (for example, one or more mass storage devices) storing application programs 642 or data 644. The memory 632 and the storage medium 630 may be transient storage or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations for the server. Further, the central processing unit 622 may be configured to communicate with the storage medium 630 and perform, on the server 600, the series of instruction operations in the storage medium 630.
The server 600 may further include one or more power supplies 626, one or more wired or wireless network interfaces 650, one or more input/output interfaces 658, and/or one or more operating systems 641, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, and FreeBSDTM.
The steps performed by the server in the foregoing embodiments may be based on the server structure shown in Fig. 6.
A person of ordinary skill in the art may be aware that the example units and algorithm steps described with reference to the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the compositions and steps of the examples have been described above generally in terms of their functions. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered as going beyond the scope of the present invention.
It can be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the system, the server, the terminal device, and the units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed system, server, terminal device, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely exemplary. For example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, or may be an electrical, mechanical, or other form of connection.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with this technical field can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and such modifications or replacements shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.