CN107992517A - Data processing method, server and computer-readable medium - Google Patents

Data processing method, server and computer-readable medium

Info

Publication number
CN107992517A
Authority
CN
China
Prior art keywords
data
data processing
redis
processing request
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201711030551.2A
Other languages
Chinese (zh)
Inventor
刘伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jinli Communication Equipment Co Ltd
Original Assignee
Shenzhen Jinli Communication Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinli Communication Equipment Co Ltd filed Critical Shenzhen Jinli Communication Equipment Co Ltd
Priority to CN201711030551.2A priority Critical patent/CN107992517A/en
Publication of CN107992517A publication Critical patent/CN107992517A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2308Concurrency control
    • G06F16/2315Optimistic concurrency control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2365Ensuring data consistency and integrity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the invention discloses a data processing method, a server and a computer-readable medium. The method includes: receiving at least one data processing request; storing the at least one data processing request, in the form of a message queue, in a target key-value (Redis) cache; and responding to the data processing requests according to the message queue, thereby processing the target data held in the Redis cache. Because the target data is processed in the Redis cache, the embodiment avoids the inconsistency between the database's recorded inventory data and its actual inventory data that arises when the database executes data processing requests concurrently, as well as the heavy database load caused by concurrent processing. The high performance of Redis therefore improves the accuracy of data processing and relieves pressure on the database.

Description

Data processing method, server and computer-readable medium
Technical field
The present invention relates to the field of computer technology, and in particular to a data processing method, a server and a computer-readable medium.
Background art
With the development of Internet technology, Internet applications have become increasingly widespread. Promotional activities on the Internet in particular, such as flash sales and panic buying on e-commerce websites, are more and more common. At present, these Internet applications rely on a database to control changes to data; the database is a shared resource and usually executes multiple data processing requests concurrently.
However, when the database receives data processing requests from multiple terminals at the same time and executes the requests of the terminals concurrently, the inventory data recorded in the database and the actual inventory data may become inconsistent. For example, suppose the server's database holds 100 prizes in stock. A first terminal and a second terminal both read the stock as 100 and simultaneously each send a request for one prize to the database. The database executes the requests of the first terminal and the second terminal concurrently, so each request updates the stock to 100 - 1 = 99.
Because the database executes the two prize requests concurrently while sending one prize to each terminal, the actual remaining stock should be 98, yet the database records 99; the recorded inventory data and the actual inventory data are inconsistent, which can lead to prizes being over-issued. In addition, executing tasks concurrently places a heavy load on the server's database and slows its response.
Summary of the invention
An embodiment of the present invention provides a data processing method that caches data processing requests in the form of a message queue and handles each request according to that queue, thereby avoiding the inventory inconsistency and the heavy database load caused by the database executing data processing requests concurrently.
In a first aspect, an embodiment of the present invention provides a data processing method, the method comprising:
receiving at least one data processing request;
storing the at least one data processing request in a target key-value (Redis) cache in the form of a message queue;
responding to the data processing requests according to the message queue, and processing the target data in the Redis cache.
In a second aspect, an embodiment of the present invention provides a server that includes units for performing the method of the first aspect.
In a third aspect, an embodiment of the present invention provides another server that includes a processor, an input device, an output device and a memory, the processor, input device, output device and memory being connected to one another. The memory is used to store a computer program that supports the server in performing the above method; the computer program includes program instructions, and the processor is configured to invoke the program instructions to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium that stores a computer program. The computer program includes program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect.
By storing the received at least one data processing request in the target key-value (Redis) cache in the form of a message queue, responding to the data processing requests according to that queue, and processing the target data in the Redis cache, the embodiments of the present invention avoid the inconsistency between the database's recorded inventory data and its actual inventory data caused by the database executing data processing requests concurrently, as well as the heavy database load caused by concurrent processing. The high performance of the target key-value Redis store therefore improves the accuracy of data processing and relieves pressure on the database.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flow diagram of a data processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flow diagram of another data processing method according to an embodiment of the present invention;
Fig. 3 is a schematic block diagram of a server according to an embodiment of the present invention;
Fig. 4 is a schematic block diagram of another server according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the scope of protection of the present invention.
It should be understood that, when used in this specification and the appended claims, the terms "comprising" and "including" indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof.
It should also be understood that the terminology used in this description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in this description and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should further be understood that the term "and/or" used in this description and the appended claims refers to, and encompasses, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the server described in the embodiments of the present invention includes, but is not limited to, portable devices such as a mobile phone, a laptop computer or a tablet computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad). It should also be understood that, in some embodiments, the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).
In the discussion that follows, a server including a display and a touch-sensitive surface is described. It should be understood, however, that the server may include one or more other physical user interface devices, such as a physical keyboard, a mouse and/or a joystick.
The server supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a game application, a telephony application, a video conferencing application, an e-mail application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application and/or a video player application.
The various applications that can be executed on the server may use at least one common physical user interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the server may be adjusted and/or varied from one application to the next and/or within a given application. In this way, a common physical architecture of the server (for example, the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
Aiming at the problems that the database is overloaded when processing data processing requests, which slows down data processing, and that the database's recorded inventory data becomes inconsistent with the actual inventory data, the embodiments of the present invention propose a data processing method, a server and a computer-readable medium.
At present, the inconsistency between recorded and actual inventory data and the excessive database load that arise during data processing are mainly addressed by directly reducing the database's inventory data. Updating the database in this way has a certain impact on the data, and some solutions even ignore the effects of concurrency and simply issue the data directly. For example, in a lottery application with high-value prizes, if a professional prize-hunting group is encountered, prizes are easily over-issued, which may cause the awarding company economic losses of varying size; in serious cases the loss can be very large and may even lead to economic disputes.
In view of the excessive database load, the slow data processing, and the inconsistency between the database's recorded inventory data and its actual inventory data caused by the database executing data processing requests concurrently, the embodiments of the present invention propose a data processing method, a server and a computer-readable medium. The data processing method, server and computer-readable medium provided by the embodiments of the present invention may be applied to a server, and may also be applied to a terminal; the following description takes a server as an example.
In the embodiments of the present invention, the server may store first data of the target data in the Redis cache and initialize second data of the target data in the Redis cache to 0, where the target data includes the first data and the second data. The first data refers to the quantity of the target data to be processed that is pre-stored in the server, and the second data refers to the quantity of the target data that the server has sent out; the second data is incremented step by step as the first data is decremented step by step. When receiving at least one data processing request, the server may obtain the time at which its database received each data processing request and, according to the chronological order of those times, queue the received data processing requests into the Redis cache in the form of a message queue. According to the message queue, the server obtains the number of times the data processing requests have been responded to and, as that number of responses increases, decrements the first data stored in the Redis cache step by step.
In one embodiment, when responding to a data processing request, the server may detect whether the first data stored in the Redis cache is greater than 0. If the result is yes, the server determines to respond to the data processing request and performs the step-by-step decrement; if the result is no, it stops responding to data processing requests. After the target data in the Redis cache has been processed, the server may determine that the data processing succeeded if it detects that the second data of the target data equals the first data (i.e., the quantity before the decrements), and that the data processing failed if the second data does not equal the first data.
The data processing method, the server and the computer-readable medium provided by the embodiments of the present invention are described in detail below with reference to Fig. 1 to Fig. 4.
Referring to Fig. 1, Fig. 1 is a schematic flow diagram of a data processing method according to an embodiment of the present invention. As shown in Fig. 1, the method may include:
S101: Receive at least one data processing request.
In the embodiment of the present invention, the server may receive at least one data processing request. The at least one data processing request may be sent by the same terminal or by different terminals; the embodiments of the present invention do not limit this.
In one embodiment, before receiving the at least one data processing request, the server may store first data of the target data in its Redis cache and initialize second data of the target data in the Redis cache to 0. The target data includes the first data and the second data: the first data refers to the quantity of the target data to be processed that is pre-stored in the server, and the second data refers to the quantity of the target data that the server has sent out; the second data is incremented step by step as the first data is decremented step by step. As a concrete example, suppose that in a lottery application, before at least one prize request (a data processing request) is received, the server stores the quantity of prizes (the target data) in its Redis cache as 100 and initializes the dispensed-prize count in the Redis cache to 0. This can reduce the load on the database and thereby relieve its pressure.
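The initialization described above can be pictured with a short sketch. This is a minimal illustration only, assuming Python with the redis-py client; the key names prize_stock (the "first data") and prize_dispensed (the "second data") are hypothetical and do not come from the patent.

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def init_target_data(total_prizes: int = 100) -> None:
        # First data: quantity of target data still waiting to be processed.
        r.set("prize_stock", total_prizes)
        # Second data: quantity of target data already sent out, starts at 0.
        r.set("prize_dispensed", 0)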
S102: Store the at least one data processing request in the target key-value (Redis) cache in the form of a message queue.
In the embodiment of the present invention, the server may store the at least one data processing request in the target key-value (Redis) cache in the form of a message queue. Specifically, when receiving the at least one data processing request, the server may store the requests in the Redis cache in a waiting-in-line manner, i.e., as a message queue.
In one embodiment, when receiving the at least one data processing request, the server may obtain the time at which each data processing request was received and, according to the chronological order of those times, store the received data processing requests in the Redis cache one after another in the form of a message queue. As a concrete example, suppose the server receives 10 prize requests (data processing requests). The server obtains the times at which the 10 prize requests were received and, in chronological order, stores the 10 requests in the Redis cache as a message queue to wait for processing. By queueing the cached data processing requests in a message queue, this embodiment avoids the inconsistency between the database's recorded inventory data and its actual inventory data and improves the accuracy of data processing.
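Continuing the same hedged redis-py sketch (the queue name request_queue and the JSON payload fields are assumptions made only for illustration), an incoming request could be appended to a Redis list so that requests wait in line in the order they were received:

    import json
    import time

    def enqueue_request(r, request_id: str, user_id: str) -> None:
        payload = json.dumps({
            "request_id": request_id,
            "user_id": user_id,
            "received_at": time.time(),  # temporal information of this request
        })
        # RPUSH appends to the tail of the list, so popping from the head
        # later yields first-in, first-out (arrival-order) processing.
        r.rpush("request_queue", payload)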
S103: Respond to the data processing requests according to the message queue and process the target data in the Redis cache.
In the embodiment of the present invention, the server may respond to the data processing requests according to the message queue and process the target data in the Redis cache. Specifically, for the data processing requests in the message queue, the server responds to each request in the queue in turn and adjusts the first data and the second data in the Redis cache accordingly. This improves the accuracy of data processing.
In one embodiment, the server may, according to the message queue, obtain the number of times the data processing requests have been responded to and, as that number of responses increases, decrement the first data stored in the Redis cache step by step. Specifically, the server may respond to the data processing requests according to the message queue; each time one data processing request is responded to, the first data stored in the Redis cache is decremented by one step while the second data is incremented by one step.
As a concrete example, suppose the server receives 10 prize requests and stores the 10 requests in the Redis cache as a message queue. According to the message queue, the prize requests are responded to in turn. When the first prize request is responded to, the terminal reads the first data stored in the Redis cache, which is 100, subtracts 1 from it (100 - 1 = 99), adds 1 to the second data (0 + 1 = 1), and writes the decremented first data 99 and the incremented second data 1 back into the server's Redis cache. When the server responds to the second prize request in the message queue, the terminal reads the current first data 99 in the Redis cache, subtracts 1 (99 - 1 = 98), adds 1 to the current second data 1 (1 + 1 = 2), and writes the decremented first data 98 and the incremented second data 2 back into the server's Redis cache.
In one embodiment, when decrementing the first data stored in the Redis cache step by step, the server may detect whether the first data stored in the Redis cache is greater than 0. If the result is yes, the server determines to perform the step-by-step decrement; if the result is no, it stops responding to data processing requests. As a concrete example, suppose the first data stored in the server's Redis cache is 100. While decrementing the first data step by step, the server may detect whether the first data stored in the Redis cache is still greater than 0. If the result is yes, there is still data to be processed, so the data processing requests can be responded to and the target data can be processed; the server therefore determines to perform the step-by-step decrement. If the result is no, the data has been fully processed, and the server stops responding to the data processing requests.
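One possible way to combine S103 with the greater-than-0 check described above is sketched below, again assuming the redis-py setup and the hypothetical key and queue names used in the earlier snippets; the patent does not prescribe this exact code. Because the queue is consumed sequentially, the check and the decrement happen one request at a time rather than concurrently.

    def process_queue(r) -> None:
        while True:
            payload = r.lpop("request_queue")
            if payload is None:
                break  # message queue drained: no more data processing requests
            stock = int(r.get("prize_stock") or 0)
            if stock <= 0:
                break  # first data is not greater than 0: stop responding
            r.decr("prize_stock")      # step-by-step decrement of the first data
            r.incr("prize_dispensed")  # step-by-step increment of the second data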
In one embodiment, after the target data in the Redis cache has been processed, the server may detect whether the second data of the target data equals the first data. If the result is yes, the data processing is determined to have succeeded; if the result is no, the data processing is determined to have failed. As a concrete example, suppose that in a lottery application, after drawing prizes against the prize data in the Redis cache, the server detects that the first data has been decremented to 0 and then checks the second data. If the second data has been incremented to 100, which equals the first data of 100 before the decrements, the draw is determined to have succeeded; if the second data has been incremented to 102, which does not equal the first data of 100 before the decrements, the draw is determined to have failed.
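The success/failure check described above could look like the following, under the same assumptions (redis-py and the hypothetical key names); the default of 100 stands in for the first data as it was before any decrements.

    def verify_processing(r, initial_stock: int = 100) -> bool:
        # Success when the second data has grown to exactly the initial first data.
        dispensed = int(r.get("prize_dispensed") or 0)
        return dispensed == initial_stock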
In the embodiment of the present invention, the server stores the received at least one data processing request in the Redis cache in the form of a message queue, responds to each data processing request in turn according to the message queue, and processes the target data requested by those data processing requests in the Redis cache. This avoids the inconsistency between the database's recorded inventory data and its actual inventory data that is caused by the database executing data processing requests concurrently, as well as the heavy database load caused by concurrent processing. The high performance of Redis therefore improves the accuracy of data processing and relieves pressure on the database.
Referring to Fig. 2, Fig. 2 is a schematic flow diagram of another data processing method according to an embodiment of the present invention. As shown in Fig. 2, the method may include:
S201: Store first data of the target data in the Redis cache.
In the embodiment of the present invention, the server may store the first data of the target data in the Redis cache. For example, suppose the server receives at least one prize request (a data processing request) in a lottery application; before receiving the prize requests, the server may store the quantity of prizes (the target data) in the Redis cache as 100.
S202: Initialize second data of the target data in the Redis cache to 0.
In the embodiment of the present invention, the server may initialize the second data of the target data in the Redis cache to 0. For example, suppose the server receives at least one prize request (a data processing request) in a lottery application; before receiving the prize requests, the server may initialize the dispensed-prize count in the Redis cache to 0.
S203: Receive at least one data processing request.
In the embodiment of the present invention, the server may receive at least one data processing request. The at least one data processing request may be sent by the same terminal or by different terminals; the embodiments of the present invention do not limit this.
S204: Obtain the time at which each data processing request was received.
In the embodiment of the present invention, when receiving the at least one data processing request, the server may obtain the time at which each data processing request was received.
S205: According to the chronological order of the receive times, store the received data processing requests in the Redis cache one after another in the form of a message queue.
In the embodiment of the present invention, the server may, according to the chronological order of the receive times, store the received data processing requests in the Redis cache one after another in the form of a message queue. Specifically, when receiving the at least one data processing request, the server may obtain the time at which it received each data processing request and, in chronological order, store the received data processing requests in the Redis cache one after another as a message queue. As a concrete example, suppose the server receives 10 prize requests (data processing requests); it can obtain the times at which the 10 prize requests were received and, in chronological order, store the 10 requests in the Redis cache as a message queue to wait for processing. By queueing the cached data processing requests in a message queue, this embodiment avoids the inconsistency between the database's recorded inventory data and its actual inventory data and improves the accuracy of data processing.
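If the receive times need to drive the ordering explicitly, for example when requests can be enqueued out of arrival order, one option, not stated in the patent and offered only as an assumption, is a Redis sorted set scored by the receive time. A minimal redis-py sketch with a hypothetical key name follows.

    import json
    import time

    def enqueue_by_time(r, request_id: str) -> None:
        received_at = time.time()  # temporal information for this request
        member = json.dumps({"request_id": request_id, "received_at": received_at})
        r.zadd("request_queue_by_time", {member: received_at})

    def next_request(r):
        # Pop the earliest-received request (lowest score) for processing.
        popped = r.zpopmin("request_queue_by_time", count=1)
        return json.loads(popped[0][0]) if popped else None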
S206: According to the message queue, obtain the number of times the data processing requests have been responded to.
In the embodiment of the present invention, the server may, according to the message queue, obtain the number of times the data processing requests have been responded to.
S207: As the number of responses increases, decrement the first data stored in the Redis cache step by step.
In the embodiment of the present invention, the server may decrement the first data stored in the Redis cache step by step as the obtained number of responses to the data processing requests increases. Specifically, the server may, according to the message queue, obtain the number of times the data processing requests have been responded to and, as that number increases, decrement the first data stored in the Redis cache step by step while incrementing the second data step by step.
As a concrete example, suppose the server receives 10 prize requests and stores the 10 requests in the Redis cache as a message queue. According to the message queue, the prize requests are responded to in turn. When the first prize request is responded to, the terminal reads the first data stored in the Redis cache, which is 100, subtracts 1 from it (100 - 1 = 99), adds 1 to the second data (0 + 1 = 1), and writes the decremented first data 99 and the incremented second data 1 back into the server's Redis cache. When the server responds to the second prize request in the message queue, the terminal reads the current first data 99 in the Redis cache, subtracts 1 (99 - 1 = 98), adds 1 to the current second data 1 (1 + 1 = 2), and writes the decremented first data 98 and the incremented second data 2 back into the server's Redis cache.
In one embodiment, when decrementing the first data stored in the Redis cache step by step, the server may detect whether the first data stored in the Redis cache is greater than 0. If the result is yes, the server determines to perform the step-by-step decrement; if the result is no, it stops responding to data processing requests. As a concrete example, suppose the first data stored in the server's Redis cache is 100. While decrementing the first data step by step, the server may detect whether the first data stored in the Redis cache is still greater than 0. If the result is yes, there is still data to be processed, so the data processing requests can be responded to and the target data can be processed; the server therefore determines to perform the step-by-step decrement. If the result is no, the data has been fully processed, and the server stops responding to the data processing requests.
In one embodiment, after the target data in the Redis cache has been processed, the server may detect whether the second data of the target data equals the first data. If the result is yes, the server may determine that the data processing succeeded; if the result is no, it may determine that the data processing failed. As a concrete example, suppose that in a lottery application the server draws prizes against the prize data in the Redis cache. If it detects that the first data after the draw has been decremented to 0 and that the second data has been incremented to 100, which equals the first data of 100 before the decrements, it can determine that the draw succeeded; if it detects that the second data has been incremented to 102, which does not equal the first data of 100 before the decrements, it can determine that the draw failed.
In the embodiment of the present invention, the first data is stored in the Redis cache and the second data is initialized to 0; the received at least one data processing request is stored in the Redis cache in the form of a message queue; the data processing requests are responded to according to the message queue; and the target data in the Redis cache is processed. This avoids the inconsistency between the database's recorded inventory data and its actual inventory data caused by the database executing data processing requests concurrently, as well as the heavy database load caused by concurrent processing, thereby improving the accuracy of data processing and relieving pressure on the database.
An embodiment of the present invention further provides a server configured with units for performing any of the methods described above. Specifically, referring to Fig. 3, Fig. 3 is a schematic block diagram of a server according to an embodiment of the present invention. The server provided by the embodiment of the present invention includes: a receiving unit 301, a first storage unit 302 and a processing unit 303.
The receiving unit 301 is configured to receive at least one data processing request;
the first storage unit 302 is configured to store the at least one data processing request in the target key-value (Redis) cache in the form of a message queue;
the processing unit 303 is configured to respond to the data processing requests according to the message queue and process the target data in the Redis cache.
Specifically, the server further includes a second storage unit 304, wherein
the second storage unit 304 is configured to store the first data of the target data in the Redis cache and to initialize the second data of the target data in the Redis cache to 0, where the target data includes the first data and the second data.
Specifically, the second data is incremented step by step as the first data is decremented step by step.
Specifically, the first storage unit 302 is further configured to obtain the time at which each data processing request was received and, according to the chronological order of those times, store the received data processing requests in the Redis cache one after another in the form of a message queue.
Specifically, the processing unit 303 is further configured to obtain, according to the message queue, the number of times the data processing requests have been responded to and, as that number of responses increases, decrement the first data stored in the Redis cache step by step.
Specifically, the processing unit 303 is further configured to detect whether the first data stored in the Redis cache is greater than 0; if the result is yes, to determine to perform the step-by-step decrement; and if the result is no, to stop responding to the data processing requests.
Specifically, the processing unit 303 is further configured to detect whether the second data of the target data equals the first data; if the result is yes, to determine that the data processing succeeded; and if the result is no, to determine that the data processing failed.
In the embodiment of the present invention, the receiving unit 301 receives at least one data processing request, the first storage unit 302 stores the at least one data processing request in the Redis cache in the form of a message queue, and the processing unit 303 responds to the data processing requests according to the message queue and processes the target data in the Redis cache. This avoids the inconsistency between the database's recorded inventory data and its actual inventory data caused by the database executing data processing requests concurrently, as well as the heavy database load caused by concurrent processing, thereby improving the accuracy of data processing and relieving pressure on the database.
Referring to Fig. 4, Fig. 4 is a schematic block diagram of a server according to an embodiment of the present invention. As shown in Fig. 4, the server in this embodiment may include: one or more processors 401, one or more input devices 402, one or more output devices 403 and a memory 404. The processor 401, the input device 402, the output device 403 and the memory 404 are connected via a bus 405. The memory 404 is used to store a computer program, the computer program including program instructions, and the processor 401 is used to execute the program instructions stored in the memory 404. The processor 401 is configured to invoke the program instructions to perform:
receiving at least one data processing request;
storing the at least one data processing request in the target key-value (Redis) cache in the form of a message queue;
responding to the data processing requests according to the message queue and processing the target data in the Redis cache.
Further, the processor 401 is configured to invoke the program instructions to perform the following steps:
storing the first data of the target data in the Redis cache;
initializing the second data of the target data in the Redis cache to 0;
wherein the target data includes the first data and the second data.
Further, the second data is incremented step by step as the first data is decremented step by step.
Further, the processor 401 is configured to invoke the program instructions to perform the following steps:
obtaining the time at which each data processing request was received;
according to the chronological order of those times, storing the received data processing requests in the Redis cache one after another in the form of a message queue.
Further, the processor 401 is configured to invoke the program instructions to perform the following steps:
according to the message queue, obtaining the number of times the data processing requests have been responded to;
as the number of responses increases, decrementing the first data stored in the Redis cache step by step.
Further, the processor 401 is configured to invoke the program instructions to perform the following steps:
detecting whether the first data stored in the Redis cache is greater than 0;
if the result is yes, determining to perform the step-by-step decrement;
if the result is no, stopping responding to the data processing requests.
Further, the processor 401 is configured to invoke the program instructions to perform the following steps:
detecting whether the second data of the target data equals the first data;
if the result is yes, determining that the data processing succeeded;
if the result is no, determining that the data processing failed.
In the embodiment of the present invention, the server may store the received at least one data processing request in the Redis cache in the form of a message queue, respond to each data processing request in turn according to the message queue, and process the target data requested by those data processing requests in the Redis cache. Data processing requests are thus cached as a message queue, and the data requested by each request is processed in queue order. This avoids the inconsistency between the database's recorded inventory data and its actual inventory data caused by the database executing data processing requests concurrently, as well as the heavy database load caused by concurrent processing, thereby improving the accuracy of data processing and relieving pressure on the database.
It should be understood that, in the embodiments of the present invention, the processor 401 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 402 may include a touch pad, a fingerprint sensor (used to collect the user's fingerprint information and fingerprint orientation information), a microphone and the like; the output device 403 may include a display (such as an LCD), a loudspeaker and the like.
The memory 404 may include a read-only memory and a random access memory, and provides instructions and data to the processor 401. A part of the memory 404 may further include a non-volatile random access memory. For example, the memory 404 may also store information about the device type.
In specific implementations, the processor 401, the input device 402 and the output device 403 described in the embodiments of the present invention may perform the implementations described in the method embodiments of Fig. 1 and Fig. 2 of the data processing method provided by the embodiments of the present invention, and may also perform the implementations of the server described in the embodiments of the present invention; details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, the computer program including program instructions which, when executed by a processor, implement the implementations described in the embodiments of Fig. 1 or Fig. 2 of the present invention, and may also perform the implementations of the server described in Fig. 3 or the server described in the embodiment of Fig. 4; details are not repeated here.
The computer-readable storage medium may be an internal storage unit of the server described in any of the foregoing embodiments, such as a hard disk or a memory of the server. The computer-readable storage medium may also be an external storage device of the server, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the server. Further, the computer-readable storage medium may include both an internal storage unit of the server and an external storage device. The computer-readable storage medium is used to store the computer program and other programs and data required by the server, and may also be used to temporarily store data that has been output or is to be output.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of each example described with reference to the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the server and the units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed server and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a division by logical function, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and such modifications or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

  1. A data processing method, characterized by comprising:
    receiving at least one data processing request;
    storing the at least one data processing request in a target key-value (Redis) cache in the form of a message queue;
    responding to the data processing requests according to the message queue, and processing the target data in the Redis cache.
  2. The method according to claim 1, characterized in that, before receiving the at least one data processing request, the method comprises:
    storing first data of the target data in the Redis cache;
    initializing second data of the target data in the Redis cache to 0;
    wherein the target data includes the first data and the second data.
  3. The method according to claim 2, characterized in that the second data is incremented step by step as the first data is decremented step by step.
  4. The method according to claim 1, characterized in that storing the at least one data processing request in the target key-value (Redis) cache in the form of a message queue comprises:
    obtaining the time at which each data processing request was received;
    according to the chronological order of those times, storing the received data processing requests in the Redis cache one after another in the form of a message queue.
  5. The method according to any one of claims 1 to 4, characterized in that responding to the data processing requests according to the message queue and processing the target data in the Redis cache comprises:
    according to the message queue, obtaining the number of times the data processing requests have been responded to;
    as the number of responses increases, decrementing the first data stored in the Redis cache step by step.
  6. The method according to claim 5, characterized in that decrementing the first data stored in the Redis cache step by step comprises:
    detecting whether the first data stored in the Redis cache is greater than 0;
    if the result is yes, determining to perform the step-by-step decrement;
    if the result is no, stopping responding to the data processing requests.
  7. The method according to claim 6, characterized in that, after processing the target data in the Redis cache, the method comprises:
    detecting whether the second data of the target data equals the first data;
    if the result is yes, determining that the data processing succeeded;
    if the result is no, determining that the data processing failed.
  8. A server, characterized by comprising units for performing the method according to any one of claims 1 to 7.
  9. A server, characterized by comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being connected to one another, wherein the memory is used to store a computer program, the computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 7.
CN201711030551.2A 2017-10-26 2017-10-26 Data processing method, server and computer-readable medium Withdrawn CN107992517A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711030551.2A CN107992517A (en) Data processing method, server and computer-readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711030551.2A CN107992517A (en) Data processing method, server and computer-readable medium

Publications (1)

Publication Number Publication Date
CN107992517A true CN107992517A (en) 2018-05-04

Family

ID=62030612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711030551.2A CN107992517A (en) Data processing method, server and computer-readable medium

Country Status (1)

Country Link
CN (1) CN107992517A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292028A (en) * 2018-12-06 2020-06-16 北京京东尚科信息技术有限公司 Inventory information processing method and system, computer system and readable storage medium
CN109669791A (en) * 2018-12-22 2019-04-23 网宿科技股份有限公司 Exchange method, server and computer readable storage medium
CN110427386A (en) * 2019-08-05 2019-11-08 广州华多网络科技有限公司 Data processing method, device and computer storage medium
CN110427386B (en) * 2019-08-05 2023-09-19 广州方硅信息技术有限公司 Data processing method, device and computer storage medium
CN112948485A (en) * 2019-12-11 2021-06-11 中移(苏州)软件技术有限公司 Question-answer data synchronization method, device, system, server and storage medium
CN113032587B (en) * 2019-12-25 2023-07-28 北京达佳互联信息技术有限公司 Multimedia information recommendation method, system, device, terminal and server
CN113032587A (en) * 2019-12-25 2021-06-25 北京达佳互联信息技术有限公司 Multimedia information recommendation method, system, device, terminal and server
CN111176850B (en) * 2020-01-03 2023-08-22 中国建设银行股份有限公司 Data pool construction method, device, server and medium
CN111176850A (en) * 2020-01-03 2020-05-19 中国建设银行股份有限公司 Data pool construction method, device, server and medium
CN113138798A (en) * 2020-01-18 2021-07-20 佛山市云米电器科技有限公司 Instruction execution method, device and equipment under multiple scenes and storage medium
CN111414389A (en) * 2020-03-19 2020-07-14 北京字节跳动网络技术有限公司 Data processing method and device, electronic equipment and storage medium
CN111414389B (en) * 2020-03-19 2023-09-22 北京字节跳动网络技术有限公司 Data processing method and device, electronic equipment and storage medium
CN111488366A (en) * 2020-04-09 2020-08-04 百度在线网络技术(北京)有限公司 Relational database updating method, device, equipment and storage medium
CN111506475A (en) * 2020-04-15 2020-08-07 北京字节跳动网络技术有限公司 Data processing method, device and system, readable medium and electronic equipment
CN112711624A (en) * 2020-12-25 2021-04-27 北京达佳互联信息技术有限公司 Data packaging control method and device, electronic equipment and storage medium
CN112711624B (en) * 2020-12-25 2024-06-11 北京达佳互联信息技术有限公司 Data packaging control method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107992517A (en) A kind of data processing method, server and computer-readable medium
CN107786328A (en) A kind of method, service node device and computer-readable medium for generating key
CN107392055A (en) A kind of dual system safety chip control method, terminal, computer-readable recording medium and the dual system framework based on safety chip
CN107508860A (en) One kind service current-limiting method, server and terminal
US20140181834A1 (en) Load balancing method for multicore mobile terminal
CN107844189A (en) A kind of method, system, terminal and computer-readable recording medium for reducing blank screen power consumption
CN107846511A (en) A kind of method, terminal and computer-readable recording medium for accessing moving advertising
CN107818467A (en) A kind of method of payment and terminal
CN107371146A (en) A kind of method and terminal for selecting short message channel
CN108108216A (en) A kind of method, terminal and computer readable storage medium for managing message
CN107656966A (en) The method and server of a kind of processing data
CN107291459A (en) A kind of method and server for arranging information
CN107506494B (en) Document handling method, mobile terminal and computer readable storage medium
CN110244963A (en) Data-updating method, device and terminal device
CN108366091A (en) Network request processing method, terminal and computer-readable medium
CN107479806A (en) The method and terminal of a kind of changing interface
CN107390969A (en) A kind of method and terminal for controlling suspended window
CN107527192A (en) It is a kind of to identify the method for repeating to pay and server
CN115345464A (en) Service order dispatching method and device, computer equipment and storage medium
CN107766708A (en) Nullify method, terminal and the computer-readable recording medium of account Entered state
CN108289028A (en) A kind of signature authentication method, relevant device and computer readable storage medium
CN107770281A (en) A kind of method, server and computer-readable recording medium for notifying trade company's reimbursement information
CN114186259A (en) Authority control method and device, electronic equipment and storage medium
CN108366298A (en) Video broadcasting method, mobile terminal and computer readable storage medium
CN108920704A (en) File access pattern method, file restoring device and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180504

WW01 Invention patent application withdrawn after publication