CN108897615B - Second killing request processing method, application server cluster and storage medium - Google Patents


Info

Publication number
CN108897615B
CN108897615B (application CN201810547915.2A; earlier publication CN108897615A)
Authority
CN
China
Prior art keywords
killing
preset
inventory
cache
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810547915.2A
Other languages
Chinese (zh)
Other versions
CN108897615A (en)
Inventor
陈仁义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kangjian Information Technology Shenzhen Co Ltd
Original Assignee
Kangjian Information Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kangjian Information Technology Shenzhen Co Ltd
Priority to CN201810547915.2A
Publication of CN108897615A
Application granted
Publication of CN108897615B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06F 9/505: Allocation of resources to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the load
    • G06F 9/5083: Techniques for rebalancing the load in a distributed system
    • G06F 12/0811: Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
    • G06F 12/0866: Cache addressing for peripheral storage systems, e.g. disk cache
    • G06Q 30/0635: Processing of requisition or of purchase orders
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a second killing request processing method, which comprises the following steps: respectively determining the virtual inventory corresponding to each cache server; receiving second killing requests submitted by users and distributing them to the servers in the server cluster through load balancing; reading the specific information of each second killing request and performing qualification verification on it; when the qualification verification succeeds, deducting the virtual inventory of the cache server corresponding to the second killing request, generating preset token information corresponding to the second killing request, and storing the preset token information into a preset first-level cache; and receiving a transaction request submitted by the user for a second killing request that has passed qualification verification, checking the real-time token information, and deducting the real inventory in the database when the token check passes, at which point the second killing succeeds. The invention also provides an application server cluster and a storage medium. The invention can improve second killing request processing efficiency.

Description

Second killing request processing method, application server cluster and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method for processing a second killing request, an application server cluster, and a computer readable storage medium.
Background
In the e-commerce field, activities are often held to attract users, for example second killing (flash sale) activities. A second killing activity offers certain commodities at a lower price and in limited quantity, requiring users to snap them up in a second killing rush, which attracts users. However, the traffic at the moment the activity starts is very large and all requests target the same resource, causing serious concurrent database read-write conflicts and resource lock request conflicts.
Disclosure of Invention
In view of the foregoing, the present invention provides a method for processing a second killing request, an application server cluster and a computer readable storage medium, and the main purpose of the present invention is to improve the second killing request processing efficiency.
In order to achieve the above object, the present invention provides a method for processing a second killing request, the method comprising:
S1, uniformly distributing the inventory of the second killing commodity to a plurality of cache servers in a cache server cluster according to a preset rule, and respectively determining the virtual inventory corresponding to each cache server;
S2, receiving second killing requests distributed by a gateway server through load balancing, wherein the gateway server, when a preset moment is reached, receives a plurality of second killing requests submitted by users through clients;
S3, reading specific information of each second killing request, and performing qualification verification on each second killing request according to a preset verification rule;
S4, when qualification verification of the second killing request is passed, deducting the virtual inventory of the cache server corresponding to the second killing request, generating preset token information corresponding to the second killing request according to the user qualification information, and storing the preset token information into a preset first-level cache; and
S5, receiving a transaction request submitted by the user for a second killing request that has passed qualification verification, generating real-time token information according to the transaction request, checking the real-time token information, and deducting the real inventory in the database when the real-time token information passes the check, at which point the second killing succeeds.
In addition, the invention also provides an application server cluster, wherein the application server cluster comprises a plurality of application servers, and each application server comprises a memory and a processor, the memory storing a second killing request processing program which, when executed by the processor, can implement any of the steps in the second killing request processing method described above.
In addition, in order to achieve the above object, the present invention also provides a computer-readable storage medium including therein a second killing request processing program which, when executed by a processor, can implement any of the steps in the second killing request processing method as described above.
According to the second killing request processing method, the application server cluster and the computer readable storage medium, the inventory of the second killing commodity is hashed and distributed, the gateway server distributes the second killing requests to the servers in the server cluster through load balancing, and each second killing request is loaded into its corresponding cache server, so that the second killing requests are dispersed and the response capability of the system is improved; by performing qualification verification on the second killing requests, invalid requests are filtered out and the pressure on the core services of the system is relieved; and by constructing a first-level cache and a second-level cache from which the server acquires corresponding information in sequence before the database, the second killing request processing efficiency is improved.
Drawings
FIG. 1 is a flow chart of a method for processing a second killing request according to a preferred embodiment of the present invention;
FIG. 2 is a schematic view of an application environment of a preferred embodiment of an application server cluster according to the present invention;
FIG. 3 is a schematic diagram of a preferred embodiment of the application server of FIG. 2;
FIG. 4 is a schematic diagram of the program modules of the second killing request processing program in FIG. 3.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides a second killing request processing method. Referring to FIG. 1, a flow chart of a preferred embodiment of the second killing request processing method of the present invention is shown. The method may be performed by an apparatus, which may be implemented in software and/or hardware.
In this embodiment, the second killing request processing method includes steps S1 to S5:
S1, uniformly distributing the inventory of the second killing commodity to a plurality of cache servers in a cache server cluster according to a preset rule, and respectively determining the virtual inventory corresponding to each cache server;
a cache server cluster is predetermined, wherein the cache server cluster comprises m cache servers, and the cache server cluster is used as a first-level cache.
Before the second killing activity, information such as the second killing commodity, the commodity number, the inventory and the activity session is determined in advance and stored in the database. Assuming the second killing activity contains 1 commodity whose actual inventory read from the database is 10000, the commodity inventory is divided into 100 parts according to the last two digits (00-99) of the user number, each part containing 100 units, and the 100 parts are uniformly distributed to the m cache servers in the cache server cluster according to a hash algorithm.
It should be noted that the inventory distributed to each cache server in the cache server cluster is derived from the actual inventory, but the inventory actually stored on each cache server is a virtual inventory. Generally, the virtual inventory is slightly larger than the actual inventory and can be set according to the operator's requirements. Then, the virtual inventory corresponding to each cache server is determined, and the virtual inventory information is saved into the first-level cache.
Assuming that the inventory of commodity i in the second killing activity is 10000, the keys of commodity i in each cache server are defined in the form stock:r:i:00 through stock:r:i:99, where stock is the resource prefix, r is the session number of the second killing activity, i is the commodity number of the second killing commodity, and 00-99 are the last two digits of the user number; the value corresponding to each key is set to 100. That is, different sessions and/or different commodities in the second killing activity may correspond to different keys.
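For illustration only (code is not part of the patent disclosure), the sharding and key layout described above can be sketched as follows; the dictionary-based cache servers, the helper name build_inventory_shards and the exact key separator are assumptions:

```python
import zlib

# Minimal sketch of the inventory-sharding step (assumed key format: "stock:{r}:{i}:{NN}").
# The cache servers are simulated with plain dictionaries; in practice each shard
# would live on a separate cache server in the cluster.

def build_inventory_shards(session_no: str, item_no: str,
                           total_stock: int, num_cache_servers: int):
    shards = 100                        # one shard per user-number suffix "00".."99"
    per_shard = total_stock // shards   # e.g. 10000 / 100 = 100 units per shard
    cache_servers = [dict() for _ in range(num_cache_servers)]
    for suffix in range(shards):
        key = f"stock:{session_no}:{item_no}:{suffix:02d}"
        # a stable hash decides which cache server holds this shard
        server = cache_servers[zlib.crc32(key.encode()) % num_cache_servers]
        server[key] = per_shard         # the stored value is the (virtual) inventory
    return cache_servers

servers = build_inventory_shards("r1", "i1", total_stock=10000, num_cache_servers=4)
print([len(s) for s in servers])        # 100 shard keys spread over the 4 servers
```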
It can be understood that the number of cache servers in the cache server cluster determines the absorbing capability of the cache server cluster: if the throughput of a single cache server is 20,000 TPS, the throughput of m cache servers is 20,000×m TPS. Therefore, in this embodiment, the value of m needs to be determined by estimating the concurrency of the second killing activity.
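The sizing rule implied here is a simple division of the estimated peak concurrency by the single-server throughput; a back-of-the-envelope sketch, in which the 150,000 TPS peak is an assumed figure:

```python
import math

single_server_tps = 20_000     # throughput of one cache server, per the example above
estimated_peak_tps = 150_000   # assumed peak concurrency of the second killing activity

m = math.ceil(estimated_peak_tps / single_server_tps)
print(f"{m} cache servers needed to absorb the estimated peak")   # -> 8
```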
S2, receiving second killing requests distributed by the gateway server through load balancing, wherein the gateway server, when a preset moment is reached, receives a plurality of second killing requests submitted by users through clients;
wherein the preset time is the time when the second killing activity starts. When the second killing activity starts, the gateway server receives a second killing request submitted by a user through the client, wherein the second killing request comprises: user number, merchandise information, coupon information, etc. It will be appreciated that at the beginning of the second killing, a large number of second killing requests, for example 10000 second killing requests, are received simultaneously, and the 10000 second killing requests need to be distributed to each application server in the application server cluster for processing.
In this embodiment, the gateway server distributes the 10000 second killing requests to the application servers according to a DNS polling (round-robin) mechanism, wherein the gateway server uses an F5 physical machine. Assuming that four application servers form the application server cluster, with IP addresses 172.28.20.1, 172.28.20.2, 172.28.20.3 and 172.28.20.4 respectively, the gateway server schedules the IP addresses in a cyclic order when users request access: the second killing request of the first user is responded to in the order 172.28.20.1, 172.28.20.2, 172.28.20.3, 172.28.20.4, the second killing request of the next user is responded to in the rotated order 172.28.20.2, 172.28.20.3, 172.28.20.4, 172.28.20.1, and the rotation continues in this way, thereby realizing DNS load balancing.
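The rotation order described above can be mimicked with a short sketch (illustrative only; the real dispatching is done by the F5 gateway via DNS polling, not by application code):

```python
from itertools import cycle

# The four application-server addresses from the example above.
app_servers = ["172.28.20.1", "172.28.20.2", "172.28.20.3", "172.28.20.4"]
rotation = cycle(app_servers)

def dispatch(second_killing_request: dict) -> str:
    """Pick the next application server in round-robin order for this request."""
    return next(rotation)

# The first four requests go to .1, .2, .3, .4; the fifth starts over at .1.
for n in range(5):
    print(n, dispatch({"user_no": f"{n:04d}"}))
```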
After receiving a second killing request distributed by the gateway server, the application server loads the second killing request into the cache server corresponding to the user number information carried in the request. For example, if the last two digits in the user number information carried in the second killing request are "52", the cache server in the cache server cluster whose inventory key information contains the user-number suffix "52" is determined, and the second killing request is loaded into that cache server for subsequent acquisition of the virtual inventory information on it.
The second killing requests are distributed to the application servers in the application server cluster through load balancing, and the cache server cluster effectively guides the concurrent second killing requests to different cache servers in the cluster, which avoids the limited concurrency capacity of a standalone server, disperses the second killing requests and improves the response capability of the system.
S3, reading specific information of each second killing request, and performing qualification verification on each second killing request according to a preset verification rule;
It can be appreciated that, in order to reduce the consumption of the application server cluster's performance, when the actual inventory of the commodity is "0" and a second killing request sent by a user is received, sold-out information is returned directly. Specifically, the method comprises the following steps:
Judging whether a preset stock identifier exists in the preset secondary cache;
generating first early warning information when the preset inventory identification exists in the secondary cache;
when the preset inventory identification does not exist in the secondary cache, judging whether the preset inventory identification exists in the primary cache;
when the preset stock identification exists in the first-level cache, generating first early warning information, and storing the preset stock identification into the second-level cache;
and when the preset inventory identification does not exist in the first-level cache, the plurality of second killing requests are distributed to a plurality of application servers in the application server cluster through load balancing.
The preset second-level cache is the memory of each application server in the application server cluster, and the preset inventory identifier is "no inventory"/"inventory is 0". The inventory identifier is read in the order of the second-level cache and then the first-level cache; when the "no inventory"/"inventory is 0" identifier exists in the second-level cache or the first-level cache, it indicates that the commodity is sold out, and first early-warning information is generated and fed back to the client corresponding to the second killing request, for example "the commodity you are trying to purchase is sold out".
It should be noted that when the "no inventory"/"inventory is 0" identifier does not exist in the second-level cache but exists in the first-level cache, the first early-warning information is generated and the identifier is also placed into the second-level cache, so that it can be read directly from the second-level cache the next time a second killing request is received; invalid requests are thereby released, system pressure is relieved, and overselling of the commodity is effectively avoided.
The retention period of data stored in the second-level cache is shorter than that of data stored in the first-level cache, but the speed at which the application server reads data from the second-level cache is much greater than the speed of reading from the first-level cache. Therefore, storing the "no inventory"/"inventory is 0" identifier held in the first-level cache into the second-level cache increases the speed of subsequent reads of the identifier, releases invalid requests more quickly, and improves the stability of the system.
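A rough sketch of the two-level "sold out" check laid out in the steps above (the caches are modeled as plain dictionaries and the identifier value is an assumed placeholder):

```python
SOLD_OUT = "inventory_is_0"   # the preset inventory identifier

level_two_cache = {}          # application-server local memory: fast, short retention
level_one_cache = {}          # cache server cluster: shared, slower to reach

def is_sold_out(item_key: str) -> bool:
    """Return True (first early-warning should be sent) if the sold-out mark exists."""
    if level_two_cache.get(item_key) == SOLD_OUT:
        return True                              # fastest path: local memory hit
    if level_one_cache.get(item_key) == SOLD_OUT:
        level_two_cache[item_key] = SOLD_OUT     # promote the mark into the L2 cache
        return True
    return False                                 # no mark: let the request go on to verification
```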
It will be appreciated that within tens of seconds the inventory of the commodity participating in the second killing activity is sold out; all second killing requests thereafter are invalid requests, and every verification step performed for such a request consumes server performance on an invalid check, so the majority of subsequent requests need to be intercepted.
In this embodiment, every time a second killing request is received, the preset inventory identifier is first looked up in the second-level cache/first-level cache, so that the flow into the subsequent verification steps is blocked and the system is protected. In other embodiments, after the "no inventory"/"inventory is 0" identifier is generated, the channel through which users submit second killing requests is closed directly, so users cannot submit second killing requests and the number of invalid requests is reduced to the greatest extent.
When the "no inventory"/"inventory is 0" identifier exists in neither the first-level cache nor the second-level cache, the commodity has not been sold out, and the received second killing request is preliminarily regarded as valid; part of the invalid requests are then filtered out through user qualification verification, so that only valid requests are processed. Specifically, the step of performing qualification verification on the second killing request according to a preset verification rule includes:
randomly reading a piece of verification data from a preset secondary cache, wherein the verification data comprises a verification question and a preset answer, sending the verification question to a client corresponding to the second killing request, receiving answer data input by a user, and comparing the answer data with the preset answer;
When the answer data is inconsistent with the preset answer, generating second early warning information, and returning to execute the previous step; or alternatively
And when the answer data is consistent with the preset answer, generating user qualification information according to the second killing request and querying the first-level cache for it; when the user qualification information does not exist in the first-level cache, the user qualification check succeeds and the user qualification information is stored into the first-level cache, or when the user qualification information already exists in the first-level cache, third early-warning information is generated.
It should be noted that, when the second killing activity is activated during operation, a preset number (for example, 10000 pieces) of verification data are read from a preset verification code library and stored in the secondary cache, and the verification data in the secondary caches corresponding to different application servers in the application server cluster are not necessarily the same. Each piece of verification data comprises a verification question and a preset answer.
Taking an application server A in the application server cluster as an example, after receiving a second killing request distributed by the gateway server, when the commodity has not been sold out, the application server A randomly retrieves a piece of verification data from the second-level cache, sends the verification question of that verification data to the client corresponding to the second killing request, generates a key according to the resource prefix, commodity number, session number and user number in the second killing request, and stores the preset answer of that piece of verification data into the first-level cache under the parameters of the key.
After answer data input by the user through the client is received, the preset answer is acquired from the first-level cache according to the parameters of the key and compared with the answer data: when the answer data is inconsistent with the preset answer, second early-warning information is generated and fed back to the client corresponding to the second killing request, for example "verification code error". Then a new piece of verification data is read and the verification-code check step is performed again.
The purpose of this step is to further increase the difficulty of machine recognition, without making recognition harder for human users, by introducing verification data from the verification code library, thereby effectively blocking the pressure that machine-generated requests place on the system.
After the verification code is passed, the key generated from the resource prefix, the commodity number, the session number and the user number in the second killing request is used as the user qualification information, and the user qualification information is judged. Specifically, the user qualification information is queried from the first-level cache: if it does not exist in the first-level cache, the current user is considered not to have participated in the second killing of this commodity in this session, that is, the current user is qualified to participate, and the user qualification information judged for the current request is saved into the first-level cache. When the user qualification information already exists in the first-level cache, it indicates that the current user has already participated in the second killing activity for this commodity and session, and third early-warning information is generated and fed back to the client corresponding to the second killing request, for example "this commodity cannot be second-killed repeatedly".
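The qualification verification just described (verification-code comparison followed by the one-participation-per-user rule) can be sketched as follows; the dictionary-based cache, the TTL helpers and the key layout are illustrative assumptions that mirror the 15-minute window and the key fields named above:

```python
import time

level_one_cache = {}          # {key: (value, expires_at)} -- stands in for the cache cluster
QUALIFICATION_TTL = 15 * 60   # seconds; mirrors the 15-minute payment window

def cache_put(key: str, value: str, ttl: int) -> None:
    level_one_cache[key] = (value, time.time() + ttl)

def cache_get(key: str):
    entry = level_one_cache.get(key)
    if entry is None or entry[1] < time.time():
        level_one_cache.pop(key, None)           # expired entries are dropped
        return None
    return entry[0]

def verify_qualification(request: dict, answer_data: str, preset_answer: str) -> str:
    # 1. verification-code check: a wrong answer triggers the second early-warning
    if answer_data.strip().lower() != preset_answer.strip().lower():
        return "second_warning: verification code error"
    # 2. one participation per user, commodity and session
    qual_key = f"qual:{request['session_no']}:{request['item_no']}:{request['user_no']}"
    if cache_get(qual_key) is not None:
        return "third_warning: this commodity cannot be second-killed repeatedly"
    cache_put(qual_key, "participated", QUALIFICATION_TTL)
    return "qualified"
```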
It will be appreciated that, in order to prevent an individual user from repeatedly second killing the same commodity, the user qualification information stored in the first-level cache is given a preset time interval (corresponding to the user's effective payment time, for example 15 minutes) that is much longer than the time in which the commodity is snapped up during the second killing activity, so that a user who tries to second kill again cannot pass the user qualification verification. In addition, if the user passes qualification verification and the subsequent checks but then chooses not to pay, or does not pay within the preset time interval, the order system closes the order after the preset time interval and releases the participation record corresponding to the user qualification information from the first-level cache. That is, the user may still participate in the second killing of the commodity again after 15 minutes, provided the commodity is in stock.
These steps further filter out invalid requests by verifying the user's qualification, solve the problem of a single user placing repeated orders, and relieve the pressure on the system.
S4, when qualification verification of the second killing request is passed, deducting virtual inventory of a cache server corresponding to the second killing request, generating preset token information corresponding to the second killing request according to user qualification information, and storing the preset token information into a preset first-level cache;
When the user corresponding to the second killing request has second killing qualification, the virtual inventory on the cache server corresponding to the second killing request is obtained according to the last two digits of the user number, and one deduction operation is performed on the virtual inventory. Then, preset token information corresponding to the user qualification information is generated from the user qualification information, for example a token of the form token:a:r:i:u generated from the second killing activity number a, the second killing session number r, the second killing commodity number i and the user number u, and the preset token information is stored into the first-level cache.
It will be appreciated that, similarly to the user qualification information, if the user finally chooses not to pay or does not pay within the preset time interval, the order system closes the order after the preset time interval, and at the same time the preset token information corresponding to the second killing request needs to be released from the first-level cache. Therefore, the valid time of the preset token information also needs to be set to the preset time interval, for example 15 minutes.
It should be noted that the token is issued only when the virtual inventory has not dropped below zero, which realizes qualification control over who is released to participate in deduction of the real inventory. When the virtual inventory is zero, deducting it once more yields a negative number; because the virtual inventory is larger than the real inventory, an insufficient virtual inventory means the real inventory of the commodity is also insufficient, so the token issuance fails and first early-warning information is generated and fed back to the client corresponding to the second killing request, for example "the commodity is sold out".
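Step S4 then reduces to a decrement-followed-by-issue operation; a minimal sketch, in which the token layout token:a:r:i:u follows the numbering given above and the atomic decrement of a real cache is simplified to a dictionary update:

```python
def deduct_and_issue_token(cache_shard: dict, stock_key: str, activity_no: str,
                           session_no: str, item_no: str, user_no: str):
    """Deduct one unit of virtual inventory; issue a token only if it stays non-negative."""
    cache_shard[stock_key] = cache_shard.get(stock_key, 0) - 1
    if cache_shard[stock_key] < 0:
        cache_shard[stock_key] = 0     # virtual stock exhausted -> real stock is gone too
        return None, "first_warning: the commodity is sold out"
    token = f"token:{activity_no}:{session_no}:{item_no}:{user_no}"
    return token, "token issued; valid for the 15-minute payment window"
```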
Further, to reduce consumption of system performance by subsequent second kill requests, a preset inventory identification ("no inventory"/"inventory is 0") is generated and maintained in the level one cache.
S5, receiving a transaction request submitted by the user for a second killing request that has passed qualification verification, generating real-time token information according to the transaction request, checking the real-time token information, and deducting the real inventory in the database when the real-time token information passes the check, at which point the second killing succeeds.
The user can perform the subsequent payment operation within a preset time after passing the qualification verification. A transaction request submitted by the user based on the second killing request is received, and real-time token information of the transaction request is generated according to the information contained in the transaction request and a preset token generation rule. Specifically, the step of verifying the real-time token information includes:
acquiring the preset token information corresponding to the second killing request from the first-level cache, and judging that the real-time token information is invalid when the preset token information does not exist in the first-level cache;
When the preset token information exists in the primary cache, judging whether the real-time token information is consistent with the preset token information or not;
if the real-time token information is inconsistent, judging that the real-time token information is invalid, and prompting that the token check fails; or alternatively
And if the real-time token information is consistent, judging that the real-time token information is effective, and prompting that the token check is successful.
When the preset token information does not exist in the first-level cache, the preset token information has been released, that is, the valid time limit for payment has been exceeded. When the real-time token information is consistent with the preset token information, it indicates that the second killing request corresponding to the current transaction request has passed the user qualification verification and is a valid request.
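The token check itself is a lookup-and-compare against the preset token held in the first-level cache; a minimal sketch with illustrative names:

```python
def check_token(level_one_cache: dict, token_key: str, real_time_token: str):
    """Compare the real-time token against the preset token kept in the first-level cache."""
    preset_token = level_one_cache.get(token_key)
    if preset_token is None:
        return False, "token released: the payment time limit was exceeded"
    if real_time_token != preset_token:
        return False, "token check failed"
    return True, "token check passed"
```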
In this way, when the downstream order-placing link is requested, the token required for the user's order is derived from the parameters contained in the request, which prevents requests from bypassing the system verification and hitting the core link directly.
After the token check succeeds, the real inventory of the commodity needs to be deducted. When the real inventory of the commodity in the database is not zero, the deduction succeeds, the second killing succeeds and the transaction is completed. If the deduction of the real inventory fails for some reason, the preset token information becomes invalid, and after a preset time interval (for example, 15 minutes) the user regains the qualification to second kill.
When the real inventory of the commodity is zero, deducting it once more would yield a negative number, which indicates that the real inventory of the commodity is insufficient; the transaction cannot proceed, and first early-warning information is generated and fed back to the client corresponding to the transaction request, for example "the commodity is sold out".
Further, to reduce consumption of system performance by subsequent second kill requests, a preset inventory identification ("no inventory"/"inventory is 0") is generated and maintained in the level one cache.
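Deducting the real inventory without driving it negative is commonly done with a conditional update in the database; the sketch below shows one way to express that guard (the table and column names are assumptions, not taken from the patent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item_stock (item_no TEXT PRIMARY KEY, real_stock INTEGER)")
conn.execute("INSERT INTO item_stock VALUES ('i1', 10000)")

def deduct_real_inventory(conn: sqlite3.Connection, item_no: str) -> bool:
    """Deduct one unit of real inventory; refuse the deduction once the stock reaches zero."""
    cur = conn.execute(
        "UPDATE item_stock SET real_stock = real_stock - 1 "
        "WHERE item_no = ? AND real_stock > 0",
        (item_no,),
    )
    conn.commit()
    return cur.rowcount == 1    # one row updated -> the second killing transaction succeeds

print(deduct_real_inventory(conn, "i1"))   # True while real stock remains, False afterwards
```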
In this step, by checking the token when the user places an order and pays, requests carrying invalid tokens are filtered out, preventing requests that bypass the normal link.
According to the second killing request processing method provided by this embodiment, the inventory of the second killing commodity is hashed and the hashed shards are stored on the cache servers in the cache server cluster; the gateway server distributes the second killing requests to the application servers in the application server cluster through load balancing, and each second killing request is loaded into its corresponding cache server, which disperses the second killing requests and improves the response capability of the system; by performing qualification verification on the second killing requests, invalid requests are filtered out and the pressure on the core services of the system is relieved; and by constructing a first-level cache and a second-level cache from which the server acquires corresponding information in sequence before the database, the second killing request processing efficiency is improved.
The invention also provides an application server cluster. Referring to fig. 2, a schematic application environment of the application server cluster 1 according to the present invention is shown.
In this embodiment, the application server cluster 1 includes a plurality of application servers 2. Each application server 2 receives second killing requests allocated by a gateway server 4, the gateway server being configured to receive data requests, such as second killing requests, sent by users through clients 5; the application server 2 performs data transmission, such as storage and reading, with the cache server cluster 3, and also performs data transmission with the client 5, such as receiving a verification code input by the user or feeding back early-warning information.
Wherein, each cache server 31 in the cache server cluster 3 serves as a first level cache, and the memory of the application server 2 serves as a second level cache.
Referring to FIG. 3, a schematic diagram of a preferred embodiment of the application server 2 in FIG. 2 is shown.
In the present embodiment, the application server 2 may be a rack server, a blade server, a tower server, or a cabinet server.
The application server 2 comprises a memory 11, a processor 12, a communication bus 13, and a network interface 14.
The memory 11 includes at least one type of readable storage medium including flash memory, a hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the application server 2, such as a hard disk of the application server 2. The memory 11 may in other embodiments also be an external storage device of the application server 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the application server 2. Further, the memory 11 may also include both an internal storage unit and an external storage device of the application server 2.
The memory 11 may be used not only for storing application software installed in the application server 2 and various types of data, such as the second killing request processing program 10, but also for temporarily storing data that has been output or is to be output.
Processor 12 may in some embodiments be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor or other data processing chip for running program code or processing data stored in memory 11, such as the second killing request processing program 10.
The communication bus 13 is used to enable connection communication between these components.
The network interface 14 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), typically used to establish a communication connection between the application server 2 and other electronic devices.
Fig. 3 shows only the application server 2 with components 11-14, it will be appreciated by those skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the application server 2, and may include fewer or more components than shown, or may combine certain components, or a different arrangement of components.
Alternatively, the application server 2 may further comprise a user interface, which may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and a standard wired interface, a wireless interface.
Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch, or the like. The display may also be referred to as a display screen or a display unit for displaying information processed in the application server 2 and for displaying a visualized user interface.
In the embodiment of the application server 2 shown in fig. 3, the memory 11, which is a computer storage medium, stores the program code of the second killing request processing program 10, and when the processor 12 executes the program code of the second killing request processing program 10, the following steps are realized:
A1, uniformly distributing the inventory of the second killing commodity to a plurality of cache servers 31 in a cache server cluster 3 according to a preset rule, and respectively determining the virtual inventory corresponding to each cache server 31;
a cache server cluster 3 is predetermined, wherein m cache servers 31 are included in the cache server cluster 3, and the cache server cluster 3 is used as a level one cache.
Before the second killing activity, information such as the second killing commodity, the commodity number, the inventory and the activity session in the second killing activity is determined in advance and stored in a database (not shown). Assuming the second killing activity contains 1 commodity whose actual inventory read from the database is 10000, the commodity inventory is divided into 100 parts according to the last two digits (00-99) of the user number, each part containing 100 units, and the 100 parts are uniformly distributed to the m cache servers 31 in the cache server cluster 3 according to a hash algorithm.
It should be noted that the inventory distributed to each cache server 31 in the cache server cluster 3 is derived from the actual inventory, but the inventory actually stored on each cache server 31 is a virtual inventory. Generally, the virtual inventory is slightly larger than the actual inventory and can be set according to the operator's requirements. Then, the virtual inventory corresponding to each cache server 31 is determined, and the virtual inventory information is saved into the first-level cache.
Assuming that the inventory of commodity i in the second killing activity is 10000, the keys of commodity i in each cache server 31 are defined in the form stock:r:i:00 through stock:r:i:99, where stock is the resource prefix, r is the session number of the second killing activity, i is the commodity number of the second killing commodity, and 00-99 are the last two digits of the user number; the value corresponding to each key is set to 100. That is, different sessions and/or different commodities in the second killing activity may correspond to different keys.
It can be understood that the number of cache servers 31 in the cache server cluster 3 determines the absorbing capability of the cache server cluster 3: if the throughput of a single cache server 31 is 20,000 TPS, the throughput of m cache servers is 20,000×m TPS. Therefore, in this embodiment, the value of m needs to be determined by estimating the concurrency of the second killing activity.
A2, receiving second killing requests distributed by the gateway server 4 through load balancing, wherein the gateway server 4, when a preset moment is reached, receives a plurality of second killing requests submitted by users through the clients 5;
wherein the preset time is the time when the second killing activity starts. At the beginning of the second killing activity, the gateway server 4 receives a second killing request submitted by the user through the client 5, wherein the second killing request comprises: user number, merchandise information, coupon information, etc. It will be appreciated that at the moment when the second killing starts, a large number of second killing requests, for example 10000 second killing requests, are received simultaneously, and the 10000 second killing requests need to be distributed to each application server 2 in the application server cluster 1 for processing.
In the present embodiment, the gateway server 4 distributes the 10000 second killing requests to the application servers 2 according to a DNS polling (round-robin) mechanism, wherein the gateway server 4 uses an F5 physical machine. Assuming that four application servers 2 form the application server cluster 1, with IP addresses 172.28.20.1, 172.28.20.2, 172.28.20.3 and 172.28.20.4 respectively, the gateway server 4 schedules the IP addresses in a cyclic order when users request access: the second killing request of the first user is responded to in the order 172.28.20.1, 172.28.20.2, 172.28.20.3, 172.28.20.4, the second killing request of the next user is responded to in the rotated order 172.28.20.2, 172.28.20.3, 172.28.20.4, 172.28.20.1, and the rotation continues in this way, thereby realizing DNS load balancing.
After receiving a second killing request allocated by the gateway server 4, the application server 2 loads the second killing request into the cache server corresponding to the user number information carried in the request. For example, if the last two digits in the user number information carried in the second killing request are "52", the cache server 31 in the cache server cluster 3 whose inventory key information contains the user-number suffix "52" is determined, and the second killing request is loaded into that cache server 31 for subsequent acquisition of the virtual inventory information on it.
The second killing requests are distributed to the application servers 2 in the application server cluster 1 through load balancing, and the cache server cluster 3 effectively guides the concurrent second killing requests to different cache servers 31 in the cluster, which avoids the limited concurrency capacity of a standalone server, disperses the second killing requests and improves the response capability of the system.
A3, reading specific information of each second killing request, and performing qualification verification on each second killing request according to a preset verification rule;
It can be appreciated that, in order to reduce the consumption of the performance of the application server cluster 1, when the actual inventory of the commodity is "0" and a second killing request sent by a user is received, sold-out information is returned directly. Specifically, the method comprises the following steps:
Judging whether a preset stock identifier exists in the preset secondary cache;
generating first early warning information when the preset inventory identification exists in the secondary cache;
when the preset inventory identification does not exist in the secondary cache, judging whether the preset inventory identification exists in the primary cache;
when the preset stock identification exists in the first-level cache, generating first early warning information, and storing the preset stock identification into the second-level cache;
and when the preset inventory identification does not exist in the first-level cache, the plurality of second killing requests are distributed to a plurality of application servers 2 in the application server cluster 1 through load balancing.
The preset second-level cache is the memory of each application server 2 in the application server cluster 1, and the preset inventory identifier is "no inventory"/"inventory is 0". The inventory identifier is read in the order of the second-level cache and then the first-level cache; when the "no inventory"/"inventory is 0" identifier exists in the second-level cache or the first-level cache, it indicates that the commodity is sold out, and first early-warning information is generated and fed back to the client 5 corresponding to the second killing request, for example "the commodity you are trying to purchase is sold out".
It should be noted that when the "no inventory"/"inventory is 0" identifier does not exist in the second-level cache but exists in the first-level cache, the first early-warning information is generated and the identifier is also placed into the second-level cache, so that it can be read directly from the second-level cache the next time a second killing request is received; invalid requests are thereby released, system pressure is relieved, and overselling of the commodity is effectively avoided.
The retention period of data stored in the second-level cache is shorter than that of data stored in the first-level cache, but the speed at which the application server 2 reads data from the second-level cache is much greater than the speed of reading from the first-level cache. Therefore, storing the "no inventory"/"inventory is 0" identifier held in the first-level cache into the second-level cache increases the speed of subsequent reads of the identifier, releases invalid requests more quickly, and improves the stability of the system.
When the "no inventory"/"inventory is 0" identifier exists in neither the first-level cache nor the second-level cache, the commodity has not been sold out, the received second killing request is preliminarily regarded as valid, and part of the invalid requests are filtered out through user qualification verification, so that only valid requests are processed.
It should be noted that, when the second killing activity is activated during operation, a preset number (for example, 10000 pieces) of verification data are read from a preset verification code library and stored in the secondary cache, and the verification data in the secondary caches corresponding to different application servers 2 in the application server cluster 1 are not necessarily the same. Each piece of verification data comprises a verification question and a preset answer.
Taking an application server A in the application server cluster 1 as an example, after receiving a second killing request distributed by the gateway server 4, when the commodity has not been sold out, the application server A randomly retrieves a piece of verification data from the second-level cache, sends the verification question of that verification data to the client 5 corresponding to the second killing request, generates a key according to the resource prefix, commodity number, session number and user number in the second killing request, and stores the preset answer of that piece of verification data into the first-level cache under the parameters of the key.
After answer data input by the user through the client 5 is received, the preset answer is acquired from the first-level cache according to the parameters of the key and compared with the answer data: when the answer data is inconsistent with the preset answer, second early-warning information is generated and fed back to the client 5 corresponding to the second killing request, for example "verification code error". Then a new piece of verification data is read and the verification-code check step is performed again.
The purpose of this step is to further increase the difficulty of machine recognition, without making recognition harder for human users, by introducing verification data from the verification code library, thereby effectively blocking the pressure that machine-generated requests place on the system.
After the verification code is passed, the key generated from the resource prefix, the commodity number, the session number and the user number in the second killing request is used as the user qualification information, and the user qualification information is judged. Specifically, the user qualification information is queried from the first-level cache: if it does not exist in the first-level cache, the current user is considered not to have participated in the second killing of this commodity in this session, that is, the current user is qualified to participate, and the user qualification information judged for the current request is saved into the first-level cache. When the user qualification information already exists in the first-level cache, it indicates that the current user has already participated in the second killing activity for this commodity and session, and third early-warning information is generated and fed back to the client 5 corresponding to the second killing request, for example "this commodity cannot be second-killed repeatedly".
It will be appreciated that, in order to prevent an individual user from repeatedly second killing the same commodity, the user qualification information stored in the first-level cache is given a preset time interval (corresponding to the user's effective payment time, for example 15 minutes) that is much longer than the time in which the commodity is snapped up during the second killing activity, so that a user who tries to second kill again cannot pass the user qualification verification. In addition, if the user passes qualification verification and the subsequent checks but then chooses not to pay, or does not pay within the preset time interval, the order system closes the order after the preset time interval and releases the participation record corresponding to the user qualification information from the first-level cache. That is, the user may still participate in the second killing of the commodity again after 15 minutes, provided the commodity is in stock.
These steps further filter out invalid requests by verifying the user's qualification, solve the problem of a single user placing repeated orders, and relieve the pressure on the system.
A4, deducting the virtual inventory of the cache server corresponding to the second killing request when the qualification verification of the second killing request is passed, generating preset token information corresponding to the second killing request according to the user qualification information, and storing the preset token information into a preset first-level cache;
When the user corresponding to the second killing request has second killing qualification, the virtual inventory on the cache server 31 corresponding to the second killing request is obtained according to the last two digits of the user number, and one deduction operation is performed on the virtual inventory. Then, preset token information corresponding to the user qualification information is generated from the user qualification information, for example a token of the form token:a:r:i:u generated from the second killing activity number a, the second killing session number r, the second killing commodity number i and the user number u, and the preset token information is stored into the first-level cache.
It will be appreciated that, similarly to the user qualification information, if the user finally chooses not to pay or does not pay within the preset time interval, the order system closes the order after the preset time interval, and at the same time the preset token information corresponding to the second killing request needs to be released from the first-level cache. Therefore, the valid time of the preset token information also needs to be set to the preset time interval, for example 15 minutes.
It should be noted that the token is issued only when the virtual inventory has not dropped below zero, which realizes qualification control over who is released to participate in deduction of the real inventory. When the virtual inventory is zero, deducting it once more yields a negative number; because the virtual inventory is larger than the real inventory, an insufficient virtual inventory means the real inventory of the commodity is also insufficient, so the token issuance fails and first early-warning information is generated and fed back to the client 5 corresponding to the second killing request, for example "the commodity is sold out".
Further, to reduce consumption of system performance by subsequent second kill requests, a preset inventory identification ("no inventory"/"inventory is 0") is generated and maintained in the level one cache.
And A5, receiving a transaction request submitted by a user aiming at a second killing request passing qualification verification, generating real-time token information according to the transaction request, checking the real-time token information, and deducting real inventory in a database when the real-time token information passes the check, wherein second killing is successful.
The user can perform subsequent payment operations within a preset time after passing the qualification verification. Receiving a transaction request submitted by a user based on a second killing request, wherein the transaction request comprises: and generating real-time token information of the transaction request according to information contained in the transaction request and a preset token generation rule.
Specifically, the preset token information corresponding to the second killing request is obtained from the first-level cache; when the preset token information no longer exists in the first-level cache, it has already been released, that is, the effective payment time limit has been exceeded. When the real-time token information is consistent with the preset token information, it indicates that the second killing request corresponding to the current transaction request has passed user qualification verification and is a valid request.
In other words, when the downstream ordering link is requested, the token required for placing the order is derived from the parameters contained in the request and checked, which prevents system verification from being bypassed by requesting the core link directly.
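A compact sketch of that check, under the same illustrative assumptions about how tokens and keys are laid out in the first-level cache: the preset token is looked up, a real-time token is rebuilt from the parameters carried by the transaction request, and the two are compared.

```python
import hashlib
import time


def build_token(activity_no, scene_no, commodity_no, user_no):
    """Rebuild the token from the parameters contained in the transaction request."""
    raw = f"{activity_no}:{scene_no}:{commodity_no}:{user_no}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()


def check_token(level1_cache, activity_no, scene_no, commodity_no, user_no):
    """Return (ok, reason); a missing preset token means the payment window has expired."""
    key = f"token:{activity_no}:{commodity_no}:{user_no}"
    entry = level1_cache.get(key)
    if entry is None or entry[1] < time.time():
        return False, "preset token released: effective payment time limit exceeded"
    preset_token = entry[0]
    real_time_token = build_token(activity_no, scene_no, commodity_no, user_no)
    if real_time_token != preset_token:
        return False, "token check failed"
    return True, "token check successful"
```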
After the token is checked successfully, the real inventory of the commodity is deducted. When the real inventory of the commodity in the database is not zero, the deduction succeeds, the second killing succeeds and the transaction is completed. If deduction of the real inventory fails for some reason, the preset token information becomes invalid, and after the preset time interval (for example, 15 minutes) the user regains the qualification to participate in the second killing.
When the real inventory of the commodity is zero, deducting it once more would yield a negative number, which indicates that the real inventory of the commodity is insufficient and the transaction cannot be completed; first early warning information, for example "the current commodity is sold out", is generated and fed back to the client 5 corresponding to the transaction request.
Further, to reduce the consumption of system performance by subsequent second killing requests, a preset inventory identification ("no inventory"/"inventory is 0") is generated and stored in the first-level cache.
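The real-inventory deduction is commonly written as a conditional update so the stock can never go negative; the sketch below uses SQLite purely as a stand-in for the database, and the table and column names are assumptions for illustration only.

```python
import sqlite3


def deduct_real_inventory(conn, commodity_no):
    """Deduct one unit only if real stock remains; return True when the second killing succeeds."""
    cur = conn.execute(
        "UPDATE inventory SET stock = stock - 1 "
        "WHERE commodity_no = ? AND stock > 0",
        (commodity_no,),
    )
    conn.commit()
    return cur.rowcount == 1  # zero rows touched means the real inventory was already exhausted


# Minimal usage under the stated assumptions:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (commodity_no TEXT PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('i001', 1)")
print(deduct_real_inventory(conn, "i001"))  # True: deduction succeeds, transaction completes
print(deduct_real_inventory(conn, "i001"))  # False: "the current commodity is sold out"
```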
In this step, by checking the token when the user places an order and pays, requests carrying invalid tokens are filtered out, preventing the ordering link from being requested while bypassing verification.
The application server 2 in the application server cluster 1 provided in the above embodiment hashes the inventory of the second killing commodity and saves the hashed inventory to each cache server in the cache server cluster; the gateway server distributes second killing requests to the application servers in the application server cluster through load balancing, and each request is mapped to the corresponding cache server, so that the second killing requests are dispersed and the response capability of the system is improved. By performing qualification verification on the second killing requests, invalid requests are filtered out and the pressure on the system's core services is relieved. A first-level cache and a second-level cache are constructed, and the server acquires the corresponding information from the second-level cache, the first-level cache and the database in sequence, which improves the efficiency of second killing request processing.
Alternatively, in other embodiments, the second killing request processing program 10 may be divided into one or more modules, where the one or more modules are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to implement the present invention; the modules referred to here are series of computer program instruction segments capable of performing specific functions. For example, referring to FIG. 4, which is a schematic block diagram of the second killing request processing program 10 in FIG. 3, in this embodiment the second killing request processing program 10 may be divided into an inventory hashing module 110, a request distribution module 120, a qualification verification module 130, a token issuing module 140 and a token checking module 150, where the functions or operation steps implemented by the modules 110-150 are similar to those described above and are not repeated here. Illustratively:
The inventory hashing module 110 is configured to uniformly distribute the inventory of the second killing commodity to a plurality of cache servers in the cache server cluster according to a preset rule, and determine a virtual inventory corresponding to each cache server respectively;
the request distribution module 120 is configured to receive a second killing request of the gateway server through balanced load distribution, where when the gateway server reaches a preset moment, the gateway server receives a plurality of second killing requests submitted by a user through a client;
the qualification verification module 130 is configured to read specific information of each second killing request, and perform qualification verification on each second killing request according to a preset verification rule;
the token issuing module 140 is configured to deduct the virtual inventory of the cache server corresponding to the second killing request when the qualification of the second killing request passes, generate preset token information corresponding to the second killing request according to the user qualification information, and store the preset token information into a preset first-level cache; a kind of electronic device with high-pressure air-conditioning system
The token checking module 150 is configured to receive a transaction request submitted by a user for a second killing request passing qualification verification, generate real-time token information according to the transaction request, check the real-time token information, and deduct real inventory in a database when the real-time token information passes verification, wherein second killing is successful.
In addition, an embodiment of the present invention further proposes a computer readable storage medium, where the computer readable storage medium includes a second killing request processing program 10, where the second killing request processing program 10 implements the following operations when executed by a processor:
a1, uniformly distributing the inventory of the second killing commodity to a plurality of cache servers in a cache server cluster according to a preset rule, and respectively determining virtual inventory corresponding to each cache server;
a2, receiving a second killing request of a gateway server through balanced load distribution, wherein when the gateway server reaches a preset moment, receiving a plurality of second killing requests submitted by a user through a client;
a3, reading specific information of each second killing request, and performing qualification verification on each second killing request according to a preset verification rule;
a4, deducting the virtual inventory of the cache server corresponding to the second killing request when the qualification verification of the second killing request is passed, generating preset token information corresponding to the second killing request according to the user qualification information, and storing the preset token information into a preset first-level cache; and
And A5, receiving a transaction request submitted by a user aiming at a second killing request passing qualification verification, generating real-time token information according to the transaction request, checking the real-time token information, and deducting real inventory in a database when the real-time token information passes the check, wherein second killing is successful.
The embodiment of the computer readable storage medium of the present invention is substantially the same as the embodiment of the second killing request processing method described above, and will not be described herein.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, apparatus, article, or method that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and of course may also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method according to the embodiments of the present invention.
The foregoing description relates only to preferred embodiments of the present invention and does not thereby limit the scope of the invention; any equivalent modifications or alternatives made using the contents of the description and drawings of the present invention, or any direct or indirect application thereof in other related technical fields, fall equally within the scope of protection of the present invention.

Claims (9)

1. A method for processing a second killing request, which is applied to an application server cluster, and is characterized in that the method comprises the following steps:
s1, a cache server cluster is determined in advance to serve as a first-level cache, the inventory of second killing commodities is uniformly distributed to a plurality of cache servers in the cache server cluster according to a preset rule, and virtual inventory corresponding to each cache server is determined respectively;
s2, receiving a second killing request of a gateway server through balanced load distribution, wherein when the gateway server reaches a preset moment, receiving a plurality of second killing requests submitted by a user through a client;
s3, reading specific information of each second killing request, and performing qualification verification on each second killing request according to a preset verification rule;
s4, when qualification verification of the second killing request is passed, deducting virtual inventory of a cache server corresponding to the second killing request, generating preset token information corresponding to the second killing request according to user qualification information, and storing the preset token information into the first-level cache; a kind of electronic device with high-pressure air-conditioning system
S5, receiving a transaction request submitted by a user aiming at a second killing request passing qualification verification, generating real-time token information according to the transaction request, checking the real-time token information, and deducting real inventory in a database when the real-time token information passes the check, wherein second killing is successful;
wherein, the step of verifying the real-time token information includes: judging whether preset token information corresponding to the second killing request exists in the first-level cache, and judging that the real-time token information is invalid when the preset token information does not exist in the first-level cache; when the preset token information exists in the primary cache, judging whether the real-time token information is consistent with the preset token information, if not, judging that the real-time token information is invalid, prompting that the token check fails, or if so, judging that the real-time token information is valid, prompting that the token check is successful.
2. The second killing request processing method according to claim 1, the method further comprising:
prompting that qualification verification fails when the second killing request fails; and
And when the real-time token information verification fails, prompting that the token verification fails.
3. The method for processing a second killing request according to claim 2, wherein the memory of each server in the application server cluster is used as a secondary cache, and before step S3, the method further comprises:
judging whether a preset inventory identification exists in the secondary cache, wherein the preset inventory identification is 'no inventory'/'inventory is 0';
generating first early warning information when the preset inventory identification exists in the secondary cache;
when the preset inventory identification does not exist in the secondary cache, judging whether the preset inventory identification exists in the primary cache;
when the preset stock identification exists in the first-level cache, generating first early warning information, and storing the preset stock identification into the second-level cache; or alternatively
And when the preset inventory identification does not exist in the first-level cache, the plurality of second killing requests are distributed to a plurality of servers in the application server cluster through load balancing.
4. A second killing request processing method according to any one of claims 1 to 3, wherein said S4 further includes:
when the virtual inventory is zero, generating first early warning information, and generating and storing a preset inventory mark of no inventory and 0 inventory into the first-level cache.
5. The second killing request processing method according to claim 4, wherein said S5 further comprises:
when the real inventory is zero, generating first early warning information, and generating and storing the preset inventory identification into the first-level cache.
6. A method of processing a second killing request according to claim 3, wherein said step of qualifying each second killing request according to a preset validation rule includes:
randomly reading a piece of verification data from the secondary cache, wherein the verification data comprises a verification question and a preset answer, sending the verification question to a client corresponding to the second killing request, receiving answer data input by a user, and comparing the answer data with the preset answer;
when the answer data is inconsistent with the preset answer, generating second early warning information, and returning to execute the previous step; or alternatively
And when the answer data is consistent with the preset answer, generating user qualification information according to the second killing request, inquiring the user qualification information from the first-level cache, and when the user qualification information does not exist in the first-level cache, successfully checking the user qualification, storing the user qualification information into the first-level cache, or when the user qualification information exists in the first-level cache, generating third early warning information.
7. An application server cluster comprising a plurality of application servers, wherein each of the servers comprises a memory and a processor, the memory storing a second killing request processing program executable by the processor, and the second killing request processing program, when executed by the processor, implements the following steps:
a1, a cache server cluster is determined in advance to serve as a first-level cache, the inventory of the second killing commodity is uniformly distributed to a plurality of cache servers in the cache server cluster according to a preset rule, and virtual inventory corresponding to each cache server is determined respectively;
a2, receiving a second killing request of a gateway server through balanced load distribution, wherein when the gateway server reaches a preset moment, receiving a plurality of second killing requests submitted by a user through a client;
a3, reading specific information of each second killing request, and performing qualification verification on each second killing request according to a preset verification rule;
a4, deducting virtual inventory of a cache server corresponding to the second killing request when qualification verification of the second killing request is passed, generating preset token information corresponding to the second killing request according to user qualification information, and storing the preset token information into the first-level cache; a kind of electronic device with high-pressure air-conditioning system
A5, receiving a transaction request submitted by a user aiming at a second killing request passing qualification verification, generating real-time token information according to the transaction request, checking the real-time token information, and deducting real inventory in a database when the real-time token information passes the check, wherein second killing is successful;
wherein, the step of verifying the real-time token information includes: judging whether preset token information corresponding to the second killing request exists in the first-level cache, and judging that the real-time token information is invalid when the preset token information does not exist in the first-level cache; when the preset token information exists in the primary cache, judging whether the real-time token information is consistent with the preset token information, if not, judging that the real-time token information is invalid, prompting that the token check fails, or if so, judging that the real-time token information is valid, prompting that the token check is successful.
8. The application server cluster according to claim 7, wherein the second killing request processing program, when executed by the processor, further implements the following steps:
prompting that qualification verification fails when the second killing request fails; and
And when the real-time token information verification fails, prompting that the token verification fails.
9. A computer readable storage medium, wherein a second killing request processing program is included in the computer readable storage medium, and the second killing request processing program, when executed by a processor, can implement the steps of the second killing request processing method according to any one of claims 1 to 6.
CN201810547915.2A 2018-05-31 2018-05-31 Second killing request processing method, application server cluster and storage medium Active CN108897615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810547915.2A CN108897615B (en) 2018-05-31 2018-05-31 Second killing request processing method, application server cluster and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810547915.2A CN108897615B (en) 2018-05-31 2018-05-31 Second killing request processing method, application server cluster and storage medium

Publications (2)

Publication Number Publication Date
CN108897615A CN108897615A (en) 2018-11-27
CN108897615B true CN108897615B (en) 2023-06-13

Family

ID=64343425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810547915.2A Active CN108897615B (en) 2018-05-31 2018-05-31 Second killing request processing method, application server cluster and storage medium

Country Status (1)

Country Link
CN (1) CN108897615B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111435346B (en) * 2019-01-14 2023-12-19 阿里巴巴集团控股有限公司 Offline data processing method, device and equipment
CN111752957B (en) * 2019-03-28 2022-11-11 苏宁易购集团股份有限公司 Sale locking method and system based on caching
CN110245153A (en) * 2019-05-20 2019-09-17 平安银行股份有限公司 Product data processing method, system, computer equipment and storage medium
CN111091405B (en) * 2019-09-12 2023-08-08 达疆网络科技(上海)有限公司 Implementation scheme for solving high concurrency of second killing promotion
CN110909978A (en) * 2019-10-15 2020-03-24 京东数字科技控股有限公司 Resource processing method, device, server and computer readable storage medium
CN111260272A (en) * 2019-12-02 2020-06-09 泰康保险集团股份有限公司 Method, device, equipment and storage medium for responding to user request based on inventory
CN113537852A (en) * 2020-04-14 2021-10-22 成都鼎桥通信技术有限公司 Second killing processing method and system
CN111506445A (en) * 2020-04-21 2020-08-07 北京思特奇信息技术股份有限公司 Method and system for preventing repeated malicious ordering of commodities based on REDIS cache
CN111782391A (en) * 2020-06-29 2020-10-16 北京达佳互联信息技术有限公司 Resource allocation method, device, electronic equipment and storage medium
CN111930786B (en) * 2020-08-14 2023-09-26 中国工商银行股份有限公司 Resource acquisition request processing system, method and device
CN112132662B (en) * 2020-09-28 2023-06-20 广州立白企业集团有限公司 Commodity second killing method and device, computer equipment and storage medium
CN112153158B (en) * 2020-09-29 2022-10-18 中国银行股份有限公司 Information processing method and device
CN112184326A (en) * 2020-10-14 2021-01-05 深圳市欢太科技有限公司 Method for processing high-concurrency killing activity, high-concurrency system, terminal and computer-readable storage medium
CN114520808A (en) * 2020-11-19 2022-05-20 南京亚信软件有限公司 Request processing method and device, electronic equipment and computer readable storage medium
CN113762857A (en) * 2020-11-24 2021-12-07 北京沃东天骏信息技术有限公司 Inventory deduction method, device, equipment and storage medium
CN112511316B (en) * 2020-12-08 2023-04-07 深圳依时货拉拉科技有限公司 Single sign-on access method and device, computer equipment and readable storage medium
CN112669058A (en) * 2020-12-21 2021-04-16 上海多维度网络科技股份有限公司 Data processing method and device for application program, storage medium and electronic device
CN113315825A (en) * 2021-05-24 2021-08-27 康键信息技术(深圳)有限公司 Distributed request processing method, device, equipment and storage medium
CN113435931A (en) * 2021-06-29 2021-09-24 未鲲(上海)科技服务有限公司 Service data processing method and device, computer equipment and storage medium
CN114445200B (en) * 2022-04-08 2022-07-26 中国光大银行股份有限公司 Second killing activity processing method and device
CN115826875B (en) * 2023-01-05 2023-04-28 摩尔线程智能科技(北京)有限责任公司 Cache data invalidation verification method, device and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150213443A1 (en) * 2014-01-30 2015-07-30 Apple Inc. Tokenizing authorizations
CN106331772A (en) * 2015-06-17 2017-01-11 阿里巴巴集团控股有限公司 Data verification method and apparatus and smart television system
CN106997546A (en) * 2016-01-26 2017-08-01 中国移动通信集团安徽有限公司 A kind of order processing method and device
CN106060130A (en) * 2016-05-25 2016-10-26 乐视控股(北京)有限公司 Verification method and system of merchandise inventory
CN106170016A (en) * 2016-07-28 2016-11-30 深圳市创梦天地科技有限公司 A kind of method and system processing high concurrent data requests
CN107220878A (en) * 2017-05-26 2017-09-29 努比亚技术有限公司 Transaction processing system, second kill order processing method and apparatus

Also Published As

Publication number Publication date
CN108897615A (en) 2018-11-27

Similar Documents

Publication Publication Date Title
CN108897615B (en) Second killing request processing method, application server cluster and storage medium
CN105096172B (en) The generation of electronic invoice based on e-commerce platform and processing method and system
CN108111554B (en) Control method and device for access queue
US20170351852A1 (en) Identity authentication method, server, and storage medium
CN108112038B (en) Method and device for controlling access flow
CN113342498A (en) Concurrent request processing method, device, server and storage medium
CN112084486A (en) User information verification method and device, electronic equipment and storage medium
US9866587B2 (en) Identifying suspicious activity in a load test
CN110851298A (en) Abnormality analysis and processing method, electronic device, and storage medium
CN111930786B (en) Resource acquisition request processing system, method and device
KR101351435B1 (en) Protection of series data
CN114116802A (en) Data processing method, device, equipment and storage medium of Flink computing framework
CN113419856A (en) Intelligent current limiting method and device, electronic equipment and storage medium
WO2020073661A1 (en) Dynamic code synchronization process capacity expansion method, dynamic code generator, and storage medium
EP3750098B1 (en) Privacy preserving data collection and analysis
CN110930161A (en) Method for determining operation time of business operation and self-service business operation equipment
CN113656497A (en) Data verification method and device based on block chain
US20210306330A1 (en) Authentication server, and non-transitory storage medium
JP2020518067A (en) System, method, and computer program for providing a card-linked offer network that allows consumers to link the same payment card to the same offer at multiple issuer sites.
US20200342460A1 (en) User identity verification
CA2960914C (en) Method for detecting a risk of substitution of a terminal, corresponding device, program and recording medium
CN111311102A (en) Resource ratio adjusting method, device, equipment and computer readable storage medium
CN106878369B (en) Service processing method and device
JP6659229B2 (en) POS system, information processing method, and program
CN117171235B (en) Data analysis method based on industrial identification and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant