CN113887935A - High-concurrency service scene processing method, system and storage medium - Google Patents

High-concurrency service scene processing method, system and storage medium

Info

Publication number
CN113887935A
Authority
CN
China
Prior art keywords
subsystem
user
inventory
users
payment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111152540.8A
Other languages
Chinese (zh)
Inventor
傅鹏斌
梁训虎
宗超
刘琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongtong Service Kexin Information Technology Co ltd
Original Assignee
Zhongtong Service Kexin Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongtong Service Kexin Information Technology Co ltd filed Critical Zhongtong Service Kexin Information Technology Co ltd
Priority to CN202111152540.8A priority Critical patent/CN113887935A/en
Publication of CN113887935A publication Critical patent/CN113887935A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0633Workflow analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/9032Query formulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/957Browsing optimisation, e.g. caching or content distillation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0207Discounts or incentives, e.g. coupons or rebates
    • G06Q30/0222During e-commerce, i.e. online transactions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0207Discounts or incentives, e.g. coupons or rebates
    • G06Q30/0223Discounts or incentives, e.g. coupons or rebates based on inventory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0633Lists, e.g. purchase orders, compilation or processing
    • G06Q30/0635Processing of requisition or of purchase orders

Abstract

The application provides a high-concurrency service scene processing method, system and storage medium. The system comprises a current limiting (rate-limiting) subsystem, a cache inventory query subsystem, a real inventory query subsystem, an order subsystem, a payment subsystem and a real inventory processing subsystem. The high-concurrency service is divided along its service flow into these subsystems; each subsystem completes its own part of the service independently and then calls the subsystem responsible for the next step, so that the whole service flow is completed. Because each subsystem handles only a small share of the traffic, its response time is short and its processing speed is high; through the cooperation of the subsystems, the concurrency sustained in an instantaneous high-concurrency service scene is raised and the user experience is good. Moreover, since the subsystems are relatively independent, the fault tolerance of the system is improved: changing one subsystem does not affect the use of the others, so the method can be applied to different scenes with high flexibility and applicability.

Description

High-concurrency service scene processing method, system and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, a system, and a storage medium for processing a high-concurrency service scene.
Background
With the rapid development of the information age, applications and services with very large user bases are increasingly common, and flash sales ("seckill") of commodities and red-envelope grabbing, as important promotion tools of modern e-commerce, are in ever greater demand. A flash-sale activity is characterized by instantaneous high concurrency, limited stock, and a strict no-overselling requirement.
A flash-sale system must handle intense competition among a large number of users for a limited number of commodities in an instant. Its high-concurrency point is the moment front-end users rush to buy, and its core is the efficient handling of inventory under high concurrency. All links of the flash sale therefore need to cooperate to guarantee that the activity runs efficiently.
In prior-art flash-sale systems, when a large number of users rush in simultaneously to grab commodities within a short time, the system has many tasks to handle at once; it therefore processes slowly and is prone to errors, which disrupts the activity and degrades the user experience.
Disclosure of Invention
The application provides a high-concurrency service scene processing method, system and storage medium, which support an instantaneous high-concurrency flash-sale scene and improve the user experience.
In a first aspect, the present application provides a high-concurrency service scene processing method, applied to a high-concurrency service scene processing system, where the system comprises:
a current limiting subsystem, a cache inventory query subsystem, a real inventory query subsystem, an order subsystem, a payment subsystem and a real inventory processing subsystem;
the method comprises:
when the current limiting subsystem receives order requests from N users, calling the cache inventory query subsystem via nginx to query the currently cached inventory amount in Redis, and accepting the order requests of M users according to that cached inventory amount, where each order request carries the user's identifier, M is a positive integer less than or equal to N, and N is an integer greater than or equal to 1;
when the cache inventory query subsystem receives a user's order request, querying the currently cached inventory amount in Redis, issuing tokens to the M users, storing the tokens in Redis, and placing the M users into a message queue, where the message queue follows a first-in first-out rule;
the real inventory query subsystem queries the real inventory in a database according to the order request of the user currently at the head of the message queue, and calls the order subsystem when the inventory in the database is greater than a preset inventory amount;
the order subsystem allows that user to place an order according to the order request of the user currently at the head of the message queue, obtains the order submitted by the user, and calls the payment subsystem;
the payment subsystem obtains the token of the user currently at the head of the message queue, lets the user enter a payment page to pay when it determines that this token exists in Redis, deletes the token from Redis after the payment succeeds, and calls the real inventory processing subsystem;
and the real inventory processing subsystem deducts the inventory in the database according to the order quantity of the user currently at the head of the message queue.
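Taken together, the claimed pipeline (throttle → cached-inventory check → token and FIFO queue → real-inventory check → order → payment → deduction) can be sketched as follows. This is a minimal in-memory illustration, not the patented implementation: the dict, set and deque objects are stand-ins for Redis, the database and the message queue, and all names and the 1.5 admission ratio are illustrative assumptions.

```python
from collections import deque

CACHE = {"stock": 3}               # stand-in for the cached inventory in Redis
DB = {"stock": 3, "orders": []}    # stand-in for the real database
TOKENS = set()                     # stand-in for the tokens stored in Redis
QUEUE = deque()                    # FIFO message queue of admitted users

def throttle(requests, ratio=1.5):
    """Current limiting: admit the earliest requests, up to ratio * cached stock."""
    limit = int(CACHE["stock"] * ratio)
    return sorted(requests, key=lambda r: r["time"])[:limit]

def admit(requests):
    """Cache inventory query step: issue a token per admitted user, enqueue FIFO."""
    for r in throttle(requests):
        TOKENS.add(r["user"])
        QUEUE.append(r)

def process_next():
    """Serve the queue head: real-inventory check, order, payment, deduction."""
    if not QUEUE:
        return None
    r = QUEUE.popleft()
    if DB["stock"] <= 0 or r["user"] not in TOKENS:
        return ("rejected", r["user"])
    DB["orders"].append(r["user"])   # order subsystem records the order
    TOKENS.discard(r["user"])        # payment succeeded: delete the token
    DB["stock"] -= 1                 # real inventory processing deducts stock
    return ("paid", r["user"])

requests = [{"user": f"u{i}", "time": i} for i in range(6)]
admit(requests)
results = [process_next() for _ in range(4)]
```

With a cached stock of 3 and a ratio of 1.5, four of the six users are admitted; the first three pay and the fourth is rejected once the stock reaches zero, which mirrors the limited-stock, no-overselling property described above.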
Optionally, before the current limiting subsystem accepts the order requests of M users according to the currently cached inventory amount, the method further comprises:
the current limiting subsystem judging, according to the user identifier carried in the order request, whether a blacklist includes the user identifier;
and, when the blacklist does not include the user identifier, accepting the order requests of M users according to the currently cached inventory amount.
Optionally, before the current limiting subsystem accepts the order requests of M users according to the currently cached inventory amount, the method further comprises:
judging, according to the user identifier, whether the frequency at which the user submits order requests is less than or equal to a preset frequency;
and, when that frequency is less than or equal to the preset frequency, accepting the order requests of M users according to the currently cached inventory amount.
Optionally, the payment subsystem obtaining the token of the user currently at the head of the message queue and, when it determines that this token exists in Redis, entering a payment page according to payment logic to pay, comprises:
judging, according to the user identifier, whether the user has a high payment priority;
and, if the user has a high payment priority, obtaining the token of the user currently at the head of the message queue, and entering the payment page directly to pay when it is determined that this token exists in Redis.
Optionally, the current limiting subsystem accepting the order requests of M users according to the currently cached inventory amount comprises:
obtaining the request time at which each of the N users submitted an order request;
and, in order of those request times, accepting the order requests of the M users among the N whose request times are earliest.
In a second aspect, the present application provides a high-concurrency service scene processing system, comprising:
a current limiting subsystem, a cache inventory query subsystem, a real inventory query subsystem, an order subsystem, a payment subsystem and a real inventory processing subsystem;
wherein the current limiting subsystem is configured to, when receiving order requests from N users, call the cache inventory query subsystem via nginx to query the currently cached inventory amount in Redis, and accept the order requests of M users according to that cached inventory amount, where each order request carries the user's identifier, M is a positive integer less than or equal to N, and N is an integer greater than or equal to 1;
the cache inventory query subsystem is configured to, when receiving a user's order request, query the currently cached inventory amount in Redis, issue tokens to the M users, store the tokens in Redis, and place the M users into a message queue, where the message queue follows a first-in first-out rule;
the real inventory query subsystem is configured to query the real inventory in a database according to the order request of the user currently at the head of the message queue, and to call the order subsystem when the inventory in the database is greater than a preset inventory amount;
the order subsystem is configured to allow that user to place an order according to the order request of the user currently at the head of the message queue, obtain the order submitted by the user, and call the payment subsystem;
the payment subsystem is configured to obtain the token of the user currently at the head of the message queue, let the user enter a payment page to pay when it determines that this token exists in Redis, delete the token from Redis after the payment succeeds, and call the real inventory processing subsystem;
and the real inventory processing subsystem is configured to deduct the inventory in the database according to the order quantity of the user currently at the head of the message queue.
Optionally, before accepting the order requests of M users according to the currently cached inventory amount, the current limiting subsystem is further configured to:
judge, according to the user identifier carried in the order request, whether a blacklist includes the user identifier;
and, when the blacklist does not include the user identifier, accept the order requests of M users according to the currently cached inventory amount.
Optionally, before accepting the order requests of M users according to the currently cached inventory amount, the current limiting subsystem is further configured to:
judge, according to the user identifier, whether the frequency at which the user submits order requests is less than or equal to a preset frequency;
and, when that frequency is less than or equal to the preset frequency, accept the order requests of M users according to the currently cached inventory amount.
Optionally, in obtaining the token of the user currently at the head of the message queue and, when determining that this token exists in Redis, entering a payment page according to payment logic to pay, the payment subsystem is specifically configured to:
judge, according to the user identifier, whether the user has a high payment priority;
and, if the user has a high payment priority, obtain the token of the user currently at the head of the message queue, and enter the payment page directly to pay when it is determined that this token exists in Redis.
Optionally, in accepting the order requests of M users according to the currently cached inventory amount, the current limiting subsystem is specifically configured to:
obtain the request time at which each of the N users submitted an order request;
and, in order of those request times, accept the order requests of the M users among the N whose request times are earliest.
In a third aspect, the present application provides an electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the method of any one of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions or a program which, when executed by a processor, implement the method of any one of the first aspect.
In a fifth aspect, the present application provides a computer program which, when executed by a processor, implements the method of any one of the first aspect.
According to the high-concurrency service scene processing method, system and storage medium provided herein, the system comprises a current limiting subsystem, a cache inventory query subsystem, a real inventory query subsystem, an order subsystem, a payment subsystem and a real inventory processing subsystem. The high-concurrency service is divided along its service flow into these subsystems; each subsystem completes its own part of the service independently and then calls the subsystem responsible for the next step, so that the whole service flow is completed. Each subsystem therefore handles only a small share of the traffic, its response time is short and its processing speed is high, which raises the concurrency the system can sustain in an instantaneous high-concurrency service scene. Moreover, the operation of the individual subsystems and the calls between them improve the fault tolerance of the system, and the user experience is good.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The following drawings show some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a flash-sale service flow provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a high-concurrency service scene processing system according to an embodiment of the present application;
fig. 3 is a flowchart of a high-concurrency service scene processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below. The described embodiments are a part, not all, of the embodiments of the present application. All other embodiments derived from them by those skilled in the art without creative effort fall within the protection scope of the present application.
With the rapid development of the information age, applications and services with very large user bases are increasingly common, and flash sales of commodities and red-envelope grabbing, as important promotion tools of modern e-commerce, are in ever greater demand. A flash-sale activity is characterized by instantaneous high concurrency, limited stock, and a strict no-overselling requirement. A typical e-commerce flash-sale system is built around merchants, inventory, users and orders. First, a merchant can add and adjust commodity inventory, and the inventory reflects the merchant's shipping and accounting information. When a user successfully grabs a commodity, the inventory is decremented while the user's orders are incremented and a flash-sale record is inserted; the user's payments and returns are likewise reflected in the inventory and orders.
A flash-sale system must handle intense competition among a large number of users for a limited number of commodities in an instant. Its high-concurrency point is the moment front-end users rush to buy, and its core is the efficient handling of inventory under high concurrency. All links of the flash sale therefore need to cooperate to guarantee that the activity runs efficiently.
In prior-art flash-sale systems, when a large number of users rush in simultaneously to grab commodities within a short time, the system has many tasks to handle at once; it therefore processes slowly and is prone to errors, which disrupts the activity and degrades the user experience.
To solve these problems in the prior art, the present application provides a high-concurrency service scene processing method, system and storage medium. The high-concurrency flash-sale service scene is planned and split along the service flow into a current limiting subsystem, a cache inventory query subsystem, a real inventory query subsystem, an order subsystem, a payment subsystem and a real inventory processing subsystem; each subsystem executes its own part of the service, and the subsystems call one another according to the service flow. Each subsystem therefore handles only a small share of the traffic, its response time is short and its processing speed is high, which raises the concurrency the system can sustain in an instantaneous high-concurrency scene. And because, following the service flow, the subsystem for the next step is called only after the current subsystem has finished its own step, the fault tolerance of the system is improved and the user experience is good.
Moreover, since the subsystems obtained by this split are relatively independent, changing one subsystem does not affect the use of the others.
Fig. 1 is a schematic diagram of a flash-sale service flow provided in an embodiment of the present application. Based on the flow shown in fig. 1, fig. 2 shows a high-concurrency service scene processing system provided in an embodiment of the present application. As shown in fig. 2, the system includes: a current limiting subsystem 201, a cache inventory query subsystem 202, a real inventory query subsystem 203, an order subsystem 204, a payment subsystem 205, and a real inventory processing subsystem 206.
The current limiting subsystem 201 is configured to, when receiving order requests from N users, call the cache inventory query subsystem 202 via nginx to query the currently cached inventory amount in Redis, and accept the order requests of M users according to that cached inventory amount, where each order request carries the user's identifier, M is a positive integer less than or equal to N, and N is an integer greater than or equal to 1.
The cache inventory query subsystem 202 is configured to, when receiving a user's order request, query the currently cached inventory amount in Redis, issue tokens to the M users, store the tokens in Redis, and place the M users into a message queue, where the message queue follows a first-in first-out rule.
The real inventory query subsystem 203 is configured to query the real inventory in the database according to the order request of the user currently at the head of the message queue, and to call the order subsystem 204 when the inventory in the database is greater than the preset inventory amount.
The order subsystem 204 is configured to allow the user currently at the head of the message queue to place an order according to that user's order request, and to call the payment subsystem 205.
The payment subsystem 205 is configured to obtain the token of the user currently at the head of the message queue, let the user enter a payment page to pay when it determines that this token exists in Redis, delete the token from Redis after the payment succeeds, and call the real inventory processing subsystem 206.
The real inventory processing subsystem 206 is configured to deduct the inventory in the database according to the order quantity of the user currently at the head of the message queue.
Optionally, before accepting the order requests of M users according to the currently cached inventory amount, the current limiting subsystem 201 is further configured to:
judge, according to the user identifier carried in the order request, whether a blacklist includes the user identifier;
and, when the blacklist does not include the user identifier, accept the order requests of M users according to the currently cached inventory amount.
Optionally, before accepting the order requests of M users according to the currently cached inventory amount, the current limiting subsystem 201 is further configured to:
judge, according to the user's identifier, whether the frequency at which the user submits order requests is less than or equal to a preset frequency;
and, when that frequency is less than or equal to the preset frequency, accept the order requests of M users according to the currently cached inventory amount.
Optionally, in obtaining the token of the user currently at the head of the message queue and, when determining that this token exists in Redis, entering a payment page according to payment logic to pay, the payment subsystem 205 is specifically configured to:
judge, according to the user's identifier, whether the user has a high payment priority;
and, if the user has a high payment priority, obtain the token of the user currently at the head of the message queue, and enter the payment page directly to pay when it is determined that this token exists in Redis.
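The payment routing above (token must exist; high-priority users go straight to the payment page) reduces to a small decision function. The sets below are hypothetical in-memory stand-ins for the token store in Redis and the priority list; the route names are illustrative, not from the application.

```python
TOKENS = {"alice", "carol"}      # tokens currently stored (Redis stand-in)
HIGH_PRIORITY = {"alice"}        # hypothetical high-payment-priority users

def enter_payment(user_id):
    """Route the user at the head of the queue according to token and priority."""
    if user_id not in TOKENS:
        return "denied"              # no token in the store: payment refused
    if user_id in HIGH_PRIORITY:
        return "direct payment page" # priority users go straight to payment
    return "payment page"
```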
Optionally, in accepting the order requests of M users according to the currently cached inventory amount, the current limiting subsystem 201 is specifically configured to:
obtain the request time at which each of the N users submitted an order request;
and, in order of those request times, accept the order requests of the M users among the N whose request times are earliest.
For the high-concurrency service scene processing system described in this embodiment, the specific implementation process may refer to the following method embodiments; the implementation principles and technical effects are similar and are not repeated here.
Fig. 3 is a flowchart of a method for processing a high concurrency service scenario according to an embodiment of the present application.
The execution subject of the high-concurrency service scene processing method may be, for example, a device with processing capability, such as a server or a computer device.
Optionally, the current limiting subsystem, the cache inventory query subsystem, the real inventory query subsystem, the order subsystem, the payment subsystem and the real inventory processing subsystem of the high-concurrency service scene processing system are relatively independent, and each subsystem may be deployed on a different server. Each server then only needs to execute the service process of one subsystem and calls the corresponding subsystem when another service process is needed, which reduces the processing load of each server and raises the instantaneous concurrency.
As shown in fig. 3, the method includes:
s301, when receiving order requests of N users, the current limiting subsystem calls the cache inventory query subsystem by utilizing nginx to query the inventory of the current cache in the redis and receives the order requests of M users according to the inventory of the current cache.
The order placing request of the user comprises the identification of the user, M is a positive integer smaller than or equal to N, and N is an integer larger than or equal to 1.
In this embodiment, the second-killing of the commodities by the e-commerce is taken as an example for explanation, wherein the number of the second-killing commodities set by the e-commerce is 1000.
When the second killing activity starts, a large number of users enter a second killing page to carry out limited commodity robbery, and click a purchase control set by the second killing page in a short time to send an ordering request. Wherein the order placing request includes an identification of the user, e.g., an ID name of the user.
Since a large number of users submit order requests in a short time and the number of second-killing commodities is limited, processing every submitted order request would overload (or even crash) the server and make processing inefficient. Therefore, the number of users needs to be limited.
When a large number of users submit orders, nginx is used to call the cache inventory query subsystem to query the currently cached inventory in redis, and order placing requests from a preset proportion of users are accepted; for example, if the current inventory is 1000, order placing requests of 1500 users are accepted.
Optionally, a possible implementation of "the current limiting subsystem accepts order placing requests of M users according to the currently cached inventory" in S301 is:
S3011, obtaining the request time at which each of the N users submits an order placing request.
S3012, according to the order of the request times, accepting the order placing requests of the first M of the N users ranked by the request time at which the order placing request was submitted.
Specifically, the second-killing rule is first come, first served, so the large number of users is filtered and limited by the time at which they submit their order placing requests. According to the request time carried in each order placing request, the first M of the N users ranked by request time are determined, and the order placing requests of these M users are accepted.
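The first-come-first-served throttling of S301 can be sketched as follows. This is an illustrative stand-in, not the patent's actual code: the function name, the request structure, and the over-admission ratio (1.5, matching the 1000-units/1500-users example) are assumptions.

```python
# Hypothetical sketch of S301: sort the N incoming order requests by
# their submission timestamp and accept only the first M, where M is the
# cached inventory times an over-admission ratio.

def accept_first_m(requests, cached_inventory, ratio=1.5):
    """requests: list of dicts {'user_id': ..., 'ts': ...} (illustrative shape)."""
    m = int(cached_inventory * ratio)
    # earliest request time first, i.e. first come, first served
    ordered = sorted(requests, key=lambda r: r["ts"])
    return [r["user_id"] for r in ordered[:m]]

requests = [
    {"user_id": "u3", "ts": 3.0},
    {"user_id": "u1", "ts": 1.0},
    {"user_id": "u2", "ts": 2.0},
]
# with inventory 2 and ratio 1.0, only the two earliest users are accepted
accepted = accept_first_m(requests, cached_inventory=2, ratio=1.0)
```

In a real deployment this filtering would happen at the nginx layer or inside the current limiting subsystem rather than in application code like this.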
S302, when receiving a user ordering request, the cache inventory inquiry subsystem inquires the inventory of the current cache in the redis, sends tokens to M users, stores the tokens in the redis, and enables the M users to enter a message queue.
The message queue adopts a first-in first-out rule.
In this embodiment, the cache inventory query subsystem reads the current inventory of second-killing commodities from the redis cache and sends tokens to the M users according to that inventory. A user who receives a token has an opportunity to buy a second-killing commodity, and users holding tokens are placed in the message queue to wait. Because the second-killing activity follows a first-come-first-served purchase rule, the message queue uses first-in first-out, so users nearer the front obtain commodities preferentially.
For example, when the inventory of the current second-killing commodity in the redis cache is 1000, tokens are sent to 1500 users, and the 1500 users enter the message queue to wait.
When a token is sent to a user, the token is also stored in redis for use in the user's subsequent payment step.
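The token issuance and queuing of S302 can be sketched with in-memory stand-ins. This is an assumption-laden illustration: a dict plays the role of the redis token store, a `deque` plays the role of the FIFO message queue, and the token format is invented.

```python
# Illustrative stand-ins, not the patent's actual code:
# token_store mimics redis (user_id -> token), message_queue is FIFO.
from collections import deque

token_store = {}          # redis stand-in; a real system would use SET/GET
message_queue = deque()   # first-in first-out queue of user_ids

def issue_token(user_id):
    token = f"tok-{user_id}"        # a real system would use a random token
    token_store[user_id] = token    # stored for the later payment check (S305)
    message_queue.append(user_id)   # user waits in first-in first-out order
    return token

for uid in ["u1", "u2", "u3"]:
    issue_token(uid)

first = message_queue.popleft()     # the earliest user is served first
```

The FIFO discipline is what preserves the first-come-first-served purchase rule once users have passed the throttling step.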
In this embodiment, through S301 and S302, current limiting for a large number of users is implemented, and the amount of concurrence is reduced.
S303, the real inventory query subsystem queries the real inventory in the database according to the order placing request of the user currently at the first position in the message queue, and calls the order subsystem when the inventory in the database is greater than the preset inventory.
In this embodiment, for the user currently at the first position at the exit of the message queue, the real inventory query subsystem queries the current real inventory in the database according to that user's order placing request, which carries the quantity of second-killing commodities the user wants to buy. If the inventory in the current database is greater than the preset inventory, that is, it can satisfy the requested quantity, the user can purchase the commodities, and the order subsystem is invoked.
It should be noted that the preset inventory is related to the quantity of second-killing commodities in the order placing request submitted by the user. For example, if the user requests 1 commodity and the inventory in the current database is greater than or equal to 1, the purchase succeeds. If the user requests 2 commodities but the inventory in the current database is only 1, a purchase-failure prompt is shown, or the inventory still available to the user is displayed on the purchase page and the user is asked whether to proceed; if the user confirms, the order subsystem is invoked.
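The S303 check described above reduces to comparing database stock against the requested quantity. A minimal sketch, with illustrative names (the patent does not give code):

```python
# Sketch of S303: the "preset inventory" threshold equals the quantity
# requested in the order, so the purchase proceeds only when the database
# stock covers it.

def can_purchase(db_stock, requested_qty):
    # stock must cover the full requested quantity for the order subsystem
    # to be called; otherwise the user is prompted (failure or partial offer)
    return db_stock >= requested_qty

r1 = can_purchase(db_stock=1, requested_qty=1)   # enough stock
r2 = can_purchase(db_stock=1, requested_qty=2)   # not enough stock
```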
S304, the order subsystem allows the user to place an order according to the order placing request of the user who is currently at the first position in the message queue, obtains the order request submitted by the user, and calls the payment subsystem.
In this embodiment, the user determines to purchase the second-killing goods, submits the order, and the order subsystem receives the order request submitted by the user and calls the payment subsystem.
S305, the payment subsystem acquires the token of the user currently positioned at the first position in the message queue, enters a payment page for payment when the token of the user currently positioned at the first position in the message queue is determined to exist in the redis, deletes the token of the user currently positioned at the first position in the message queue existing in the redis after the user payment is successful, and calls the real inventory processing subsystem.
In this embodiment, after entering the payment subsystem, the user's token is acquired and the payment subsystem checks whether the token exists in redis. If it does, the user is initiating payment for the first time, and the payment page is entered so the user can pay. If the token does not exist in redis, the payment request is a repeated payment or an illegal request, and the user is not allowed to pay.
After the user pays successfully, the user's token is deleted from redis and the real inventory processing subsystem is called.
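The check-then-delete token handling of S305 is what makes payment idempotent: a token can be consumed exactly once. A minimal sketch under the assumption that a dict stands in for redis (in production this check-and-delete would need to be atomic, e.g. a single redis `DEL` whose return value is inspected):

```python
# Sketch of S305: payment is allowed only while the user's token is still
# present; deleting the token on success makes a repeated or forged
# payment request fail. Names are illustrative.

token_store = {"u1": "tok-u1"}   # redis stand-in, populated in S302

def try_pay(user_id):
    if user_id not in token_store:
        return "rejected"        # repeated payment or illegal request
    del token_store[user_id]     # consume the token exactly once
    return "paid"

first_attempt = try_pay("u1")
second_attempt = try_pay("u1")   # replayed request is rejected
```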
In order to improve user experience and the throughput of the payment subsystem, the payment process can be simplified for VIP users, prepaid users, users who store funds in the e-commerce platform's e-wallet, and the like. Optionally, one possible implementation of S305 is:
S3051, judging whether the user is a user with high payment priority according to the identification of the user.
S3052, if the user is a user with high payment priority, obtaining the token of the user currently positioned at the first position in the message queue, and directly entering a payment page for payment when the token of the user currently positioned at the first position in the message queue is determined to exist in the redis.
Specifically, the e-commerce platform stores the identities of users with high payment priority, such as VIP users, prepaid users, and users who store funds in the platform's e-wallet. During the second-killing activity, whether a user has high payment priority is judged from the user identification carried in the submitted order placing request. If the user has high payment priority, the user's token is obtained and, once the token is determined to exist in redis, the page jumps directly to the payment page. This shortens the purchase time for high-priority users, improves user experience, reduces the load on the payment subsystem, and improves its efficiency in handling other users.
S306, the real inventory processing subsystem deducts the inventory in the database according to the order quantity of the user currently at the first position in the message queue.
In this embodiment, after the user pays successfully, the real inventory processing subsystem obtains the user's order quantity, that is, the number of second-killing commodities purchased, and updates the real inventory in the database accordingly: the order quantity is subtracted from the current real inventory to obtain the updated real inventory.
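The deduction step of S306 can be sketched as below. A plain variable stands in for the database row (an assumption); in production this would be an atomic conditional `UPDATE` so concurrent payments cannot drive the stock negative.

```python
# Sketch of S306: subtract the paid order quantity from the database
# stock, refusing any deduction that would oversell.

def deduct(db_stock, order_qty):
    if order_qty > db_stock:
        raise ValueError("oversell: stock cannot go negative")
    return db_stock - order_qty

# user paid for 2 units out of a real inventory of 1000
remaining = deduct(db_stock=1000, order_qty=2)
```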
Through S303-S306, the coordinated calls among the real inventory query subsystem, the order subsystem, the payment subsystem and the real inventory processing subsystem prevent errors in the real inventory in the database and prevent overselling.
In this embodiment, the high concurrency service scene processing system includes a current limiting subsystem, a cache inventory query subsystem, a real inventory query subsystem, an order subsystem, a payment subsystem, and a real inventory processing subsystem. The high concurrency service is reasonably divided according to the service flow: the entire flow is split into a plurality of subsystems, and each subsystem independently completes its own service and then calls the next subsystem in the flow, thereby completing the entire service flow. As a result, the service volume processed by each subsystem is small, the response time is short, the processing speed is high, and the concurrency supported in an instantaneous high-concurrency service scenario is improved. Moreover, the cooperation of the subsystems and the calls among them improve the fault tolerance of the system and the user experience.
Optionally, in S301, before the current limiting subsystem accepts order placing requests of M users according to the currently cached inventory, the method further includes:
S401, the current limiting subsystem judges, according to the user identification corresponding to the order placing request, whether the blacklist includes the user identification.
S402, when the blacklist does not include the user identification, accepting order placing requests of M users according to the currently cached inventory.
Specifically, some users purchase merchants' commodities and then extort compensation through malicious negative reviews and similar means, and some scalpers snap up second-killing commodities with professional shopping software. The accounts of such users are placed on a blacklist so that they cannot participate in the second-killing activity.
Therefore, when an order placing request submitted by a user is received, the blacklist is searched according to the user identification carried in the request to determine whether it includes the user identification. If the blacklist does not include the user identification, order placing requests of M users are accepted according to the currently cached inventory.
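The blacklist pre-filter of S401/S402 can be sketched as a simple set-membership check before throttling. The data shapes and names are illustrative assumptions:

```python
# Sketch of S401/S402: drop order placing requests whose user
# identification appears in the blacklist before they reach throttling.

blacklist = {"scalper-1", "bad-reviewer-9"}   # illustrative entries

def filter_blacklisted(requests):
    # requests: list of dicts carrying the user identification
    return [r for r in requests if r["user_id"] not in blacklist]

incoming = [{"user_id": "u1"}, {"user_id": "scalper-1"}, {"user_id": "u2"}]
allowed = filter_blacklisted(incoming)
```

A set gives O(1) membership tests, which matters when the check runs on every request in a burst.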
In the embodiment, the blacklist is set to exclude illegal users, so that a fair second killing activity is provided for common users, and the current is limited through the blacklist, so that the concurrency is reduced, and the performance of the high-concurrency service scene processing system is improved.
In the second-killing activity, some users resort to non-manual means to snap up commodities, for example writing a purchase bot or submitting order placing requests repeatedly through snap-up software. This raises the concurrency, increases the load on the high-concurrency service scene processing system, and disturbs the normal second-killing order. Therefore, optionally, in S301, before the current limiting subsystem accepts the order placing requests of M users according to the currently cached inventory, the method further includes:
S501, judging, according to the user identification, whether the frequency at which the user submits order placing requests is less than or equal to a preset frequency.
S502, when the frequency at which the user submits order placing requests is less than or equal to the preset frequency, accepting the order placing requests of M users according to the currently cached inventory.
Specifically, a user who snaps up commodities through a bot or snap-up software submits order placing requests far more frequently than an ordinary user, so judging the submission frequency distinguishes ordinary users from users relying on bots or snap-up software.
The order placing request carries the user identification, through which it is judged whether the user's submission frequency is less than or equal to the preset frequency; the preset frequency may be, for example, 20 requests per second. Order placing requests of M users whose submission frequency is less than or equal to the preset frequency are accepted according to the currently cached inventory. This provides a fair second-killing activity for ordinary users and limits traffic, reducing the concurrency and improving the performance of the high-concurrency service scene processing system.
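The frequency check of S501/S502 can be sketched as a per-user sliding window. The sliding-window mechanism, class name, and limits are assumptions for illustration; the patent only specifies the per-second threshold (e.g. 20/s):

```python
# Sketch of S501/S502: count each user's order placing requests inside a
# one-second window and reject users above the preset frequency.
from collections import defaultdict

class FrequencyLimiter:
    def __init__(self, max_per_window=20, window=1.0):
        self.max_per_window = max_per_window
        self.window = window
        self.history = defaultdict(list)  # user_id -> request timestamps

    def allow(self, user_id, now):
        ts = self.history[user_id]
        # drop timestamps that have fallen out of the current window
        ts[:] = [t for t in ts if now - t < self.window]
        ts.append(now)
        return len(ts) <= self.max_per_window

limiter = FrequencyLimiter(max_per_window=2, window=1.0)
ok1 = limiter.allow("bot", now=0.0)
ok2 = limiter.allow("bot", now=0.1)
ok3 = limiter.allow("bot", now=0.2)   # third request in the window
ok4 = limiter.allow("bot", now=1.5)   # earlier requests have expired
```

In production this counter would live in redis (e.g. an expiring per-user key) so all servers share one view of each user's rate.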
Optionally, since the second-killing commodity page rarely changes during the activity, the web cache is enabled and the page is made static, so that it is loaded directly when a user accesses it. This reduces the processing required for each page access and improves access speed.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 4, the electronic apparatus includes: a processor 41 and a memory 42.
Wherein the memory 42 stores computer-executable instructions.
The processor 41 executes the computer-executable instructions stored in the memory 42, so that the processor 41 executes the method according to any of the above embodiments.
For the electronic device provided in the embodiment of the present application, specific implementation processes of the electronic device may refer to the method embodiments, and implementation principles and technical effects of the electronic device are similar, which are not described herein again.
In the embodiment shown in fig. 4, it is understood that the Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
The memory may comprise high speed RAM memory and may also include non-volatile storage NVM, such as at least one disk memory.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The embodiment of the present application further provides a computer-readable storage medium, where a computer-executable instruction is stored in the computer-readable storage medium, and when a processor executes the computer-executable instruction, the high concurrency service scene processing method according to the foregoing method embodiments is implemented.
The computer-readable storage medium may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an Application Specific Integrated Circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the apparatus.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same. Although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A high concurrency service scene processing method is applied to a high concurrency service scene processing system, and the high concurrency service scene processing system comprises: the system comprises a current limiting subsystem, a cache inventory inquiry subsystem, a real inventory inquiry subsystem, an order subsystem, a payment subsystem and a real inventory processing subsystem;
the method comprises the following steps:
when the current limiting subsystem receives ordering requests of N users, the current caching inventory query subsystem is called by utilizing nginx, the inventory of the current cache in the redis is queried, and the ordering requests of M users are accepted according to the inventory of the current cache, wherein the ordering requests of the users comprise user identifications, M is a positive integer less than or equal to N, and N is an integer greater than or equal to 1;
when the cache inventory query subsystem receives a user ordering request, querying the inventory of the current cache in the redis, sending tokens to the M users, storing the tokens in the redis, and enabling the M users to enter a message queue, wherein the message queue adopts a first-in first-out rule;
the real inventory inquiry subsystem inquires real inventory in a database according to an order placing request of a user currently positioned at the first position in the message queue, and calls the order subsystem when the inventory in the database is greater than a preset inventory;
the order subsystem allows the user to place an order according to an order placing request of the user currently positioned at the first position in the message queue, acquires an order request submitted by the user, and calls the payment subsystem;
the payment subsystem acquires a token of a user currently positioned at the first position in the message queue, enters a payment page for payment when the token of the user currently positioned at the first position in the message queue is determined to exist in the redis, deletes the token of the user currently positioned at the first position in the message queue in the redis after the user payment is successful, and calls the real inventory processing subsystem;
and the real inventory processing subsystem deducts the inventory in the database according to the order quantity of the user currently positioned at the first position in the message queue.
2. The method of claim 1, wherein before the throttling subsystem accepts ordering requests from M users based on the current cached inventory level, the method further comprises:
the current limiting subsystem judges whether a blacklist comprises the user identification according to the user identification corresponding to the ordering request;
and when the blacklist does not include the user identification, receiving order placing requests of M users according to the current cached inventory.
3. The method of claim 1, wherein before the throttling subsystem accepts ordering requests from M users based on the current cached inventory level, the method further comprises:
judging whether the frequency of the order placing request submitted by the user is less than or equal to the preset frequency or not according to the user identification;
and when the frequency of the order placing requests submitted by the users is less than or equal to the preset frequency, receiving the order placing requests of M users according to the stock of the current cache.
4. The method according to any one of claims 1 to 3, wherein the payment subsystem obtains the token of the user currently at the first position in the message queue, and determines that the token of the user currently at the first position in the message queue exists in the redis, and enters a payment page for payment according to a payment logic, and the method comprises the following steps:
judging whether the user is a user with high payment priority or not according to the user identification;
and if the user is a user with high payment priority, obtaining the token of the user currently positioned at the first position in the message queue, and directly entering the payment page for payment when the token of the user currently positioned at the first position in the message queue is determined to exist in the redis.
5. The method according to any one of claims 1-3, wherein the current limiting subsystem accepts orders requests from M users based on current cached inventory levels, comprising:
acquiring request time when each user in the N users submits a ordering request;
and receiving the order placing requests of the users with the request time positioned in the first M users when the order placing requests are submitted from the N users according to the sequence of the request time when each user submits the order placing requests.
6. A high concurrency service scenario processing system, comprising:
the system comprises a current limiting subsystem, a cache inventory inquiry subsystem, a real inventory inquiry subsystem, an order subsystem, a payment subsystem and a real inventory processing subsystem;
the system comprises a current limiting subsystem and a cache inventory query subsystem, wherein the current limiting subsystem is used for calling the cache inventory query subsystem by utilizing nginx when receiving ordering requests of N users, querying the inventory of a current cache in redis, and receiving ordering requests of M users according to the inventory of the current cache, wherein the ordering requests of the users comprise user identifications, M is a positive integer less than or equal to N, and N is an integer greater than or equal to 1;
the cache inventory query subsystem is used for querying the inventory of the current cache in the redis when receiving a user ordering request, sending tokens to the M users, storing the tokens in the redis, and enabling the M users to enter a message queue, wherein the message queue adopts a first-in first-out rule;
the real inventory inquiry subsystem is used for inquiring the real inventory in a database according to the order placing request of the user currently positioned at the first position in the message queue, and calling the order subsystem when the inventory in the database is greater than the preset inventory;
the order subsystem is used for allowing the user to place an order according to an order placing request of the user who is currently at the first position in the message queue, acquiring the order request submitted by the user, and calling the payment subsystem;
the payment subsystem is used for acquiring the token of the user currently positioned at the first position in the message queue, entering a payment page for payment when the token of the user currently positioned at the first position in the message queue is determined to exist in the redis, deleting the token of the user currently positioned at the first position in the message queue existing in the redis after the user payment is successful, and calling the real inventory processing subsystem;
and the real inventory processing subsystem is used for deducting the inventory in the database according to the list placing quantity of the user currently positioned at the first position in the message queue.
7. The system of claim 6, wherein the throttling subsystem is further configured to, prior to accepting an order placement request from the M users based on the current cached inventory amount:
the current limiting subsystem judges whether a blacklist comprises the user identification according to the user identification corresponding to the ordering request;
when the blacklist does not include the user identification, receiving order placing requests of M users according to the current cached stock; or,
the current limiting subsystem is further configured to, before receiving ordering requests of the M users according to the current cached inventory amount:
judging whether the frequency of the order placing request submitted by the user is less than or equal to the preset frequency or not according to the user identification;
and when the frequency of the order placing requests submitted by the users is less than or equal to the preset frequency, receiving the order placing requests of M users according to the stock of the current cache.
8. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory, causing the processor to perform the method of any of claims 1-5.
9. A computer-readable storage medium, in which computer-executable instructions or a program are stored, which, when executed by a processor, implement the method according to any one of claims 1-5.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the method according to any of claims 1-5 when executed by a processor.
CN202111152540.8A 2021-09-29 2021-09-29 High-concurrency service scene processing method, system and storage medium Pending CN113887935A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111152540.8A CN113887935A (en) 2021-09-29 2021-09-29 High-concurrency service scene processing method, system and storage medium

Publications (1)

Publication Number Publication Date
CN113887935A true CN113887935A (en) 2022-01-04

Family

ID=79008083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111152540.8A Pending CN113887935A (en) 2021-09-29 2021-09-29 High-concurrency service scene processing method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113887935A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115208893A (en) * 2022-05-23 2022-10-18 中国银行股份有限公司 Resource scheduling method and device
CN115208893B (en) * 2022-05-23 2024-04-16 中国银行股份有限公司 Resource scheduling method and device
CN115423521A (en) * 2022-09-05 2022-12-02 深圳市一页科技有限公司 Large-batch concurrent payment optimization system and method thereof
CN115423521B (en) * 2022-09-05 2023-08-04 深圳市一页科技有限公司 Large-batch concurrent payment optimizing system and method thereof

Similar Documents

Publication Publication Date Title
US10346731B2 (en) Method and apparatus for dynamic interchange pricing
CN112132662B (en) Commodity second killing method and device, computer equipment and storage medium
CN110390595B (en) Information processing system, method, server and storage medium
CN110363666B (en) Information processing method, apparatus, computing device and storage medium
US8156042B2 (en) Method and apparatus for automatically reloading a stored value card
CN113887935A (en) High-concurrency service scene processing method, system and storage medium
KR100583181B1 (en) System and method for providing partial payment in the electronic commerce
US20190197511A1 (en) Method and apparatus for processing information
CN110046995B (en) Method, device and equipment for processing refund request
CN112465489A (en) Payment service processing method and device and machine-readable storage medium
CN109003074A (en) Enrichment merchant identifier associated with account data update request
CN110458544B (en) Payment method and payment service system of multi-cash-register-crossing system
CN109886676A (en) Method of payment, calculating equipment, storage medium for block chain network
CN111930786A (en) Resource acquisition request processing system, method and device
CN110689394B (en) Method and device for processing service supplementary notes
CN110969520A (en) Loan application method, loan application device, loan application server and computer storage medium
CN108564354B (en) Settlement method, service platform and server
US20220351264A1 (en) Financial Service Providing Method and Electronic Apparatus Performing the Same
JP2020518067A (en) System, method, and computer program for providing a card-linked offer network that allows consumers to link the same payment card to the same offer at multiple issuer sites.
US8977305B1 (en) Initiation of wireless service
US20150120419A1 (en) System and method for providing sale items
US20220318769A1 (en) Electronic apparatus for processing item sales information and method thereof
JP7466046B1 (en) Information processing device, information processing method, and program
JP6993840B2 (en) Management server, credit center server, and computer program
KR101074617B1 (en) System and method for providing partial payment in the electronic commerce

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination