CN112102044A - Method, system and device for processing high-concurrency second-killing commodities by message queue - Google Patents

Method, system and device for processing high-concurrency second-killing commodities by message queue

Info

Publication number
CN112102044A
CN112102044A (application CN202011242683.3A; granted as CN112102044B)
Authority
CN
China
Prior art keywords
message queue
killing
message
request
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011242683.3A
Other languages
Chinese (zh)
Other versions
CN112102044B (en)
Inventor
张艳清
陈宇坚
蓝科
王琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Sefon Software Co Ltd
Original Assignee
Chengdu Sefon Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Sefon Software Co Ltd filed Critical Chengdu Sefon Software Co Ltd
Priority to CN202011242683.3A priority Critical patent/CN112102044B/en
Publication of CN112102044A publication Critical patent/CN112102044A/en
Application granted granted Critical
Publication of CN112102044B publication Critical patent/CN112102044B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0633Lists, e.g. purchase orders, compilation or processing
    • G06Q30/0635Processing of requisition or of purchase orders

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method, a system and a device for processing high-concurrency second-killing (flash-sale) commodities by a message queue, mainly addressing the problems that existing second-killing systems struggle to handle large-scale access traffic, cannot guarantee fair competition among all users, and cannot fully utilize hardware resources. The method first checks the user login state and intercepts invalid requests, then distributes the valid requests to different servers through a distribution strategy, and obtains an optimal distribution strategy through message-queue analysis; after the high-concurrency traffic has been throttled by this three-layer interception framework, it enters the database, where product information is processed and updated, and the flow finally jumps to a payment interface for payment. Through this scheme, the invention improves the overall performance of the system, guarantees fair competition among all users, and maximizes the utilization of system resources.

Description

Method, system and device for processing high-concurrency second-killing commodities by message queue
Technical Field
The invention relates to the technical field of electronic commerce, in particular to a method, a system and a device for processing high-concurrency second-killing commodities by a message queue.
Background
With the rapid development of the internet industry, highly concurrent second-killing activities such as those on Taobao are typically handled with scalable architectures built by adding servers and configuring server clusters, while message queues are used for asynchronous message processing. To ensure efficient transmission, the second-killing commodity service considered here uses a message queue, is deployed for high performance, and supports user-specific customization.
Many studies of second-killing systems have appeared in recent years. For example, Yi et al. designed a fair-competition second-killing system using traditional Redis and Memcached, but it cannot be applied to high-concurrency scenarios; Jiangyihua et al. designed a high-concurrency second-killing system that uses load balancing and reverse proxying for traffic peak clipping and builds the winner queue with Memcached; and Liu Lei et al. designed a high-concurrency second-killing system based on the SSM framework, the Bootstrap framework and a Redis cache. However, none of these systems can intelligently select a cluster optimization strategy, that is, they cannot improve the utilization of the system hardware; they only ensure that the second-killing business itself runs normally, are not based on network optimization, and do not guarantee fairness for all users.
Existing second-killing systems have difficulty coping with the large-scale access traffic that precedes a second-killing activity and need sufficient hardware support, relying on load-balancing strategies to handle high concurrency; time differences caused by the network often mean that fair competition among all users cannot be guaranteed; and such systems are difficult to adapt to various complex application scenarios and cannot make full use of hardware resources.
Disclosure of Invention
The invention aims to provide a method, a system and a device for processing high-concurrency second-killing commodities by a message queue, so as to solve the problems that existing second-killing systems struggle to handle large-scale access traffic, cannot guarantee fair competition among all users, and cannot fully utilize hardware resources.
In order to solve the above problems, the present invention provides the following technical solutions:
A method for processing high-concurrency second-killing commodities by a message queue comprises the following steps:
S1, checking the login state of the user and intercepting invalid second-killing requests to obtain valid second-killing requests;
S2, distributing the valid second-killing requests from step S1 to different servers through different distribution strategies; these strategies may be defined by the user or may be a default load-balancing strategy, which generally routes requests to whichever servers are idle, and may also be configured so that servers with more idle resources are allocated proportionally more requests;
S3, accessing the servers from step S2 to obtain the second-killing request messages, parsing and comparing the messages to obtain an optimal distribution strategy, writing it into a message queue, and passing it on to the database; a message distributor assigns requests to different message queues according to the scenario, and different message queues apply different parsing strategies before the messages are finally sent to the database;
S4, processing the messages entering the database through Redis and optimizing the database cache;
S5, detecting whether the allowance (remaining-stock) coefficient of the product information in the database of step S4 is larger than a threshold; if so, going to step S6, otherwise updating the database;
S6, jumping to a payment interface; if payment succeeds within the set time, the second killing is completed, otherwise the flow returns to the payment interface.
The method checks the user login state and intercepts invalid requests, distributes the requests to different servers through a distribution strategy, and obtains an optimal distribution strategy through message-queue analysis; after the high-concurrency traffic has been throttled by this three-layer interception framework, the traffic that finally reaches the database is reduced to a low value, the read/write efficiency of the inventory is improved through SQL optimization, the overall performance of the system is greatly improved, and the amount of supporting hardware required is reduced.
Furthermore, an accurate time reference is needed before the user login state is verified, and after login the accurate time is synchronized to the commodity second-killing interface; the specific process is as follows:
S001, acquiring the standard time;
S002, measuring the time offset between the master clock and the slave clock, and eliminating the time offset by taking the standard time from step S001 as the reference;
S003, measuring the delay of message transmission between the master clock and the slave clock, monitoring at a sampling point of the standard time from step S001 to obtain a discrete error signal, and using Gaussian filtering to judge whether the dispersion coefficient of the error signal exceeds a set standard value; if so, the second killing ends, otherwise a second-killing address for the commodity is generated.
Existing second-killing systems suffer from poor command-triggering precision because of the uncertain delay of network transmission. The invention eliminates the time offset between the master clock and the slave clocks and uses Gaussian filtering to keep the dispersion coefficient of the error signal within a set range, so that the displayed-time precision error for every user lies between tens of nanoseconds and tens of sub-microseconds; this ensures, to the greatest extent, that the user experience is not degraded by network conditions and that all users compete fairly.
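As a rough illustration only (not taken from the patent text), the sketch below assumes a PTP-style two-way timestamp exchange to estimate the master/slave offset, then smooths a window of error samples with a Gaussian kernel and compares their dispersion coefficient (standard deviation divided by the mean of the absolute values) against a threshold, in the spirit of steps S001-S003; the class name, window handling and kernel parameters are all assumptions.

```java
/** Hypothetical sketch of steps S001-S003: offset estimation plus Gaussian-filtered dispersion check. */
public class ClockSyncCheck {

    /** PTP-style offset estimate from timestamps t1..t4 (t1/t4 master send/receive, t2/t3 slave receive/send). */
    static double estimateOffsetNanos(double t1, double t2, double t3, double t4) {
        return ((t2 - t1) - (t4 - t3)) / 2.0;   // symmetric network delay cancels out
    }

    /** Smooth raw error samples (nanoseconds) with a simple 1-D Gaussian kernel. */
    static double[] gaussianFilter(double[] samples, double sigma) {
        int radius = (int) Math.ceil(3 * sigma);
        double[] out = new double[samples.length];
        for (int i = 0; i < samples.length; i++) {
            double sum = 0, weightSum = 0;
            for (int k = -radius; k <= radius; k++) {
                int j = Math.min(Math.max(i + k, 0), samples.length - 1);
                double w = Math.exp(-(k * k) / (2.0 * sigma * sigma));
                sum += w * samples[j];
                weightSum += w;
            }
            out[i] = sum / weightSum;
        }
        return out;
    }

    /** Dispersion coefficient = stddev / mean(|x|); the seckill address would only be generated if it stays below the limit. */
    static boolean withinStandard(double[] filtered, double maxDispersion) {
        double mean = 0;
        for (double v : filtered) mean += Math.abs(v);
        mean /= filtered.length;
        double var = 0;
        for (double v : filtered) var += (Math.abs(v) - mean) * (Math.abs(v) - mean);
        double std = Math.sqrt(var / filtered.length);
        return mean == 0 || (std / mean) <= maxDispersion;
    }
}
```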
Further, the specific process of step S1 is as follows:
S101, sending a request to enter the single-commodity second-killing interface according to the second-killing address from step S003, and generating a login state code from the Token to verify the user login state; if the state code indicates failure, returning to the user login interface, and if it indicates success, going to step S102;
S102, after the single-commodity second-killing interface requests and the commodity second-killing requests are queued for client-side polling, intercepting repeated identical request information through CDN redirection and cache filtering, and retaining the valid request information.
In this method, verifying the user login state reduces invalid requests; the high-concurrency traffic generated by the single-commodity second-killing interface requests and the commodity second-killing requests is polled through DNS, the requests are classified, static data is served directly from the CDN, repeated identical requests are removed through cache filtering, and invalid requests are intercepted.
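The following sketch is illustrative only: the SeckillFilter class, the 30-second dedup window and the numeric state codes are assumptions rather than the patent's own definitions. It shows one possible shape of the step-S101/S102 interception, namely a token check that yields a success or failure state code, followed by cache filtering that discards repeated identical requests.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch of steps S101/S102: login-state verification plus duplicate-request interception. */
public class SeckillFilter {
    static final int STATE_SUCCESS = 200;   // assumed success state code
    static final int STATE_FAILURE = 401;   // assumed failure state code -> back to the login page

    private final Map<String, String> tokenStore = new ConcurrentHashMap<>();    // token -> userId
    private final Map<String, Long> recentRequests = new ConcurrentHashMap<>();  // userId:itemId -> last seen (ms)
    private static final long DEDUP_WINDOW_MS = 30_000;                          // assumed cache-filter window

    /** Registering a token on login; included only so the sketch is self-contained. */
    public void registerToken(String token, String userId) {
        tokenStore.put(token, userId);
    }

    /** S101: verify the login state carried by the token and return a state code. */
    public int verifyLoginState(String token) {
        return tokenStore.containsKey(token) ? STATE_SUCCESS : STATE_FAILURE;
    }

    /** S102: keep only the first identical request inside the dedup window. */
    public boolean acceptRequest(String token, String itemId) {
        String userId = tokenStore.get(token);
        if (userId == null) return false;
        String key = userId + ":" + itemId;
        long now = System.currentTimeMillis();
        Long previous = recentRequests.putIfAbsent(key, now);
        if (previous != null && now - previous < DEDUP_WINDOW_MS) {
            return false;                     // repeated identical request intercepted
        }
        recentRequests.put(key, now);         // refresh the window for this user/item pair
        return true;                          // valid request information is retained
    }
}
```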
Further, the specific process of step S2 is: Nginx load balancing distributes the valid request information from step S102 to different servers according to a distribution strategy; Nginx is used to load-balance the browser-to-server requests and to distribute them to different servers in the cluster through different strategies, reducing the data-processing pressure on the server side.
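Step S2 mentions that servers with more idle resources may receive proportionally more requests. The sketch below is one possible reading of that policy (a weighted random choice over reported idle capacity), not the actual Nginx configuration; the Backend record and the capacity figures are invented for illustration.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

/** Hypothetical weighted distribution strategy: servers with more idle resources receive more requests. */
public class IdleWeightedBalancer {

    /** A backend server and its currently idle capacity (arbitrary units). */
    public record Backend(String address, double idleCapacity) {}

    /** Pick a backend with probability proportional to its idle capacity. */
    public static Backend choose(List<Backend> backends) {
        double total = backends.stream().mapToDouble(Backend::idleCapacity).sum();
        double r = ThreadLocalRandom.current().nextDouble(total);
        double cumulative = 0;
        for (Backend b : backends) {
            cumulative += b.idleCapacity();
            if (r < cumulative) return b;
        }
        return backends.get(backends.size() - 1);   // numerical edge case
    }

    public static void main(String[] args) {
        List<Backend> cluster = List.of(
                new Backend("10.0.0.1:8080", 4.0),   // mostly idle -> receives most requests
                new Backend("10.0.0.2:8080", 1.0),
                new Backend("10.0.0.3:8080", 0.5));
        System.out.println("route request to " + choose(cluster).address());
    }
}
```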
Further, the specific process of step S3 is as follows:
S301, accessing the servers from step S2 to obtain the message queue of the request information, using a tree structure to construct the factors of the message-transmission protocol for that message queue, and grouping the factors according to a set of main factor groups;
S302, counting the value ranges of the keywords in the main factor groups of step S301 and mapping these value ranges to corresponding message-queue solutions to obtain an optimal distribution strategy;
S303, writing the optimal allocation strategy from step S302 into the corresponding message queue, and then judging whether the head pointer of the message queue equals the tail pointer modulo the maximum capacity, i.e. whether the circular queue is full; if so, the second killing fails, otherwise the message proceeds to the database.
The invention improves the message queue by simulating consumer purchasing behavior: configuration items are added, received messages are parsed, influence factors are distinguished by type and condition, different parsing functions are formulated, and set weights determine which message-queue solution mode is used. This not only guarantees asynchronous message transmission, decoupling between modules, and safe and reliable data transmission under high concurrency, but also keeps different types of application conditions from interfering with one another, since the modes are independent. By changing the weights and parsing functions, the deployment scheme can be expanded quickly and flexibly to handle complex application scenarios under various conditions, allowing system resources to reach maximum utilization.
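Step S303's full-queue test (head pointer equal to the tail pointer modulo the maximum capacity) reads like the classic circular-buffer full condition. The sketch below shows that convention under the usual assumption that one slot is sacrificed to distinguish a full queue from an empty one; it is an interpretation, not a quotation from the patent.

```java
/** Hypothetical bounded circular message queue illustrating the step-S303 full check. */
public class CircularMessageQueue {
    private final String[] slots;
    private int head = 0;   // next element to consume
    private int tail = 0;   // next free slot

    public CircularMessageQueue(int maxCapacity) {
        this.slots = new String[maxCapacity];
    }

    /** Full when advancing the tail would land on the head (one slot deliberately kept empty). */
    public boolean isFull() {
        return (tail + 1) % slots.length == head;
    }

    /** Returns false when the queue is full, mirroring "the second killing fails" in S303. */
    public boolean offer(String strategyMessage) {
        if (isFull()) return false;
        slots[tail] = strategyMessage;
        tail = (tail + 1) % slots.length;
        return true;
    }

    /** Returns null when empty; a consumer would forward the message on towards the database. */
    public String poll() {
        if (head == tail) return null;
        String msg = slots[head];
        slots[head] = null;
        head = (head + 1) % slots.length;
        return msg;
    }
}
```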
Further, the message-queue solutions in step S302 include: a single-node solution formed by a single message queue, a multi-node message-queue cluster solution formed by multiple single message queues, a single-node message-queue solution based on a master-slave architecture, and a multi-node message-queue cluster solution based on a master-slave architecture; analysis of the business shows that these four message-queue modes essentially cover all second-killing scenarios.
Further, the specific process of step S302 is:
S302.1, parsing the message queue according to the keywords, converting the parsed file, and distributing it to different message-queue groups in the form of binary file packages;
S302.2, determining the parsing function corresponding to each message-queue group according to the message-queue group and the main-factor keywords parsed in step S302.1;
S302.3, performing secondary parsing of the binary file packages with the parsing function from step S302.2, applying a weighting method, and selecting the corresponding distribution strategies in order of the weighted results.
Through this scheme, message-queue solutions are allocated reasonably and resources are fully utilized.
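To make the weighting idea in steps S302.1-S302.3 concrete, here is a purely illustrative sketch: parsed main factors (message count, queue count, message size, failover requirement) are scored against the four solution modes listed above, and the mode with the highest weighted score is chosen. The factor names, scoring rules and weights are invented; the patent only states that keywords, parsing functions and weights are used.

```java
import java.util.EnumMap;
import java.util.Map;

/** Hypothetical weighted choice among the four message-queue solution modes from step S302. */
public class QueueSolutionSelector {

    public enum Solution { SINGLE_NODE, MULTI_NODE_CLUSTER, SINGLE_NODE_MASTER_SLAVE, MULTI_NODE_MASTER_SLAVE_CLUSTER }

    /** Parsed main factors for one message-queue group (all figures are illustrative). */
    public record Factors(long messageCount, int queueCount, long avgMessageBytes, boolean needsFailover) {}

    /** Score each solution from the factors and return the best one. */
    public static Solution select(Factors f) {
        Map<Solution, Double> score = new EnumMap<>(Solution.class);
        double concurrency = Math.min(f.messageCount() / 100_000.0, 1.0);  // 0..1, assumed normalisation
        double size = Math.min(f.avgMessageBytes() / 1_000_000.0, 1.0);    // 0..1
        double failover = f.needsFailover() ? 1.0 : 0.0;

        // Assumed weights: low load favours a single node, high load favours a cluster,
        // and the master-slave variants win whenever failover matters.
        score.put(Solution.SINGLE_NODE,                     1.0 - concurrency - 0.5 * failover);
        score.put(Solution.MULTI_NODE_CLUSTER,              concurrency + 0.3 * size - 0.5 * failover);
        score.put(Solution.SINGLE_NODE_MASTER_SLAVE,        (1.0 - concurrency) + failover);
        score.put(Solution.MULTI_NODE_MASTER_SLAVE_CLUSTER, concurrency + 0.3 * size + failover);

        return score.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get()
                .getKey();
    }

    public static void main(String[] args) {
        Factors busy = new Factors(500_000, 8, 2_048, true);
        System.out.println("chosen solution: " + select(busy)); // MULTI_NODE_MASTER_SLAVE_CLUSTER
    }
}
```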
Further, the specific process of step S4 is as follows:
S401, caching the product stock from the messages entering the database via Redis;
S402, after step S401 is completed, obtaining the product object through deserialization and then recording the inventory change through an insert;
S403, after step S402 is completed, acquiring the lock through an update and then releasing the lock;
S404, after step S403 is completed, detecting through Cookie verification whether the product allowance coefficient is larger than the threshold; if not, updating the database, and if so, jumping to the payment interface.
In the invention, product inventory is cached in Redis. When the same contested product is operated on, if an update operation is issued first, a row lock is placed on the data, and other users' operations must wait for the previous user to commit or roll back the transaction before the lock is released, which severely blocks the second-killing process, whereas insert operations can proceed concurrently; the order of the insert and update operations therefore determines whether the database design module achieves high performance.
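The sketch below illustrates, under stated assumptions, the insert-before-update ordering argued for above: stock is pre-checked in Redis, the order row is inserted first because inserts do not contend for the same row lock, and only then is the stock row updated, so the lock is held as briefly as possible. The Jedis client, the JDBC URL and the table and column names are assumptions; the patent itself only names Redis, insert and update.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import redis.clients.jedis.Jedis;

/** Hypothetical order path showing the insert-first, update-last sequence discussed above. */
public class SeckillOrderDao {

    /** Returns true if the order was recorded and stock deducted; table and key names are invented. */
    public boolean placeOrder(String userId, long productId) throws Exception {
        // 1. Redis pre-check: a fast atomic decrement keeps most traffic away from the database.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            if (jedis.decr("stock:" + productId) < 0) {
                jedis.incr("stock:" + productId);   // roll the counter back; product is sold out
                return false;
            }
        }

        try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/seckill", "app", "secret")) {
            conn.setAutoCommit(false);
            try {
                // 2. Insert first: concurrent inserts into the order table do not block each other.
                try (PreparedStatement insert =
                             conn.prepareStatement("INSERT INTO seckill_order (user_id, product_id) VALUES (?, ?)")) {
                    insert.setString(1, userId);
                    insert.setLong(2, productId);
                    insert.executeUpdate();
                }
                // 3. Update last: the row lock on the stock record is taken late and released at commit.
                try (PreparedStatement update = conn.prepareStatement(
                        "UPDATE product_stock SET remaining = remaining - 1 WHERE product_id = ? AND remaining > 0")) {
                    update.setLong(1, productId);
                    if (update.executeUpdate() == 0) {   // lost the race inside the database
                        conn.rollback();
                        return false;
                    }
                }
                conn.commit();
                return true;
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}
```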
A system for processing high-concurrency second-killing commodities by a message queue comprises:
A user login module: used for user login, ensuring that all users participating in the second killing are valid users registered on the platform;
A second-killing commodity information module: used for showing the commodity categories and lists put on the shelf for the second-killing activity; clicking an item opens a detail page where the detailed commodity information can be viewed;
A time module: used to ensure that the displayed-time precision error for each user is between tens of nanoseconds and tens of sub-microseconds;
A server filtering module: used for verifying the user state and intercepting repeated identical requests;
A load balancing module: used for distributing the requests to different servers;
A message queue improving module: used for parsing and distributing the user request information and determining a reasonable processing and distribution strategy;
A database operation module: used for updating the delivered-product inventory information;
A payment page module: used for the user to pay for the product.
An electronic device includes
A memory: for storing executable instructions;
A processor: for executing the executable instructions stored in the memory to implement the method for processing high-concurrency second-killing commodities by a message queue.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method checks the user login state and intercepts invalid requests, distributes the requests to different servers through a distribution strategy, and obtains an optimal distribution strategy through message-queue analysis; after the high-concurrency traffic has been throttled by the three-layer interception framework, the traffic that finally reaches the database is reduced to a low value, the read/write efficiency of the inventory is improved through SQL optimization, the overall performance of the system is greatly improved, and the amount of supporting hardware required is reduced.
(2) The invention eliminates the time offset between the master clock and the slave clocks and uses Gaussian filtering to keep the dispersion coefficient of the error signal within a set range, so that the displayed-time precision error for every user lies between tens of nanoseconds and tens of sub-microseconds; this ensures, to the greatest extent, that the user experience is not degraded by network conditions and that all users compete fairly.
(3) The invention improves the message queue by simulating consumer purchasing behavior: configuration items are added, received messages are parsed, influence factors are distinguished by type and condition, different parsing functions are formulated, and set weights determine which message-queue solution mode is used. This not only guarantees asynchronous message transmission, decoupling between modules, and safe and reliable data transmission under high concurrency, but also keeps different types of application conditions from interfering with one another, since the modes are independent. By changing the weights and parsing functions, the deployment scheme can be expanded quickly and flexibly to handle complex application scenarios under various conditions, allowing system resources to reach maximum utilization.
(4) The invention improves concurrency through vertical scaling, handles network fluctuation of the standard time in three ways to ensure, to the greatest extent, that the start time is consistent for all users, and improves the message queue based on consumer purchasing-behavior theory, so that the message-queue cluster becomes a system that selects an optimal strategy, handles many complex application scenarios, and offers high performance and high stability.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts, wherein:
fig. 1 is a block diagram of a flow structure of the embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail with reference to fig. 1, the described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention.
Example 1
As shown in fig. 1, the invention is a second-killing method that improves concurrency capability through vertical scaling, processes large-scale requests per unit time, balances load with Nginx, and optimizes database reads and writes; the message queue is improved based on consumer purchasing-behavior theory, giving the method high concurrency, high performance, precise timing, efficient data response, and an optimized message-queue strategy. The method for processing high-concurrency second-killing commodities by a message queue specifically comprises the following steps:
A1, user login: every user participating in the second killing must be a valid user registered on the platform and can take part only after logging in;
A2, after logging in, browsing the categories and list of commodities participating in the second killing; the user selects the desired category and commodity and clicks through to the detail page to take part in the second killing;
A3, acquiring the standard time, measuring the time offset between the master clock and the slave clock, and eliminating the offset with the standard time as the reference; then measuring the delay of message transmission between the master clock and the slave clock, monitoring at a sampling point of the standard time to obtain a discrete error signal, and using Gaussian filtering to judge whether the dispersion coefficient of the error signal exceeds a set standard value; if so, the second killing ends, otherwise a second-killing address for the commodity is generated; this accurate-time synchronization keeps the displayed-time precision error for each user between tens of nanoseconds and tens of sub-microseconds;
A4, verifying the user state through the Token: if the state code indicates failure, returning to the user login interface, and if it indicates success, going to the next step; since browser traffic is very high at the start of the second killing, repeated identical invalid requests are intercepted through the CDN and cache filtering to obtain the valid requests;
A5, using Nginx to load-balance the valid requests from step A4 across the browser-facing servers and distributing them to different servers in the cluster through different strategies, reducing the data-processing pressure on the server side;
A6, accessing the servers to obtain the second-killing request messages, parsing and comparing the messages to obtain the optimal distribution strategy, writing it into the message queue, and passing it on to the database;
in economics, a consumer's purchasing behavior is to meet a certain demand, under the control of a certain purchasing motivation, in two or more purchasing schemes that can be selected, the best purchasing scheme is analyzed, evaluated, selected and implemented, in a complex purchase, the purchasing decision process of the consumer is decided by multiple influencing factors, and different products are selected for consumption according to different situations; the message queue link of the step simulates the purchasing behavior theory of the consumer, so that the message queue is improved, and the aims of higher efficiency and more reasonability are fulfilled;
a6.1, in the message queue, we can configure different message queues according to different scenarios, and through structural analysis of the message queue model, the message queue structure under four modes can be obtained approximately: the system comprises a single-node solution formed by a single message queue, a multi-node message queue cluster solution formed by a plurality of single message queues, a single-node message queue solution based on a master-slave architecture and a multi-node message queue cluster solution based on the master-slave architecture; through the analysis of the service, the above four message queues basically satisfy all second killing scenes.
A6.2, when the message queue from the web server is received, a configuration item is added and operated on automatically by the program, and the received message queue is parsed. The parsing process uses a tree structure to construct the factors of the message-transmission protocol according to characteristics of the message queue; each factor can be extended independently, so that the various psychological states of users, and the conditions corresponding to them, can be simulated fully. Several influence factors are defined, such as user group, application group, message count, queue count and message body; these five are designated the main factors of the protocol, and the remaining factors are their sub-factors. The main factors may differ between application groups; when counting the keywords of the main factors, the reference values may be factors such as application scope, user name and queue number, and secondarily the nature of the message body itself.
The keywords are divided into value ranges, and the message volumes of different value ranges are mapped to different message-queue solutions so that resources are used reasonably. When a single application group contains few application types but very large messages, the message-queue groups can be distinguished by factor, or different message types can be assigned to different message-queue groups, to ensure that important information is transmitted completely; when message concurrency is low and the message count lies in a normal range, adopting a cluster message-queue solution would waste resources. For the performance of the overall system, the parsed files are converted and distributed to the different message-queue groups in the form of binary file packages. The binary file packages then undergo secondary parsing through the parsing functions, a weighting method is applied, and the corresponding strategy is selected preferentially, completing a reasonable allocation of message-queue schemes and making full use of resources. After the message-queue groups and the influence factors are determined, the relationships of the corresponding message-queue groups are determined from the distribution of the key factors, a message distributor is used to distribute the messages, and the keywords are then parsed to decide which parsing function the group of a message should use. Different parsing functions have different parsing conditions and assign different weights to different influence factors; analysing them yields a reasonable distribution strategy and completes the strategy-simulation process.
A6.3, after the message-queue groups have classified the messages, the messages of different application groups must be distributed to the corresponding message queues by a message distributor before entering the queues. A traditional message queue has no such distributor role, so an intermediate component is set up to forward messages: they are forwarded from the server cluster to the message distributor, which parses them and distributes them to the message servers. During distribution, a HashMap structure stores the mapping between message-queue groups and messages from the configuration file, with all message types of a group combined into a List as the value and the message keyword as the key. When distributing, the mapping in the HashMap is read first to determine the message queue a message belongs to, the address corresponding to that message queue is taken from the configuration file, message distribution is managed through the number of connections and sessions created by the connection factory, the transmission mode for sending can be set to point-to-point or publish-subscribe, and the message is finally sent to its destination.
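A possible shape for the message distributor described in A6.3 is sketched below: a HashMap stores the mapping from a message keyword to the List of message types of its group, and each keyword is also resolved to a configured queue address before the message is forwarded point-to-point or published to subscribers. The class, the configuration handling and the send call are assumptions; the patent specifies only the HashMap/List structure and the two transmission modes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical message distributor: keyword -> message-queue group -> queue address. */
public class MessageDistributor {

    public enum Mode { POINT_TO_POINT, PUBLISH_SUBSCRIBE }

    // Mapping read from the configuration file: key = message keyword, value = message types of the group.
    private final Map<String, List<String>> groupMapping = new HashMap<>();
    // Queue address configured for each message-queue group.
    private final Map<String, String> queueAddress = new HashMap<>();

    public void configureGroup(String keyword, List<String> messageTypes, String address) {
        groupMapping.put(keyword, messageTypes);
        queueAddress.put(keyword, address);
    }

    /** Route one message: look up its keyword, pick the group's queue address and forward it. */
    public void distribute(String keyword, String body, Mode mode) {
        List<String> types = groupMapping.get(keyword);
        String address = queueAddress.get(keyword);
        if (types == null || address == null) {
            throw new IllegalArgumentException("no message-queue group configured for keyword " + keyword);
        }
        send(address, body, mode);   // placeholder for a real broker client (JMS, Kafka, RabbitMQ, ...)
    }

    private void send(String address, String body, Mode mode) {
        System.out.printf("sending to %s in %s mode: %s%n", address, mode, body);
    }

    public static void main(String[] args) {
        MessageDistributor distributor = new MessageDistributor();
        distributor.configureGroup("order", List.of("create", "cancel"), "tcp://mq-1:61616/order-queue");
        distributor.distribute("order", "{\"userId\":\"u1\",\"productId\":42}", Mode.POINT_TO_POINT);
    }
}
```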
A7, product inventory is cached in Redis; when the same contested product is operated on, if an update operation is issued first, a row lock is placed on the data, and other users' operations must wait for the previous user to commit or roll back the transaction before the lock is released, which severely blocks the second killing, whereas insert operations can run concurrently; the order of the insert and update operations therefore determines whether the database design module achieves high performance.
A8, after the allowance coefficient confirms that sufficient stock remains, the flow jumps to the payment page; if the user completes payment within the set time, the second-killing order succeeds; if the user fails to pay as required, the order is treated as an invalid order event, and the user must jump to the payment page again to pay again.
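For step A8, a small sketch of the payment-deadline rule: an order carries the time it was created, and a payment arriving after the configured window marks the order invalid, so the user has to return to the payment page. The 15-minute window and the class name are illustrative assumptions, not values given in the patent.

```java
import java.time.Duration;
import java.time.Instant;

/** Hypothetical payment-window check for a second-killing order. */
public class PaymentDeadline {

    private static final Duration PAYMENT_WINDOW = Duration.ofMinutes(15);   // assumed window

    /** True if the payment arrived inside the window measured from order creation. */
    public static boolean paymentValid(Instant orderCreatedAt, Instant paidAt) {
        return !paidAt.isAfter(orderCreatedAt.plus(PAYMENT_WINDOW));
    }

    public static void main(String[] args) {
        Instant created = Instant.parse("2020-11-10T12:00:00Z");
        System.out.println(paymentValid(created, Instant.parse("2020-11-10T12:10:00Z"))); // true  -> order succeeds
        System.out.println(paymentValid(created, Instant.parse("2020-11-10T12:30:00Z"))); // false -> invalid order
    }
}
```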
Example 2
A system for processing high-concurrency second-killing commodities by a message queue comprises a user login module: used for user login, ensuring that all users participating in the second killing are valid users registered on the platform; a second-killing commodity information module: used for showing the commodity categories and lists put on the shelf for the second-killing activity, where clicking an item opens a detail page showing the detailed commodity information; a time module: used to ensure that the displayed-time precision error for each user is between tens of nanoseconds and tens of sub-microseconds; a server filtering module: used for verifying the user state and intercepting repeated identical requests; a load balancing module: used for distributing the requests to different servers; a message queue improving module: used for parsing and distributing the user request information and determining a reasonable processing and distribution strategy; a database operation module: used for updating the delivered-product inventory information; and a payment page module: used for the user to pay for the product.
Example 3
An electronic device includes a memory for storing executable instructions and a processor for executing the executable instructions stored in the memory to implement the method for processing high-concurrency second-killing commodities by a message queue.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for processing high-concurrency second-killing commodities by a message queue, characterized by comprising the following steps:
S1, checking the login state of the user and intercepting invalid second-killing requests to obtain valid second-killing requests;
S2, distributing the valid second-killing requests from step S1 to different servers through different distribution strategies;
S3, accessing the servers from step S2 to obtain the second-killing request messages, parsing and comparing the messages to obtain an optimal distribution strategy, writing it into a message queue, and passing it on to the database;
S4, processing the messages entering the database through Redis and optimizing the database cache;
S5, detecting whether the allowance coefficient of the product information in the database of step S4 is larger than a threshold; if so, going to step S6, otherwise updating the database;
S6, jumping to a payment interface; if payment succeeds within the set time, the second killing is completed, otherwise the flow returns to the payment interface.
2. The method for processing high-concurrency second-killing commodities by a message queue according to claim 1, wherein an accurate time reference is required before the user login state is verified, and after login the accurate time is synchronized to the commodity second-killing interface; the specific process is as follows:
S001, acquiring the standard time;
S002, measuring the time offset between the master clock and the slave clock, and eliminating the time offset by taking the standard time from step S001 as the reference;
S003, measuring the delay of message transmission between the master clock and the slave clock, monitoring at a sampling point of the standard time from step S001 to obtain a discrete error signal, and using Gaussian filtering to judge whether the dispersion coefficient of the error signal exceeds a set standard value; if so, the second killing ends, otherwise a second-killing address for the commodity is generated.
3. The method for processing high-concurrency second-killing commodities by a message queue according to claim 2, wherein the specific process of step S1 is as follows:
S101, sending a request to enter the single-commodity second-killing interface according to the second-killing address from step S003, and generating a login state code from the Token to verify the user login state; if the state code indicates failure, returning to the user login interface, and if it indicates success, going to step S102;
S102, after the single-commodity second-killing interface requests and the commodity second-killing requests are queued for client-side polling, intercepting repeated identical request information through CDN redirection and cache filtering, and retaining the valid request information.
4. The method for processing high-concurrency second-killing commodities by a message queue according to claim 3, wherein the specific process of step S2 is as follows: Nginx load balancing distributes the valid request information from step S102 to different servers according to a distribution policy.
5. The method for processing high-concurrency second-killing commodities by a message queue according to claim 1, wherein the specific process of step S3 is as follows:
S301, accessing the servers from step S2 to obtain the message queue of the request information, using a tree structure to construct the factors of the message-transmission protocol for that message queue, and grouping the factors according to a set of main factor groups;
S302, counting the value ranges of the keywords in the main factor groups of step S301 and mapping these value ranges to corresponding message-queue solutions to obtain an optimal distribution strategy;
S303, writing the optimal allocation strategy from step S302 into the corresponding message queue, and then judging whether the head pointer of the message queue equals the tail pointer modulo the maximum capacity; if so, the second killing fails, otherwise the message proceeds to the database.
6. The method for processing high-concurrency second-killing commodities by a message queue according to claim 5, wherein the message-queue solutions in step S302 include: a single-node solution formed by a single message queue, a multi-node message-queue cluster solution formed by multiple single message queues, a single-node message-queue solution based on a master-slave architecture, and a multi-node message-queue cluster solution based on a master-slave architecture.
7. The method for processing high-concurrency second-killing commodities by a message queue according to claim 6, wherein the specific process of step S302 is as follows:
S302.1, parsing the message queue according to the keywords, converting the parsed file, and distributing it to different message-queue groups in the form of binary file packages;
S302.2, determining the parsing function corresponding to each message-queue group according to the message-queue group and the main-factor keywords parsed in step S302.1;
S302.3, performing secondary parsing of the binary file packages with the parsing function from step S302.2, applying a weighting method, and selecting the corresponding distribution strategies in order of the weighted results.
8. The method for processing high-concurrency second-killing commodities by a message queue according to claim 1, wherein the specific process of step S4 is as follows:
S401, caching the product stock from the messages entering the database via Redis;
S402, after step S401 is completed, obtaining the product object through deserialization and then recording the inventory change through an insert;
S403, after step S402 is completed, acquiring the lock through an update and then releasing the lock;
S404, after step S403 is completed, detecting through Cookie verification whether the product allowance coefficient is larger than the threshold; if not, updating the database, and if so, jumping to the payment interface.
9. A system for processing high-concurrency second-killing commodities by a message queue, characterized by comprising
a user login module: used for user login, ensuring that all users participating in the second killing are valid users registered on the platform;
a second-killing commodity information module: used for showing the commodity categories and lists put on the shelf for the second-killing activity, where clicking an item opens a detail page showing the detailed commodity information;
a time module: used to ensure that the displayed-time precision error for each user is between tens of nanoseconds and tens of sub-microseconds;
a server filtering module: used for verifying the user state and intercepting repeated identical requests;
a load balancing module: used for distributing the requests to different servers;
a message queue improving module: used for parsing and distributing the user request information and determining a reasonable processing and distribution strategy;
a database operation module: used for updating the delivered-product inventory information;
a payment page module: used for the user to pay for the product.
10. An electronic device, comprising
a memory: for storing executable instructions;
a processor: for executing the executable instructions stored in the memory to implement the method for processing high-concurrency second-killing commodities by a message queue according to any one of claims 1 to 8.
CN202011242683.3A 2020-11-10 2020-11-10 Method, system and device for processing high-concurrency second-killing commodities by message queue Active CN112102044B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011242683.3A CN112102044B (en) 2020-11-10 2020-11-10 Method, system and device for processing high-concurrency second-killing commodities by message queue

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011242683.3A CN112102044B (en) 2020-11-10 2020-11-10 Method, system and device for processing high-concurrency second-killing commodities by message queue

Publications (2)

Publication Number Publication Date
CN112102044A true CN112102044A (en) 2020-12-18
CN112102044B CN112102044B (en) 2021-03-09

Family

ID=73785208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011242683.3A Active CN112102044B (en) 2020-11-10 2020-11-10 Method, system and device for processing high-concurrency second-killing commodities by message queue

Country Status (1)

Country Link
CN (1) CN112102044B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103825835A (en) * 2013-11-29 2014-05-28 中邮科通信技术股份有限公司 Internet high concurrency seckilling system
CN108133399A (en) * 2016-11-30 2018-06-08 北京京东尚科信息技术有限公司 The second of high concurrent fast-response kills the method, apparatus and system that inventory precisely reduces
US10637730B2 (en) * 2018-02-02 2020-04-28 Citrix Systems, Inc. Message queue migration on A/B release environments
CN108418821A (en) * 2018-03-06 2018-08-17 北京焦点新干线信息技术有限公司 Redis and Kafka-based high-concurrency scene processing method and device for online shopping system
CN109582738A (en) * 2018-12-03 2019-04-05 广东鸭梨科技集团股份有限公司 A kind of processing high concurrent second kills movable method
CN111091405A (en) * 2019-09-12 2020-05-01 达疆网络科技(上海)有限公司 Implementation scheme for solving high concurrency of killing and promoting second
CN111008882A (en) * 2019-11-13 2020-04-14 上海易点时空网络有限公司 Data processing method and device for killing activity in seconds

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘磊 (Liu Lei): "Design and Implementation of a High-Concurrency E-commerce Seckill System", Modern Computer (Professional Edition) *
开拖拉机的蜡笔小新: "Seckill System Optimization Scheme, Part 2", HTTPS://WWW.CNBLOGS.COM/XIANGKEJIN/P/9351501.HTML *
杨丰瑞 (Yang Fengrui) et al.: "Design and Implementation of a University Research Team Information Management System Based on the ESSH Framework", Software Guide *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112667600A (en) * 2020-12-28 2021-04-16 紫光云技术有限公司 Inventory solution method combining redis and MySQL
CN112950307A (en) * 2021-01-29 2021-06-11 成都环宇知了科技有限公司 Swoole framework-based second killing method and system
CN112801753A (en) * 2021-02-09 2021-05-14 深圳市富途网络科技有限公司 Page display method, device and medium
CN112801753B (en) * 2021-02-09 2024-04-23 深圳市富途网络科技有限公司 Page display method, device and medium
CN113360570A (en) * 2021-05-31 2021-09-07 紫光云技术有限公司 High-concurrency system inventory module implementation method
CN116567281A (en) * 2023-04-19 2023-08-08 上海百秋智尚网络服务有限公司 Live interaction method, device, equipment and storage medium
CN117972096A (en) * 2024-03-29 2024-05-03 深圳美云集网络科技有限责任公司 Method and system for processing interaction message of social platform
CN117972096B (en) * 2024-03-29 2024-06-07 深圳美云集网络科技有限责任公司 Method and system for processing interaction message of social platform

Also Published As

Publication number Publication date
CN112102044B (en) 2021-03-09

Similar Documents

Publication Publication Date Title
CN112102044B (en) Method, system and device for processing high-concurrency second-killing commodities by message queue
US10977083B2 (en) Cost optimized dynamic resource allocation in a cloud infrastructure
Chihoub et al. Harmony: Towards automated self-adaptive consistency in cloud storage
US11442818B2 (en) Prioritized leadership for data replication groups
US7503052B2 (en) Asynchronous database API
US20100293334A1 (en) Location updates for a distributed data store
US10789267B1 (en) Replication group data management
Gao et al. Improving availability and performance with application-specific data replication
CN101605092A (en) A kind of content-based SiteServer LBS
CN106375416B (en) Consistency dynamic adjusting method and device in distributed data-storage system
CN106484713A (en) A kind of based on service-oriented Distributed Request Processing system
CN103312624A (en) Message queue service system and method
CN110032451A (en) Distributed multilingual message realization method, device and server
CN105976245A (en) Simulated trading system and method
Malyuga et al. Fault tolerant central saga orchestrator in RESTful architecture
US8752071B2 (en) Identifying subscriber data while processing publisher event in transaction
US20230336368A1 (en) Block chain-based data processing method and related apparatus
CN109033315A (en) Data query method, client, server and computer-readable medium
Alonso‐Monsalve et al. A new volunteer computing model for data‐intensive applications
CN107370797A (en) A kind of method and apparatus of the strongly-ordered queue operation based on HBase
Chihoub et al. 10 ConsistencyManagement in Cloud Storage Systems
Coelho et al. GeoPaxos+: practical geographical state machine replication
Sun et al. Adaptive trade‐off between consistency and performance in data replication
CN110311789A (en) Data safe transmission method and device
CN117478504B (en) Information transmission method, device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant