CN115760301A - Design method and system for time-window processing of flash-sale high-concurrency scenarios - Google Patents

Design method and system for time-window processing of flash-sale high-concurrency scenarios Download PDF

Info

Publication number
CN115760301A
CN115760301A (application CN202211501741.9A)
Authority
CN
China
Prior art keywords
request
requests
time window
cache
commodity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211501741.9A
Other languages
Chinese (zh)
Inventor
张炜 (Zhang Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Electronic Commerce Co Ltd
Original Assignee
Tianyi Electronic Commerce Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Electronic Commerce Co Ltd filed Critical Tianyi Electronic Commerce Co Ltd
Priority to CN202211501741.9A priority Critical patent/CN115760301A/en
Publication of CN115760301A publication Critical patent/CN115760301A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a design method and system for time-window processing of flash-sale (seckill) high-concurrency scenarios, and relates to the field of Internet e-commerce. When a request enters the server, it is routed by commodity id, ensuring that all requests for the same commodity id land on the same machine. The server starts a counter to count the request volume; when the volume reaches a certain magnitude, all requests within a unit of time are placed into a priority queue ordered by commodity quantity from high to low. The requests that enter the queue within the unit of time are merged into a single record, and an asynchronous thread is started to update the inventory database according to the merged record. The remaining requests that did not enter the priority queue within the unit of time are carried into the merge of the next time window by means of CAS. The method supports per-data-source rule configuration, is applicable to batch processing tasks and updates, and improves timeliness and low-loss transmission.

Description

Design method and system for time-window processing of flash-sale high-concurrency scenarios
Technical Field
The invention relates to the field of Internet e-commerce, and in particular to a design method and system for time-window processing of flash-sale high-concurrency scenarios.
Background
With the gradual advance of enterprise informatization and, in recent years, the expansion of e-commerce platforms and their many marketing activities, events such as flash sales, large promotions and consumption-coupon issuance have become increasingly common. A study of flash-sale system designs across platforms shows that the performance bottleneck of most designs arises at the database layer; the usual remedies are caching or adding machines, with rate limiting once capacity reaches its ceiling. A design method for handling flash-sale high-concurrency scenarios is therefore needed, one applicable to batch processing tasks and updates that improves timeliness and low-loss transmission.
Disclosure of Invention
The invention aims to provide a design method for time-window processing of flash-sale high-concurrency scenarios, which can support differentiated analysis of multiple data sources, dynamically adjust configuration rules without modifying the stream-processing function code, support per-data-source rule configuration, be applied to batch processing tasks and updates, and improve timeliness and low-loss transmission.
The invention further aims to provide a design system for time-window processing of flash-sale high-concurrency scenarios, implemented on the basis of the above design method.
The embodiment of the invention is realized by the following steps:
In a first aspect, an embodiment of the present application provides a design method for time-window processing of a flash-sale high-concurrency scenario, comprising the following steps: (1) when a request enters the server, it is routed by commodity id, ensuring that all requests for the same commodity id land on the same machine; (2) the server starts a counter to count the request volume; when the volume reaches a certain magnitude, all requests within a unit of time are placed into a priority queue ordered by commodity quantity from high to low; (3) the requests that enter the queue within the unit of time are merged into a single record, and an asynchronous thread is started to update the inventory database according to the merged record; (4) the remaining requests that did not enter the priority queue within the unit of time are carried into the merge of the next time window by means of CAS; (5) when the inventory is modified, an order journal table is written at the same time; if the deduction succeeds but the request response times out or fails, the order system sends a message to the inventory service to roll back the deduction that just succeeded; (6) the SQL statements that modify the inventory use insert-first-then-update; (7) the cache of the whole order system is refreshed by listening to the database binlog, and the system refreshes its cache in step whenever it observes a binlog update.
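The commodity-id routing of step (1) can be sketched as a hash-modulo shard selection, so that all requests for one id land on one machine; the class and method names below are illustrative, not from the patent, and in practice the routing is done by a load balancer rather than application code:

```java
public class ShardRouter {
    // Map a commodity id to one of N shards. The same id always yields the
    // same shard, so all of its requests are routed to the same machine.
    static int shardFor(String commodityId, int shardCount) {
        // mask the sign bit so the modulo result is non-negative
        return (commodityId.hashCode() & 0x7fffffff) % shardCount;
    }

    public static void main(String[] args) {
        System.out.println(shardFor("sku-1001", 8));
        System.out.println(shardFor("sku-1001", 8)); // identical to the line above
    }
}
```

The patent performs this binding via Nginx or F5; the snippet only illustrates the stable id-to-shard mapping such a configuration relies on.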
In some embodiments of the invention, step (2) includes: the higher the request volume, the shorter the time window is set.
In some embodiments of the invention, step (3) includes: the merged record covers both inventory deductions and additions; before modification it is checked whether the total inventory is smaller than the merged record, and if so, deductions are applied in order.
In some embodiments of the invention, step (4) is implemented with Spring's @Retryable annotation.
In some embodiments of the invention, step (7) includes incrementing a version number on every synchronous update.
In some embodiments of the invention, step (7) includes preheating the cache before the system starts, so that all subsequent query requests hit only the cache and never operate the database directly.
In a second aspect, an embodiment of the present application provides a design system for time-window processing of a flash-sale high-concurrency scenario, where the system includes: a server request module: when a request enters the server, it is routed by commodity id, ensuring that all requests for the same commodity id land on the same machine; a request ordering module: the server starts a counter to count the request volume; when the volume reaches a certain magnitude, all requests within a unit of time are placed into a priority queue ordered by commodity quantity from high to low; a request merging module: the requests that enter the queue within the unit of time are merged into a single record, and an asynchronous thread is started to update the inventory database according to the merged record; the remaining requests that did not enter the priority queue within the unit of time are carried into the merge of the next time window by means of CAS; an order writing module: when the inventory is modified, an order journal table is written at the same time; if the deduction succeeds but the request response times out or fails, the order system sends a message to the inventory service to roll back the deduction that just succeeded; the SQL statements that modify the inventory use insert-first-then-update; a cache update module: the cache of the whole order system is refreshed by listening to the database binlog, and the system refreshes its cache in step whenever it observes a binlog update.
Compared with the prior art, the embodiment of the invention has at least the following advantages or beneficial effects:
1. different strategies can be applied for different request volumes per unit of time, giving strong dynamism;
2. the database is not operated frequently; it can be touched as little as once per unit of time;
3. the highest-priority (largest) orders are served first, maximizing revenue;
4. commodities are bound to shard routes and processed as a merged whole, reducing the number of inter-service interactions.
The invention relates to the field of Internet e-commerce high concurrency. During flash-sale releases of consumption-coupon inventory and during large promotions, a cache design scheme can be tailored to the differing requests and order volumes of each time period. The invention supports differentiated analysis of multiple data sources, dynamically adjusts configuration rules without modifying the stream-processing function code, supports per-data-source rule configuration, is applicable to batch processing tasks and updates, and improves timeliness and low-loss transmission.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a design method for time-window processing of a flash-sale high-concurrency scenario in embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a design system for time-window processing of a flash-sale high-concurrency scenario in embodiment 2 of the present invention;
FIG. 3 is a schematic diagram of an electronic device according to embodiment 3 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings; evidently, the described embodiments are some, but not all, embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations.
Example 1
Referring to fig. 1, fig. 1 is a schematic diagram of a design method for time-window processing of a flash-sale high-concurrency scenario according to an embodiment of the present disclosure. The method comprises the following steps: (1) when a request enters the server, it is routed by commodity id, ensuring that all requests for the same commodity id land on the same machine; (2) the server starts a counter to count the request volume; when the volume reaches a certain magnitude, all requests within a unit of time are placed into a priority queue ordered by commodity quantity from high to low; (3) the requests that enter the queue within the unit of time are merged into a single record, and an asynchronous thread is started to update the inventory database according to the merged record; (4) the remaining requests that did not enter the priority queue within the unit of time are carried into the merge of the next time window by means of CAS; (5) when the inventory is modified, an order journal table is written at the same time; if the deduction succeeds but the request response times out or fails, the order system sends a message to the inventory service to roll back the deduction that just succeeded; (6) the SQL statements that modify the inventory use insert-first-then-update; (7) the cache of the whole order system is refreshed by listening to the database binlog, and the system refreshes its cache in step whenever it observes a binlog update.
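Steps (2) and (3) — collecting a window's requests into a priority queue ordered by commodity quantity, highest first, and folding them into one merged deduction record — can be sketched with a minimal in-memory illustration; all class and method names here are assumptions, not from the patent:

```java
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class WindowMerger {
    // Queue ordered by requested quantity, highest first, as in step (2).
    static PriorityQueue<Integer> toQueue(List<Integer> windowQtys) {
        PriorityQueue<Integer> q = new PriorityQueue<>((a, b) -> b - a);
        q.addAll(windowQtys);
        return q;
    }

    // Fold the window's requests into a single merged record, as in step (3);
    // the real system would hand this one record to an asynchronous thread.
    static int mergedRecord(PriorityQueue<Integer> q) {
        int total = 0;
        for (int qty : q) total += qty;
        return total;
    }

    public static void main(String[] args) {
        PriorityQueue<Integer> q = toQueue(Arrays.asList(3, 7, 1));
        System.out.println(q.peek());         // largest request drains first: 7
        System.out.println(mergedRecord(q));  // one record totalling 11
    }
}
```

The point of the merge is that however many requests arrive in the window, the inventory database sees at most one write per window.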
In some embodiments of the invention, step (2) includes: the higher the request volume, the shorter the time window is set.
In some embodiments of the invention, step (3) includes: the merged record covers both inventory deductions and additions; before modification it is checked whether the total inventory is smaller than the merged record, and if so, deductions are applied in order.
In some embodiments of the invention, step (4) is implemented with Spring's @Retryable annotation.
In some embodiments of the invention, step (7) includes incrementing a version number on every synchronous update.
In some embodiments of the invention, step (7) includes preheating the cache before the system starts, so that all subsequent query requests hit only the cache and never operate the database directly.
In step (2), when the request volume is small, no time-window merging is needed; when it reaches a certain magnitude, all requests within a unit of time are placed into a priority queue ordered by commodity quantity from high to low. Depending on the actual situation, different strategies can be configured in the configuration center and applied dynamically according to the request volume, and the duration of the corresponding time window can likewise be configured per request level. In step (3), to prevent problems such as over-selling, a check is performed before modification: if the total inventory is smaller than the merged record, deductions are applied in order; since records in the queue are arranged by highest priority first, the largest deductions are served first and revenue is maximized. In step (5), when the inventory is modified, an order journal table is written at the same time, ensuring consistency of upstream and downstream data; if the deduction succeeds but the request response times out or fails, the order system sends a message to the inventory service to roll back the deduction that just succeeded, which addresses the distributed-transaction problem. In step (6), the SQL statements that modify the inventory use insert-first-then-update, which shortens the time a database row lock is held and raises concurrency; the inventory journal table is additionally split hot/cold to relieve database pressure.
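The over-sell guard described above for step (3) — deduct the merged total in one write if stock covers it, otherwise walk the priority-ordered quantities and deduct those that still fit — might look like the following (hypothetical names; the real check runs against the inventory database, not an in-memory integer):

```java
import java.util.Arrays;
import java.util.List;

public class StockGuard {
    // If stock covers the merged total, one write suffices; otherwise walk
    // the quantities, highest priority first, deducting those that still fit.
    static int deduct(int stock, List<Integer> qtysHighToLow) {
        int merged = 0;
        for (int q : qtysHighToLow) merged += q;
        if (stock >= merged) return stock - merged;   // single merged write
        for (int q : qtysHighToLow) {
            if (stock >= q) stock -= q;               // sequential deduction
        }
        return stock;
    }

    public static void main(String[] args) {
        System.out.println(deduct(20, Arrays.asList(7, 5, 3))); // covers all: 5 left
        System.out.println(deduct(10, Arrays.asList(7, 5, 3))); // 7 and 3 fit: 0 left
    }
}
```

Because the list arrives sorted largest first, the fallback path realizes the patent's "deduct the maximum value" behavior: the biggest orders are satisfied before smaller ones.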
In step (7), the cache of the whole system is refreshed by listening to the database binlog: whenever data in the database is updated, the system observes the binlog update and synchronizes its cache. At the same time, the version number is incremented on every synchronous update, which avoids erroneous updates caused by network delay.
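The version guard of step (7) can be sketched as a compare-and-set on a version counter, so that a binlog event delayed by the network can never overwrite fresher cache data; the class and field names are illustrative assumptions:

```java
import java.util.concurrent.atomic.AtomicLong;

public class VersionedCache {
    private volatile int stock;
    private final AtomicLong version = new AtomicLong();

    // Apply a binlog event only if its version is newer than the cache's,
    // so a stale, delayed event is discarded instead of applied.
    boolean applyBinlog(long eventVersion, int newStock) {
        long current = version.get();
        if (eventVersion <= current) return false;     // stale event, ignore
        if (version.compareAndSet(current, eventVersion)) {
            stock = newStock;
            return true;
        }
        return false;                                  // lost the race, caller may retry
    }

    int stock() { return stock; }

    public static void main(String[] args) {
        VersionedCache c = new VersionedCache();
        System.out.println(c.applyBinlog(2, 90)); // newer version: applied
        System.out.println(c.applyBinlog(1, 95)); // older version: rejected
        System.out.println(c.stock());            // cache keeps the fresher value, 90
    }
}
```

The same rule appears again below for the redis cache: refresh only when the binlog queue's version exceeds the current cache version.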
The invention relates to the field of Internet e-commerce high concurrency. During flash-sale releases of consumption-coupon inventory and during large promotions, a cache design scheme can be tailored to the differing requests and order volumes of each time period. The invention supports differentiated analysis of multiple data sources, dynamically adjusts configuration rules without modifying the stream-processing function code, supports per-data-source rule configuration, is applicable to batch processing tasks and updates, and improves timeliness and low-loss transmission.
In use, the commodity id carried by each request is routed via Nginx or F5, each commodity id being bound to a particular shard. Each server opens a time window: every 200 ms it merges the requests of that 200 ms interval, orders them by commodity-quantity priority, and accumulates them. It then checks whether the current inventory is at least the merged value; if so, an asynchronous thread is started to modify the database, and if not, the database is modified sequentially. Requests falling outside the 200 ms window spin to enter the next merge. All inventory queries go through the redis cache, which is refreshed by listening to the mysql binlog; a version field is added to the record queue and incremented by 1 each time, and before refreshing, the cache is updated only if the binlog queue's version number exceeds the current cache version number.
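The spin by which left-over requests enter the next window's merge (step (4)) can be illustrated with a CAS retry loop on an `AtomicReference`; the patent realizes the retry with Spring's @Retryable annotation, so the hand-written loop and names below are only a stand-in:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class NextWindow {
    // Batch of requests accumulating for the *next* time window.
    static final AtomicReference<List<Integer>> nextBatch =
            new AtomicReference<>(Collections.emptyList());

    // A request that missed the closing window spins with compareAndSet
    // until it is safely appended to the next window's batch.
    static void enqueue(int qty) {
        while (true) {                                   // CAS retry loop
            List<Integer> cur = nextBatch.get();
            List<Integer> upd = new ArrayList<>(cur);
            upd.add(qty);
            if (nextBatch.compareAndSet(cur, upd)) return;
        }
    }

    public static void main(String[] args) {
        enqueue(4);
        enqueue(2);
        System.out.println(nextBatch.get());
    }
}
```

The copy-then-CAS pattern keeps the batch lock-free: concurrent enqueuers that lose the race simply re-read and retry, which is exactly the "spin attempt" behavior described above.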
Example 2
Referring to fig. 2, fig. 2 is a schematic structural block diagram of a design system for time-window processing of a flash-sale high-concurrency scenario according to an embodiment of the present application. The system comprises a server request module: when a request enters the server, it is routed by commodity id, ensuring that all requests for the same commodity id land on the same machine; a request ordering module: the server starts a counter to count the request volume; when the volume reaches a certain magnitude, all requests within a unit of time are placed into a priority queue ordered by commodity quantity from high to low; a request merging module: the requests that enter the queue within the unit of time are merged into a single record, and an asynchronous thread is started to update the inventory database according to the merged record; the remaining requests that did not enter the priority queue within the unit of time are carried into the merge of the next time window by means of CAS; an order writing module: when the inventory is modified, an order journal table is written at the same time; if the deduction succeeds but the request response times out or fails, the order system sends a message to the inventory service to roll back the deduction that just succeeded; the SQL statements that modify the inventory use insert-first-then-update; a cache update module: the cache of the whole order system is refreshed by listening to the database binlog, and the system refreshes its cache in step whenever it observes a binlog update.
The principle of this embodiment is the same as that of embodiment 1 and is not repeated here. It will be appreciated that the configuration shown in fig. 2 is merely illustrative; a design system for time-window processing of a flash-sale high-concurrency scenario may include more or fewer components than shown in fig. 2, or a different configuration. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
Example 3
Referring to fig. 3, fig. 3 is a schematic structural block diagram of an electronic device according to an embodiment of the present disclosure. The electronic device comprises a memory 101, a processor 102 and a communication interface 103, wherein the memory 101, the processor 102 and the communication interface 103 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules, such as program instructions/modules corresponding to the design system for processing the second-killing-high concurrency scenario provided in embodiment 2 of the present application, and the processor 102 executes the software programs and modules stored in the memory 101 to thereby execute various functional applications and data processing. The communication interface 103 may be used for communicating signaling or data with other node devices.
The Memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 102 may be an integrated circuit chip having signal processing capabilities. The Processor 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
To sum up, the design method and system for time-window processing of flash-sale high-concurrency scenarios provided by the embodiments of the application have at least the following advantages or beneficial effects: 1. different strategies can be applied for different request volumes per unit of time, giving strong dynamism; 2. the database is not operated frequently and can be touched as little as once per unit of time; 3. the highest-priority (largest) orders are served first, maximizing revenue; 4. commodities are bound to shard routes and processed as a merged whole, reducing the number of inter-service interactions.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (7)

1. A design method for time-window processing of a flash-sale high-concurrency scenario, characterized by comprising the following steps:
(1) when a request enters the server, it is routed by commodity id, ensuring that all requests for the same commodity id land on the same machine;
(2) the server starts a counter to count the request volume; when the volume reaches a certain magnitude, all requests within a unit of time are placed into a priority queue ordered by commodity quantity from high to low;
(3) the requests that enter the queue within the unit of time are merged into a single record, and an asynchronous thread is started to update the inventory database according to the merged record;
(4) the remaining requests that did not enter the priority queue within the unit of time are carried into the merge of the next time window by means of CAS;
(5) when the inventory is modified, an order journal table is written at the same time; if the deduction succeeds but the request response times out or fails, the order system sends a message to the inventory service to roll back the deduction that just succeeded;
(6) the SQL statements that modify the inventory use insert-first-then-update;
(7) the cache of the whole order system is refreshed by listening to the database binlog, and the system refreshes its cache in step whenever it observes a binlog update.
2. The design method for time-window processing of a flash-sale high-concurrency scenario according to claim 1, wherein step (2) includes: the higher the request volume, the shorter the time window is set.
3. The design method for time-window processing of a flash-sale high-concurrency scenario according to claim 1, wherein step (3) includes: the merged record covers both inventory deductions and additions; before modification it is checked whether the total inventory is smaller than the merged record, and if so, deductions are applied in order.
4. The design method for time-window processing of a flash-sale high-concurrency scenario according to claim 1, wherein step (4) is implemented with Spring's @Retryable annotation.
5. The design method for time-window processing of a flash-sale high-concurrency scenario according to claim 1, wherein step (7) includes incrementing a version number on every synchronous update.
6. The design method for time-window processing of a flash-sale high-concurrency scenario according to claim 1, wherein step (7) includes preheating the cache before the system starts, so that all subsequent query requests hit only the cache and never operate the database directly.
7. A design system for time-window processing of a flash-sale high-concurrency scenario, characterized by comprising:
a server request module: when a request enters the server, it is routed by commodity id, ensuring that all requests for the same commodity id land on the same machine;
a request ordering module: the server starts a counter to count the request volume; when the volume reaches a certain magnitude, all requests within a unit of time are placed into a priority queue ordered by commodity quantity from high to low;
a request merging module: the requests that enter the queue within the unit of time are merged into a single record, and an asynchronous thread is started to update the inventory database according to the merged record; the remaining requests that did not enter the priority queue within the unit of time are carried into the merge of the next time window by means of CAS;
an order writing module: when the inventory is modified, an order journal table is written at the same time; if the deduction succeeds but the request response times out or fails, the order system sends a message to the inventory service to roll back the deduction that just succeeded; the SQL statements that modify the inventory use insert-first-then-update;
a cache update module: the cache of the whole order system is refreshed by listening to the database binlog, and the system refreshes its cache in step whenever it observes a binlog update.
CN202211501741.9A 2022-11-28 2022-11-28 Design method and system for time-window processing of flash-sale high-concurrency scenarios Pending CN115760301A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211501741.9A CN115760301A (en) 2022-11-28 2022-11-28 Design method and system for time-window processing of flash-sale high-concurrency scenarios

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211501741.9A CN115760301A (en) 2022-11-28 2022-11-28 Design method and system for time-window processing of flash-sale high-concurrency scenarios

Publications (1)

Publication Number Publication Date
CN115760301A true CN115760301A (en) 2023-03-07

Family

ID=85339385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211501741.9A Pending CN115760301A (en) 2022-11-28 2022-11-28 Design method and system for time-window processing of flash-sale high-concurrency scenarios

Country Status (1)

Country Link
CN (1) CN115760301A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116567281A (en) * 2023-04-19 2023-08-08 上海百秋智尚网络服务有限公司 Live interaction method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
EP2590087A1 (en) Database log parallelization
EP4030287A1 (en) Transaction processing method and apparatus, computer device, and storage medium
CN108319656A (en) Realize the method, apparatus and calculate node and system that gray scale is issued
US11144536B2 (en) Systems and methods for real-time analytics detection for a transaction utilizing synchronously updated statistical aggregation data
CN111522631A (en) Distributed transaction processing method, device, server and medium
US11561939B2 (en) Iterative data processing
CN115760301A (en) Design method and system for processing second-killing-high concurrent scene of time window
CN111865687B (en) Service data updating method and device
CN112416972A (en) Real-time data stream processing method, device, equipment and readable storage medium
CN113342834A (en) Method for solving historical data change in big data system
CN115408391A (en) Database table changing method, device, equipment and storage medium
US8924343B2 (en) Method and system for using confidence factors in forming a system
CN109299175B (en) Dynamic expansion method, system, device and storage medium for database
US11281654B2 (en) Customized roll back strategy for databases in mixed workload environments
CN110852752B (en) Method, device, equipment and storage medium for processing recharge order withdrawal exception
CN116339626A (en) Data processing method, device, computer equipment and storage medium
CN112035503B (en) Transaction data updating method and device
CN109447777B (en) Financial data processing method and device, electronic equipment and readable medium
CN115168384A (en) Data consistency processing method, device, server and storage medium
CN113076297A (en) Data processing method, device and storage medium
CN111881149A (en) Large-scale concurrency solution method and system based on Java
CN111782346A (en) Distributed transaction global ID generation method and device based on same-library mode
CN112235332A (en) Read-write switching method and device for cluster
CN117971804A (en) Data migration method, apparatus, device, storage medium, and program product
CN116578572A (en) Service date changing method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination