CN117041315A - Second killing service method, device and equipment - Google Patents

Second killing service method, device and equipment

Info

Publication number
CN117041315A
CN117041315A (application number CN202310988033.0A)
Authority
CN
China
Prior art keywords
client requests
interface
channel
processing
identification processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310988033.0A
Other languages
Chinese (zh)
Inventor
郭亚丽 (Guo Yali)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202310988033.0A priority Critical patent/CN117041315A/en
Publication of CN117041315A publication Critical patent/CN117041315A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/535: Tracking the activity of the user
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/566: Grouping or aggregating service requests, e.g. for unified processing
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/11: Identifying congestion
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/06: Buying, selling or leasing transactions
    • G06Q 30/0601: Electronic shopping [e-shopping]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer And Data Communications (AREA)

Abstract

The embodiments of the present disclosure provide a second killing service method, applicable to the field of computer technology and the field of big data technology. The method comprises the following steps: obtaining a plurality of client requests through a plurality of channels; performing system dimension flow limit identification processing on the plurality of client requests; performing standardization processing on the client requests after the system dimension flow limit identification processing; performing interface dimension flow limit identification processing on the standardized client requests; and, in response to the client requests after the interface dimension flow limit identification processing, performing data processing on those requests and saving the results. The present disclosure also provides a second killing service apparatus, a computing device, a medium, and a program product.

Description

Second killing service method, device and equipment
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to the field of big data technology, and in particular, to a second killing service method, apparatus, device, medium, and program product.
Background
With the popularization of electronic commerce, shopping on internet platforms has become commonplace. Second killing (a "seckill", i.e. a time-limited flash sale held online for promotion) allows a large number of people to purchase goods online at the same moment, and bank financial products can also be sold in a second killing mode. A current second killing system can provide the core second killing services to customers, which generally comprise: maintaining the second killing commodity information before the second killing starts, deducting inventory after the second killing starts, and querying the current second killing inventory. For a second killing system that only provides the core second killing services, these three services are packaged into three http interfaces, so that different channel platforms can realize the second killing service by calling the http interfaces.
However, in the current second killing service method, the same interface is called by different channels while sharing a single flow limit value, and this one-size-fits-all flow limiting strategy often causes resource contention among the channels that share the limit. For example, when channel A experiences a burst of transactions while the transaction volume of channel B is low, the two channels still share the flow limit value of the same interface, so the transactions of channel B's clients are heavily throttled and the client experience of channel B suffers. This second killing service method therefore allocates server resources improperly, reduces the resource utilization rate of the computer, and wastes computer resources.
Disclosure of Invention
In view of the foregoing, the present disclosure provides a second killing service method, apparatus, device, medium, and program product.
According to a first aspect of the present disclosure, there is provided a second killing service method, the method comprising:
acquiring a plurality of client requests through a plurality of channels;
carrying out current limiting identification processing of system dimension on the plurality of client requests;
carrying out standardization processing on a plurality of client requests subjected to system dimension current limiting identification processing;
carrying out interface dimension flow limiting identification processing on the plurality of standardized client requests; and
Responding to the plurality of client requests after interface dimension flow limit identification processing, and carrying out data processing and storage on the plurality of client requests after interface dimension flow limit identification processing.
According to an embodiment of the present disclosure, the performing a flow restriction identification process of a system dimension on a plurality of client requests includes:
counting a plurality of client requests in unit time to obtain the number of the first client requests;
presetting the maximum transaction number processed in unit time of a second killing service system as a first current limiting value; and
and if the first client request number is greater than the first limiting value, data limiting is performed.
According to an embodiment of the present disclosure, the data throttling includes:
acquiring acquisition time of the plurality of client requests;
sorting the plurality of client requests according to the acquisition time of the plurality of client requests; and
and sequentially carrying out batch processing on the plurality of ordered client requests, wherein the number of each batch of client requests is smaller than the first current limiting value.
According to an embodiment of the present disclosure, the standardized processing of the plurality of client requests after the system dimension current limit identification processing includes:
channel information and interface service information of a plurality of client requests after system dimension current limiting identification processing are acquired;
Channel marking is carried out on a plurality of client requests after the system dimension current limiting identification processing; and
and marking interface service for the plurality of client requests after the system dimension flow limit identification processing.
According to an embodiment of the present disclosure, the flow limit identification processing for performing interface dimension on the plurality of standardized client requests includes:
classifying the plurality of client requests subjected to the system dimension current limiting identification processing through the channel mark and the interface service mark to obtain client requests of different interface services of a plurality of different channels;
counting the number of customer requests in unit time of each interface service in each channel to obtain the second number of customer requests of each interface service in each channel;
presetting the maximum transaction number processed in unit time of each interface service in each channel as a second current limiting value of each interface service in each channel; and
and if the number of the second client requests is larger than the second limiting value corresponding to the second client requests, data limiting is carried out.
According to an embodiment of the present disclosure, the obtaining a plurality of client requests through a plurality of channels includes:
acquiring a uniform resource locator of each channel;
Verifying whether the uniform resource locator is legal;
and if the uniform resource locator is legal, acquiring a plurality of client requests through a plurality of channels.
According to a second aspect of the present disclosure there is provided a second killing service device comprising:
the first acquisition module is used for acquiring a plurality of client requests through a plurality of channels;
the first flow limiting module is used for carrying out flow limiting identification processing of system dimensions on the plurality of client requests;
the standardized module is used for carrying out standardized processing on a plurality of client requests after the system dimension current limiting identification processing;
the second flow limiting module is used for carrying out flow limiting identification processing of interface dimensions on the plurality of standardized client requests; and
the data processing module is used for responding to the plurality of client requests after the interface dimension current limiting identification processing, processing the data of the plurality of client requests after the interface dimension current limiting identification processing and storing the data.
According to a third aspect of the present disclosure there is provided an electronic device comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the above-described second killing service method.
According to a fourth aspect of the present disclosure there is provided a computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the above-described second kill service method.
There is also provided according to a fifth aspect of the present disclosure a computer program product comprising a computer program which, when executed by a processor, implements the above-described second killing service method.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be more apparent from the following description of embodiments of the disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario diagram of a second killing service method and apparatus according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a second killing service method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of an acquisition request in a second-kill-service method according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of system dimension current limiting in a second-killing-service method according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a flow chart of data throttling in a second killing service method according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flowchart of a normalization process in a second-killing-service method according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow chart of interface dimension throttling in a second-killing-service method according to an embodiment of the disclosure;
FIG. 8 schematically illustrates a flow diagram of a second killing service architecture according to an embodiment of the present disclosure;
fig. 9 schematically illustrates a block diagram of a second killing service device according to an embodiment of the present disclosure;
fig. 10 schematically illustrates a block diagram of a first acquisition module in a second killing-service device according to an embodiment of the present disclosure;
fig. 11 schematically illustrates a block diagram of a first current limiting module in a second killing service device according to an embodiment of the present disclosure;
fig. 12 schematically illustrates a block diagram of a third current limiting module in a second killing service device according to an embodiment of the present disclosure;
FIG. 13 schematically illustrates a block diagram of a standardized module in a second killing service device according to an embodiment of the present disclosure;
fig. 14 schematically illustrates a block diagram of a second current limiting module in a second killing service device according to an embodiment of the present disclosure;
fig. 15 schematically illustrates a block diagram of an electronic device adapted to implement a second killing service method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where an expression like "at least one of A, B and C" is used, it should generally be interpreted according to the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where an expression like "at least one of A, B or C" is used, it should likewise be interpreted according to the ordinary understanding of one skilled in the art (e.g., "a system having at least one of A, B or C" would include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable control apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart.
First, technical terms appearing herein are explained as follows:
second killing system: a system is widely applied to internet transactions, and an open system is used for all clients to conduct transactions simultaneously at the same time and at the same entrance. The method is characterized by multiple request clients, simple transaction flow, large instantaneous concurrency, and high requirements on the aspects of system performance, efficiency, safety and the like.
Flow limiting: refers to limiting the flow entering the system in unit time in order to ensure the stability of the system. Current limiting strategies commonly used in second killing systems are current limiting for the interface dimension and the system dimension.
Uniform resource locator (Universal Resource Locator abbreviated as: URL): is the address of a standard resource on the internet, also called the web page address. Each file on the internet has a unique URL that contains information indicating the location of the file and how the browser should handle it.
tps (Transaction Per Second: number of transactions transferred per second): is the number of transactions per second processed by the server, tps includes a message in and a message out, plus a user database access. tps is the unit of measurement of the software test results. A transaction refers to a process in which a client sends a request to a server and then the server reacts, and the client starts timing when sending the request, and ends timing after receiving a response from the server, so as to calculate the time of use and the number of completed transactions.
The embodiment of the disclosure provides a second killing service method, which comprises the following steps: multiple customer requests are obtained through multiple channels. And carrying out flow limiting identification processing of the system dimension on the plurality of client requests. And carrying out standardization processing on a plurality of client requests after the system dimension current limit identification processing. And carrying out flow limiting identification processing of interface dimensions on the plurality of standardized client requests. Responding to the plurality of client requests after interface dimension flow limit identification processing, and carrying out data processing and storage on the plurality of client requests after interface dimension flow limit identification processing.
The method processes client requests by combining flow limit identification processing in the system dimension with flow limit identification processing in the interface dimension. This improves the stability of the second killing service system, raises its resource utilization rate, rationalizes the allocation of computer resources, and avoids the losses caused by a system crash due to computer overload.
Fig. 1 schematically illustrates an application scenario diagram of a second killing service method and apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is merely an example of a scenario in which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, the application scenario 100 according to this embodiment may include a plurality of application terminals and application servers. For example, the plurality of application terminals includes an application terminal 101, an application terminal 102, an application terminal 103, and the like. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the application server 105 via the network 104 using the application terminal devices 101, 102, 103 to receive or send messages or the like. Various application programs such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the application terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that, the second killing service method provided by the embodiments of the present disclosure may be generally performed by the server 105. Accordingly, the second killing service provided by embodiments of the present disclosure may be generally provided in the server 105. The second killing service method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the second killing service apparatus provided by the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The second killing service method of the disclosed embodiment will be described in detail below with reference to the scenario described in fig. 1 through fig. 2 to 7. It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principles of the present disclosure, and the embodiments of the present disclosure are not limited in any way in this respect. Rather, embodiments of the present disclosure may be applied to any scenario where applicable.
Fig. 2 schematically illustrates a flow chart of a second killing service method according to an embodiment of the present disclosure.
As shown in fig. 2, the method 200 includes steps S201 to S205.
Step S201, a plurality of client requests are acquired through a plurality of channels.
The second killing service method obtains customer requests through a plurality of channels. For example, the client request may be received from an access channel such as a PC, applet, APP, etc. of the respective application platform. The second killing system then obtains these customer requests from various channels.
Fig. 3 schematically illustrates a flow chart of an acquisition request in a second-kill-service method according to an embodiment of the present disclosure.
As shown in fig. 3, the method 300 includes steps S301 to S303.
Step S301, a uniform resource locator of each channel is obtained.
For example, uniform resource locators of access channels such as PCs, applets, APP and the like of each application platform are respectively obtained.
Step S302, verifying whether the uniform resource locator is legal.
For example, the server verifies the uniform resource locators of the access channels (the PC, applet, APP and the like of each application platform) obtained in step S301. Specifically: a legal channel white list is preset, which contains the uniform resource locators of the legal access channels. The acquired uniform resource locators of the access channels are compared with the uniform resource locators in the legal channel white list, and a match verifies that a locator is legal.
Step S303, if the uniform resource locator is legal, a plurality of client requests are acquired through a plurality of channels.
For example, if the obtained uniform resource locator of an access channel matches a uniform resource locator in the legal channel white list, the client requests of that access channel are accepted; if it does not match, the client requests of that access channel are rejected.
Verifying the legality of the channel data improves the security of the second killing system, so that it can run more stably and safely.
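By way of illustration only (the patent does not prescribe a concrete implementation), a minimal Java sketch of such a whitelist check might look as follows; the class name, method name and example channel URLs are assumptions.

```java
import java.util.Set;

public class ChannelWhitelist {

    // Preset whitelist of legal channel URLs (example values, assumed for illustration).
    private static final Set<String> LEGAL_CHANNEL_URLS = Set.of(
            "https://pc.example-platform.com/seckill",
            "https://applet.example-platform.com/seckill",
            "https://app.example-platform.com/seckill");

    /** Returns true if the channel URL exactly matches an entry in the legal channel white list. */
    public static boolean isLegal(String channelUrl) {
        return channelUrl != null && LEGAL_CHANNEL_URLS.contains(channelUrl);
    }

    public static void main(String[] args) {
        String incoming = "https://pc.example-platform.com/seckill";
        if (isLegal(incoming)) {
            System.out.println("Accept client requests from channel: " + incoming);
        } else {
            System.out.println("Reject client requests from channel: " + incoming);
        }
    }
}
```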
Referring back to fig. 2, in step S202, the flow restriction identification process of the system dimension is performed on the plurality of client requests.
Fig. 4 schematically illustrates a flow chart of system dimension current limiting in a second-killing-service method according to an embodiment of the disclosure.
As shown in fig. 4, the method 400 includes steps S401 to S403.
In step S401, a number of client requests in a unit time is counted to obtain a first number of client requests.
For client access requests, the number of client requests per second can be counted directly as the tps value. To obtain a more representative tps value for client access requests, it can instead be calculated from the number of client requests per day. For example, suppose there are 1,000,000 client requests per day and 80% of them occur within the 8 working hours. The 800,000 client requests occurring within those 8 working hours can be regarded as the valid reference client requests, and the tps value of the client access requests is calculated from them: counting 800,000 client requests over 8 hours (i.e., 28,800 seconds) gives a tps value of about 27.78.
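Expressed as a quick calculation using the example figures above (a minimal sketch that simply reproduces the arithmetic of the example):

```java
public class TpsEstimate {
    public static void main(String[] args) {
        long dailyRequests = 1_000_000;     // total client requests per day in the example
        double workingHourShare = 0.80;     // 80% of requests fall within the 8 working hours
        long workingSeconds = 8 * 3600;     // 28,800 seconds
        double tps = dailyRequests * workingHourShare / workingSeconds;
        System.out.printf("estimated tps of client access requests: %.2f%n", tps);  // ~27.78
    }
}
```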
Step S402, presetting the maximum transaction number processed in a unit time of a second killing service system as a first current limiting value.
For example, the maximum number of transactions processed per unit time of the second killing service system is preset as the current limit threshold.
Step S403, if the number of the first client requests is greater than the first limiting value, performing data limiting.
For example, if the tps value of the obtained client access request exceeds the current limit threshold, the client access request data is subjected to current limit and then standardized. If the tps value of the obtained client access request does not exceed the current limit threshold, the client access request data is directly subjected to standardization processing.
By presetting the first current limiting value, system overload caused by data congestion is avoided, and the technical effect of ensuring safe and stable operation of the system is achieved.
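As a rough illustration of steps S401 to S403 (not taken from the patent; a single-node, fixed one-second window counter is assumed, and the first current limiting value is an example constant), the counting and threshold check could be sketched as:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

/** Fixed-window counter for the system dimension flow limit (first limit value). */
public class SystemDimensionLimiter {

    private final int firstLimitValue;  // assumed: max transactions the system handles per second
    private final AtomicInteger windowCount = new AtomicInteger();
    private final AtomicLong windowStartMillis = new AtomicLong(System.currentTimeMillis());

    public SystemDimensionLimiter(int firstLimitValue) {
        this.firstLimitValue = firstLimitValue;
    }

    /** Counts the request; returns true if it may pass directly, false if it must be throttled. */
    public boolean tryPass() {
        long now = System.currentTimeMillis();
        long start = windowStartMillis.get();
        if (now - start >= 1000 && windowStartMillis.compareAndSet(start, now)) {
            windowCount.set(0);  // start a new one-second window
        }
        return windowCount.incrementAndGet() <= firstLimitValue;
    }

    public static void main(String[] args) {
        SystemDimensionLimiter limiter = new SystemDimensionLimiter(28); // ~27.78 tps, rounded up
        for (int i = 1; i <= 30; i++) {
            System.out.println("request " + i + (limiter.tryPass() ? " passes" : " is throttled"));
        }
    }
}
```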
Fig. 5 schematically illustrates a flow chart of data throttling in a second killing service method according to an embodiment of the present disclosure.
As shown in fig. 5, the method 500 includes steps S501 to S503.
Step S501, acquiring acquisition times of the plurality of client requests.
Step S502, sorting the plurality of client requests according to the acquisition time of the plurality of client requests.
Step S503, performing batch processing on the ordered plurality of client requests sequentially, where the number of client requests in each batch is smaller than the first current limit value.
For example, the ordered plurality of client requests are batched sequentially, the interval between successive batches entering the system is greater than or equal to 1 second, and the number of client requests in each batch is less than the current limit threshold.
Data flow limiting prevents the server from being overloaded with data, so that it can keep running stably.
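A minimal sketch of the data flow limiting described in steps S501 to S503, under the assumption of an in-memory list of requests; the ClientRequest record, field names and the one-second pause between batches are illustrative choices rather than the patent's prescribed implementation.

```java
import java.time.Instant;
import java.util.Comparator;
import java.util.List;

public class BatchThrottler {

    /** A client request together with the time at which it was acquired (illustrative record). */
    public record ClientRequest(String id, Instant acquiredAt) { }

    /** Sorts requests by acquisition time and releases them in batches below the first limit value. */
    public static void throttle(List<ClientRequest> requests, int firstLimitValue) throws InterruptedException {
        List<ClientRequest> ordered = requests.stream()
                .sorted(Comparator.comparing(ClientRequest::acquiredAt))
                .toList();
        int batchSize = Math.max(1, firstLimitValue - 1);   // each batch stays below the first limit value
        for (int from = 0; from < ordered.size(); from += batchSize) {
            List<ClientRequest> batch = ordered.subList(from, Math.min(from + batchSize, ordered.size()));
            System.out.println("processing batch of " + batch.size() + " requests: " + batch);
            if (from + batchSize < ordered.size()) {
                Thread.sleep(1000);  // interval of at least 1 second between batches
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Instant now = Instant.now();
        throttle(List.of(
                new ClientRequest("r2", now.plusMillis(20)),
                new ClientRequest("r1", now),
                new ClientRequest("r3", now.plusMillis(40))), 3);
    }
}
```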
Referring back to fig. 2, in step S203, the plurality of client requests after the system dimension flow limit identification process are subjected to the normalization process.
Fig. 6 schematically shows a flowchart of a normalization process in the second killing service method according to an embodiment of the present disclosure.
As shown in fig. 6, the method 600 includes steps S601 to S603.
Step S601, obtaining channel information and interface service information of a plurality of client requests after the system dimension flow limit identification process.
For example, the channel information includes: PC information, applet information, APP information, etc. of the application platform. The interface service information includes: maintaining second killing commodity interface information, second killing interface information, inquiring second killing inventory interface information and the like.
Step S602, channel marking is carried out on a plurality of client requests after the system dimension current limit identification processing.
Channel tags can be marked on the plurality of client requests after the system dimension flow limit identification processing, such as PC information tags, applet information tags, APP information tags, etc. of the application platform.
Step S603, marking the interface service for the plurality of client requests after the system dimension flow limit identification process.
Interface service labels are marked on the plurality of client requests after the system dimension flow limit identification processing, for example a maintain-second-killing-commodity interface tag, a second killing (deduct inventory) interface tag, a query-second-killing-inventory interface tag, and the like.
Standardizing the data in this way facilitates the subsequent classification and identification of the standardized data by the different channels and the different interfaces.
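The standardization step can be pictured as attaching a channel tag and an interface service tag to each request, as in the following sketch; the field names and tag values are assumptions for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

public class RequestNormalizer {

    /** Adds channel and interface service tags to a raw request (represented here as a simple map). */
    public static Map<String, String> normalize(Map<String, String> rawRequest,
                                                String channel, String interfaceService) {
        Map<String, String> normalized = new HashMap<>(rawRequest);
        normalized.put("channelTag", channel);              // e.g. "PC", "APPLET", "APP"
        normalized.put("interfaceTag", interfaceService);   // e.g. "MAINTAIN_SECKILL_ITEM", "DEDUCT_STOCK", "QUERY_STOCK"
        return normalized;
    }

    public static void main(String[] args) {
        Map<String, String> raw = Map.of("clientId", "C001", "itemId", "FUND-42");
        System.out.println(normalize(raw, "PC", "QUERY_STOCK"));
    }
}
```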
Referring back to fig. 2, in step S204, the flow limit identification process of the interface dimension is performed on the plurality of client requests after the normalization process.
Fig. 7 schematically illustrates a flow chart of interface dimension throttling in a second-killing service method according to an embodiment of the disclosure.
As shown in fig. 7, the method 700 includes steps S701 to S704.
Step S701, classifying the plurality of client requests after the system dimension current limit identification processing by the channel mark and the interface service mark, to obtain client requests of different interface services of different channels.
The client requests are classified by the channel mark and the interface service mark to obtain the client requests for the different interface services of the different channels. For example, the following groups may be obtained: client requests accessing the maintain-second-killing-commodity interface through the PC channel of the platform, client requests accessing the query-second-killing-inventory interface through the PC channel of the platform, client requests accessing the maintain-second-killing-commodity interface through the applet channel, client requests accessing the query-second-killing-inventory interface through the applet channel, client requests accessing the maintain-second-killing-commodity interface through the APP channel, client requests accessing the query-second-killing-inventory interface through the APP channel, and so on.
Step S702, counting the number of customer requests in unit time of each interface service in each channel to obtain the second number of customer requests of each interface service in each channel.
For example, the following are counted: the number of client requests accessing the maintain-second-killing-commodity interface through the PC channel of the platform, the number of client requests accessing the query-second-killing-inventory interface through the PC channel of the platform, the number of client requests accessing the maintain-second-killing-commodity interface through the applet channel, the number of client requests accessing the query-second-killing-inventory interface through the applet channel, the number of client requests accessing the maintain-second-killing-commodity interface through the APP channel, the number of client requests accessing the query-second-killing-inventory interface through the APP channel, and so on.
Step S703, presetting the maximum number of transactions processed per unit time of each interface service in each channel as the second current limit value of each interface service in each channel.
The maximum number of transactions processed per unit time by each interface service in each channel is preset as the current limit threshold of that interface service in that channel. For example, the following are preset: a current limit threshold for accessing the maintain-second-killing-commodity interface through the PC channel of the platform, a current limit threshold for accessing the query-second-killing-inventory interface through the PC channel of the platform, a current limit threshold for accessing the maintain-second-killing-commodity interface through the applet channel, a current limit threshold for accessing the query-second-killing-inventory interface through the applet channel, a current limit threshold for accessing the maintain-second-killing-commodity interface through the APP channel, a current limit threshold for accessing the query-second-killing-inventory interface through the APP channel, and so on.
Step S704, if the number of the second client requests is greater than the second limiting value corresponding to the second client request, data limiting is performed.
Taking the client requests accessing the maintain-second-killing-commodity interface through the PC channel of the platform as an example: if the number of such client requests is greater than the preset current limit threshold for accessing that interface through the PC channel of the platform, data flow limiting is performed. The data flow limiting includes: acquiring the acquisition times of the client requests accessing the maintain-second-killing-commodity interface through the PC channel of the platform; sorting those client requests according to their acquisition times; and batching the ordered client requests sequentially. If the number of such client requests does not exceed the preset current limit threshold, data processing is performed directly.
Classifying the client requests avoids resource contention when different channels call different interfaces. In addition, because the current limit values of different interfaces of different channels do not affect one another, setting the second current limit value in the interface dimension avoids the resource waste caused by differing channel popularity and achieves a rational utilization of resources.
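A simplified, single-node sketch of the interface dimension check in steps S701 to S704: requests are keyed by (channel tag, interface tag), counted over a one-second window, and compared against a second current limiting value configured per key. The key format, tag names and limit values are assumed for illustration, not taken from the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/** Fixed-window, per-(channel, interface) flow limiter (second limit values). */
public class InterfaceDimensionLimiter {

    private final Map<String, Integer> secondLimitValues;                    // key -> max requests per second
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();
    private long windowStartMillis = System.currentTimeMillis();

    public InterfaceDimensionLimiter(Map<String, Integer> secondLimitValues) {
        this.secondLimitValues = secondLimitValues;
    }

    /** Returns true if the request for this channel/interface pair may pass, false if it is throttled. */
    public synchronized boolean tryPass(String channelTag, String interfaceTag) {
        long now = System.currentTimeMillis();
        if (now - windowStartMillis >= 1000) {                                // start a new one-second window
            counts.clear();
            windowStartMillis = now;
        }
        String key = channelTag + ":" + interfaceTag;
        int limit = secondLimitValues.getOrDefault(key, Integer.MAX_VALUE);
        int current = counts.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
        return current <= limit;
    }

    public static void main(String[] args) {
        InterfaceDimensionLimiter limiter = new InterfaceDimensionLimiter(Map.of(
                "PC:DEDUCT_STOCK", 10,       // limit for the deduct-stock interface via the PC channel
                "APP:DEDUCT_STOCK", 50,      // limit for the deduct-stock interface via the APP channel
                "PC:QUERY_STOCK", 100));     // limit for the query-stock interface via the PC channel
        for (int i = 1; i <= 12; i++) {
            System.out.println("PC deduct-stock request " + i
                    + (limiter.tryPass("PC", "DEDUCT_STOCK") ? " passes" : " is throttled"));
        }
    }
}
```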
Referring back to fig. 2, in step S205, in response to the plurality of client requests after the interface dimension flow restriction identification process, the plurality of client requests after the interface dimension flow restriction identification process are subjected to data processing and saved.
After the data processing and the storage are carried out on the client request, the processing result can be returned to the channel corresponding to the processing result. For example, after data processing and saving are performed on a client request for access to a PC channel, the processing result is returned to the PC channel.
Fig. 8 schematically illustrates a flow chart of a second killing service architecture according to an embodiment of the present disclosure.
As shown in fig. 8, the architecture 800 includes: channel access layer, current limiting processing layer, transaction processing layer and data storage layer.
The channel access layer receives the client requests of multiple channels, uniformly processes the requests into a standard format required by the second killing system, adds channel marks, and forwards the requests to the current limiting processing layer of the second killing system. The channels of the channel access layer can comprise access channels of PCs, applets, APP and the like of each application platform, and after receiving a request of a client, the request is forwarded to the second killing system.
The first access control of the current limiting processing layer is typically implemented by an interceptor or filter of the backend system. It calculates the tps value of the access requests and judges whether a preset current limit threshold is exceeded; if so, flow limiting is applied, otherwise the request proceeds to the second access control of the current limiting processing layer.
The second access control of the current limiting processing layer can set different flow limit values for different interfaces of different channels. For example, the flow limit value of interface a of channel A is limit(A, a), the flow limit value of interface b of channel A is limit(A, b), the flow limit value of interface a of channel B is limit(B, a), the flow limit value of interface b of channel B is limit(B, b), and so on.
After the second access control of the current limiting processing layer receives a request, it extracts the channel mark and the interface name from the request and calculates whether the request exceeds the configured flow limit value; if not, the request passes (i.e., it is forwarded to the transaction processing layer), and if so, the request is rejected. Because different flow limit values are set for different channels, the flow limit values of different channels do not affect one another even for requests to the same interface. For example, if the request concurrency tps(A, a) of interface a of channel A exceeds the flow limit value limit(A, a) of interface a of channel A, the interface a requests of channel A are throttled, while the interface a requests of channel B are unaffected.
Requests that pass the current limiting processing layer enter the transaction processing layer. Each interface has its own independent processing logic, so after receiving a front-end request, the transaction processing layer performs the business processing according to the input parameters and then returns the result to the channel access layer.
The transaction processing layer processes the corresponding data and stores the data into a data storage layer, wherein the data storage layer generally comprises a database, a distributed cache and the like and is used for storing business data.
This architecture provides a targeted flow limiting method based on the channel-plus-interface dimension, improving system stability and system resource utilization.
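To show how the four layers compose, the following is a highly simplified, illustrative sketch in which each layer is reduced to a stub method; all names, tags and formats are assumptions rather than the patent's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

/** Highly simplified composition of the four layers of the second killing service architecture. */
public class SeckillPipeline {

    // Channel access layer: normalize the raw request into the standard format and add the channel mark.
    static Map<String, String> channelAccessLayer(Map<String, String> raw, String channel) {
        Map<String, String> req = new HashMap<>(raw);
        req.put("channelTag", channel);
        return req;
    }

    // Current limiting processing layer, first access control: system dimension check (stubbed).
    static boolean systemDimensionPass(Map<String, String> req) { return true; }

    // Current limiting processing layer, second access control: per-(channel, interface) check (stubbed).
    static boolean interfaceDimensionPass(Map<String, String> req) { return true; }

    // Transaction processing layer: business processing according to the interface and input parameters.
    static String transactionLayer(Map<String, String> req) {
        return "processed " + req.get("interfaceTag") + " for channel " + req.get("channelTag");
    }

    // Data storage layer: persist the business data (stubbed as a print instead of a database/cache write).
    static void dataStorageLayer(String result) { System.out.println("saved: " + result); }

    public static void main(String[] args) {
        Map<String, String> raw = Map.of("interfaceTag", "DEDUCT_STOCK", "clientId", "C001");
        Map<String, String> req = channelAccessLayer(raw, "APP");
        if (systemDimensionPass(req) && interfaceDimensionPass(req)) {
            dataStorageLayer(transactionLayer(req));
        } else {
            System.out.println("request throttled");
        }
    }
}
```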
Fig. 9 schematically illustrates a block diagram of a second killing service device according to an embodiment of the present disclosure.
As shown in fig. 9, the second killing service device 900 includes: a first acquisition module 901, a first current limit module 902, a normalization module 903, a second current limit module 904, and a data processing module 905.
The first obtaining module 901 is configured to obtain a plurality of client requests through a plurality of channels. In an embodiment, the first obtaining module 901 may be used to perform step S201 described above.
Fig. 10 schematically illustrates a block diagram of a first acquisition module in a second killing-service device according to an embodiment of the present disclosure.
As shown in fig. 10, the first acquisition module 901 includes: a second acquisition module 1001, a verification module 1002, and a third acquisition module 1003.
A second obtaining module 1001, configured to obtain a uniform resource locator of each channel. In an embodiment, the second obtaining module 1001 may be used to perform the step S301 described above, which is not described herein.
A verification module 1002, configured to verify whether the uniform resource locator is legal. In an embodiment, the verification module 1002 may be configured to perform the step S302 described above, which is not described herein.
And a third obtaining module 1003, configured to obtain a plurality of client requests through a plurality of channels if the uniform resource locator is legal. In an embodiment, the third obtaining module 1003 may be used to perform the step S303 described above, which is not described herein.
Referring back to fig. 9, a first flow restriction module 902 is configured to perform flow restriction identification processing in a system dimension on the plurality of client requests. In one embodiment, the first current limit module 902 may be used to perform step S202 described above.
Fig. 11 schematically illustrates a block diagram of a first current limiting module in a second killing service device according to an embodiment of the present disclosure.
As shown in fig. 11, the first current limiting module 902 includes: a counting module 1101, a first preset module 1102 and a third current limiting module 1103.
The counting module 1101 is configured to count a plurality of client requests in a unit time, to obtain a first number of client requests. In an embodiment, the counting module 1101 may be used to perform the step S401 described above, which is not described herein.
The first preset module 1102 is configured to preset a maximum number of transactions processed in a unit time of the second killing service system as a first current limit value. In an embodiment, the first preset module 1102 may be used to perform the step S402 described above, which is not described herein.
A third current limiting module 1103, configured to perform data current limiting if the number of the first client requests is greater than the first current limiting value. In an embodiment, the third flow restriction module 1103 may be configured to perform step S403 described above.
Fig. 12 schematically illustrates a block diagram of a third current limiting module in a second killing service device according to an embodiment of the present disclosure.
As shown in fig. 12, the third flow restriction module 1103 includes: a fourth acquisition module 1201, a sequencing module 1202 and a batch processing module 1203.
A fourth obtaining module 1201 is configured to obtain the obtaining times of the plurality of client requests. In an embodiment, the fourth obtaining module 1201 may be used to perform the step S501 described above, which is not described herein.
A ranking module 1202, configured to rank the plurality of client requests according to the acquisition times of the plurality of client requests. In an embodiment, the sorting module 1202 may be configured to perform the step S502 described above, which is not described herein.
The batch processing module 1203 is configured to sequentially batch-process the plurality of ordered client requests, where the number of client requests in each batch is smaller than the first limiting value. In an embodiment, the batch processing module 1203 may be configured to perform the step S503 described above, which is not described herein.
Referring back to fig. 9, the normalization module 903 is configured to perform normalization processing on the plurality of client requests after the system dimension flow restriction identification processing. In one embodiment, the normalization module 903 may be configured to perform step S203 described above.
Fig. 13 schematically shows a block diagram of the normalization module in a second killing service device according to an embodiment of the present disclosure.
As shown in fig. 13, the normalization module 903 includes: a fifth acquisition module 1301, a first marking module 1302, and a second marking module 1303.
A fifth obtaining module 1301, configured to obtain channel information and interface service information of the plurality of client requests after the system dimension flow restriction identification processing. In an embodiment, the fifth obtaining module 1301 may be configured to perform the step S601 described above, which is not described herein.
A first marking module 1302, configured to mark channels for a plurality of client requests after the system dimension flow limit identification process. In an embodiment, the first marking module 1302 may be used to perform the step S602 described above, which is not described herein.
And the second marking module 1303 is used for marking interface service of the plurality of client requests after the system dimension flow limit identification processing. In an embodiment, the second marking module 1303 may be used to perform the step S603 described above, which is not described herein.
Referring back to fig. 9, the second flow limiting module 904 is configured to perform flow limiting identification processing of interface dimensions on the plurality of client requests after normalization processing. In an embodiment, the second current limiting module 904 may be configured to perform step S204 described above.
Fig. 14 schematically illustrates a block diagram of a second current limiting module in a second killing service device according to an embodiment of the present disclosure.
As shown in fig. 14, the second current limiting module 904 includes: classification module 1401, statistics module 1402, second preset module 1403, and fourth current limit module 1404.
And the classification module 1401 is configured to classify the plurality of client requests after the system dimension current limit identification processing according to the channel mark and the interface service mark, so as to obtain client requests of different interface services of a plurality of different channels. In an embodiment, the classification module 1401 may be used to perform the step S701 described above, which is not described herein.
A statistics module 1402, configured to count the number of client requests in unit time of each interface service in each channel, and obtain the second number of client requests of each interface service in each channel. In an embodiment, the statistics module 1402 may be used to perform the step S702 described above, which is not described herein.
A second preset module 1403, configured to preset a maximum number of transactions processed per unit time of each interface service in each channel as a second current limit value of each interface service in each channel. In an embodiment, the second preset module 1403 may be used to perform the step S703 described above, which is not described herein.
A fourth current limit module 1404 for limiting data if the second number of client requests is greater than the corresponding second current limit value. In an embodiment, the fourth current limiting module 1404 may be used to perform the step S704 described above, which is not described herein.
Referring back to fig. 9, the data processing module 905 is configured to process and store data of the plurality of client requests after the interface dimension flow limit identification process in response to the plurality of client requests after the interface dimension flow limit identification process. In an embodiment, the data processing module 905 may be used to perform the step S205 described above, which is not described herein.
According to an embodiment of the present disclosure, any of the first acquisition module 901, the first current limit module 902, the normalization module 903, the second current limit module 904, and the data processing module 905 may be combined in one module to be implemented, or any of the modules may be split into a plurality of modules. Alternatively, at least some of the functionality of one or more of the modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the first acquisition module 901, the first current limit module 902, the normalization module 903, the second current limit module 904, and the data processing module 905 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging the circuitry, or in any one of or in any suitable combination of three of software, hardware, and firmware. Alternatively, at least one of the first acquisition module 901, the first current limit module 902, the normalization module 903, the second current limit module 904, and the data processing module 905 may be at least partially implemented as a computer program module which, when executed, may perform the corresponding functions.
Fig. 15 schematically illustrates a block diagram of an electronic device adapted to implement a second killing service method according to an embodiment of the present disclosure.
As shown in fig. 15, an electronic device 1500 according to an embodiment of the present disclosure includes a processor 1501, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1502 or a program loaded from a storage section 1508 into a Random Access Memory (RAM) 1503. The processor 1501 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 1501 may also include on-board memory for caching purposes. The processor 1501 may include a single processing unit or multiple processing units for performing different actions of the method flows according to embodiments of the present disclosure.
In the RAM1503, various programs and data necessary for the operation of the electronic device 1500 are stored. The processor 1501, the ROM1502, and the RAM1503 are connected to each other through a bus 1504. The processor 1501 performs various operations of the method flow according to an embodiment of the present disclosure by executing programs in the ROM1502 and/or the RAM 1503. Note that the program may be stored in one or more memories other than the ROM1502 and the RAM 1503. The processor 1501 may also perform various operations of the method flow according to an embodiment of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the disclosure, the electronic device 1500 may also include an input/output (I/O) interface 1505, the input/output (I/O) interface 1505 also being connected to the bus 1504. Electronic device 1500 may also include one or more of the following components connected to I/O interface 1505: an input section 1506 including a keyboard, mouse, and the like; an output portion 1507 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage section 1508 including a hard disk and the like; and a communication section 1509 including a network interface card such as a LAN card, a modem, or the like. The communication section 1509 performs communication processing via a network such as the internet. A drive 1510 is also connected to the I/O interface 1505 as needed. Removable media 1511, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 1510 as needed so that a computer program read therefrom is mounted into the storage section 1508 as needed.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM1502 and/or RAM1503 described above and/or one or more memories other than ROM1502 and RAM 1503.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. When the computer program product is run in a computer system, the program code causes the computer system to implement the second killing service method provided by the embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 1501. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed over a network medium in the form of a signal, downloaded and installed via the communication section 1509, and/or installed from the removable medium 1511. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to wireless or wired media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, the program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or in assembly/machine languages. Such programming languages include, but are not limited to, Java, C++, Python, the "C" language, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be provided in a variety of combinations and/or sub-combinations, even if such combinations or sub-combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be combined and/or sub-combined in various ways without departing from the spirit and teachings of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these embodiments are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures of the respective embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and their equivalents. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and all such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (10)

1. A second killing service method, the method comprising:
acquiring a plurality of client requests through a plurality of channels;
carrying out flow limiting identification processing of a system dimension on the plurality of client requests;
carrying out standardization processing on the plurality of client requests after the system dimension flow limiting identification processing;
carrying out flow limiting identification processing of an interface dimension on the plurality of standardized client requests; and
in response to the plurality of client requests after the interface dimension flow limiting identification processing, carrying out data processing and storage on the plurality of client requests after the interface dimension flow limiting identification processing.
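By way of illustration only, the following Java sketch chains the four stages of claim 1 in sequence; all identifiers (SeckillPipeline, ClientRequest, the stage methods) are hypothetical, and the stage bodies are placeholders rather than the claimed implementation.

import java.util.List;

// Illustrative pipeline for claim 1; stage bodies are placeholders only.
public class SeckillPipeline {

    public record ClientRequest(String channel, String path, long acquiredAt) {}

    public void handle(List<ClientRequest> requestsFromAllChannels) {
        List<ClientRequest> afterSystemLimit = systemDimensionLimit(requestsFromAllChannels);    // claims 2-3
        List<ClientRequest> standardized = standardize(afterSystemLimit);                        // claim 4
        List<ClientRequest> afterInterfaceLimit = interfaceDimensionLimit(standardized);         // claim 5
        afterInterfaceLimit.forEach(this::processAndStore);                                      // data processing and storage
    }

    // Placeholder: drop or defer requests once the system-wide unit-time limit is exceeded.
    private List<ClientRequest> systemDimensionLimit(List<ClientRequest> in) { return in; }

    // Placeholder: attach channel marks and interface service marks.
    private List<ClientRequest> standardize(List<ClientRequest> in) { return in; }

    // Placeholder: drop or defer requests once a per-channel, per-interface limit is exceeded.
    private List<ClientRequest> interfaceDimensionLimit(List<ClientRequest> in) { return in; }

    // Placeholder: business processing and persistence of the accepted requests.
    private void processAndStore(ClientRequest request) { System.out.println("stored " + request); }
}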
2. The method of claim 1, wherein the carrying out of the flow limiting identification processing of the system dimension on the plurality of client requests comprises:
counting the plurality of client requests per unit time to obtain a first client request number;
presetting the maximum number of transactions processed by a second killing service system per unit time as a first flow limiting value; and
performing data throttling if the first client request number is greater than the first flow limiting value.
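A minimal sketch of the system dimension check in claim 2, assuming a fixed unit-time window; the class name, the window strategy, and all parameters are illustrative assumptions and are not specified in the disclosure.

// Counts client requests in the current unit-time window and compares the count
// with a preset first flow limiting value (illustrative sketch only).
public class SystemDimensionLimiter {
    private final long firstFlowLimitingValue; // preset maximum transactions per unit time
    private final long windowMillis;           // length of the unit time window
    private long windowStart = System.currentTimeMillis();
    private long firstClientRequestNumber;     // requests counted in the current window

    public SystemDimensionLimiter(long firstFlowLimitingValue, long windowMillis) {
        this.firstFlowLimitingValue = firstFlowLimitingValue;
        this.windowMillis = windowMillis;
    }

    // Counts one incoming request; returns true when the first client request number
    // exceeds the first flow limiting value, i.e. when data throttling is required.
    public synchronized boolean countAndCheck() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) { // a new unit-time window begins
            windowStart = now;
            firstClientRequestNumber = 0;
        }
        firstClientRequestNumber++;
        return firstClientRequestNumber > firstFlowLimitingValue;
    }
}

A sliding window or a token bucket would satisfy the same counting step; the fixed window is used here only because it is the shortest to express.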
3. The method of claim 2, wherein the data throttling comprises:
acquiring the acquisition time of each of the plurality of client requests;
sorting the plurality of client requests according to their acquisition times; and
sequentially carrying out batch processing on the plurality of sorted client requests, wherein the number of client requests in each batch is smaller than the first flow limiting value.
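The data throttling of claim 3 can be pictured as sorting by acquisition time and cutting the sorted list into batches smaller than the first flow limiting value; the following generic helper is an illustrative sketch, not the claimed implementation.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sorts requests by acquisition time and splits them into batches whose size stays
// below the first flow limiting value (illustrative sketch only).
public final class BatchThrottler {

    public static <T> List<List<T>> toBatches(List<T> requests,
                                              Comparator<T> byAcquisitionTime,
                                              int firstFlowLimitingValue) {
        List<T> sorted = new ArrayList<>(requests);
        sorted.sort(byAcquisitionTime);                          // earlier requests are handled first
        int batchSize = Math.max(1, firstFlowLimitingValue - 1); // strictly below the limit, floor of 1
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < sorted.size(); i += batchSize) {
            batches.add(new ArrayList<>(sorted.subList(i, Math.min(i + batchSize, sorted.size()))));
        }
        return batches;
    }

    private BatchThrottler() {}
}

The batches would then be released one after another, so that no single processing round exceeds the preset system-level capacity.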
4. The method of claim 1, wherein the carrying out of the standardization processing on the plurality of client requests after the system dimension flow limiting identification processing comprises:
acquiring channel information and interface service information of the plurality of client requests after the system dimension flow limiting identification processing;
carrying out channel marking on the plurality of client requests after the system dimension flow limiting identification processing; and
carrying out interface service marking on the plurality of client requests after the system dimension flow limiting identification processing.
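The standardization of claim 4 amounts to attaching a channel mark and an interface service mark to every request; the mapping tables and identifiers below are hypothetical examples introduced only to make the sketch self-contained.

import java.util.Map;

// Attaches channel marks and interface service marks to raw requests (illustrative sketch only).
public class RequestStandardizer {

    public record RawRequest(String channelHeader, String requestedPath, String body) {}
    public record MarkedRequest(String channelMark, String interfaceServiceMark, String body) {}

    // Hypothetical mapping tables; real channel and interface identifiers are not given in the disclosure.
    private static final Map<String, String> CHANNEL_MARKS =
            Map.of("mobile-app", "CH01", "web-portal", "CH02");
    private static final Map<String, String> INTERFACE_MARKS =
            Map.of("/seckill/order", "IF_ORDER", "/seckill/query", "IF_QUERY");

    public MarkedRequest standardize(RawRequest raw) {
        String channelMark = CHANNEL_MARKS.getOrDefault(raw.channelHeader(), "CH_UNKNOWN");
        String interfaceMark = INTERFACE_MARKS.getOrDefault(raw.requestedPath(), "IF_UNKNOWN");
        return new MarkedRequest(channelMark, interfaceMark, raw.body());
    }
}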
5. The method of claim 4, wherein the carrying out of the flow limiting identification processing of the interface dimension on the plurality of standardized client requests comprises:
classifying the plurality of client requests after the system dimension flow limiting identification processing according to the channel marks and the interface service marks, to obtain client requests of different interface services of a plurality of different channels;
counting the number of client requests per unit time for each interface service in each channel to obtain a second client request number for each interface service in each channel;
presetting the maximum number of transactions processed per unit time by each interface service in each channel as a second flow limiting value of each interface service in each channel; and
performing data throttling if the second client request number is greater than the corresponding second flow limiting value.
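The interface dimension check of claim 5 keeps one counter and one second flow limiting value per channel-and-interface pair; the sketch below is an illustrative assumption of how such counters could be keyed, not the claimed implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Per-channel, per-interface request counters compared against preset second
// flow limiting values (illustrative sketch only).
public class InterfaceDimensionLimiter {
    private final Map<String, Long> secondFlowLimitingValues;           // key: channelMark + ":" + interfaceMark
    private final Map<String, AtomicLong> countsInWindow = new ConcurrentHashMap<>();

    public InterfaceDimensionLimiter(Map<String, Long> secondFlowLimitingValues) {
        this.secondFlowLimitingValues = secondFlowLimitingValues;
    }

    // Counts one request for the given pair; returns true when the second client request number
    // exceeds the corresponding second flow limiting value, i.e. when data throttling is required.
    public boolean countAndCheck(String channelMark, String interfaceServiceMark) {
        String key = channelMark + ":" + interfaceServiceMark;
        long secondClientRequestNumber =
                countsInWindow.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
        long limit = secondFlowLimitingValues.getOrDefault(key, Long.MAX_VALUE);
        return secondClientRequestNumber > limit;
    }

    // Called at the start of every unit-time window.
    public void resetWindow() {
        countsInWindow.clear();
    }
}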
6. The method of any one of claims 1-5, wherein the acquiring a plurality of client requests through a plurality of channels comprises:
acquiring a uniform resource locator of each channel;
verifying whether the uniform resource locator is valid; and
acquiring the plurality of client requests through the plurality of channels if the uniform resource locator is valid.
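Claim 6 gates request acquisition on a uniform resource locator check; one simple reading of a valid locator is one that parses correctly and whose host is on a configured allow list, as in the hypothetical sketch below (the scheme and host shown are assumptions, not values from the disclosure).

import java.net.URI;
import java.net.URISyntaxException;
import java.util.Set;

// Treats a channel URL as valid only when it parses and its host is allow-listed
// (illustrative sketch only).
public final class ChannelUrlValidator {

    private static final Set<String> ALLOWED_HOSTS = Set.of("seckill.example-bank.com"); // hypothetical host

    public static boolean isValid(String channelUrl) {
        try {
            URI uri = new URI(channelUrl);
            return "https".equals(uri.getScheme())
                    && uri.getHost() != null
                    && ALLOWED_HOSTS.contains(uri.getHost());
        } catch (URISyntaxException e) {
            return false; // a malformed locator is rejected and no requests are acquired from it
        }
    }

    private ChannelUrlValidator() {}
}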
7. A second killing service device, comprising:
a first acquisition module for acquiring a plurality of client requests through a plurality of channels;
a first flow limiting module for carrying out flow limiting identification processing of a system dimension on the plurality of client requests;
a standardization module for carrying out standardization processing on the plurality of client requests after the system dimension flow limiting identification processing;
a second flow limiting module for carrying out flow limiting identification processing of an interface dimension on the plurality of standardized client requests; and
a data processing module for, in response to the plurality of client requests after the interface dimension flow limiting identification processing, carrying out data processing and storage on the plurality of client requests after the interface dimension flow limiting identification processing.
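The device of claim 7 mirrors the method of claim 1 as five cooperating modules; the interfaces and wiring below are a hypothetical composition sketch, not the claimed device.

import java.util.List;

// Illustrative wiring of the five modules recited in claim 7; all names are hypothetical.
public class SecondKillingServiceDevice {

    public interface AcquisitionModule     { List<String> acquire(); }
    public interface FlowLimitingModule    { List<String> limit(List<String> requests); }
    public interface StandardizationModule { List<String> standardize(List<String> requests); }
    public interface DataProcessingModule  { void processAndStore(List<String> requests); }

    private final AcquisitionModule firstAcquisitionModule;
    private final FlowLimitingModule firstFlowLimitingModule;   // system dimension
    private final StandardizationModule standardizationModule;
    private final FlowLimitingModule secondFlowLimitingModule;  // interface dimension
    private final DataProcessingModule dataProcessingModule;

    public SecondKillingServiceDevice(AcquisitionModule acquisition,
                                      FlowLimitingModule systemLimiter,
                                      StandardizationModule standardizer,
                                      FlowLimitingModule interfaceLimiter,
                                      DataProcessingModule processor) {
        this.firstAcquisitionModule = acquisition;
        this.firstFlowLimitingModule = systemLimiter;
        this.standardizationModule = standardizer;
        this.secondFlowLimitingModule = interfaceLimiter;
        this.dataProcessingModule = processor;
    }

    // Runs the modules in the same order as the method steps of claim 1.
    public void run() {
        List<String> requests = firstAcquisitionModule.acquire();
        requests = firstFlowLimitingModule.limit(requests);
        requests = standardizationModule.standardize(requests);
        requests = secondFlowLimitingModule.limit(requests);
        dataProcessingModule.processAndStore(requests);
    }
}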
8. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 6.
CN202310988033.0A 2023-08-07 2023-08-07 Second killing service method, device and equipment Pending CN117041315A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310988033.0A CN117041315A (en) 2023-08-07 2023-08-07 Second killing service method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310988033.0A CN117041315A (en) 2023-08-07 2023-08-07 Second killing service method, device and equipment

Publications (1)

Publication Number Publication Date
CN117041315A true CN117041315A (en) 2023-11-10

Family

ID=88636627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310988033.0A Pending CN117041315A (en) 2023-08-07 2023-08-07 Second killing service method, device and equipment

Country Status (1)

Country Link
CN (1) CN117041315A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination