CN117492990A - Method, device, equipment and storage medium for processing high concurrency data request - Google Patents

Publication number: CN117492990A
Application number: CN202311442992.9A
Authority: CN (China)
Prior art keywords: inventory, partition, data request, information, batch
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 李宗宝, 李冠群
Current assignee: Beijing Dajia Internet Information Technology Co Ltd
Original assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202311442992.9A

Abstract

The embodiments of the disclosure provide a method, an apparatus, an electronic device, and a storage medium for processing high-concurrency data requests. The method includes the following steps: obtaining a data request sent by a requester; hashing the data request to a first partition among at least one partition; requesting a partition operation lock from the first partition for the data request; querying partition inventory information of the first partition; in response to there being no unallocated inventory in the partition inventory, obtaining a batch of inventory from an inventory pool; in response to there being unallocated inventory in the partition inventory, allocating the unallocated inventory to the data request; and feeding back a data request result to the requester. Because inventory in the inventory pool is handed out to the partitions gradually, in small batches, the inventory pool can dynamically balance inventory allocation among the partitions according to each partition's allocation speed, achieving fast and balanced inventory allocation. At the same time, the complexity of overall inventory allocation is reduced and the stability of overall operation is enhanced.

Description

Method, device, equipment and storage medium for processing high concurrency data request
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, an electronic device, and a storage medium for processing a high concurrency data request.
Background
There are many flash-sale ("seckill") scenarios in the current internet environment, which cause a large number of users' access demands to concentrate on a certain hotspot Key (for example, a certain promotional item in a flash sale) within a short period of time. During peak time, a large number of users intensively visit the pages, so the QPS (Queries Per Second) of the pages surges; a large number of concurrent requests fall on nodes in a Redis cluster, the concurrency demand on a hot Key easily exceeds the concurrency capability of the Redis node, the Redis cluster cannot process the requests and rejects them, and the front-end pages cannot obtain data, causing abnormal page access. This is known as the hot Key problem.
To solve the above hot Key problem, conventional techniques often rate-limit the hot Key or disperse the access pressure of a single shard through sharding. However, rate limiting is constrained by the access traffic a single shard supports, so a large number of users cannot access normally, giving those users a poor experience. Sharding can relieve the access pressure on a single shard, but the inventory consumption rates of different shards differ, so different users see inconsistent progress. In addition, neither method offers performance scalability or extensibility, and neither can scale in real time with the access demand on the Key. In summary, existing solutions to the hot Key problem have shortcomings.
Disclosure of Invention
The embodiment of the disclosure provides a method, a device, electronic equipment and a storage medium for processing high-concurrency data requests.
According to a first aspect of embodiments of the present disclosure, there is provided a method of processing a high-concurrency data request, comprising: obtaining a data request sent by a requester, the data request including at least a target data identifier; hashing the data request to a first partition of at least one partition, the at least one partition corresponding to the target data identifier; requesting a partition operation lock from the first partition for the data request; in response to the data request obtaining the partition operation lock, querying partition inventory information of the first partition corresponding to the target data identifier; in response to there being no unallocated inventory in the partition inventory of the first partition, the first partition obtaining a batch of inventory from an inventory pool corresponding to the target data identifier, the batch comprising a preset number of inventory units, and performing inventory allocation for the data request again; in response to there being unallocated inventory in the partition inventory of the first partition, allocating the unallocated inventory to the data request; and feeding back a data request result to the requester according to the unallocated inventory allocated to the data request.
In some exemplary embodiments of the present disclosure, the partition inventory information includes at least: partition base inventory, partition batch inventory, and partition used inventory; the partition base inventory represents the base ranking of the partition's current batch inventory in the inventory pool; the partition batch inventory represents the inventory quantity of the partition's current batch; and the partition used inventory represents the number of used inventory units in the partition's current batch.
In some exemplary embodiments of the disclosure, the first partition obtaining a batch of inventory from the inventory pool corresponding to the target data identifier, in response to there being no unallocated inventory in the partition inventory of the first partition, includes: determining that there is no unallocated inventory in the partition inventory of the first partition when the partition batch inventory of the first partition is equal to the partition used inventory; the first partition sending an inventory acquisition request to the inventory pool corresponding to the target data identifier; in response to the inventory acquisition request, the inventory pool sending first batch inventory information to the first partition, the first batch inventory information including first batch base inventory information and a first batch inventory quantity, wherein the first batch base inventory information is determined according to the quantity of allocated inventory in the inventory pool and the first batch inventory quantity is equal to the preset number; and the first partition updating its partition inventory information according to the first batch inventory information.
In some exemplary embodiments of the disclosure, the allocating the unallocated inventory to the data request in response to there being unallocated inventory in the partition inventory of the first partition includes: determining that unallocated inventory exists in the partition inventory of the first partition when the partition batch inventory of the first partition is greater than the partition used inventory; and incrementing the partition used inventory of the first partition by one according to the data request, obtaining the allocated partition used inventory.
In some exemplary embodiments of the disclosure, the feeding back a data request result to the requester according to the unallocated inventory allocated to the data request includes: determining the data request result according to the unallocated inventory allocated to the data request, the data request result being the sum of the partition base inventory of the first partition and the allocated partition used inventory; and feeding back the data request result to the requester.
In some exemplary embodiments of the present disclosure, quantity information of the data requests awaiting allocation in each partition is obtained periodically; and a newly added partition is determined according to the quantity information of the data requests awaiting allocation.
In some exemplary embodiments of the present disclosure, in response to the first partition having no partition inventory information, the first partition is a newly added partition; the first partition obtains the batch of inventory from the inventory pool; and inventory allocation is performed again for the data request.
In some exemplary embodiments of the present disclosure, the concurrent data request quantity information corresponding to the target data identifier is obtained periodically; and the preset number is determined according to the concurrent data request quantity information.
In some exemplary embodiments of the present disclosure, the partition inventory information of each partition is obtained periodically; and the used inventory quantity in the inventory pool is determined according to the partition inventory information of each partition.
According to a second aspect of embodiments of the present disclosure, there is provided an apparatus for processing a high-concurrency data request, comprising: a data request acquisition module configured to obtain a data request sent by a requester, the data request including at least a target data identifier; a data request hashing module configured to hash the data request to a first partition of at least one partition, the at least one partition corresponding to the target data identifier; a partition operation lock acquisition module configured to request a partition operation lock from the first partition for the data request; a partition inventory information query module configured to, in response to the data request obtaining the partition operation lock, query partition inventory information of the first partition corresponding to the target data identifier; an inventory acquisition module configured to, in response to there being no unallocated inventory in the partition inventory of the first partition, obtain a batch of inventory from the inventory pool corresponding to the target data identifier, the batch comprising a preset number of inventory units, and perform inventory allocation for the data request again; an inventory allocation module configured to, in response to there being unallocated inventory in the partition inventory of the first partition, allocate the unallocated inventory to the data request; and a result feedback module configured to feed back a data request result to the requester according to the unallocated inventory allocated to the data request.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the executable instructions to implement any of the methods of processing highly concurrent data requests.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform any of the above methods of processing high-concurrency data requests.
According to the method for processing high-concurrency data requests, inventory in the inventory pool is distributed to the partitions gradually, in small batches, so that the inventory pool can dynamically balance inventory allocation among the partitions according to each partition's allocation speed; without exceeding the processing capability of any partition, unbalanced allocation among partitions is avoided and fast, balanced inventory allocation is achieved. At the same time, the complexity of overall inventory allocation is reduced and the stability of overall operation is enhanced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 shows a schematic diagram of an exemplary system architecture to which the methods of embodiments of the present disclosure may be applied.
FIG. 2 is a flowchart illustrating a method of processing a high concurrency data request, according to an example embodiment.
FIG. 3 is a flow diagram illustrating a method of handling high concurrency data requests, according to one example.
FIG. 4 is a flow chart illustrating an inventory acquisition method according to one example.
FIG. 5 is a flow chart illustrating a method of inventory allocation according to one example.
FIG. 6 is a flow chart illustrating a method for updating an inventory count for an inventory pool, according to an example.
FIG. 7 is a block diagram illustrating an apparatus for handling high concurrency data requests, according to an example embodiment.
Fig. 8 is a schematic diagram illustrating a structure of an electronic device suitable for use in implementing an exemplary embodiment of the present disclosure, according to an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced with one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The drawings are merely schematic illustrations of the present disclosure, in which like reference numerals denote like or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in at least one hardware module or integrated circuit or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and not necessarily all of the elements or steps are included or performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In the present specification, the terms "a," "an," "the," "said" and "at least one" are used to indicate the presence of at least one element/component/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc., in addition to the listed elements/components/etc.; the terms "first," "second," and "third," etc. are used merely as labels, and do not limit the number of their objects.
FIG. 1 shows a schematic diagram of an exemplary system architecture to which the methods of embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture may include a server 101, a network 102, a terminal device 103, a terminal device 104, and a terminal device 105. Network 102 is the medium used to provide communication links between terminal device 103, terminal device 104, or terminal device 105 and server 101. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
The server 101 may be a server providing various services, for example a background management server supporting the applications operated by users on the terminal device 103, the terminal device 104, or the terminal device 105. The background management server may analyze and otherwise process received data such as requests, and feed the processing results back to the terminal device 103, the terminal device 104, or the terminal device 105.
The terminal device 103, the terminal device 104, and the terminal device 105 may be, but are not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a wearable smart device, a virtual reality device, an augmented reality device, and the like.
It should be understood that the numbers of the terminal device 103, the terminal device 104, the terminal device 105, the network 102, and the server 101 in fig. 1 are merely illustrative; the server 101 may be a single physical server, a server cluster composed of multiple servers, or a cloud server, and there may be any number of terminal devices, networks, and servers according to actual needs.
The steps of the method in the exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings and examples.
FIG. 2 is a flowchart illustrating a method of processing a high concurrency data request, according to an example embodiment. FIG. 3 is a flow diagram illustrating a method of handling high concurrency data requests, according to one example. The methods provided by the embodiments of fig. 2 and 3 may be performed by any electronic device, such as the terminal device in fig. 1, or the server in fig. 1, or a combination of the terminal device and the server in fig. 1, but the disclosure is not limited thereto.
In step S210, a data request sent by a requester is obtained; the data request includes at least a target data identifier.
In the embodiment of the disclosure, under a flash-sale scenario, a large number of requesters send data requests for a certain target hot Key, and the front end obtains these data requests. Each data request includes at least a target data identifier, which indicates the target hot Key to which the data request corresponds. A hot Key refers to a hotspot Key in the Redis cluster, for example, a certain promotional item in a flash sale.
In an exemplary embodiment, different requesters may send their data requests through different platforms or channels, so the data request formats of different requesters may differ. Therefore, a message generation module can be provided at the front end to process the different data requests and generate corresponding data request messages for subsequent processing.
In step S220, the data request is hashed to a first partition of the at least one partition; the at least one partition corresponds to the target data identification.
In the embodiment of the disclosure, as described above, for a hot Key the access capability of a single partition is far from meeting users' access demand. Thus, multiple partitions need to be set up corresponding to the target data identifier of the hot Key to spread the access pressure of a single partition, and the many data requests corresponding to the hot Key may be uniformly hashed to any one of the at least one partition. How many partitions are set for the hot Key may be set dynamically according to the concurrency volume of the hot Key (such as the QPS of the hot Key), or preset manually. The first partition may be any one of the multiple partitions. A partition may be implemented as an instance on independent hardware or as a shard divided in software; the embodiment of the present disclosure does not limit the form of implementation.
In an exemplary embodiment, hashing means transforming an input of arbitrary length into an output of fixed length (the hash value) through a hash algorithm. Based on the hash value, the corresponding partition is selected by taking the remainder modulo the number of partitions, so that data requests are distributed evenly across the partitions and their processing pressure is balanced. The input to the hash algorithm may be user ID information carried in the data request, or random information such as the request time or IP information. Whatever information is used as the input, it falls within the protection scope of the present disclosure as long as the goal of evenly distributing data requests across partitions is achieved. Likewise, distributing the data requests evenly by other technical means also falls within the protection scope of the present disclosure.
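The hash-then-modulo partition selection described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; MD5 and the `user-{id}` key format are assumptions, and any hash that spreads inputs evenly would serve.

```python
import hashlib

def select_partition(request_key: str, num_partitions: int) -> int:
    """Hash an arbitrary-length input (e.g. a user ID, request time, or
    IP string) to a fixed-length digest, then take the remainder modulo
    the partition count to pick a partition index."""
    digest = hashlib.md5(request_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# With many distinct keys, requests spread roughly evenly over partitions.
counts = [0] * 5
for user_id in range(10_000):
    counts[select_partition(f"user-{user_id}", 5)] += 1
```

The same key always maps to the same partition, which is what lets each partition keep per-batch inventory state for the requests routed to it.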
In step S230, a partition operation lock is requested from the first partition according to the data request.
In the embodiment of the disclosure, after the data request is assigned to the designated first partition, it enters the data request processing queue of the first partition, and the first partition establishes a corresponding thread for each data request. Limited by its processing capability, the first partition cannot satisfy the processing requirements of all the data request threads simultaneously. To ensure thread safety, a partition operation lock mechanism is set up in the first partition: each thread requests the partition operation lock, the thread that obtains it gains the authority for subsequent processing, and the threads that do not obtain it wait for the lock to be released and then contend for it again.
In addition, one partition may process multiple hot Keys at the same time, so one partition may set up different partition operation locks corresponding to different target data identifiers. Data requests corresponding to different target data identifiers contend for their respective partition operation locks.
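A minimal in-process stand-in for the per-partition, per-hot-Key operation lock can illustrate the contention described above. This is a toy sketch under stated assumptions: a real deployment would use a distributed lock, and the `(partition_id, data_id)` keying is an assumption drawn from the text's statement that one partition holds distinct locks per target data identifier.

```python
import threading

class PartitionLocks:
    """One operation lock per (partition, target data identifier) pair."""

    def __init__(self) -> None:
        self._guard = threading.Lock()  # protects the lock table itself
        self._locks: dict = {}

    def lock_for(self, partition_id: int, data_id: str) -> threading.Lock:
        with self._guard:
            return self._locks.setdefault((partition_id, data_id), threading.Lock())

locks = PartitionLocks()
counter = 0  # stands in for the partition's inventory state

def handle_request() -> None:
    global counter
    # Each request thread contends for the partition operation lock;
    # only the holder may touch the shared state.
    with locks.lock_for(0, "hotkey-123"):
        counter += 1

threads = [threading.Thread(target=handle_request) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because different target data identifiers map to different lock objects, requests for different hot Keys on the same partition do not block each other.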
In step S240, the partition operation lock is obtained in response to the data request, and partition inventory information of the first partition corresponding to the target data identification is queried.
In the embodiment of the disclosure, if the thread responding to the data request obtains the partition operation lock, the partition inventory information corresponding to the target data identification of the first partition is queried. The partition inventory information is used for recording the current inventory information of the partition.
In the embodiment of the disclosure, the inventory pool corresponding to the target data identifier is not divided among the partitions all at once, but is distributed to the partitions in small batches. Accordingly, the partition inventory information includes at least: partition base inventory, partition batch inventory, and partition used inventory. The partition base inventory B represents the base ranking of the partition's current batch inventory in the inventory pool; the base ranking is the number of allocated inventory units in the inventory pool at the moment the pool assigns the batch to the partition, and the pool synchronizes this batch base inventory information to the partition when assigning the batch. The partition batch inventory N represents the inventory quantity of the partition's current batch; as described above, the inventory pool allocates a preset number of units, such as N0, to each partition in successive batches. The partition used inventory M represents the number of used inventory units in the partition's current batch.
In an exemplary embodiment, each partition maintains the above partition inventory information as key-value pairs in the Redis cluster. A key-value pair is a data structure in which data is stored as a correspondence between a key and a value. In this embodiment, the partition base inventory, partition batch inventory, and partition used inventory may be stored with the following key-value pairs:
key: ${prefix0}; value: partition base inventory B
key: ${prefix1}; value: partition batch inventory N
key: ${prefix2}; value: partition used inventory M
where prefix0, prefix1, and prefix2 are the key prefixes corresponding to the partition base inventory, partition batch inventory, and partition used inventory, respectively.
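The key layout can be sketched with a plain dict standing in for the Redis key space. The patent names only the prefixes prefix0/prefix1/prefix2; the full `"{prefix}:{data_id}:{partition_id}"` layout below is an assumed illustration, not the disclosed format.

```python
# Plain dict standing in for the Redis key space (illustrative only).
kv: dict = {}

def init_partition_inventory(data_id: str, partition_id: int,
                             base: int, batch: int) -> None:
    """Record one partition's inventory state under three prefixed keys."""
    kv[f"prefix0:{data_id}:{partition_id}"] = base   # partition base inventory B
    kv[f"prefix1:{data_id}:{partition_id}"] = batch  # partition batch inventory N
    kv[f"prefix2:{data_id}:{partition_id}"] = 0      # partition used inventory M

init_partition_inventory("hotkey-123", 1, base=0, batch=5)
```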
In an exemplary embodiment, the total inventory corresponding to a certain hot Key is 1000 units, all placed in the inventory pool corresponding to that hot Key. The hot Key corresponds to five partitions: partition 1, partition 2, partition 3, partition 4, and partition 5. The inventory pool allocates inventory to each partition in batches of a preset number, here assumed to be N0 = 5. On this basis, the partition inventory information of each partition is shown in Table 1.
              Partition base inventory B   Partition batch inventory N   Partition used inventory M
Partition 1               0                            5                            4
Partition 2               5                            5                            3
Partition 3              25                            5                            0
Partition 4              15                            5                            4
Partition 5              20                            5                            5
TABLE 1
As shown in Table 1, each partition's batch is allocated 5 inventory units, per the preset number N0 = 5. At initialization, partitions 1-5 have partition base inventory B of 0, 5, 10, 15, and 20 respectively, so the number of allocated inventory units in the pool is 25. When partition 3 exhausted its batch, it requested a new batch from the inventory pool, and the partition base inventory B of that new batch is 25. The partition used inventory M indicates how many units of the current batch have been used: partitions 1-5 have M of 4, 3, 0, 4, and 5 respectively. Partition 3's used inventory was reset after it requested the new batch from the pool.
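Combining the Table 1 state with the first-aspect rule that the data request result is the sum of the partition base inventory and the incremented used inventory gives a small worked example. The `next_result` helper is an illustrative name; returning `None` for an exhausted batch stands in for the refill path covered in step S250.

```python
# Table 1 state: partition -> (B, N, M)
table = {1: (0, 5, 4), 2: (5, 5, 3), 3: (25, 5, 0), 4: (15, 5, 4), 5: (20, 5, 5)}

def next_result(partition: int):
    """Result for the next request on `partition`, or None if the current
    batch is exhausted and a refill from the pool would be needed."""
    B, N, M = table[partition]
    if M == N:                  # batch fully used (step S410 condition)
        return None
    table[partition] = (B, N, M + 1)
    return B + (M + 1)          # base ranking + used-after-increment
```

For instance, the next request on partition 1 receives 0 + 5 = 5, while partition 5 must refill first; the result thus acts as a globally ordered ranking across partitions.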
In step S250, in response to there being no unallocated inventory in the partition inventory of the first partition, the first partition obtains a batch of inventory from the inventory pool corresponding to the target data identifier; the batch comprises a preset number of inventory units; and inventory allocation is performed again for the data request.
In the embodiment of the disclosure, if it is determined from the partition inventory information of the first partition that there is no unallocated inventory in the partition inventory, the first partition acquires a new batch of inventory from the inventory pool corresponding to the target data identifier. The inventory pool is distributed to the partitions in small batches: the pool allocates a batch of the preset number of units to the first partition. As described in step S240, when the inventory pool assigns a batch to a partition, it simultaneously synchronizes the batch inventory information to the partition, including the batch base inventory information and the batch inventory quantity. The first partition updates its local partition inventory information after receiving the batch inventory information; once the update is complete, inventory allocation is performed again for the data request.
In an exemplary embodiment, the preset number is not fixed but may vary dynamically with the number of concurrent data requests on the hot Key. Specifically:
The concurrent data request quantity information corresponding to the target data identifier is obtained periodically. This may be the QPS (Queries Per Second), or other information representing the number of concurrent data requests corresponding to the target data identifier. The preset number is then determined according to the concurrent data request quantity information.
According to the embodiment of the disclosure, periodically obtaining the concurrent data request quantity information corresponding to the target data identifier reveals the real-time traffic on the corresponding hot Key. As the number of concurrent data requests grows, indicating that the access volume of the hot Key is increasing, the preset number may be increased to reduce the frequency with which each partition requests batch inventory. In this way the preset number can be adjusted dynamically, making configuration more flexible.
In step S260, in response to an unallocated inventory in the partition inventory of the first partition, the data request is allocated with the unallocated inventory.
In the embodiment of the disclosure, if it is determined that there is an unallocated inventory in the partition inventory according to the partition inventory information of the first partition, the first partition allocates an inventory that has not been allocated to the data request.
In step S270, a data request result is fed back to the requester according to the unallocated inventory allocated to the data request.
In the embodiment of the disclosure, the data request result is determined according to the unallocated inventory allocated to the data request, and the data request result is fed back to the corresponding requester.
According to the method for processing high-concurrency data requests described above, the inventory in the inventory pool is distributed to the partitions gradually, in small batches. The inventory pool can therefore dynamically balance inventory distribution among the partitions according to each partition's allocation speed, avoiding unbalanced distribution without exceeding any partition's processing capacity and achieving fast, balanced inventory allocation. At the same time, the overall complexity of inventory allocation is reduced, and the stability of the system as a whole is enhanced.
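The dispatch structure underlying the method (hashing each request to one partition of the hot Key and serializing that partition's allocations with a partition operation lock) can be sketched as follows. The hash function, partition count, and lock type are illustrative assumptions; the disclosure only requires that requests for one target data identifier be spread over its partitions and that each partition process allocations under its own lock.

```python
import hashlib
import threading

NUM_PARTITIONS = 4  # hypothetical partition count for one target data identifier

# One operation lock per partition: requests hashed to the same partition are
# serialized, while requests on different partitions proceed in parallel.
partition_locks = [threading.Lock() for _ in range(NUM_PARTITIONS)]

def partition_of(request_id: str) -> int:
    """Hash a data request to one of the partitions of the hot Key."""
    digest = hashlib.md5(request_id.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS
```

A worker would then run its allocation logic inside `with partition_locks[partition_of(req_id)]:` so that partition inventory updates never race.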
FIG. 4 is a flow chart illustrating an inventory acquisition method according to one example. As shown in fig. 4, the foregoing step S250 may further include the following steps.
In step S410, when the partition batch inventory of the first partition is equal to the partition used inventory, it is determined that there is no unallocated inventory in the partition inventory of the first partition.
In the embodiment of the disclosure, as described in the foregoing step S240, the partition inventory information includes at least: partition base inventory B, partition batch inventory N, and partition used inventory M. When the partition batch inventory N of the first partition is equal to the partition used inventory M, it indicates that the batch inventory has been fully allocated, and no unallocated inventory is present in the partition inventory.
In step S420, the first partition sends an inventory acquisition request to the inventory pool corresponding to the target data identifier.
In the embodiment of the disclosure, the first partition sends an inventory acquisition request to the inventory pool corresponding to the target data identifier. The inventory pool stores the total inventory corresponding to the target data identifier and is used to distribute inventory to the partitions in batches.
In step S430, in response to the inventory acquisition request, the inventory pool sends first batch inventory information to the first partition; the first batch inventory information includes: first batch base inventory information and a first batch inventory quantity; the first batch base inventory information is determined according to the allocated inventory quantity in the inventory pool; the first batch inventory quantity is equal to the preset quantity.
In an embodiment of the disclosure, the inventory pool sends the first batch inventory information to the first partition in response to the inventory acquisition request. The first batch inventory information includes the first batch base inventory information and the first batch inventory quantity. The first batch base inventory information, which corresponds to the partition base inventory B, is determined according to the allocated inventory quantity in the inventory pool. The first batch inventory quantity, which corresponds to the partition batch inventory N, is equal to the preset quantity N0. Meanwhile, the inventory pool records the first batch inventory information and updates its local inventory information accordingly; for example, the allocated inventory quantity of the inventory pool is updated in preparation for the next batch allocation.
In an exemplary embodiment, the remaining inventory in the inventory pool may be insufficient for a full batch in the final allocation, or, as described above, the preset quantity N0 may vary dynamically with the number of concurrent data requests. In these cases, the inventory pool needs to send the first batch inventory quantity to the partition at each batch allocation.
In an exemplary embodiment, if the inventory pool uses a fixed preset quantity N0 for every batch allocation, the quantity does not change dynamically. The inventory pool may then send only the first batch base inventory information at each batch allocation, and the corresponding partition assumes by default that it has obtained a batch of the preset quantity N0.
In an exemplary embodiment, referring to the example of Table 1 above, when the inventory of partition 3 has been consumed, partition 3 sends an inventory acquisition request to the inventory pool. When the inventory pool receives the request, its allocated inventory quantity is 25, so the first batch base inventory information is 25, and the first batch inventory quantity is equal to the preset quantity of 5. The inventory pool sends the associated first batch base inventory information and first batch inventory quantity to the first partition and, at the same time, updates its locally allocated inventory quantity to 30.
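The worked example above can be reproduced with a minimal pool-side sketch. The class and method names are hypothetical; the `min(...)` guard reflects the earlier note that the final batch may be smaller than the preset quantity when the pool is nearly empty.

```python
class InventoryPool:
    """Minimal sketch of the inventory pool for one target data identifier.
    `allocated` tracks how much inventory has already been granted to partitions."""

    def __init__(self, total: int, preset: int):
        self.total = total        # total inventory for the hot Key
        self.preset = preset      # preset batch quantity N0
        self.allocated = 0        # allocated inventory quantity

    def grant_batch(self):
        """Serve a partition's inventory acquisition request: return
        (first batch base inventory info, first batch inventory quantity)
        and record the grant locally (steps S420-S430)."""
        base = self.allocated                               # batch base = inventory already allocated
        n = min(self.preset, self.total - self.allocated)   # final batch may fall short of N0
        self.allocated += n
        return base, n
```

With `total=100` and `preset=5`, after five earlier grants the pool has allocated 25, so the next grant returns `(25, 5)` and the pool records 30 allocated, matching the text's example.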
In step S440, the first partition updates the partition inventory information of the first partition according to the first batch inventory information.
In the embodiment of the disclosure, after the first partition receives the first batch inventory information fed back by the inventory pool, it updates its partition inventory information accordingly. The update specifically includes: taking the first batch base inventory information as the updated partition base inventory; taking the first batch inventory quantity as the updated partition batch inventory; and resetting the partition used inventory to zero.
With this inventory acquisition method, once a partition is determined to have exhausted its inventory, the inventory pool allocates it a new batch, so that the partition again holds unallocated inventory and can continue allocating in response to data requests. Meanwhile, the inventory pool records the allocation state of every partition in a simple, orderly way, achieving dynamic and orderly inventory distribution.
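The partition-side bookkeeping of steps S410 and S440 can be sketched as follows, assuming hypothetical field names that mirror the B, N and M of the text.

```python
class Partition:
    """Partition-side inventory bookkeeping for one hot-Key partition.
    Field names follow the text: base inventory B, batch inventory N,
    used inventory M. The class itself is an illustrative sketch."""

    def __init__(self):
        self.B = 0   # partition base inventory
        self.N = 0   # partition batch inventory
        self.M = 0   # partition used inventory

    def exhausted(self) -> bool:
        # N == M means the current batch is fully allocated (step S410)
        return self.N == self.M

    def apply_batch(self, base: int, quantity: int) -> None:
        """Update local partition inventory information from the first batch
        inventory information (step S440): adopt the new base and quantity,
        and reset the used inventory to zero."""
        self.B = base
        self.N = quantity
        self.M = 0
```

A freshly created (or newly added) partition starts exhausted, which is exactly the condition that triggers an inventory acquisition request to the pool.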
FIG. 5 is a flow chart illustrating a method of inventory allocation according to one example. As shown in fig. 5, the foregoing step S260 may further include the following steps.
In step S510, when the partition batch inventory of the first partition is greater than the partition used inventory, it is determined that there is unallocated inventory in the partition inventory of the first partition.
In the embodiment of the disclosure, as described in the foregoing step S240, the partition inventory information includes at least: partition base inventory B, partition batch inventory N, and partition used inventory M. When the partition batch inventory N of the first partition is greater than the partition used inventory M, it indicates that the batch inventory has not been completely allocated, and there is unallocated inventory in the partition inventory.
In step S520, according to the data request, the partition used inventory of the first partition is incremented by one to obtain the allocated partition used inventory.
In the embodiment of the disclosure, it is determined from the partition inventory information of the first partition that unallocated inventory exists in the partition inventory, so there is no need to pull a new batch from the inventory pool; instead, the first partition allocates an unallocated inventory unit to the data request. Meanwhile, the partition used inventory M is updated according to the following formula.
M1 = M0 + 1

where M0 is the original partition used inventory and M1 is the allocated partition used inventory.
As shown in fig. 5, the foregoing step S270 may further include the following steps.
In step S530, the data request result is determined according to the unallocated inventory allocated to the data request; the data request result is the sum of the partition base inventory of the first partition and the allocated partition used inventory.
In the disclosed embodiments, the result of a data request is the ranking of the data request under the corresponding hot Key. In a flash-sale ("seckill") scenario, a user can determine from this ranking information whether they qualify for the sale. On this basis, the data request result R can be calculated with the following formula.
R = B + M1
where B is the partition base inventory and M1 is the allocated partition used inventory. Because the partition base inventory B is determined by the inventory pool at batch-allocation time from the quantity of inventory already allocated in the pool, adding the allocated partition used inventory to the partition base inventory B yields the overall ranking of the data request within the inventory pool corresponding to the hot Key.
In step S540, the data request result is fed back to the requester.
In the embodiment of the disclosure, the ranking information calculated above is fed back to the requester as the data request result, completing the data request process. In a flash-sale scenario, a user can determine from the ranking information whether they qualify for the sale.
With this inventory allocation method, inventory can be allocated in response to data requests, and the ranking of each user under the hot Key can be determined within each partition in a simple way, without any cross-partition ranking aggregation.
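Steps S510 through S530 can be condensed into a single pure function over the partition inventory fields. The signature is an illustrative assumption, but the arithmetic follows the formulas M1 = M0 + 1 and R = B + M1 given above.

```python
def allocate(B: int, N: int, M: int):
    """One inventory allocation within a partition, as a pure function:
    B = partition base inventory, N = partition batch inventory,
    M = partition used inventory before allocation. Returns the updated
    used inventory and the data request result R (the global rank), or
    (M, None) when the batch is exhausted and a refill is needed first."""
    if N > M:              # unallocated inventory remains in this batch (S510)
        M1 = M + 1         # M1 = M0 + 1                                 (S520)
        R = B + M1         # rank within the whole pool: R = B + M1      (S530)
        return M1, R
    return M, None         # no unallocated inventory: fetch a new batch
```

For the Table 1 example, a partition that just received the batch (25, 5) gives its first request rank 26, consistent with 25 inventory units having been allocated before this batch.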
In the embodiment of the disclosure, the number of partitions corresponding to the target data identifier may be set manually or adjusted dynamically according to the number of to-be-allocated data requests accumulated in each partition. On this basis, the method for adjusting the partition count according to the embodiment of the disclosure may include the following steps.
Quantity information of to-be-allocated data requests in each partition is acquired at regular intervals.
In the embodiment of the disclosure, the quantity information of to-be-allocated data requests in each partition is acquired periodically at a preset time interval. This quantity information may be the current number of to-be-allocated data requests in each partition's data request processing queue.
A newly added partition is determined according to the quantity information of to-be-allocated data requests.
In the embodiment of the disclosure, statistics are computed over the per-partition queue lengths obtained in the preceding step to decide whether to add a new partition. The statistics can be computed in many ways; examples are given below without limiting the scope of the disclosure.
In an exemplary embodiment, the statistic may be the arithmetic mean of the per-partition quantities of to-be-allocated data requests. When the mean exceeds a preset threshold, the data request processing queues are seriously backlogged across the partitions, and a new partition is added.
In an exemplary embodiment, the statistic may be the sum of the per-partition quantities of to-be-allocated data requests. When the sum exceeds a preset threshold, the overall backlog of data requests is serious, and a new partition is added.
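Both statistics described above can be sketched in one hypothetical helper. The `mode` switch and the threshold values are assumptions for illustration; the disclosure leaves the exact statistic and thresholds open.

```python
def should_add_partition(pending_counts, threshold: float, mode: str = "mean") -> bool:
    """Decide whether to add a partition from the per-partition counts of
    data requests waiting in each processing queue.

    mode="mean": compare the arithmetic mean of the queue lengths to the threshold.
    mode="sum":  compare the total backlog across all partitions to the threshold.
    """
    if mode == "mean":
        stat = sum(pending_counts) / len(pending_counts)
    else:  # "sum"
        stat = sum(pending_counts)
    return stat > threshold
```

A scheduler would call this on each periodic sample and, when it returns true, create the new partition (which then bootstraps itself via the inventory acquisition flow).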
For the newly added partition, the method for processing the high concurrency data request may further include the following steps.
In response to the first partition having no partition inventory information, the first partition is a newly added partition; the first partition obtains a batch of inventory from the inventory pool; and inventory allocation is performed again on the data request.
In the embodiment of the disclosure, a newly added partition has no existing partition inventory information, so when a data request queries the partition inventory information and finds none, the partition is identified as a newly added partition. The newly added partition must obtain a batch of inventory from the inventory pool before allocating. The acquisition process is similar to the foregoing steps S250, S420, S430 and S440 and is not repeated here.
According to the embodiment of the disclosure, periodically acquiring the quantity information of to-be-allocated data requests in each partition monitors the queueing of data requests in real time and dynamically adjusts the partition count, adapting to traffic changes of the hot Key. Because the method for processing high-concurrency data requests has each partition obtain inventory from the inventory pool in small batches and determines each data request result from partition-local inventory information, no aggregation or sorting of data across partitions is required, nor any complex coordination of work among them. This effectively reduces the working complexity and gives the overall inventory distribution system good scalability and extensibility: in the face of online traffic bursts, partitions can be added flexibly, ensuring overall traffic-processing capacity.
In a flash-sale scenario, besides each user's individual ranking, it is often necessary to know in real time the total used inventory quantity of the hot Key. To meet this need, the embodiments of the present disclosure provide a method for updating the used inventory quantity of the inventory pool.
FIG. 6 is a flow chart illustrating a method for updating the used inventory quantity of an inventory pool, according to an example. As shown in fig. 6, the method may include the following steps.
In step S610, the partition inventory information of each partition is acquired at regular intervals.
In the embodiment of the disclosure, the partition inventory information of each partition is acquired periodically at a preset time interval. The partition inventory information may further include a partition cumulative used inventory quantity, which represents the total quantity of inventory the partition has allocated. It can be calculated by adding the batch inventories of all the partition's previous batches to the partition used inventory of the current batch, or it can be counted separately each time inventory is allocated.
In step S620, the used inventory quantity in the inventory pool is determined according to the partition inventory information of each partition.
In the embodiment of the disclosure, the partition cumulative used inventory quantities in the partition inventory information of the partitions are summed to obtain the used inventory quantity in the inventory pool, from which the overall progress of the current flash sale can be known.
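This first variant can be sketched with two hypothetical helpers: one computing a partition's cumulative used inventory from its batch history, and one summing the reported figures at the pool.

```python
def cumulative_used(past_batch_sizes, current_used: int) -> int:
    """A partition's cumulative used inventory: all earlier (fully consumed)
    batch inventories plus the used portion of the current batch."""
    return sum(past_batch_sizes) + current_used

def pool_used_inventory(cumulative_counts) -> int:
    """Pool-side step S620: the used inventory quantity of the inventory
    pool is the sum of every partition's cumulative used inventory."""
    return sum(cumulative_counts)
```

For example, a partition that consumed two batches of 5 and has used 3 of its current batch reports 13; summing all partitions' reports gives the pool-wide figure.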
In an exemplary embodiment, the foregoing provides one method for updating the used inventory quantity of the inventory pool; however, the method of updating the used inventory quantity is not limited thereto.
In step S610, the partition inventory information of each partition is acquired at regular intervals.
In the embodiment of the disclosure, the partition inventory information of each partition is acquired periodically at a preset time interval. The partition inventory information may further include an unreported used inventory, representing the quantity of inventory allocated during the current reporting period. Acquiring the partition inventory information periodically constitutes the reporting process, and the period from the previous report to the current moment is the current reporting period. The inventory pool periodically polls each partition to obtain the quantity of newly allocated inventory in the current round.
In step S620, the used inventory quantity in the inventory pool is determined according to the partition inventory information of each partition.
In the embodiment of the disclosure, the unreported used inventories in the partition inventory information of the partitions are accumulated to obtain a total unreported used inventory, which represents the total quantity of inventory allocated from the inventory pool corresponding to the target data identifier during the current reporting period. Adding the total unreported used inventory to the used inventory quantity determined in the previous round yields the used inventory quantity of the current round, which is recorded for use in the next round's calculation.
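The incremental second variant can be sketched as a small pool-side accumulator; the class name is hypothetical, and each call to `report_round` stands for one polling round over all partitions.

```python
class UsedInventoryCounter:
    """Pool-side accumulator for the incremental reporting variant: each
    round, every partition reports only the inventory it allocated since
    the previous round (its unreported used inventory), and the pool adds
    the round's total to the running used inventory quantity."""

    def __init__(self):
        self.total_used = 0   # used inventory quantity from prior rounds

    def report_round(self, unreported_counts) -> int:
        """Fold one round of per-partition unreported counts into the total
        and return the current used inventory quantity of the pool."""
        self.total_used += sum(unreported_counts)
        return self.total_used
```

Compared with the first variant, partitions only need to track a small per-round delta, which they reset after each report.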
According to the embodiment of the disclosure, the partition inventory information of each partition is acquired periodically and reported to the inventory pool, which performs statistical calculation over it to obtain the total used inventory quantity of the inventory pool corresponding to the hot Key, providing a data basis for understanding the overall progress of the current flash sale.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
FIG. 7 is a block diagram illustrating an apparatus for handling high concurrency data requests, according to an example embodiment. Referring to fig. 7, the apparatus 700 may include: a data request acquisition module 710, a data request hashing module 720, a partition operation lock acquisition module 730, a partition inventory information query module 740, an inventory acquisition module 750, an inventory allocation module 760, and a result feedback module 770.
A data request acquisition module 710 configured to acquire a data request sent by a requester; the data request at least comprises: a target data identifier;
a data request hashing module 720 configured to hash the data request to a first partition of the at least one partition; the at least one partition corresponds to the target data identification;
a partition operation lock acquisition module 730 configured to request a partition operation lock from the first partition according to the data request;
a partition inventory information query module 740 configured to obtain the partition operation lock in response to the data request, and query partition inventory information of the first partition corresponding to the target data identification;
an inventory acquisition module 750 configured to, in response to there being no unallocated inventory in the partition inventory of the first partition, cause the first partition to acquire a batch of inventory from the inventory pool corresponding to the target data identifier, the batch of inventory comprising a preset quantity of inventory, and to perform inventory allocation again on the data request;
An inventory allocation module 760 configured to allocate unallocated inventory to the data request in response to there being unallocated inventory in the partition inventory of the first partition;
a result feedback module 770 configured to feedback data request results to the requestor based on the unallocated inventory allocated for the data requests.
In some exemplary embodiments of the present disclosure, the partition inventory information includes at least: partition base stock, partition batch stock, and partition used stock; the partition base stock representing a base ranking of the partition's current lot stock in the stock pool; the partition batch inventory is used for representing the inventory quantity of the current batch inventory of the partition; the partition used inventory is used to represent the number of used inventories in the current lot inventory of the partition.
In some exemplary embodiments of the present disclosure, an inventory acquisition module 750 is configured to determine that there is no unallocated inventory in the partition inventory of the first partition when the partition lot inventory of the first partition is equal to the partition used inventory; the first partition sends an inventory acquisition request to an inventory pool corresponding to the target data identifier; responding to the inventory obtaining request, and sending first batch inventory information to the first partition by the inventory pool; the first lot inventory information includes: first lot base stock information and first lot stock quantity; the first batch basic inventory information is determined according to the allocated inventory quantity in the inventory pool; the first batch inventory quantity is equal to the preset quantity; the first partition updates the partition inventory information of the first partition according to the first batch inventory information.
In some exemplary embodiments of the present disclosure, an inventory allocation module 760 configured to determine that there is unallocated inventory in a partition inventory of the first partition when the partition lot inventory of the first partition is greater than the partition used inventory; and adding one to the partition used inventory of the first partition according to the data request, and obtaining the allocated partition used inventory.
In some exemplary embodiments of the present disclosure, a result feedback module 770 is configured to determine the data request result from the unallocated inventory allocated for the data request; the data request result is the sum of the partition base stock of the first partition and the allocated partition used stock; and feeding back the data request result to the requester.
In some exemplary embodiments of the present disclosure, the apparatus further includes a newly added partition module configured to obtain quantity information of to-be-allocated data requests in each partition, and to determine a newly added partition according to the quantity information.
In some exemplary embodiments of the present disclosure, the inventory acquisition module 750 is configured to, in response to the first partition having no partition inventory information, determine that the first partition is a newly added partition, cause the first partition to obtain the batch of inventory from the inventory pool, and perform inventory allocation again on the data request.
In some exemplary embodiments of the present disclosure, a preset number adjustment module is configured to periodically obtain concurrent data request number information corresponding to the target data identifier; and determining the preset quantity according to the concurrent data request quantity information.
In some exemplary embodiments of the present disclosure, a used inventory quantity determination module is configured to periodically obtain the partition inventory information for each of the partitions; and determining the used inventory quantity in the inventory pool according to the partition inventory information of each partition.
The specific manner in which each module performs its operations in the apparatus of the above embodiments has been described in detail in the method embodiments and is not elaborated here.
An electronic device 800 according to such an embodiment of the present disclosure is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general purpose computing device. Components of electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one storage unit 820, a bus 830 connecting the different system components (including the storage unit 820 and the processing unit 810), and a display unit 840.
Wherein the storage unit stores program code that is executable by the processing unit 810 such that the processing unit 810 performs steps according to various exemplary embodiments of the present disclosure described in the above section of the present description of exemplary methods. For example, the processing unit 810 may perform the various steps shown in fig. 2.
Storage unit 820 may include readable media in the form of volatile storage units such as Random Access Memory (RAM) 821 and/or cache memory unit 822, and may further include Read Only Memory (ROM) 823.
The storage unit 820 may also include a program/utility 824 having a set (at least one) of program modules 825, such program modules 825 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 870 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 800, and/or any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, electronic device 800 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 860. As shown, network adapter 860 communicates with other modules of electronic device 800 over bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment, a computer readable storage medium is also provided, e.g., a memory, comprising instructions executable by a processor of an apparatus to perform the above method. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program/instruction which, when executed by a processor, implements the method in the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method of processing highly concurrent data requests, comprising:
acquiring a data request sent by a requester; the data request at least comprises: a target data identifier;
hashing the data request to a first partition of the at least one partition; the at least one partition corresponds to the target data identification;
requesting a partition operation lock from the first partition according to the data request;
obtaining the partition operation lock in response to the data request, and querying partition inventory information of the first partition corresponding to the target data identifier;
in response to there being no unallocated inventory in the partition inventory of the first partition, acquiring, by the first partition, a batch of inventory from an inventory pool corresponding to the target data identifier; the batch of inventory comprises a preset quantity of inventory; and performing inventory allocation again on the data request;
allocating the unallocated inventory to the data request in response to an unallocated inventory in a partition inventory of the first partition;
and feeding back a data request result to the requester according to the unallocated inventory allocated to the data request.
2. The method of claim 1, wherein the zone inventory information includes at least: partition base stock, partition batch stock, and partition used stock;
The partition base stock representing a base ranking of the partition's current lot stock in the stock pool;
the partition batch inventory is used for representing the inventory quantity of the current batch inventory of the partition;
the partition used inventory is used to represent the number of used inventories in the current lot inventory of the partition.
3. The method of claim 2, wherein the acquiring, by the first partition in response to there being no unallocated inventory in the partition inventory of the first partition, a batch of inventory from the inventory pool corresponding to the target data identifier comprises:
determining that there is no unallocated inventory in the partition inventory of the first partition when the partition batch inventory of the first partition is equal to the partition used inventory;
the first partition sending an inventory acquisition request to the inventory pool corresponding to the target data identifier;
in response to the inventory acquisition request, the inventory pool sending first batch inventory information to the first partition, the first batch inventory information comprising first batch base inventory information and a first batch inventory quantity, the first batch base inventory information being determined according to the quantity of allocated inventory in the inventory pool, and the first batch inventory quantity being equal to the preset number;
the first partition updating its partition inventory information according to the first batch inventory information.
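The pool side of this exchange can be sketched as follows, assuming (as the claim states) that the base of each new batch is derived from the quantity already allocated out of the pool; the class and key names are hypothetical:

```python
class InventoryPool:
    """Illustrative pool that answers inventory acquisition requests."""
    def __init__(self, preset_number):
        self.preset_number = preset_number  # fixed batch size
        self.allocated = 0                  # quantity already allocated to partitions

    def handle_batch_request(self):
        # First-batch base inventory information is the quantity already
        # allocated; the batch quantity equals the preset number.
        info = {"base": self.allocated, "quantity": self.preset_number}
        self.allocated += self.preset_number
        return info

pool = InventoryPool(preset_number=10)
first = pool.handle_batch_request()   # batch covering positions 1..10
second = pool.handle_batch_request()  # batch covering positions 11..20
```

Successive requests therefore receive non-overlapping batches, which is what lets partitions allocate independently once a batch is fetched.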
4. The method of claim 2, wherein the allocating unallocated inventory to the data request in response to there being unallocated inventory in the partition inventory of the first partition comprises:
determining that unallocated inventory exists in the partition inventory of the first partition when the partition batch inventory of the first partition is greater than the partition used inventory;
and incrementing the partition used inventory of the first partition by one for the data request to obtain an updated partition used inventory.
5. The method of claim 4, wherein the feeding back a data request result to the requester according to the inventory allocated to the data request comprises:
determining the data request result as the sum of the partition base inventory of the first partition and the updated partition used inventory;
and feeding back the data request result to the requester.
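Claims 4 and 5 together amount to one increment plus one addition per request. A hedged sketch, using an illustrative dict-based partition record rather than anything named in the patent:

```python
def allocate_and_result(partition):
    """Allocate one inventory unit (claim 4) and compute the data
    request result (claim 5). `partition` is an illustrative dict
    with keys base/batch/used."""
    if partition["used"] >= partition["batch"]:
        raise RuntimeError("no unallocated inventory; fetch a new batch first")
    partition["used"] += 1                         # claim 4: increment used inventory
    return partition["base"] + partition["used"]   # claim 5: base + updated used

# A partition holding the batch covering positions 21..25.
p = {"base": 20, "batch": 5, "used": 0}
```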
6. The method of claim 1, further comprising:
periodically acquiring, for each partition, information on the quantity of data requests awaiting allocation;
and determining a newly added partition according to the information on the quantity of data requests awaiting allocation.
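Claim 6 leaves the exact scale-out rule unspecified. One plausible policy, sketched with invented names, adds a partition when the average per-partition backlog crosses a threshold:

```python
def plan_new_partitions(pending_per_partition, threshold):
    """Illustrative policy only: return how many partitions to add,
    based on the periodically sampled backlog of each partition.
    The patent does not prescribe this particular rule."""
    avg = sum(pending_per_partition) / len(pending_per_partition)
    return 1 if avg > threshold else 0
```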
7. The method of claim 1 or 6, further comprising:
in response to the first partition having no partition inventory information, determining that the first partition is a newly added partition, the first partition acquiring the batch of inventory from the inventory pool, and re-performing inventory allocation for the data request.
8. The method of claim 1, further comprising:
periodically acquiring information on the quantity of concurrent data requests corresponding to the target data identifier;
and determining the preset number according to the information on the quantity of concurrent data requests.
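Claim 8 likewise does not fix the sizing formula. A simple illustrative rule clamps the batch size to the observed concurrency, so hot identifiers fetch from the pool less often while cold ones do not strand large batches:

```python
def preset_batch_size(concurrent_requests, floor=10, cap=1000):
    """Illustrative sizing rule (an assumption, not the patent's formula):
    batch size tracks the sampled concurrent request quantity,
    bounded below by `floor` and above by `cap`."""
    return max(floor, min(cap, concurrent_requests))
```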
9. The method of claim 1, further comprising:
periodically acquiring the partition inventory information of each partition;
and determining the quantity of used inventory in the inventory pool according to the partition inventory information of each partition.
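One way to derive the pool's used-inventory count from the periodic partition snapshots (an assumption — the claim does not specify the formula) is to subtract the inventory still sitting unallocated inside partitions from the total the pool has handed out:

```python
def pool_used_inventory(total_allocated_to_partitions, partition_infos):
    """Illustrative aggregation: inventory handed out by the pool but
    not yet consumed remains in partitions as (batch - used); the rest
    is used. `partition_infos` is a list of dicts with keys batch/used."""
    remaining = sum(p["batch"] - p["used"] for p in partition_infos)
    return total_allocated_to_partitions - remaining

# Pool has handed out 20 units; one partition has consumed 4 of its 10,
# the other has exhausted its batch of 10.
infos = [{"batch": 10, "used": 4}, {"batch": 10, "used": 10}]
```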
10. An apparatus for processing high-concurrency data requests, comprising:
a data request acquisition module configured to acquire a data request sent by a requester, the data request comprising at least a target data identifier;
a data request hashing module configured to hash the data request to a first partition of at least one partition, the at least one partition corresponding to the target data identifier;
a partition operation lock acquisition module configured to request a partition operation lock from the first partition according to the data request;
a partition inventory information query module configured to, in response to the data request obtaining the partition operation lock, query partition inventory information of the first partition corresponding to the target data identifier;
an inventory acquisition module configured to, in response to there being no unallocated inventory in the partition inventory of the first partition, acquire a batch of inventory from the inventory pool corresponding to the target data identifier, the batch of inventory comprising a preset number of inventory units, and re-perform inventory allocation for the data request;
an inventory allocation module configured to allocate unallocated inventory to the data request in response to there being unallocated inventory in the partition inventory of the first partition;
and a result feedback module configured to feed back a data request result to the requester according to the inventory allocated to the data request.
11. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the executable instructions to implement the method for processing high-concurrency data requests according to any one of claims 1 to 9.
12. A computer-readable storage medium having stored thereon instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the method for processing high-concurrency data requests according to any one of claims 1 to 9.
CN202311442992.9A 2023-11-01 2023-11-01 Method, device, equipment and storage medium for processing high concurrency data request Pending CN117492990A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311442992.9A CN117492990A (en) 2023-11-01 2023-11-01 Method, device, equipment and storage medium for processing high concurrency data request

Publications (1)

Publication Number Publication Date
CN117492990A true CN117492990A (en) 2024-02-02

Family

ID=89666962


Similar Documents

Publication Publication Date Title
CN109660607B (en) Service request distribution method, service request receiving method, service request distribution device, service request receiving device and server cluster
CN100407153C (en) On demand node and server instance allocation and de-allocation
US10733029B2 (en) Movement of services across clusters
CN106453457B (en) Multi-priority service instance allocation within a cloud computing platform
EP3281359B1 (en) Application driven and adaptive unified resource management for data centers with multi-resource schedulable unit (mrsu)
US7937493B2 (en) Connection pool use of runtime load balancing service performance advisories
US7389293B2 (en) Remastering for asymmetric clusters in high-load scenarios
US7437460B2 (en) Service placement for enforcing performance and availability levels in a multi-node system
KR20170029263A (en) Apparatus and method for load balancing
CN109933431B (en) Intelligent client load balancing method and system
CN111459641B (en) Method and device for task scheduling and task processing across machine room
US8356098B2 (en) Dynamic management of workloads in clusters
US20200050479A1 (en) Blockchain network and task scheduling method therefor
US20200042608A1 (en) Distributed file system load balancing based on available node capacity
CN110244901B (en) Task allocation method and device and distributed storage system
US7660897B2 (en) Method, system, and program for distributing application transactions among work servers
US7437459B2 (en) Calculation of service performance grades in a multi-node environment that hosts the services
WO2017207049A1 (en) A node of a network and a method of operating the same for resource distribution
CN109413117B (en) Distributed data calculation method, device, server and computer storage medium
CN117492990A (en) Method, device, equipment and storage medium for processing high concurrency data request
CN109005071B (en) Decision deployment method and scheduling equipment
US11256440B2 (en) Method and distributed storage system for aggregating statistics
CN112073223B (en) System and method for managing and controlling operation of cloud computing terminal and cloud server
CN110046040B (en) Distributed task processing method and system and storage medium
CN114090256A (en) Application delivery load management method and system based on cloud computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination