CN111124672A - Data distribution method based on Redis cluster and related equipment

Info

Publication number
CN111124672A
Authority
CN
China
Prior art keywords
preset
user
queues
target
users
Prior art date
Legal status
Pending
Application number
CN201911256861.5A
Other languages
Chinese (zh)
Inventor
谢铭熙
Current Assignee
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Application filed by Ping An Life Insurance Company of China Ltd
Priority to CN201911256861.5A
Publication of CN111124672A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027: Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038: Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiment of the application discloses a data distribution method based on a Redis cluster and related equipment, which are applied to a server. The method comprises the following steps: obtaining a plurality of preset cache queues, wherein each preset cache queue comprises at least one piece of preset event result data; performing data distribution for a plurality of users according to the preset cache queues to obtain a plurality of pieces of user distribution information corresponding to the users; and, if the distribution of the preset event result data in the preset cache queues is completed, storing the pieces of user distribution information into a preset Redis cluster, wherein the Redis cluster comprises a plurality of target cache queues and each target cache queue corresponds to at least one piece of user distribution information.

Description

Data distribution method based on Redis cluster and related equipment
Technical Field
The present application relates to the field of data processing, and in particular, to a data distribution method based on a Redis cluster and a related device.
Background
With the development of internet communication, many platforms have proposed lottery schemes to improve user participation and attract more customers: for example, the prize pool is obtained, prize probabilities are calculated, and prizes are matched against a random number at request time; if stock is insufficient, the whole lottery process has to be executed again or the user does not win a prize. However, when users flood in at a certain point in time, the server has to receive a large number of requests; it consumes a great deal of CPU, cannot bear the resulting pressure, and its response speed is also reduced.
Disclosure of Invention
The embodiment of the application provides a data distribution method based on a Redis cluster and related equipment, which are beneficial to reducing the operation and maintenance pressure of a server and improving data distribution efficiency.
A first aspect of an embodiment of the present application provides a method for data distribution based on a Redis cluster, which is applied to a server, and the method includes:
acquiring a plurality of preset buffer queues, wherein each preset buffer queue comprises at least one preset event result data;
according to the preset cache queues, data distribution is carried out on a plurality of users to obtain a plurality of user distribution information corresponding to the users, wherein each user corresponds to one preset cache queue, and each user distribution information corresponds to one preset event result data;
if the distribution of the preset event result data in the preset cache queues is completed, storing the user distribution information into a preset Redis cluster, wherein the Redis cluster comprises a plurality of target cache queues, and each target cache queue corresponds to at least one user distribution information.
A second aspect of the embodiments of the present application provides a data allocation apparatus based on a Redis cluster, where the apparatus includes an obtaining unit, an allocating unit, and a storage unit, where:
the acquisition unit is used for acquiring a plurality of preset buffer queues, and each preset buffer queue comprises at least one preset event result data;
the distribution unit is used for distributing data to a plurality of users according to the plurality of preset cache queues to obtain a plurality of user distribution information corresponding to the plurality of users, wherein each user corresponds to one preset cache queue, and each user distribution information corresponds to one preset event result data;
the storage unit is configured to store the plurality of user allocation information into a preset Redis cluster if the plurality of preset event result data in the plurality of preset cache queues are completely allocated, where the Redis cluster includes a plurality of target cache queues, and each target cache queue corresponds to at least one user allocation information.
A third aspect of embodiments of the present application provides a server, where the server includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method according to the first aspect of embodiments of the present application.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps as described in the first aspect of embodiments of the present application.
A fifth aspect of embodiments of the present application provides a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has at least the following beneficial effects:
By applying the embodiment of the application to a server, a plurality of preset cache queues can be obtained, each preset cache queue comprising at least one piece of preset event result data. Data distribution is performed for a plurality of users according to the preset cache queues to obtain a plurality of pieces of user distribution information corresponding to the users, wherein each user corresponds to one preset cache queue and each piece of user distribution information corresponds to one piece of preset event result data. If the distribution of the preset event result data in the preset cache queues is completed, the pieces of user distribution information are stored into a preset Redis cluster, the Redis cluster comprising a plurality of target cache queues, each of which corresponds to at least one piece of user distribution information. In this way, the preset event result data distributed in advance can be stored in the preset cache queues, the preset event result data are then distributed to the users, and the results are finally stored in the target cache queues of the preset Redis cluster. Distribution and storage of the data are realized through two different kinds of cache queues, which is beneficial to reducing the operation and maintenance pressure of the server and improving the efficiency of data distribution.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a method for data distribution based on a Redis cluster according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for data distribution based on a Redis cluster according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for data distribution based on a Redis cluster according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a data distribution device based on a Redis cluster according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to better understand the embodiments of the present application, methods of applying the embodiments of the present application will be described below.
The servers mentioned in the embodiments of the present application may include, but are not limited to, a background server, a component server, a cloud server, a data distribution system server, or a data distribution software server; these are merely examples and are not exhaustive.
Referring to fig. 1, fig. 1 is a schematic flowchart of a method for data distribution based on a Redis cluster according to an embodiment of the present application, where the method is applied to a server, and the method includes the following steps:
101. acquiring a plurality of preset buffer queues, wherein each preset buffer queue comprises at least one preset event result data;
The preset cache queues can be set by the user or be system defaults, and at least one piece of preset event result data can be stored in each preset cache queue. The preset event result data can be set in advance, and the preset event may include, for example, a lottery event: if the preset event is a lottery event, a pre-lottery operation may be performed on the lottery event and a plurality of lottery result data may be stored in the preset cache queues.
Optionally, in step 101, before obtaining the plurality of preset buffer queues, the method may further include the following steps:
a1, acquiring an event result list in a preset initial period, wherein the event result list comprises a plurality of result types;
a2, calculating a result probability value corresponding to each result type to obtain a plurality of result probability values corresponding to a plurality of result types, wherein each result type corresponds to one result probability value;
a3, pre-distributing the result types according to the result probability values to obtain preset event result data;
a4, storing the preset event result data into the preset buffer queues.
The preset initial period may be set by the user or be a system default, and may be chosen as a period in which requests initiated by event participants are fewest, for example the early morning or late night of the day on which a lottery event takes place. The event result list may be set in advance; for example, if the preset event is a lottery event, the event result list may include a plurality of preset prizes. The preset prizes may be set by the user or be system defaults and may include at least one of the following: coupons, red packets, lottery tickets, appliances, daily necessities and the like, which is not limited herein.
Further, a result probability value corresponding to each result type in the event result list can be calculated; specifically, the ratio of the number of occurrences of each result type to the number of occurrences of all result types can be calculated, so that a plurality of result probability values corresponding to the plurality of result types are obtained. Based on the plurality of result probability values, a pre-allocation rule may be set, and the result types are distributed according to the pre-allocation rule until all result types have been distributed, so as to obtain a plurality of preset event result data, which are stored into the plurality of preset cache queues. The preset cache queues can also be distributed over the nodes of the Redis cluster, with each node corresponding to at least one preset cache queue.
For example, suppose the result types are divided into four prize levels whose result probability values are 5% for the first prize, 20% for the second prize, 30% for the third prize and 45% for the fourth prize, and 100 results are to be pre-allocated. The interval [1-100] is divided according to these four probabilities: [1-5] represents the first prize, [6-25] the second prize, [26-55] the third prize and [56-100] the fourth prize. A random number is then drawn from [1-100], 100 times in total; if a random number falls in [1-5] it corresponds to the first prize, and so on. In this way the result types can be pre-allocated according to the result probability values to obtain a plurality of preset event result data, which are stored in the preset cache queues.
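To make the pre-allocation step concrete, the following Python sketch pre-draws 100 results according to the example probabilities above and spreads them over several preset cache queues. It only illustrates steps A1 to A4: the tier names, the queue count and the in-memory list-of-lists layout are assumptions rather than details taken from the patent.

```python
import random

# Tier names and probabilities follow the example above; the queue count and the
# in-memory list-of-lists layout are assumptions made for this sketch.
PRIZE_TIERS = [
    ("first prize", 0.05),
    ("second prize", 0.20),
    ("third prize", 0.30),
    ("fourth prize", 0.45),
]


def pre_allocate(total_results: int, queue_count: int) -> list[list[str]]:
    """Pre-draw `total_results` event results and spread them over `queue_count`
    preset cache queues (steps A1 to A4)."""
    # Build cumulative upper bounds over [1, 100], i.e. [1-5], [6-25], [26-55], [56-100].
    bounds, upper = [], 0
    for name, prob in PRIZE_TIERS:
        upper += round(prob * 100)
        bounds.append((upper, name))

    queues: list[list[str]] = [[] for _ in range(queue_count)]
    for i in range(total_results):
        roll = random.randint(1, 100)          # random number in [1, 100]
        prize = next(name for bound, name in bounds if roll <= bound)
        queues[i % queue_count].append(prize)  # spread the pre-drawn results across the queues
    return queues


queues = pre_allocate(total_results=100, queue_count=4)
```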
102. According to the preset cache queues, data distribution is carried out on a plurality of users to obtain a plurality of user distribution information corresponding to the users, wherein each user corresponds to one preset cache queue, and each user distribution information corresponds to one preset event result data;
A user here refers to a user participating in the preset event, for example one of a plurality of users participating in a lottery event. When the preset event occurs, the preset event result data held in the plurality of preset cache queues may be allocated to the plurality of users; each user may correspond to one piece of preset event result data, and each preset cache queue may correspond to at least one user. The user allocation information may include the preset event result data and the target preset cache queue corresponding to that data, and the like, which is not limited herein.
Optionally, in step 102, the data allocation to the multiple users according to the multiple preset buffer queues may include the following steps:
21. counting the sequence of preset events corresponding to each user in the plurality of users through a preset counter to obtain a plurality of count values corresponding to the plurality of users, wherein each user corresponds to one count value;
22. and matching the preset event result data corresponding to each user in the plurality of users from the plurality of preset cache queues according to the plurality of count values.
The server may include a preset counter, where the preset counter may be configured to count the number of times that a plurality of users access the server, and when the plurality of users access the server at the same time, a high concurrent pressure may be applied to the server.
Furthermore, the target preset buffer queue allocated to each user during access can be determined according to the plurality of count values, and since the preset counter can record one count value during access of each user, the count value can represent the access sequence of the user, and preset event result data subjected to pre-allocation is stored in the plurality of preset buffer queues, the preset event result data corresponding to each user can be matched for the plurality of users according to the plurality of count values.
Optionally, in the step 21, counting, by a preset counter, an order of a preset event corresponding to each of the plurality of users to obtain a plurality of count values corresponding to the plurality of users, may include the following steps:
and when the preset instruction aiming at the preset event is triggered by any user i in the plurality of users, controlling the current count value of the preset counter to be increased by 1 to obtain a target count value m aiming at the user i, wherein i and m are positive integers.
The preset instruction can be set by a user or defaulted by a system, and the preset instruction can include at least one of the following instructions: for example, the lottery instruction may be preset for a lottery event, and when a user triggers the lottery instruction, the preset instruction for the preset event may be triggered, at this time, the current count value of the preset counter may be controlled to be increased by 1, when any user i triggers the preset instruction, the preset count value may be increased by 1, and the count value after the completion of the increment is the target count value corresponding to the user i.
In addition, a preset time period may be set for the preset event, and the preset instruction is valid only within this time period, so that users can participate in the preset event only during the preset time period. Counting from the start of the period, the preset time period may be, for example, 30 s, 1 min, 2 h or 2 d, which is not limited herein.
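As a minimal sketch of this counting step, the snippet below uses the atomic INCR command of Redis (via the redis-py client) to obtain the target count value m when user i triggers the preset instruction, and rejects triggers outside the preset time period. The key names, the connection parameters and the fixed 30-minute window are assumptions made for illustration.

```python
import time

import redis  # redis-py client; key names, connection settings and window length are assumptions

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

EVENT_START = time.time()        # assumed start of the preset event
EVENT_WINDOW_SECONDS = 30 * 60   # assumed preset time period during which the instruction is valid


def on_preset_instruction(user_id: str) -> int | None:
    """When user i triggers the preset (e.g. lottery) instruction, add 1 to the preset
    counter and return the user's target count value m; return None outside the window."""
    if not (EVENT_START <= time.time() <= EVENT_START + EVENT_WINDOW_SECONDS):
        return None                      # the instruction is only valid within the preset time period
    m = r.incr("event:counter")          # INCR is atomic, so concurrent users receive distinct values of m
    r.hset("event:count_by_user", user_id, m)  # remember which count value belongs to which user
    return m
```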
Optionally, in the step 22, matching the preset event result data corresponding to each of the plurality of users from the plurality of preset buffer queues according to the plurality of count values may include the following steps:
221. acquiring the number n corresponding to the preset cache queues, wherein n is a positive integer;
222. performing modulo operation on the number n of the plurality of preset cache queues based on the target count value m to obtain a target preset cache queue k, wherein k is an integer;
223. and distributing preset event result data corresponding to the target preset cache queue k to the user i.
In order to match the preset event result data in the preset cache queues to users more quickly, each preset cache queue may be numbered sequentially, starting from 0, when the preset event result data are stored into it, so that the preset event result data in the preset cache queues can be distributed according to these numbers.
Specifically, the number n corresponding to the plurality of preset cache queues, i.e. the total number of preset cache queues, may be obtained, together with the target count value m corresponding to any user i. A modulo operation k = m mod n is then performed; the resulting value k is the number of the target preset cache queue among the plurality of preset cache queues.
Optionally, after the step 222, after the obtaining of the target preset buffer queue k by taking a modulus of the number n of the preset buffer queues based on the target count value m, the method further includes the following steps:
if the target preset cache queue k does not include any preset event result data, deleting the target preset cache queue k, and executing the step of matching the preset event result data corresponding to each of the plurality of users from the plurality of preset cache queues for the user i.
If, after the modulo operation, the obtained target preset cache queue k does not include any preset event result data, that is, the target preset cache queue k is empty, the user i cannot be allocated preset event result data from it. In this case the target preset cache queue k may be deleted directly and a target preset cache queue reselected from the remaining preset cache queues, so as to avoid dead loops and allocation failures.
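Steps 221 to 223, together with the empty-queue handling just described, can be sketched as follows. The function reuses the in-memory queues list from the pre-allocation sketch above, which is an assumption of this illustration; it simply deletes an empty target queue and recomputes k = m mod n over the remaining queues.

```python
def match_result_for_user(m: int, queues: list[list[str]]) -> str | None:
    """Match one piece of preset event result data for the user whose target count value is m
    (steps 221-223), deleting empty queues and reselecting as described above."""
    while queues:
        n = len(queues)              # number n of remaining preset cache queues
        k = m % n                    # target preset cache queue k = m mod n
        if queues[k]:
            return queues[k].pop(0)  # allocate one pre-drawn result to user i
        del queues[k]                # queue k holds no result data: delete it and reselect
    return None                      # every pre-drawn result has already been allocated
```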
103. If the distribution of the preset event result data in the preset cache queues is completed, storing the user distribution information into a preset Redis cluster, where the Redis cluster includes a plurality of target cache queues and each target cache queue corresponds to at least one piece of user distribution information.
In order to reduce the pressure on the server, after the data allocation is completed the user allocation information may be stored, in a distributed manner, into a preset Redis cluster. The preset Redis cluster may be understood as a list of target cache queues; the list may include a plurality of target cache queues, and each target cache queue may be allocated at least one piece of user allocation information.
Optionally, a storage rule may be set. For example, user identifiers corresponding to the plurality of users may be obtained, where a user identifier may be at least one of the following: a mobile phone number, an identity card number, a user name, an International Mobile Subscriber Identity (IMSI), and the like, which is not limited herein. After labeling is finished, the user allocation information corresponding to each user identifier can be stored into the plurality of target cache queues.
Optionally, in step 103, the storing the plurality of user allocation information to a preset Redis cluster may further include the following steps:
31. acquiring load conditions of a plurality of cluster nodes corresponding to the preset Redis cluster to obtain a plurality of load conditions;
32. based on the multiple load conditions, scoring the target cache queue corresponding to each cluster node to obtain multiple scoring values;
33. determining a plurality of priorities corresponding to the plurality of target cache queues according to the plurality of scoring values, wherein each target cache queue corresponds to one priority;
34. and storing a plurality of pieces of distribution information corresponding to the plurality of target cache queues according to the plurality of priorities.
Since each target cache queue stores the user allocation information of at least one user, the load condition can be understood as the data volume of the user allocation information stored in the target cache queue corresponding to a cluster node. In order to manage the plurality of target cache queues better, they can be prioritized: the cluster nodes are scored according to their load conditions, and the larger the data volume corresponding to the load condition, the higher the score and the higher the corresponding priority. Specifically, a plurality of score intervals can be preset, each score interval corresponding to one priority, with higher scores corresponding to higher priority levels. In this way, the pieces of allocation information corresponding to the plurality of target cache queues can be stored in a distributed manner according to the priorities corresponding to the cluster nodes, which facilitates data management and improves distribution efficiency.
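One possible reading of steps 31 to 34 is sketched below: each target cache queue is scored by its current load (here approximated by its Redis list length), the queues are ordered by that score to form priorities, and the allocation information is then written out in priority order. The key layout, the scoring rule and the use of a single redis-py connection in place of a full Redis cluster client are assumptions, not details from the patent.

```python
import redis  # redis-py client; key names, scoring rule and data layout are illustrative assumptions

r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def store_allocation_info(allocations: dict[str, str], target_queues: list[str]) -> None:
    """Store user allocation info (user id -> allocated result) into the target cache queues,
    ordering the queues by a load-based score as in steps 31-34."""
    # Score each target cache queue by its current load, here approximated by its list length;
    # the larger the load, the higher the score and therefore the higher the priority.
    by_priority = sorted(target_queues, key=lambda q: r.llen(q), reverse=True)
    # Write the allocation info out in priority order, cycling through the queues so that
    # every target cache queue ends up holding at least one piece of allocation info.
    for i, (user_id, result) in enumerate(allocations.items()):
        queue = by_priority[i % len(by_priority)]
        r.rpush(queue, f"{user_id}:{result}")
```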
Optionally, after the plurality of users complete the preset event, preset event result data corresponding to the plurality of users may be obtained from the plurality of target cache queues and sent to the plurality of users.
It can be seen that the data distribution method based on a Redis cluster described in this embodiment of the present application is applied to a server and comprises: obtaining a plurality of preset cache queues, each preset cache queue comprising at least one piece of preset event result data; distributing data to a plurality of users according to the preset cache queues to obtain a plurality of pieces of user distribution information corresponding to the users, wherein each user corresponds to one preset cache queue and each piece of user distribution information corresponds to one piece of preset event result data; and, if the distribution of the preset event result data in the preset cache queues is completed, storing the pieces of user distribution information into a preset Redis cluster, the Redis cluster comprising a plurality of target cache queues, each of which corresponds to at least one piece of user distribution information. In this way, the preset event result data distributed in advance can be stored in the preset cache queues, the preset event result data are then distributed to the users and finally stored in the target cache queues of the preset Redis cluster. Distribution and storage of the data are realized through two different kinds of cache queues, the operation and maintenance pressure of the server is reduced, and the data distribution efficiency is improved.
In accordance with the foregoing, please refer to fig. 2, where fig. 2 is a flowchart illustrating a method for data distribution based on a Redis cluster, which is applied to a server and disclosed in an embodiment of the present application, the method for data distribution based on a Redis cluster may include the following steps:
201. the method comprises the steps of obtaining a plurality of preset buffer queues, wherein each preset buffer queue comprises at least one preset event result datum.
202. And when the preset instruction aiming at the preset event is triggered by any user i in the plurality of users, controlling the current count value of the preset counter to be increased by 1 to obtain a target count value m aiming at the user i, wherein i and m are positive integers.
203. And acquiring the number n corresponding to the preset buffer queues, wherein n is a positive integer.
204. And performing modulus operation on the number n of the preset buffer queues based on the target count value m to obtain a target preset buffer queue k, wherein k is an integer.
205. And distributing preset event result data corresponding to the target preset cache queue k to the user i to obtain a plurality of user distribution information corresponding to the plurality of users, wherein each user corresponds to one preset cache queue, and each user distribution information corresponds to one preset event result data.
206. If the distribution of the preset event result data in the preset cache queues is completed, storing the user distribution information into a preset Redis cluster, wherein the Redis cluster comprises a plurality of target cache queues, and each target cache queue corresponds to at least one user distribution information.
The specific description of the steps 201 to 206 may refer to corresponding steps of the data allocation method based on the Redis cluster described in fig. 1, and will not be described herein again.
It can be seen that, in the data distribution method based on a Redis cluster described in this embodiment of the present application, a plurality of preset cache queues are obtained, each comprising at least one piece of preset event result data. When any user i of a plurality of users is detected to trigger the preset instruction for the preset event, the current count value of the preset counter is controlled to be increased by 1 to obtain a target count value m for the user i, where i and m are positive integers. The number n corresponding to the plurality of preset cache queues is obtained, n being a positive integer, and a modulo operation is performed on n with the target count value m to obtain a target preset cache queue k, k being an integer. The preset event result data corresponding to the target preset cache queue k are allocated to the user i, so that a plurality of pieces of user allocation information corresponding to the plurality of users are obtained, wherein each user corresponds to one preset cache queue and each piece of user allocation information corresponds to one piece of preset event result data. If the distribution of the preset event result data in the preset cache queues is completed, the pieces of user allocation information are stored into a preset Redis cluster comprising a plurality of target cache queues, each of which corresponds to at least one piece of user allocation information. Thus, when the preset instruction for the preset event is triggered, the target preset cache queue corresponding to any user can be determined from the plurality of preset cache queues, the distribution of the preset event result data is realized, and the user allocation information obtained after distribution is stored into the preset Redis cluster. Distribution and storage of the data are thereby realized, which is beneficial to reducing the operation and maintenance pressure of the server and ensures that the server operates normally under massive real-time access.
In accordance with the foregoing, please refer to fig. 3, where fig. 3 is a flowchart illustrating a method for data distribution based on a Redis cluster, which is applied to a server and disclosed in an embodiment of the present application, the method for data distribution based on a Redis cluster may include the following steps:
301. in a preset initial period, an event result list is obtained, wherein the event result list comprises a plurality of result types.
302. And calculating a result probability value corresponding to each result type to obtain a plurality of result probability values corresponding to a plurality of result types, wherein each result type corresponds to one result probability value.
303. And pre-distributing the result types according to the result probability values to obtain preset event result data.
304. And storing the plurality of preset event result data into the plurality of preset buffer queues.
305. And when the preset instruction aiming at the preset event is triggered by any user i in the plurality of users, controlling the current count value of the preset counter to be increased by 1 to obtain a target count value m aiming at the user i, wherein i and m are positive integers.
306. And acquiring the number n corresponding to the preset buffer queues, wherein n is a positive integer.
307. And performing modulus operation on the number n of the preset buffer queues based on the target count value m to obtain a target preset buffer queue k, wherein k is an integer.
308. And distributing preset event result data corresponding to the target preset cache queue k to the user i to obtain a plurality of user distribution information corresponding to the plurality of users, wherein each user corresponds to one preset cache queue, and each user distribution information corresponds to one preset event result data.
309. If the distribution of the preset event result data in the preset cache queues is completed, storing the user distribution information into a preset Redis cluster, wherein the Redis cluster comprises a plurality of target cache queues, and each target cache queue corresponds to at least one user distribution information.
For the detailed description of steps 301 to 309, reference may be made to corresponding steps of the data allocation method based on the Redis cluster described in fig. 1, and details are not repeated here.
It can be seen that the data distribution method based on a Redis cluster described in the embodiments of the present application obtains an event result list in a preset initial period, the event result list comprising a plurality of result types, calculates a result probability value corresponding to each result type to obtain a plurality of result probability values, each result type corresponding to one result probability value, pre-allocates the result types according to the result probability values to obtain a plurality of preset event result data, and stores the preset event result data into a plurality of preset cache queues. When any user i of a plurality of users is detected to trigger the preset instruction for the preset event, the current count value of the preset counter is increased by 1 to obtain a target count value m for the user i, where i and m are positive integers. The number n corresponding to the preset cache queues is obtained, n being a positive integer, a modulo operation is performed on n with the target count value m to obtain a target preset cache queue k, k being an integer, and the preset event result data corresponding to the target preset cache queue k are allocated to the user i to obtain a plurality of pieces of user allocation information corresponding to the users, wherein each user corresponds to one preset cache queue and each piece of user allocation information corresponds to one piece of preset event result data. If the distribution of the preset event result data in the preset cache queues is completed, the pieces of user allocation information are stored into a preset Redis cluster comprising a plurality of target cache queues, each of which corresponds to at least one piece of user allocation information. In this way, the result types can be distributed in advance through pre-allocation to obtain the preset event result data, which are stored in the preset cache queues; when the preset instructions of the users are triggered, the pre-allocated preset event result data are distributed, and the resulting user allocation information is stored into one Redis cluster. Distribution and storage of the data are realized through two different kinds of cache queues, which is beneficial to reducing the operation and maintenance pressure of the server.
In accordance with the above, please refer to fig. 4. Fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 4, the server includes a processor, a communication interface, a memory and one or more programs, where the processor, the communication interface and the memory are connected to each other. The memory is used for storing a computer program, the computer program includes program instructions, the processor is configured to call the program instructions, and the one or more programs include instructions for performing the following steps:
acquiring a plurality of preset buffer queues, wherein each preset buffer queue comprises at least one preset event result data;
according to the preset cache queues, data distribution is carried out on a plurality of users to obtain a plurality of user distribution information corresponding to the users, wherein each user corresponds to one preset cache queue, and each user distribution information corresponds to one preset event result data;
if the distribution of the preset event result data in the preset cache queues is completed, storing the user distribution information into a preset Redis cluster, wherein the Redis cluster comprises a plurality of target cache queues, and each target cache queue corresponds to at least one user distribution information.
It can be seen that the server described in this embodiment of the present application may obtain a plurality of preset cache queues, each comprising at least one piece of preset event result data, and perform data allocation to a plurality of users according to the preset cache queues to obtain a plurality of pieces of user allocation information corresponding to the users, where each user corresponds to one preset cache queue and each piece of user allocation information corresponds to one piece of preset event result data. If the preset event result data in the preset cache queues are completely allocated, the pieces of user allocation information are stored into a preset Redis cluster comprising a plurality of target cache queues, each of which corresponds to at least one piece of user allocation information. In this way, the pre-allocated preset event result data can be stored in the preset cache queues, the preset event result data are then distributed to the users and stored in the target cache queues of the preset Redis cluster, and the distribution and storage of the data are realized through two different kinds of cache queues, so that the operation and maintenance pressure of the server is reduced and the data distribution efficiency is improved.
In one possible example, the program is configured to, in terms of data distribution to a plurality of users according to the plurality of preset buffer queues, execute the following steps:
counting the sequence of preset events corresponding to each user in the plurality of users through a preset counter to obtain a plurality of count values corresponding to the plurality of users, wherein each user corresponds to one count value;
and matching the preset event result data corresponding to each user in the plurality of users from the plurality of preset cache queues according to the plurality of count values.
In one possible example, in counting, by a preset counter, an order for a preset event corresponding to each of the plurality of users to obtain a plurality of count values corresponding to the plurality of users, the program is configured to execute the following steps:
and when the preset instruction aiming at the preset event is triggered by any user i in the plurality of users, controlling the current count value of the preset counter to be increased by 1 to obtain a target count value m aiming at the user i, wherein i and m are positive integers.
In one possible example, in matching the preset event result data corresponding to each of the plurality of users from the plurality of preset buffer queues according to the plurality of count values, the program is configured to execute the following steps:
acquiring the number n corresponding to the preset cache queues, wherein n is a positive integer;
performing modulo operation on the number n of the plurality of preset cache queues based on the target count value m to obtain a target preset cache queue k, wherein k is an integer;
and distributing preset event result data corresponding to the target preset cache queue k to the user i.
In a possible example, after the target preset buffer queue k is obtained by taking the modulus of the number n of the plurality of preset buffer queues based on the target count value m, the program is configured to execute the following steps:
if the target preset cache queue k does not include any preset event result data, deleting the target preset cache queue k, and executing the step of matching the preset event result data corresponding to each of the plurality of users from the plurality of preset cache queues for the user i.
In one possible example, in storing the plurality of user allocation information to a preset Redis cluster, the program is configured to execute the following steps:
acquiring load conditions of a plurality of cluster nodes corresponding to the preset Redis cluster to obtain a plurality of load conditions;
based on the multiple load conditions, scoring the target cache queue corresponding to each cluster node to obtain multiple scoring values;
determining a plurality of priorities corresponding to the plurality of target cache queues according to the plurality of scoring values, wherein each target cache queue corresponds to one priority;
and storing a plurality of pieces of distribution information corresponding to the plurality of target cache queues according to the plurality of priorities.
In one possible example, prior to said obtaining the plurality of preset cache queues, the program further includes instructions for:
acquiring an event result list in a preset initial period, wherein the event result list comprises a plurality of result types;
calculating a result probability value corresponding to each result type to obtain a plurality of result probability values corresponding to a plurality of result types, wherein each result type corresponds to one result probability value;
pre-distributing the result types according to the result probability values to obtain preset event result data;
and storing the plurality of preset event result data into the plurality of preset buffer queues.
The above description has introduced the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to implement the above functions, the server includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments provided herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the server may be divided into the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In accordance with the foregoing, please refer to fig. 5, where fig. 5 is a schematic structural diagram of a data distribution apparatus based on a Redis cluster, applied to a server, the apparatus including: an acquisition unit 501, an allocation unit 502, and a storage unit 503, wherein:
the acquiring unit 501 is configured to acquire a plurality of preset buffer queues, where each preset buffer queue includes at least one preset event result data;
the allocating unit 502 is configured to perform data allocation to multiple users according to the multiple preset buffer queues to obtain multiple user allocation information corresponding to the multiple users, where each user corresponds to one preset buffer queue, and each user allocation information corresponds to one preset event result data;
the storage unit 503 is configured to store the plurality of user allocation information into a preset Redis cluster if the plurality of preset event result data in the plurality of preset cache queues are completely allocated, where the Redis cluster includes a plurality of target cache queues, and each target cache queue corresponds to at least one user allocation information.
It can be seen that the data distribution device based on a Redis cluster described in the embodiments of the present application is applied to a server. The device may obtain a plurality of preset cache queues, each comprising at least one piece of preset event result data, and perform data distribution to a plurality of users according to the preset cache queues to obtain a plurality of pieces of user distribution information corresponding to the users, where each user corresponds to one preset cache queue and each piece of user distribution information corresponds to one piece of preset event result data. If the distribution of the preset event result data in the preset cache queues is completed, the pieces of user distribution information are stored into a preset Redis cluster comprising a plurality of target cache queues, each of which corresponds to at least one piece of user distribution information. In this way, the pre-allocated preset event result data can be stored in the preset cache queues, the preset event result data are then distributed to the users and stored in the target cache queues of the preset Redis cluster, and the distribution and storage of the data are realized through two different kinds of cache queues, which is beneficial to reducing the operation and maintenance pressure of the server and improving the efficiency of data distribution.
In a possible example, in terms of performing data allocation to multiple users according to the multiple preset buffer queues, the allocating unit 502 may be specifically configured to:
counting the sequence of preset events corresponding to each user in the plurality of users through a preset counter to obtain a plurality of count values corresponding to the plurality of users, wherein each user corresponds to one count value;
and matching the preset event result data corresponding to each user in the plurality of users from the plurality of preset cache queues according to the plurality of count values.
In a possible example, in counting, by a preset counter, an order of a preset event corresponding to each of the plurality of users to obtain a plurality of count values corresponding to the plurality of users, the allocating unit 502 may be further configured to:
and when the preset instruction aiming at the preset event is triggered by any user i in the plurality of users, controlling the current count value of the preset counter to be increased by 1 to obtain a target count value m aiming at the user i, wherein i and m are positive integers.
In a possible example, in terms of matching the preset event result data corresponding to each of the plurality of users from the plurality of preset buffer queues according to the plurality of count values, the allocating unit 502 may be further configured to:
acquiring the number n corresponding to the preset cache queues, wherein n is a positive integer;
performing modulo operation on the number n of the plurality of preset cache queues based on the target count value m to obtain a target preset cache queue k, wherein k is an integer;
and distributing preset event result data corresponding to the target preset cache queue k to the user i.
In a possible example, in terms of storing the plurality of user allocation information to a preset Redis cluster, the storage unit 503 may be specifically configured to:
acquiring load conditions of a plurality of cluster nodes corresponding to the preset Redis cluster to obtain a plurality of load conditions;
based on the multiple load conditions, scoring the target cache queue corresponding to each cluster node to obtain multiple scoring values;
determining a plurality of priorities corresponding to the plurality of target cache queues according to the plurality of scoring values, wherein each target cache queue corresponds to one priority;
and storing a plurality of pieces of distribution information corresponding to the plurality of target cache queues according to the plurality of priorities.
Embodiments of the present application also provide a computer-readable storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any of the Redis cluster-based data distribution methods described in the above method embodiments.
Embodiments of the present application further provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform part or all of the steps of any of the Redis cluster-based data distribution methods as described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a memory, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned memory comprises: various media capable of storing program codes, such as a usb disk, a read-only memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and the like.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash disk, ROM, RAM, magnetic or optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A data distribution method based on Redis cluster is applied to a server, and the method comprises the following steps:
acquiring a plurality of preset cache queues, wherein each preset cache queue comprises at least one preset event result data;
according to the preset cache queues, data distribution is carried out on a plurality of users to obtain a plurality of user distribution information corresponding to the users, wherein each user corresponds to one preset cache queue, and each user distribution information corresponds to one preset event result data;
if the distribution of the preset event result data in the preset cache queues is completed, storing the user distribution information into a preset Redis cluster, wherein the Redis cluster comprises a plurality of target cache queues, and each target cache queue corresponds to at least one user distribution information.
2. The method according to claim 1, wherein the performing data distribution on the plurality of users according to the plurality of preset cache queues comprises:
counting, by a preset counter, the order of the preset event corresponding to each of the plurality of users to obtain a plurality of count values corresponding to the plurality of users, wherein each user corresponds to one count value;
and matching, from the plurality of preset cache queues, the preset event result data corresponding to each of the plurality of users according to the plurality of count values.
3. The method of claim 2, wherein counting, by a preset counter, an order of preset events for each of the plurality of users to obtain a plurality of count values for the plurality of users comprises:
when any user i of the plurality of users triggers a preset instruction for the preset event, controlling the current count value of the preset counter to increase by 1 to obtain a target count value m for the user i, wherein i and m are positive integers.
4. The method according to claim 3, wherein the matching, from the plurality of preset cache queues, the preset event result data corresponding to each of the plurality of users according to the plurality of count values comprises:
acquiring the number n of the plurality of preset cache queues, wherein n is a positive integer;
taking the target count value m modulo the number n of the plurality of preset cache queues to obtain a target preset cache queue k, wherein k is an integer;
and distributing preset event result data corresponding to the target preset cache queue k to the user i.
5. The method according to claim 4, wherein after the taking the target count value m modulo the number n of the plurality of preset cache queues to obtain the target preset cache queue k, the method further comprises:
if the target preset cache queue k does not comprise any preset event result data, deleting the target preset cache queue k, and re-executing, for the user i, the step of matching the preset event result data from the plurality of preset cache queues.
6. The method according to claim 1, wherein the storing the plurality of user allocation information to a preset Redis cluster comprises:
acquiring load conditions of a plurality of cluster nodes corresponding to the preset Redis cluster to obtain a plurality of load conditions;
scoring, based on the plurality of load conditions, the target cache queue corresponding to each cluster node to obtain a plurality of scoring values;
determining a plurality of priorities corresponding to the plurality of target cache queues according to the plurality of scoring values, wherein each target cache queue corresponds to one priority;
and storing, according to the plurality of priorities, the plurality of user distribution information corresponding to the plurality of target cache queues.
7. The method according to claim 1, wherein before the acquiring the plurality of preset cache queues, the method further comprises:
acquiring an event result list in a preset initial period, wherein the event result list comprises a plurality of result types;
calculating a result probability value corresponding to each result type to obtain a plurality of result probability values corresponding to the plurality of result types, wherein each result type corresponds to one result probability value;
pre-distributing the plurality of result types according to the plurality of result probability values to obtain a plurality of preset event result data;
and storing the plurality of preset event result data into the plurality of preset cache queues.
8. A data distribution apparatus based on a Redis cluster, the apparatus comprising: an acquisition unit, a distribution unit, and a storage unit, wherein,
the acquisition unit is configured to acquire a plurality of preset cache queues, wherein each preset cache queue comprises at least one preset event result data;
the distribution unit is configured to perform data distribution on a plurality of users according to the plurality of preset cache queues to obtain a plurality of user distribution information corresponding to the plurality of users, wherein each user corresponds to one preset cache queue, and each user distribution information corresponds to one preset event result data;
the storage unit is configured to store the plurality of user distribution information into a preset Redis cluster if the distribution of the plurality of preset event result data in the plurality of preset cache queues is completed, wherein the Redis cluster comprises a plurality of target cache queues, and each target cache queue corresponds to at least one user distribution information.
9. A server comprising a processor, an input device, an output device, and a memory, the processor, the input device, the output device, and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-7.
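
The sketches below are illustrative only and form no part of the claims. They show one possible reading of the claimed steps in Python with the redis-py client; all key names, queue names, probability values, and scoring rules are assumptions introduced for the example, not details taken from the application.

Claims 2 to 5 describe the matching logic: a preset counter is incremented when a user triggers the preset event, the resulting count value m is taken modulo the number n of preset cache queues to select a target queue k, one preset event result datum is popped from that queue for the user, and an empty queue is deleted before matching is retried. A minimal sketch of that loop, assuming the result data are stored as Redis list entries under hypothetical keys:

```python
from typing import List, Optional

import redis

COUNTER_KEY = "event:counter"  # hypothetical key for the preset counter (claim 3)


def match_result_for_user(r: "redis.Redis", queue_keys: List[str]) -> Optional[str]:
    """Match one preset event result datum to the triggering user (claims 3-5)."""
    while queue_keys:
        # Claim 3: each triggered preset event adds 1 to the preset counter.
        m = r.incr(COUNTER_KEY)          # target count value m
        n = len(queue_keys)              # number n of preset cache queues
        k = m % n                        # claim 4: m modulo n selects queue k
        result = r.rpop(queue_keys[k])   # pop one preset event result datum
        if result is not None:
            return result                # claim 4: distribute it to user i
        # Claim 5: queue k holds no result data, so delete it and match again.
        r.delete(queue_keys[k])
        queue_keys.pop(k)
    return None                          # every preset cache queue is exhausted


# Example usage (assumes a reachable Redis instance and pre-filled list keys):
# client = redis.Redis(decode_responses=True)
# prize = match_result_for_user(client, ["queue:0", "queue:1", "queue:2"])
```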
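
Claim 6 orders the target cache queues by scores derived from the load of the corresponding cluster nodes before the user distribution information is written into the Redis cluster. The claim does not fix the scoring criteria or the storage policy; the sketch below uses an invented rule (lower load gives a higher score) and a simple round-robin write over the priority-sorted queues:

```python
from typing import Dict, List, Tuple


def prioritize_target_queues(node_load: Dict[str, float]) -> List[str]:
    """Score each node's target cache queue from its load and return the
    queue names in descending priority order (claim 6); names are invented."""
    max_load = max(node_load.values()) or 1.0
    scored: List[Tuple[str, float]] = [
        # Hypothetical scoring rule: lightly loaded nodes score higher.
        (f"target_queue:{node}", 1.0 - load / max_load)
        for node, load in node_load.items()
    ]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [queue for queue, _ in scored]


def store_user_distribution(r, ordered_queues: List[str], records: List[str]) -> None:
    """Write user distribution records across the priority-ordered queues
    (round-robin here; the claim leaves the exact policy open)."""
    for i, record in enumerate(records):
        r.lpush(ordered_queues[i % len(ordered_queues)], record)


# Example with made-up load figures:
# queues = prioritize_target_queues({"node-a": 0.2, "node-b": 0.7, "node-c": 0.5})
```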
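
Claim 7 prepares the preset cache queues before distribution starts: a result probability value is computed for each result type in the event result list, and result data are pre-distributed into the queues in proportion to those probabilities. A rough sketch, assuming a fixed batch size and invented result types and probabilities:

```python
import random
from typing import Dict, List


def prefill_preset_queues(r, result_probs: Dict[str, float],
                          queue_keys: List[str], total: int = 1000) -> None:
    """Pre-distribute result types into the preset cache queues (claim 7).

    result_probs maps each result type to its result probability value;
    the values and the fixed total are illustrative assumptions only.
    """
    records: List[str] = []
    for result_type, prob in result_probs.items():
        # Expand each result type in proportion to its probability value.
        records.extend([result_type] * round(total * prob))
    random.shuffle(records)  # avoid long runs of a single result type
    for i, record in enumerate(records):
        # Spread the preset event result data across the queues round-robin.
        r.lpush(queue_keys[i % len(queue_keys)], record)


# Example: an event result list with three result types (client as in the
# first sketch):
# prefill_preset_queues(client,
#                       {"grand_prize": 0.01, "small_prize": 0.19, "no_prize": 0.80},
#                       ["queue:0", "queue:1", "queue:2", "queue:3"])
```
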
CN201911256861.5A 2019-12-10 2019-12-10 Data distribution method based on Redis cluster and related equipment Pending CN111124672A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911256861.5A CN111124672A (en) 2019-12-10 2019-12-10 Data distribution method based on Redis cluster and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911256861.5A CN111124672A (en) 2019-12-10 2019-12-10 Data distribution method based on Redis cluster and related equipment

Publications (1)

Publication Number Publication Date
CN111124672A true CN111124672A (en) 2020-05-08

Family

ID=70498016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911256861.5A Pending CN111124672A (en) 2019-12-10 2019-12-10 Data distribution method based on Redis cluster and related equipment

Country Status (1)

Country Link
CN (1) CN111124672A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111858610A (en) * 2020-07-28 2020-10-30 贝壳技术有限公司 Data line number distribution method and device, storage medium and electronic equipment
CN114816687A (en) * 2021-01-22 2022-07-29 京东方科技集团股份有限公司 Cluster resource control method and device and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination