CN108574645B - Queue scheduling method and device


Info

Publication number
CN108574645B
CN108574645B
Authority
CN
China
Prior art keywords
queue
physical
user
physical queue
queues
Prior art date
Legal status
Active
Application number
CN201710149931.1A
Other languages
Chinese (zh)
Other versions
CN108574645A (en
Inventor
柴东岩
Current Assignee
XFusion Digital Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710149931.1A priority Critical patent/CN108574645B/en
Publication of CN108574645A publication Critical patent/CN108574645A/en
Application granted granted Critical
Publication of CN108574645B publication Critical patent/CN108574645B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H04L47/522 Dynamic queue service slot or variable bandwidth allocation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H04L47/527 Quantum based scheduling, e.g. credit or deficit based scheduling or token bank


Abstract

A queue scheduling method and device are provided to solve the prior-art problem that a user queue occupies fixed queue resources, resulting in low resource utilization. The method includes: collecting, at each monitoring time point, running parameter values of N user queues that have a binding relationship with a first physical queue; obtaining a running statistical result of the first physical queue according to the running parameter values; and, when a preset statistical parameter value included in the running statistical result of the first physical queue is greater than a first threshold, rebinding at least one user queue that has a binding relationship with the first physical queue to a target physical queue, where the target physical queue is a physical queue in a physical queue group, in the first queue resource pool, whose processing capability is higher than that of the first physical queue group. With the method provided in the embodiments of the application, queue resources are therefore not fixed when a queue is created; the binding relationship is adjusted according to the running statistical result, and resource utilization is effectively improved.

Description

Queue scheduling method and device
Technical Field
The present application relates to the field of cloud computing technologies, and in particular, to a queue scheduling method and apparatus.
Background
Message queuing is a middleware technology for passing messages between services; the messages typically carry information such as event notifications and service requests, and are used to coordinate data exchange and communication between different systems or service modules. As a basic service commonly used by information systems, a message queue decouples the production process from the consumption process, improves system response speed, and smooths traffic peaks (peak clipping and valley filling); it is widely used in business scenarios such as orders, logs, alarms, stream computing, and task scheduling. Kafka, RabbitMQ, ActiveMQ, and the like are common message queue systems.
Message services have become one of the basic services of mainstream cloud service providers and play an increasingly important role in many internet and enterprise applications. They are mainly used in the following scenarios:
(1) Service decoupling: parts of a service that depend on other systems but are non-core or less important can be notified by messages, without synchronously waiting for the processing results of those other systems. For example, order processing on an e-commerce website can put the order information into a message queue, and subsequent steps such as warehouse picking and delivery can read the task information from the message queue and then execute it.
(2) Eventual data consistency: the states of two systems eventually stay consistent, that is, either both operations succeed or both fail. For example, a transaction system is notified with high reliability, so that eventual consistency of transactions across systems is achieved while reducing the implementation cost. In an online shopping mall, a user applies a coupon when placing an order: the coupon system locks the coupon, and the inventory system decreases the stock by one. Locking the coupon and decreasing the stock are performed by two different systems, so either operation may fail, yet the data must remain consistent. When one operation fails, for example when locking the coupon fails, an 'order failed' message is sent to the message queue; after the other system receives the message, it can perform a rollback operation that increases the stock by one, thereby achieving eventual data consistency.
(3) Peak shaving and flow control: when upstream and downstream systems have different processing capabilities, the message service can buffer the data exchanged between them and provide message accumulation capability, and the downstream system processes the messages when it has the capacity to do so. This reduces congestion and waiting caused by the capability mismatch and lowers system complexity.
(4) Log synchronization: an application synchronizes log information to the message service in a reliable, asynchronous manner, and the logs are then analyzed in real time or offline by other components. This can also be used to collect key log information for application monitoring.
In a cloud computing environment, factors such as physical resource scale and cost limit the number of queues a message queue system can support, as well as the number of requests, the throughput, and the number of messages a single queue can support. To serve queue access requests from a large number of users, methods for improving the resource utilization efficiency of the message queue system must be explored.
In existing message queue systems, a user queue uses a static queue resource allocation mode. Specifically, when a user creates a queue, the system allocates the corresponding queue resources to that user, so the queue resources are determined at creation time; the user produces and consumes messages based on those resources, which are exclusively occupied by that user. As shown in fig. 1, queues 1, 2, 3, and 4 are each allocated corresponding queue resources when they are created. Under this static allocation model, user queues are coupled to physical resources and occupy a fixed amount of queue resources regardless of the actual number of message requests, the throughput, or the number of messages of each user queue. In a cloud environment there are a large number of tenants and many user queues need to be created at the same time, yet the number of queues the system can support is limited; this limits the scale of the cloud service and leads to low resource utilization efficiency.
Disclosure of Invention
The embodiments of the application provide a queue scheduling method and device, which are used to solve the prior-art problem that a user queue occupies fixed queue resources, resulting in low resource utilization.
In a first aspect, an embodiment of the application provides a queue scheduling method applied to a message queue system, where the message queue system includes queue resource pools of at least one queue type, each queue resource pool includes at least two physical queue groups with different processing capabilities, and each physical queue group includes at least one physical queue. The method includes: collecting, at each monitoring time point, running parameter values of N user queues having a binding relationship with a first physical queue, and obtaining a running statistical result of the first physical queue according to the running parameter values, where the first physical queue belongs to a first physical queue group in a first queue resource pool, the first queue resource pool is any queue resource pool in the message queue system, the first physical queue group is any physical queue group in the first queue resource pool, and N is a positive integer; and, when a preset statistical parameter value included in the running statistical result of the first physical queue is greater than a first threshold, rebinding at least one user queue having a binding relationship with the first physical queue to a target physical queue, where the target physical queue is a physical queue in a physical queue group whose processing capability is higher than that of the first physical queue group in the first queue resource pool. Compared with the prior art, in which a user queue occupies fixed queue resources from the moment it is created and resource utilization is therefore low, the method provided in this embodiment does not fix the queue resources at creation time: the running parameter values of each user queue are collected, the running statistical result of each physical queue is monitored, and when the preset statistical parameter value included in the running statistical result of a physical queue exceeds the first threshold, at least one user queue bound to that physical queue is rebound to the target physical queue. This effectively improves resource utilization and keeps the initial investment low; when a user queue first comes online, no precise resource planning is required in advance, and the physical queue it is bound to can be adjusted gradually according to the actual running condition of the user queue.
In one possible design, the queue types of the N user queues are all the same as the queue type of the first queue resource pool. When a queue resource pool is allocated to a user queue, the pool is therefore chosen according to the queue type of the user queue, which gives the system higher overall performance and more rational resource utilization.
In one possible design, the method further includes: when the preset statistical parameter value included in the running statistical result of the first physical queue is less than or equal to a second threshold, allowing a new user queue to be bound to the first physical queue, where the first threshold is greater than the second threshold. This effectively improves resource utilization; in addition, when the preset statistical parameter value included in the running statistical result of a physical queue is greater than the first threshold, the capacity of the physical queue can be expanded.
In one possible design, when the preset statistical parameter value included in the running statistical result of the first physical queue is greater than the first threshold, rebinding at least one user queue having a binding relationship with the first physical queue to the target physical queue includes: after the preset statistical parameter value is determined to be greater than the first threshold for the first time, continuing to count M preset statistical parameter values corresponding to the following M consecutive monitoring time points, where M is a positive integer; and, when K of the M preset statistical parameter values are all greater than the first threshold, rebinding at least one user queue having a binding relationship with the first physical queue to the target physical queue, where K is a positive integer. If the rebinding process were triggered whenever a single measurement of the preset statistical parameter value exceeded the first threshold, a large number of rebinding processes might be triggered, some of them unnecessary, which could seriously affect the overall performance of the system. The method provided in this embodiment avoids such unnecessary rebinding and improves the stability of the system.
In one possible design, the preset statistical parameter value is the data throughput of the first physical queue or the number of message requests of the first physical queue. The embodiments of the application thus offer several choices of preset statistical parameter value for determining whether the user queues currently bound to a physical queue need to be rebound.
In one possible design, rebinding at least one user queue having a binding relationship with the first physical queue to the target physical queue includes: selecting, from the N user queues having a binding relationship with the first physical queue, the first S user queues sorted by throughput in descending order or by message request number in descending order, and rebinding them to the target physical queue, where S is a positive integer. In this way, the user queues that contribute most to the load of the first physical queue are rebound to the target physical queue, which improves resource utilization.
In one possible design, rebinding at least one user queue having a binding relationship with the first physical queue to the target physical queue includes: for the ith user queue in the at least one user queue having a binding relationship with the first physical queue, where the ith user queue is any one of the at least one user queue, performing the following steps: setting the state parameter of the ith user queue from a first state to a second state, where the first state indicates that the ith user queue is in a normal binding state and the second state indicates that the ith user queue is in a rebinding state; sending production message requests for the ith user queue to the target physical queue; sending consumption message requests for the ith user queue to the first physical queue until all messages stored in the first physical queue by the ith user queue have been read out; after all messages stored in the first physical queue by the ith user queue have been read out, sending consumption message requests for the ith user queue to the target physical queue; and setting the state parameter of the ith user queue from the second state back to the first state. With this method, disorder in message processing during the rebinding process is avoided and normal operation of the message queue system is ensured.
In a second aspect, the present application provides a queue scheduling apparatus for performing the method of the first aspect or any possible design of the first aspect. In particular, the apparatus comprises means for performing the method of the first aspect or any possible design of the first aspect described above.
In a third aspect, the present application provides a server comprising a communication interface, a processor, and a memory. The communication interface, the processor and the memory can be connected through a bus system. The memory is used for storing programs, instructions or codes, and the processor is used for executing the programs, instructions or codes in the memory to realize the method of the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, the present application provides a message queue system, including: a physical storage layer, a metadata manager, a resource scheduler including the server described in the third aspect, and an access interface layer.
In a fifth aspect, the present application provides a computer-readable storage medium or a computer program product storing a computer program, the computer program including instructions for executing the method of the first aspect or any possible implementation of the first aspect.
Drawings
FIG. 1 is a static queue resource allocation model in the background art of the present application;
FIG. 2 is an architecture diagram of a message queue system according to an embodiment of the present application;
FIG. 3 is a diagram illustrating queue resource pool partitioning in an embodiment of the present application;
fig. 4 is a schematic diagram illustrating the division of a group of physical queues in any queue resource pool in the embodiment of the present application;
FIG. 5 is a flowchart illustrating an exemplary queue scheduling method according to the present invention;
FIG. 6 is a flowchart illustrating a queue scheduling method according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating an exemplary structure of a queue scheduling apparatus according to an embodiment of the present application;
FIG. 8 is a second schematic diagram of a queue scheduling apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a server in an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 2 is a diagram showing an architecture of a message queue system in the embodiment of the present application. The message queue system includes: a physical storage layer, a metadata manager, a resource scheduler, and an access interface layer.
The physical storage layer includes a plurality of nodes for storing physical queues, for example, queue servers (Brokers); each node may be a physical machine or a virtual machine, and the number of nodes can be expanded according to actual needs, for example, as a Broker cluster composed of multiple nodes. The resources of the physical storage layer are first divided into one or more queue resource pools according to queue type. For example, fig. 3 shows three queue resource pools, a high-reliability queue resource pool, a high-performance queue resource pool, and a relatively-reliable queue resource pool, which are respectively used for storing the messages of queues of different queue types. Dividing queue resource pools by queue type allows queues of the same type to be organized together, so that different (sometimes even conflicting) requirements are met by separate queue resource pools instead of a single one; this gives the system higher overall performance and more rational resource utilization. When a user queue is created, the queue resource pool to which it is allocated is determined according to its configured queue type, which is a static allocation. Therefore, the queue type of a user queue is the same as that of the queue resource pool in which it is located. As long as the queue type of the user queue does not change, the user queue does not migrate between different queue resource pools.
It should be appreciated that in a particular traffic scenario, the resources of the physical storage layer may serve as only one queue resource pool, i.e. only one queue type is supported, e.g. a high-reliability queue resource pool supporting only high-reliability queue types or a high-throughput queue resource pool supporting only high-throughput queue types.
The metadata manager, also called a metadata server cluster, may include a plurality of metadata servers (MetaServers) for storing metadata information such as the queue type of each user queue, the binding relationship between user queues and physical queues, and queue resource pool configuration information. For example, the metadata manager may be implemented using a relational database or a NoSQL database.
The resource Scheduler is a core component in the embodiment of the present application, and may include multiple scheduling servers (Scheduler servers) for implementing the queue scheduling method shown in fig. 5, establishing a binding relationship between a user queue and a physical queue, and acquiring an operation parameter value of the user queue. The resource scheduler may be deployed independently, or may be deployed together with the access server at a node, or the resource scheduler and the metadata manager are located at the same node. The resource scheduler may be a physical machine or a virtual machine.
It should be understood that all physical queues in the same queue resource pool also have the same queue type and provide similar service capability externally. After a user queue is allocated to the queue resource pool of the same queue type, the physical queue with the lowest current load can be selected according to a minimum-load-first algorithm, and the binding relationship between the user queue and that physical queue is established. It should be noted that the embodiments of the present application do not specifically limit the algorithm used to select a physical queue for a user queue.
It should be appreciated that the message queue system pre-allocates the resources of each physical queue, so a physical queue may exist before any binding relationship with a user queue has been established. In addition, other parameters, such as the maximum user queue capacity of a single physical queue, can be configured according to the physical resources.
The access interface layer is the entry point through which users access the message queue service; it may include multiple access servers (Access Servers) for creating queues, deleting queues, producing messages, consuming messages, and so on, according to user requirements.
In the embodiments of the present application, a user queue is a queue object used for producing and consuming messages; it may also be called a virtual queue or a logical queue, and no physical resources are directly allocated to it. Physical queues are the actual carriers of messages. Each queue resource pool includes at least two physical queue groups with different processing capabilities, each physical queue group includes at least one physical queue, and the number of physical queues in a group is expanded dynamically according to the actual needs of the system. Physical queue groups with different processing capabilities may be divided according to the number of partitions of their physical queues; for example, as shown in fig. 4, the physical queues in a queue resource pool are divided into three physical queue groups based on three different partition counts, and the higher the partition count, the stronger the concurrent processing capability.
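The resource hierarchy described above (queue resource pools divided by queue type, physical queue groups divided by partition count, and physical queues to which user queues are bound) can be pictured with a minimal sketch such as the following. It is an illustrative model only, not the implementation described in this application; all class and field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PhysicalQueue:
    queue_id: str
    partitions: int                        # more partitions -> stronger concurrent processing
    stat_value: int = 0                    # preset statistical parameter value (e.g. requests/s)
    bound_user_queues: List[str] = field(default_factory=list)

@dataclass
class PhysicalQueueGroup:
    partitions: int                        # every queue in the group has this partition count
    queues: List[PhysicalQueue] = field(default_factory=list)

@dataclass
class QueueResourcePool:
    queue_type: str                        # e.g. "high-reliability", "high-performance"
    groups: Dict[int, PhysicalQueueGroup] = field(default_factory=dict)  # keyed by partition count

@dataclass
class UserQueue:
    name: str
    queue_type: str                        # must match the type of the pool it is placed in
    bound_physical_queue: str = ""         # ID of the physical queue it is currently bound to
    state: str = "NORMAL"                  # "NORMAL" (normal binding) or "REBINDING"
```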
Referring to fig. 5, an embodiment of the present application provides a queue scheduling method, which is applied to the above message queue system, where the message queue system includes queue resource pools of at least one queue type, each queue resource pool includes at least two physical queue groups with different processing capabilities, and each physical queue group includes at least one physical queue, and the method includes:
step 500: and acquiring the operation parameter values of the N user queues having binding relation with the first physical queue at each monitoring time point, and acquiring the operation statistical result of the first physical queue according to the operation parameter values.
The first physical queue belongs to a first physical queue group in a first queue resource pool, the first queue resource pool is any queue resource pool in the message queue system, the first physical queue group is any physical queue group in the first queue resource pool, and N is a positive integer.
Step 510: and when the preset statistical parameter value included in the operation statistical result of the first physical queue is greater than the first threshold value, re-binding at least one user queue having a binding relationship with the first physical queue to the target physical queue.
The target physical queue is a physical queue in a physical queue group, in the first queue resource pool, whose processing capability is higher than that of the first physical queue group.
In one possible design, the preset statistical parameter value is the data throughput of the first physical queue or the number of message requests of the first physical queue.
In a message queue system, the production message interface or the consumption message interface is called each time a message is produced or consumed, and one call is one request. Load is usually measured in requests per second; the larger this number, the higher the requirement on the processing capability of the queue system.
For example, suppose a first physical queue can process at most 30,000 requests per second and the first threshold is set to 70% of those 30,000 requests (i.e., 21,000 message requests). If the user queues currently bound to the first physical queue generate 25,000 requests per second, which exceeds 21,000 requests per second, a user queue can be rebound to a physical queue that can process at most 100,000 requests per second. That physical queue is one of the physical queue group in the first resource pool capable of processing 100,000 requests.
For step 500, the running parameter values of the N user queues having a binding relationship with the first physical queue are collected at each monitoring time point. The parameter values for each user queue may include, but are not limited to: the number of production message requests, the number of consumption message requests, the average message size, the user queue message data throughput, the production concurrency, the consumption concurrency, the message production rate, the message consumption rate, the message backlog count, and the like.
Here, user queue message data throughput = number of production message requests × average message size + number of consumption message requests × average message size.
In addition, the number of user queue requests can be obtained from these parameters: number of user queue requests = number of production message requests + number of consumption message requests.
The monitoring time points may be spaced at intervals such as 1 minute, 15 minutes, or 1 hour, and the values of the above parameters are collected at each point.
For step 500, the operation statistics of the first physical queue are obtained according to the operation parameter values, which may include, but are not limited to, the following operation statistics: physical queue data throughput, number of physical queue message requests, maximum number of single user queue requests, maximum single user queue message data throughput, and the like.
For example, assuming that the first physical queue has a binding relationship with N user queues, the number of production message requests, the number of consumption message requests, and the average message size for each user queue are collected every 1 minute, and the message data throughput of the user queues and the number of user queue requests of the corresponding queues are calculated. Further, according to the obtained operation parameter values of the N user queues, the maximum request quantity of a single user queue and the maximum message data throughput of the single user queue in the N user queues can be screened out, the sum of the message data throughputs of the N user queues is calculated to be used as the data throughput of the first physical queue, and the sum of the request quantities of the N user queues is calculated to be used as the message request quantity of the first physical queue.
In addition, the maximum number of requests of the single physical queue, the maximum message data throughput of the single physical queue, the minimum number of requests of the single physical queue, the minimum message data throughput of the single physical queue and the like can be counted for each physical queue group. The maximum number of requests of the queue group, the maximum message data throughput of the queue group and the like can be counted for each resource pool.
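The collection and aggregation just described can be sketched roughly as follows. The sample fields and the statistics dictionary are assumptions chosen for illustration; they simply follow the formulas above.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class UserQueueSample:
    produce_requests: int        # production message requests in the monitoring interval
    consume_requests: int        # consumption message requests in the monitoring interval
    avg_message_size: int        # average message size in bytes

def user_queue_requests(s: UserQueueSample) -> int:
    # number of user queue requests = production requests + consumption requests
    return s.produce_requests + s.consume_requests

def user_queue_throughput(s: UserQueueSample) -> int:
    # throughput = production requests * avg size + consumption requests * avg size
    return user_queue_requests(s) * s.avg_message_size

def physical_queue_stats(samples: List[UserQueueSample]) -> Dict[str, int]:
    """Aggregate the N bound user queues into the running statistics of one physical queue."""
    return {
        "data_throughput": sum(user_queue_throughput(s) for s in samples),
        "message_requests": sum(user_queue_requests(s) for s in samples),
        "max_single_queue_requests": max(user_queue_requests(s) for s in samples),
        "max_single_queue_throughput": max(user_queue_throughput(s) for s in samples),
    }
```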
In one possible design, for step 510, when re-binding at least one user queue having a binding relationship with the first physical queue to the target physical queue, the selection of the at least one user queue may employ, but is not limited to, the following method:
selecting, from the N user queues having a binding relationship with the first physical queue, the first S user queues sorted by throughput in descending order or by message request number in descending order, and rebinding them to the target physical queue, where S is a positive integer.
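A minimal sketch of this selection rule, assuming each bound user queue is represented by a small record of its latest statistics; the field and function names are not from the original text.

```python
from typing import Dict, List

def select_queues_to_rebind(bound_queues: List[Dict], s: int, by: str = "throughput") -> List[Dict]:
    """Pick the S user queues that load the first physical queue the most.

    `bound_queues` is assumed to be a list of records such as
    {"name": "user-queue-1", "throughput": 120_000, "requests": 2_400}.
    `by` selects the sort key: "throughput" or "requests".
    """
    return sorted(bound_queues, key=lambda q: q[by], reverse=True)[:s]
```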
In one possible design, taking an ith user queue of the at least one user queue having a binding relationship with the first physical queue as an example, the ith user queue is any one of the at least one user queue, and the specific steps of re-binding the ith user queue to the target physical queue are as follows:
firstly, setting the state parameter of the ith user queue from a first state to a second state, wherein the first state is used for indicating that the ith user queue is in a normal binding state, and the second state is used for indicating that the ith user queue is in a rebinding state.
Then, production message requests for the ith user queue are sent to the target physical queue; that is, messages produced for the ith user queue are no longer stored in the first physical queue but in the target physical queue, so the first physical queue no longer receives new production message requests for this user queue.
Meanwhile, because the first physical queue still stores messages of the ith user queue, all of those messages need to be read out. Specifically, consumption message requests for the ith user queue are sent to the first physical queue until all messages stored in the first physical queue by the ith user queue have been read out. At that point the first physical queue no longer holds any messages of the ith user queue, so subsequent consumption message requests for the ith user queue are sent to the target physical queue.
Finally, the state parameter of the ith user queue is set from the second state to the first state.
In addition, the binding relationship between the ith user queue and the target physical queue needs to be recorded, and the previously recorded binding relationship between the ith user queue and the first physical queue needs to be deleted, and specifically, the binding relationship information may be recorded in the metadata manager shown in fig. 2.
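A minimal sketch of the rebinding sequence for a single user queue. The helpers for routing production and consumption requests and for updating the metadata manager are assumed; they only illustrate the order of the steps above.

```python
def rebind_user_queue(user_queue, first_pq, target_pq, metadata):
    # 1. Mark the user queue as being rebound.
    user_queue.state = "REBINDING"

    # 2. From now on, production requests for this user queue go to the target
    #    physical queue, so the first physical queue receives no new messages for it.
    user_queue.produce_to = target_pq

    # 3. Drain: keep consuming from the first physical queue until every message
    #    this user queue stored there has been read out.
    while first_pq.has_pending_messages(user_queue.name):          # assumed helper
        first_pq.consume(user_queue.name)                          # assumed helper

    # 4. Subsequent consumption requests are served by the target physical queue.
    user_queue.consume_from = target_pq

    # 5. Record the new binding and delete the old one in the metadata manager.
    metadata.record_binding(user_queue.name, target_pq.queue_id)   # assumed helper
    metadata.delete_binding(user_queue.name, first_pq.queue_id)    # assumed helper

    # 6. Back to the normal binding state.
    user_queue.state = "NORMAL"
```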
Furthermore, with respect to step 510, if the rebinding process were triggered based on a single calculation showing that the preset statistical parameter value is greater than the first threshold, a large number of rebinding processes might be triggered, some of them unnecessary, which could severely impact the overall performance of the system. To avoid this situation, the following method may be adopted (but the application is not limited to it):
after the preset statistical parameter value is determined to be larger than the first threshold value for the first time, continuously counting M preset statistical parameter values corresponding to the following M continuous monitoring time points, wherein M is a positive integer, when K preset statistical parameter values in the M preset statistical parameter values are all larger than the first threshold value, at least one user queue having a binding relationship with the first physical queue is bound to the target physical queue again, and K is a positive integer.
For example, when all 7 preset statistical parameter values are greater than the first threshold, at least one user queue having a binding relationship with the first physical queue is re-bound to the target physical queue, or when all 3 preset statistical parameter values of the 7 preset statistical parameter values are greater than the first threshold, at least one user queue having a binding relationship with the first physical queue is re-bound to the target physical queue, which may be understood as a fast re-binding policy, or when all 15 preset statistical parameter values are greater than the first threshold, at least one user queue having a binding relationship with the first physical queue is re-bound to the target physical queue, which may be understood as a slow re-binding policy.
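The "fast" and "slow" rebinding policies above amount to a K-of-M check over the monitoring points that follow the first breach of the threshold. A minimal sketch, with the threshold, M, and K treated as configurable assumptions:

```python
from typing import List

def should_rebind(followup_values: List[int], first_threshold: int, m: int, k: int) -> bool:
    """Return True when at least K of the M values sampled after the first breach
    of the threshold are themselves greater than the threshold."""
    window = followup_values[:m]
    return sum(1 for v in window if v > first_threshold) >= k

# Example: a fast policy (3 of the next 7 samples) on per-second request counts.
samples = [22_500, 20_800, 23_100, 19_900, 25_600, 21_900, 20_300]
assert should_rebind(samples, first_threshold=21_000, m=7, k=3) is True
```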
It should be appreciated that if no target physical queue satisfies the condition, for example because the first physical queue already belongs to the physical queue group with the highest processing capability in the first queue resource pool and no group in that pool has higher processing capability than the first physical queue group, then a new physical queue group with higher processing capability (for example, with a higher partition count) may be created in the first queue resource pool and new physical queues may be allocated to it; at least one user queue having a binding relationship with the first physical queue can then be rebound to a physical queue in the new group.
It should be understood that, when the preset statistical parameter value included in the running statistical result of the first physical queue is less than or equal to the preset minimum threshold value, the user queue having a binding relationship with the first physical queue may also be re-bound to one physical queue in the physical queue group with a processing capacity lower than that of the first physical queue group in the first queue resource pool.
For example, if the first physical queue can process at most 30,000 requests per second and the preset minimum threshold is set to 10% of those 30,000 requests (i.e., 3,000 message requests), then when the number of requests per second of the first physical queue is less than 3,000 and K of the M preset statistical parameter values corresponding to the following M consecutive monitoring time points are all less than or equal to 3,000, the user queues of the first physical queue may be rebound to a physical queue in a physical queue group whose processing capability is lower than that of the first physical queue group in the first queue resource pool, and the physical resources occupied by the first physical queue are released.
It should be noted that all the above processes related to the user queue rebinding are automatically triggered by the message queue system.
In addition to determining whether user queues need to be rebound, the preset statistical parameter value included in the running statistical result of the first physical queue can also be used to determine whether new user queues may continue to be bound to the first physical queue.
In one possible design, when a preset statistical parameter value included in the running statistical result of the first physical queue is less than or equal to a second threshold value, a new user queue is allowed to be bound for the first physical queue, and the first threshold value is greater than the second threshold value.
For example, two thresholds are set for the kth physical queue: Threshold_Entry (i.e., the second threshold) and Threshold_Upper (i.e., the first threshold). When the preset statistical parameter value of the kth physical queue reaches Threshold_Entry, no new user queue is allocated to that physical queue; when it reaches Threshold_Upper, the physical queue needs to be expanded, or at least one user queue having a binding relationship with the kth physical queue is rebound to another queue in the same queue resource pool.
The value of Threshold_Entry is smaller than that of Threshold_Upper. Suppose Threshold_Entry is set to 20,000 TPS (i.e., 20,000 requests per second), Threshold_Upper is set to 30,000 TPS (i.e., 30,000 requests per second), and 1,000 user queues are currently allocated to the kth physical queue. When the total number of requests of these 1,000 user queues reaches 21,000 TPS, new user queues are no longer bound to the kth physical queue. Meanwhile, the total number of requests of the existing 1,000 user queues changes dynamically; for example, one of the queues may grow from 500 TPS to 1,000 TPS. When the total number of requests of the 1,000 user queues reaches 31,000 TPS, that is, exceeds Threshold_Upper, at least one user queue having a binding relationship with the kth physical queue is rebound to another queue in the same queue resource pool.
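A minimal sketch of this two-threshold check, reusing the hypothetical names Threshold_Entry and Threshold_Upper from the example; the numbers are the same as above and the function name is an assumption.

```python
THRESHOLD_ENTRY = 20_000   # above this, bind no new user queues to the kth physical queue
THRESHOLD_UPPER = 30_000   # above this, expand the physical queue or rebind some user queues

def decide(current_tps: int) -> str:
    """Decide what the kth physical queue should do at its current total request rate."""
    if current_tps > THRESHOLD_UPPER:
        return "expand-or-rebind"       # move bound user queues to another queue in the pool
    if current_tps > THRESHOLD_ENTRY:
        return "no-new-bindings"        # keep serving, but accept no new user queues
    return "accept-new-bindings"

assert decide(21_000) == "no-new-bindings"
assert decide(31_000) == "expand-or-rebind"
```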
The following describes embodiments of the present application in detail with reference to fig. 6.
S601: a user queue is created.
Specifically, the process of creating a user queue is implemented at the access interface layer.
S602: and allocating a queue resource pool for the user queue.
Specifically, the user queue is allocated to a queue resource pool of the same queue type as the user queue according to the queue type of the user queue.
S603: and allocating a first partition number physical queue group for the user queue.
Specifically, the user queue may be allocated to a physical queue group with a corresponding processing capacity according to the processing requirement of the user queue, for example, when the processing requirement of the user queue is low, the user queue may be allocated to a physical queue group with a low partition number.
S604: and judging whether the first partition number physical queue group has residual resources, if so, executing S606, otherwise, executing S605.
Specifically, it is determined whether there is any physical queue in the first partition number physical queue group that can establish a binding relationship with the user queue; for example, whether there is a physical queue whose preset statistical parameter value is less than or equal to the second threshold may be determined according to the preset statistical parameter value included in the running statistical result of each physical queue in the first partition number physical queue group.
S605: a new physical queue is allocated for the first partition number physical queue group.
S606: and establishing a binding relationship between the user queue and a first physical queue in the first partition number physical queue group.
The first physical queue may be a physical queue with a preset statistical parameter value less than or equal to a second threshold value in the first partition number physical queue group, or a new physical queue in S605.
In addition, when there are several physical queues whose preset statistical parameter values are less than or equal to the second threshold, the physical queue with the lowest load can be selected to establish the binding relationship with the user queue (a minimal sketch of steps S601 to S606 is given after this flow).
S607: and each monitoring time point acquires the operation parameter values of the N user queues having the binding relation with the first physical queue to obtain the operation statistical result of the first physical queue.
S608: and evaluating whether a user queue needing to be bound again exists, if so, executing S609, and otherwise, returning to S607.
S609: and re-binding at least one user queue having a binding relationship with the first physical queue to the target physical queue.
S608 and S609 here are the same as in the method shown in fig. 5, and repeated details are not described again.
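A minimal sketch of steps S601 to S606, assuming the data model sketched earlier (pools keyed by queue type, groups keyed by partition count, a stat_value per physical queue) and hypothetical helpers for allocating physical queues and recording bindings.

```python
def create_and_bind_user_queue(system, user_queue, partition_count: int, threshold_entry: int):
    # S602: place the user queue in the pool whose queue type matches its own.
    pool = system.pools[user_queue.queue_type]

    # S603: pick the physical queue group with the requested partition count.
    group = pool.groups[partition_count]

    # S604: look for physical queues with remaining resources, i.e. whose preset
    # statistical parameter value is still at or below the second threshold.
    candidates = [pq for pq in group.queues if pq.stat_value <= threshold_entry]

    if not candidates:
        # S605: no remaining resources, so allocate a new physical queue for the group.
        pq = system.allocate_physical_queue(group)                 # assumed helper
    else:
        # S606: among the candidates, choose the physical queue with the lowest load.
        pq = min(candidates, key=lambda q: q.stat_value)

    pq.bound_user_queues.append(user_queue.name)
    user_queue.bound_physical_queue = pq.queue_id
    system.metadata.record_binding(user_queue.name, pq.queue_id)   # assumed helper
    return pq
```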
Based on the same concept, the present application further provides a queue scheduling apparatus, which may be configured to execute the method embodiment corresponding to fig. 5, so that the implementation of the queue scheduling apparatus provided in the embodiment of the present application may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 7, an embodiment of the present application provides a queue scheduling apparatus, which is applied to a message queue system, where the message queue system includes queue resource pools of at least one queue type, each queue resource pool includes at least two physical queue groups with different processing capabilities, and each physical queue group includes at least one physical queue, and the apparatus includes:
the acquisition processing unit 710 is configured to acquire, at each monitoring time point, operation parameter values of N user queues having a binding relationship with a first physical queue, and obtain an operation statistical result of the first physical queue according to the operation parameter values, where the first physical queue belongs to a first physical queue group in a first queue resource pool, the first queue resource pool is any queue resource pool in a message queue system, the first physical queue group is any physical queue group in the first queue resource pool, and N is a positive integer;
the scheduling unit 720 is configured to, when a preset statistical parameter value included in the running statistical result of the first physical queue is greater than a first threshold, re-bind at least one user queue having a binding relationship with the first physical queue to a target physical queue, where the target physical queue is one of a group of physical queues having a processing capability higher than that of the first group of physical queues in the first queue resource pool.
In one possible design, the queue types of the N user queues are all the same as the queue type of the first queue resource pool.
In one possible design, the scheduling unit 720 is further configured to:
and when the preset statistical parameter value included in the operation statistical result of the first physical queue is smaller than or equal to a second threshold value, allowing a new user queue to be bound for the first physical queue, wherein the first threshold value is larger than the second threshold value.
In a possible design, when a preset statistical parameter value included in the running statistical result of the first physical queue is greater than a first threshold, the scheduling unit 720 is specifically configured to rebind at least one user queue having a binding relationship with the first physical queue to a target physical queue:
continuously counting M preset statistical parameter values corresponding to the following M continuous monitoring time points after the preset statistical parameter value is determined to be larger than the first threshold for the first time, wherein M is a positive integer;
and when K preset statistical parameter values in the M preset statistical parameter values are all larger than the first threshold value, re-binding at least one user queue having a binding relationship with the first physical queue to a target physical queue, wherein K is a positive integer.
In a possible design, the preset statistical parameter value is a data throughput of the first physical queue or a message request number of the first physical queue.
In a possible design, when at least one user queue having a binding relationship with the first physical queue is re-bound to a target physical queue, the scheduling unit 720 is specifically configured to:
and selecting the first S user queues with the throughput from the N user queues having the binding relationship with the first physical queue from the N user queues with the throughput from large to small or the message request number from large to small to be bound to the target physical queue again, wherein S is a positive integer.
In a possible design, when at least one user queue having a binding relationship with the first physical queue is re-bound to a target physical queue, the scheduling unit 720 is specifically configured to:
aiming at an ith user queue in at least one user queue having a binding relationship with the first physical queue, wherein the ith user queue is any one queue in the at least one user queue, executing:
setting the state parameter of the ith user queue from a first state to a second state, wherein the first state is used for indicating that the ith user queue is in a normal binding state, and the second state is used for indicating that the ith user queue is in a rebinding state;
sending a production message request for the ith user queue to the target physical queue;
sending a consumption message request aiming at the ith user queue to the first physical queue until all messages stored in the first physical queue by the ith user queue are read out;
after all messages stored in the first physical queue by the ith user queue are read out, sending a consumption message request aiming at the ith user queue to the target physical queue;
setting the state parameter of the ith user queue from the second state to the first state.
Further, based on the apparatus shown in fig. 7, an embodiment of the present invention provides a queue scheduling apparatus in which the units are divided further. Referring to fig. 8, the queue scheduling apparatus includes a physical queue manager, a user queue manager, user queue attribute managers, and a queue resource scheduler.
The physical queue manager, the user queue manager, and the user queue attribute managers together correspond to the acquisition processing unit in fig. 7, and the queue resource scheduler corresponds to the scheduling unit in fig. 7.
The physical queue manager is responsible for storing metadata facing the resource side, and the metadata comprises a queue resource pool ID, a physical queue group ID, a physical queue ID, a user queue list bound with each physical queue, the number of partitions, the number of copies, the size of a message block and the like. Where the message block size sets the maximum size of the underlying single file.
The user queue manager is responsible for storing queue metadata facing the user side, such as user queue name, tenant to which the user belongs, queue type, queue quota, access right and the like. The queue quota is the maximum number of messages or the maximum storage space allowed to be held in the user's queue.
Each user queue has a user queue attribute manager for recording the running parameter values of that user queue, including the average message size, the production concurrency, the consumption concurrency, the message production rate, the message consumption rate, the message backlog count, and the like. A message queue mainly involves two processes, producing messages and consuming messages; if messages are only produced and not consumed, or the production rate is far higher than the consumption rate, more and more messages remain unprocessed, and the number of messages that have been produced but not yet processed or consumed is the message backlog count.
And the queue resource scheduler is responsible for acquiring each user queue attribute manager and resource-oriented metadata stored in the physical queue manager to acquire the running statistical result of each physical queue, and rebinding at least one user queue having a binding relationship with the physical queue to a corresponding target physical queue when a preset statistical parameter value included in the running statistical result of a certain physical queue is greater than a corresponding first threshold value.
It should be understood that the specific division of the modules described above is by way of example only and is not limiting of the present application.
Based on the same concept, the present application further provides a server, where the server may be configured to execute the method embodiment corresponding to fig. 5, so that the implementation of the terminal provided in the embodiment of the present application may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 9, the present application provides a server 900 comprising a communication interface 910, a processor 920, and a memory 930. The communication interface, the processor and the memory can be connected through a bus system. The memory is used for storing programs, instructions or codes, and the processor is used for executing the programs, the instructions or the codes in the memory so as to specifically execute the following steps: acquiring running parameter values of N user queues having a binding relationship with a first physical queue at each monitoring time point, and obtaining a running statistical result of the first physical queue according to the running parameter values, wherein the first physical queue belongs to a first physical queue group in a first queue resource pool, the first queue resource pool is any queue resource pool in a message queue system, the first physical queue group is any physical queue group in the first queue resource pool, and N is a positive integer; and when the preset statistical parameter value included in the operation statistical result of the first physical queue is greater than a first threshold value, re-binding at least one user queue having a binding relationship with the first physical queue to a target physical queue, wherein the target physical queue is one of a group of physical queues having higher processing capacity than the first group of physical queues in the first queue resource pool.
It should be understood that, in the embodiment of the present application, the processor 920 may be a Central Processing Unit (CPU), and may also be other general processors, Digital Signal Processors (DSP), Application Specific Integrated Circuits (ASIC), Field Programmable Gate Arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so on. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 930 may include a read-only memory and a random access memory, and provides instructions and data to the processor 920. A portion of the memory 930 may also include non-volatile random access memory. For example, the memory 930 may also store device type information.
The bus system may include a power bus, a control bus, a status signal bus, and the like, in addition to the data bus.
In implementation, the steps of the method in the embodiment corresponding to fig. 5 may be completed by integrated logic circuits of hardware in the processor 920 or by instructions in the form of software. The steps of the message processing method disclosed in the embodiments of the present application may be performed directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM, an EPROM, or a register. The storage medium is located in the memory 930; the processor 920 reads the information in the memory 930 and completes the steps of the method in the embodiment corresponding to fig. 5 in combination with its hardware. To avoid repetition, details are not described again here.
It should be noted that, in a specific embodiment, the functions of the acquisition processing unit 710 in fig. 7 may be implemented by using the communication interface 910 and the processor 920 in fig. 9, and the functions of the scheduling unit 720 may be implemented by using the processor 920 in fig. 9.
Referring to fig. 2, the present application provides a message queue system, including: a physical storage layer, a metadata manager, a resource scheduler, and an access interface layer.
In one specific implementation, the resource scheduler may be comprised of multiple servers as shown in FIG. 9.
To sum up, compared with the prior art, in which user queues occupy fixed queue resources from the moment they are created and resource utilization is therefore low, the method provided in the embodiments of the application does not fix the queue resources at creation time: the running parameter values of each user queue are collected, the running statistical result of each physical queue is monitored, and when the preset statistical parameter value included in the running statistical result of a physical queue exceeds the first threshold, at least one user queue bound to that physical queue is rebound to the target physical queue. This effectively improves resource utilization and keeps the initial investment low; when a user queue first comes online, no precise resource planning is required in advance, and the physical queue it is bound to can be adjusted gradually according to the actual running condition of the user queue. In addition, when a queue resource pool is allocated to a user queue, the pool is chosen according to the queue type of the user queue, which gives the system higher overall performance and more rational resource utilization.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (14)

1. A queue scheduling method applied to a message queue system, the message queue system including queue resource pools of at least one queue type, each queue resource pool including at least two physical queue groups with different processing capabilities, each physical queue group including at least one physical queue, the method comprising:
acquiring running parameter values of N user queues having a binding relationship with a first physical queue at each monitoring time point, and obtaining a running statistical result of the first physical queue according to the running parameter values, wherein the first physical queue belongs to a first physical queue group in a first queue resource pool, the first queue resource pool is any queue resource pool in the message queue system, the first physical queue group is any physical queue group in the first queue resource pool, and N is a positive integer;
and when a preset statistical parameter value included in the running statistical result of the first physical queue is greater than a first threshold value, re-binding at least one user queue having a binding relationship with the first physical queue to a target physical queue, wherein the target physical queue is a physical queue in a physical queue group of the first queue resource pool whose processing capability is higher than that of the first physical queue group.
2. The method of claim 1, wherein the queue types of the N user queues are all the same as the queue type of the first queue resource pool.
3. The method of claim 1 or 2, further comprising:
and when the preset statistical parameter value included in the running statistical result of the first physical queue is smaller than or equal to a second threshold value, allowing a new user queue to be bound to the first physical queue, wherein the first threshold value is greater than the second threshold value.
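For illustration only (this sketch is not part of the claims), the two-threshold check of claims 3 and 10 can be expressed as follows; the helper name is a hypothetical assumption.

```python
# Sketch under the assumption that can_bind_new_user_queue is an invented helper name.
def can_bind_new_user_queue(statistic: float,
                            first_threshold: float,
                            second_threshold: float) -> bool:
    """New user queues are accepted only while the physical queue's statistic
    is at or below the (lower) second threshold."""
    assert first_threshold > second_threshold
    return statistic <= second_threshold
```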
4. The method of claim 1 or 2, wherein when a preset statistical parameter value included in the running statistical result of the first physical queue is greater than a first threshold value, re-binding at least one user queue having a binding relationship with the first physical queue to a target physical queue comprises:
after the preset statistical parameter value is determined, for the first time, to be greater than the first threshold value, continuing to count M preset statistical parameter values corresponding to the following M consecutive monitoring time points, wherein M is a positive integer;
and when K preset statistical parameter values in the M preset statistical parameter values are all larger than the first threshold value, re-binding at least one user queue having a binding relationship with the first physical queue to a target physical queue, wherein K is a positive integer.
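For illustration only (not part of the claims), the K-of-M confirmation in claims 4 and 11 can be sketched as below; sample_statistic is a hypothetical callable returning the preset statistical parameter value at the next monitoring time point, and "at least K of M" is one possible reading of the claim.

```python
# Sketch; sample_statistic is a hypothetical callable returning the statistic at
# the next monitoring time point. "At least K of M" is one reading of the claim.
from typing import Callable

def confirm_before_rebinding(sample_statistic: Callable[[], float],
                             first_threshold: float, m: int, k: int) -> bool:
    """Collect M further values after the first crossing and trigger re-binding
    only if at least K of them also exceed the first threshold."""
    exceeded = sum(1 for _ in range(m) if sample_statistic() > first_threshold)
    return exceeded >= k
```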
5. The method of claim 1 or 2, wherein the preset statistical parameter value is a data throughput of the first physical queue or a number of message requests of the first physical queue.
6. The method of claim 1 or 2, wherein re-binding at least one user queue having a binding relationship with the first physical queue to a target physical queue comprises:
sorting the N user queues having a binding relationship with the first physical queue in descending order of throughput or in descending order of message request quantity, and re-binding the first S user queues to the target physical queue, wherein S is a positive integer.
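For illustration only (not part of the claims), the top-S selection in claims 6 and 13 amounts to a simple ranking; the throughput attribute below is an assumed field name on a user-queue object.

```python
# Sketch; the throughput attribute is an assumed field name on a user-queue object.
def pick_user_queues_to_rebind(user_queues: list, s: int) -> list:
    """Rank the N bound user queues in descending order of throughput (a message
    request count could be used instead) and return the first S for re-binding."""
    ranked = sorted(user_queues, key=lambda uq: uq.throughput, reverse=True)
    return ranked[:s]
```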
7. The method of claim 1 or 2, wherein re-binding at least one user queue having a binding relationship with the first physical queue to a target physical queue comprises:
for an ith user queue in the at least one user queue having a binding relationship with the first physical queue, wherein the ith user queue is any one of the at least one user queue, executing:
setting the state parameter of the ith user queue from a first state to a second state, wherein the first state is used for indicating that the ith user queue is in a normal binding state, and the second state is used for indicating that the ith user queue is in a rebinding state;
sending a production message request for the ith user queue to the target physical queue;
sending a consumption message request for the ith user queue to the first physical queue until all messages stored in the first physical queue by the ith user queue are read out;
after all messages stored in the first physical queue by the ith user queue are read out, sending a consumption message request for the ith user queue to the target physical queue;
setting the state parameter of the ith user queue from the second state to the first state.
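For illustration only (not part of the claims), the per-user-queue migration in claims 7 and 14 can be sketched as a runnable, simplified state change plus drain step. The data structures, routing dictionary and field names are assumptions made for exposition; a real implementation would issue actual production and consumption message requests.

```python
# Simplified, runnable sketch; data structures and the routing dictionary are
# assumptions for exposition, not the system's real API.
from dataclasses import dataclass, field
from typing import Dict, List

NORMAL_BINDING, REBINDING = "normal", "rebinding"   # first state, second state

@dataclass
class PhysicalQueue:
    stored: Dict[str, List[str]] = field(default_factory=dict)  # user queue name -> messages

@dataclass
class UserQueue:
    name: str
    state: str = NORMAL_BINDING

def migrate(uq: UserQueue, first_pq: PhysicalQueue, target_pq: PhysicalQueue,
            routing: Dict[tuple, PhysicalQueue]) -> List[str]:
    """routing maps (user queue name, 'produce'|'consume') to a physical queue."""
    uq.state = REBINDING                           # first state -> second state
    routing[(uq.name, "produce")] = target_pq      # production requests now go to the target
    drained = first_pq.stored.pop(uq.name, [])     # read out everything still stored in the
                                                   # first physical queue for this user queue
    routing[(uq.name, "consume")] = target_pq      # only then consume from the target
    uq.state = NORMAL_BINDING                      # second state -> first state
    return drained

# usage: two pending messages are drained from the first physical queue before switch-over
routing = {}
first, target = PhysicalQueue({"orders": ["m1", "m2"]}), PhysicalQueue()
print(migrate(UserQueue("orders"), first, target, routing))    # -> ['m1', 'm2']
```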
8. A queue scheduling apparatus, applied to a message queue system, where the message queue system includes queue resource pools of at least one queue type, each queue resource pool includes at least two physical queue groups with different processing capabilities, and each physical queue group includes at least one physical queue, the apparatus comprising:
an acquisition processing unit, configured to acquire running parameter values of N user queues having a binding relationship with a first physical queue at each monitoring time point, and obtain a running statistical result of the first physical queue according to the running parameter values, wherein the first physical queue belongs to a first physical queue group in a first queue resource pool, the first queue resource pool is any queue resource pool in the message queue system, the first physical queue group is any physical queue group in the first queue resource pool, and N is a positive integer;
and a scheduling unit, configured to, when a preset statistical parameter value included in the running statistical result of the first physical queue is greater than a first threshold value, re-bind at least one user queue having a binding relationship with the first physical queue to a target physical queue, wherein the target physical queue is a physical queue in a physical queue group of the first queue resource pool whose processing capability is higher than that of the first physical queue group.
9. The apparatus of claim 8, wherein the queue types of the N user queues are all the same as the queue type of the first queue resource pool.
10. The apparatus of claim 8 or 9, wherein the scheduling unit is further configured to:
when the preset statistical parameter value included in the running statistical result of the first physical queue is smaller than or equal to a second threshold value, allow a new user queue to be bound to the first physical queue, wherein the first threshold value is greater than the second threshold value.
11. The apparatus according to claim 8 or 9, wherein when a preset statistical parameter value included in the running statistical result of the first physical queue is greater than a first threshold value, at least one user queue having a binding relationship with the first physical queue is re-bound to a target physical queue, and the scheduling unit is specifically configured to:
after the preset statistical parameter value is determined, for the first time, to be greater than the first threshold value, continue to count M preset statistical parameter values corresponding to the following M consecutive monitoring time points, wherein M is a positive integer;
and when K preset statistical parameter values in the M preset statistical parameter values are all larger than the first threshold value, re-binding at least one user queue having a binding relationship with the first physical queue to a target physical queue, wherein K is a positive integer.
12. The apparatus of claim 8 or 9, wherein the preset statistical parameter value is a data throughput of the first physical queue or a number of message requests of the first physical queue.
13. The apparatus according to claim 8 or 9, wherein when re-binding at least one user queue having a binding relationship with the first physical queue to a target physical queue, the scheduling unit is specifically configured to:
sort the N user queues having a binding relationship with the first physical queue in descending order of throughput or in descending order of message request quantity, and re-bind the first S user queues to the target physical queue, wherein S is a positive integer.
14. The apparatus according to claim 8 or 9, wherein when re-binding at least one user queue having a binding relationship with the first physical queue to a target physical queue, the scheduling unit is specifically configured to:
for an ith user queue in the at least one user queue having a binding relationship with the first physical queue, wherein the ith user queue is any one of the at least one user queue, executing:
setting the state parameter of the ith user queue from a first state to a second state, wherein the first state is used for indicating that the ith user queue is in a normal binding state, and the second state is used for indicating that the ith user queue is in a rebinding state;
sending a production message request for the ith user queue to the target physical queue;
sending a consumption message request for the ith user queue to the first physical queue until all messages stored in the first physical queue by the ith user queue are read out;
after all messages stored in the first physical queue by the ith user queue are read out, sending a consumption message request for the ith user queue to the target physical queue;
setting the state parameter of the ith user queue from the second state to the first state.
CN201710149931.1A 2017-03-14 2017-03-14 Queue scheduling method and device Active CN108574645B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710149931.1A CN108574645B (en) 2017-03-14 2017-03-14 Queue scheduling method and device


Publications (2)

Publication Number Publication Date
CN108574645A (en) 2018-09-25
CN108574645B (en) 2020-08-25

Family

ID=63577189






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220118

Address after: 450046 Floor 9, building 1, Zhengshang Boya Plaza, Longzihu wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province

Patentee after: xFusion Digital Technologies Co., Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.