CN102789394B - Method, device and nodes for parallelly processing information and server cluster - Google Patents

Method, device and nodes for parallelly processing information and server cluster

Info

Publication number
CN102789394B
CN102789394B (Application CN201110131543.3A)
Authority
CN
China
Prior art keywords
message
thread
allocation rule
threads
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110131543.3A
Other languages
Chinese (zh)
Other versions
CN102789394A (en)
Inventor
李彦超
桑植
韩众鸼
雷继斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Singapore Holdings Pte Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201110131543.3A
Publication of CN102789394A
Priority to HK12113233.6A
Application granted
Publication of CN102789394B
Legal status: Active


Abstract

The invention provides a method, a device, nodes, and a server cluster for processing messages in parallel. The method includes: obtaining a thread allocation rule and a message allocation rule from a preset configuration database, where the thread allocation rule indicates the number of threads owned by each processing node in the cluster and the message allocation rule indicates which thread processes each pending message; creating, according to the thread allocation rule, the multiple threads corresponding to each processing node; and triggering the created threads to fetch messages from a message source according to the message allocation rule and process them. The method, device, nodes, and server cluster solve the prior-art technical problem that fetching messages with a single thread degrades the timeliness of message processing and the throughput of the server.

Description

Method, device, node, and server cluster for parallel message processing
Technical field
The present application relates to the field of network data processing, and in particular to a method, a device, a node, and a server cluster for parallel message processing.
Background technology
A P4P (Pay for Performance) system processes the messages that P4P users generate while managing promotion information on a bidding platform (BP), and updates those messages into a search engine in real time, so that the promoted information quickly takes effect in search results. Because the bidding platform is a concurrent WEB system offered to users, the message volume is relatively large, and the messages of a single user must be processed in their temporal order; how to guarantee both the real-time performance and the throughput of the whole P4P system is therefore a difficult problem.
In the prior art, the real-time performance and throughput of the P4P system are generally ensured as follows: messages are divided in advance into separate groups by thread, with each group of messages handled by one corresponding independent thread. When the P4P system receives a user's request to process a message, each independent thread fetches and executes the messages configured for it in advance. All messages within the group corresponding to an independent thread are processed in chronological order, which guarantees local order; across the multiple independent threads, processing is fully parallel.
A thread is the basic unit of CPU scheduling in an operating system and represents one flow of execution on a CPU. Parallelism here means letting multiple processing units (such as threads) perform business computation independently and simultaneously; this parallel mechanism has dramatically increased the throughput of modern computer applications. The local order mentioned above can be understood as follows: the full set of messages contains many small subsets, partitioned by some business rule (for example, by thread); within each subset the processing order must match the order in which the messages were generated, but there is no ordering constraint between different subsets.
The prior art described above has a problem, however: a dedicated independent thread fetches the messages, groups them into ordered subsets, and then hands them to multiple threads for parallel execution. This fetching step, which precedes the parallel processing, becomes slow and time-consuming as the message volume grows massive, which inevitably harms the real-time performance of message processing and the throughput of the server.
In short, a technical problem urgently awaiting a solution by those skilled in the art is how to devise an innovative method of parallel message processing that overcomes the prior-art problem that fetching messages with a single thread degrades the timeliness of message processing and the throughput of the server.
Summary of the invention
The technical problem to be solved by the present application is to provide a method of parallel message processing that overcomes the prior-art problem that fetching messages with a single thread degrades the timeliness of message processing and the throughput of the server.
The present application also provides a device, a node, and a server cluster for parallel message processing, to guarantee the implementation and application of the above method in practice.
To solve the above problem, the present application discloses a method of parallel message processing, comprising:
obtaining a thread allocation rule and a message allocation rule from a preset configuration database, where the thread allocation rule indicates the number of threads owned by each processing node in the cluster, and the message allocation rule indicates which thread processes each pending message;
creating, according to the thread allocation rule, the multiple threads corresponding to each processing node; and
triggering the created threads to fetch messages from a message source according to the message allocation rule and process them.
Preferably, the step in which one of the threads fetches its corresponding messages from the message source according to the message allocation rule comprises:
performing a modulo operation on the user identity number of a message-processing request with respect to the thread count; and
the thread whose thread number matches the operation result fetching, from the message source, the messages triggered by that user identity.
Preferably, the method further comprises:
updating the thread allocation rule and the message allocation rule.
Preferably, the method further comprises:
setting the thread allocation rule according to the CPU count and/or memory parameters of the processing node.
The present application further discloses a device for parallel message processing, comprising:
an acquisition module, configured to obtain a thread allocation rule and a message allocation rule from a preset configuration database, where the thread allocation rule indicates the number of threads owned by each processing node in the cluster, and the message allocation rule indicates which thread processes each pending message;
a creation module, configured to create, according to the thread allocation rule, the multiple threads corresponding to each processing node; and
a trigger module, configured to trigger the created threads to fetch messages from a message source according to the message allocation rule and process them.
Preferably, the trigger module comprises an operation submodule and a triggering submodule, wherein:
the operation submodule is configured to perform a modulo operation on the user identity number of a message-processing request with respect to the thread count; and
the triggering submodule is configured to trigger the thread whose thread number matches the operation result to fetch, from the message source, the messages triggered by that user identity.
Preferably, the device further comprises:
an update module, configured to update the thread allocation rule and the message allocation rule.
Preferably, the device further comprises:
a setting module, configured to set the thread allocation rule according to the CPU count and/or memory parameters of the processing node.
The present application further discloses a node for parallel message processing, comprising any one of the foregoing devices for parallel message processing.
The present application further discloses a server cluster, comprising at least two of the foregoing nodes for parallel message processing.
Compared with the prior art, the present application has the following advantages:
In the present application, an association is established between each thread and the ordered subsets of messages it must process, so that message fetching itself becomes concurrent. This improves the real-time performance of message processing at each node and thereby raises the throughput and responsiveness of the server cluster. Furthermore, the server cluster can be expanded more conveniently and efficiently, which better suits current network application scenarios. Of course, a product implementing the present application need not achieve all of the above advantages simultaneously.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments are introduced briefly below. The drawings described below are clearly only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of method embodiment 1 of a parallel message processing method of the present application;
Fig. 2 is a flowchart of step 103 in method embodiment 1;
Fig. 3 is a flowchart of method embodiment 2 of a parallel message processing method of the present application;
Fig. 4 is a structural block diagram of method embodiment 2 of the present application in a practical application;
Fig. 5 is a structural block diagram of device embodiment 1 of a parallel message processing device of the present application;
Fig. 6 is a structural block diagram of the trigger module 503 in device embodiment 1;
Fig. 7 is a structural block diagram of device embodiment 2 of a parallel message processing device of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in the embodiments. The described embodiments are clearly only some, not all, of the embodiments of the present application; all other embodiments obtained by those of ordinary skill in the art based on the embodiments in the present application without creative effort fall within the protection scope of the present application.
The present application can be used in numerous general-purpose or special-purpose computing environments or configurations, for example: personal computers, server computers, handheld or portable devices, laptop devices, multiprocessor devices, and distributed computing environments comprising any of the above devices or equipment.
The present application may be described in the general context of computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present application may also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communication network; in such environments, program modules may be located in both local and remote computer storage media, including storage devices.
One of the main ideas of the present application is to establish an association between each thread and the ordered subsets of messages it must process, so that message fetching itself becomes concurrent; this improves the real-time performance of message processing at each node and thereby raises the throughput and responsiveness of the server cluster.
Referring to Fig. 1, a flowchart of method embodiment 1 of a parallel message processing method of the present application is shown; the method may comprise the following steps:
Step 101: obtain a thread allocation rule and a message allocation rule from a preset configuration database; the thread allocation rule indicates the number of threads owned by each processing node in the cluster, and the message allocation rule indicates which thread processes each pending message.
The configuration database preset in this embodiment is dedicated to storing the pre-configured thread allocation rule and message allocation rule. For example, if the thread allocation rule specifies that each processing node in the cluster owns 80 threads, then 80 threads must be configured for each processing node in the server cluster for parallel message processing. The message allocation rule indicates which thread processes each pending message; for example, the messages that user A requests to be processed are handled by the thread numbered 38.
The thread allocation rule must be configured with the node's overall performance profile in mind, including its software and hardware environment. In compute-intensive scenarios, each node's thread count can be allocated by its number of CPU cores, for example 80 threads for an 8-core machine and 160 threads for a 16-core machine. In memory-intensive scenarios, the allocation can instead follow memory parameters (such as memory performance and capacity); in practice, memory, CPU, input/output (IO), and network conditions can all be weighed together.
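As a concrete illustration of the core-count-based sizing above, the following sketch computes a node's thread allocation. The 10-threads-per-core ratio is only inferred from the examples in the text (8 cores yielding 80 threads, 16 cores yielding 160); the function name and parameter are illustrative, not from the patent.

```python
def threads_for_node(cpu_cores: int, threads_per_core: int = 10) -> int:
    """Size a node's thread pool from its CPU core count.

    The 10x ratio reproduces the text's examples (8 cores -> 80 threads,
    16 cores -> 160 threads); a memory-bound deployment would instead
    weigh memory capacity, IO, and network conditions.
    """
    return cpu_cores * threads_per_core
```

A memory- or IO-driven rule would replace the multiplier with whatever capacity metric dominates the workload.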
The message allocation rule, in turn, can be implemented at configuration time by a hash-modulo computation, for example by the following formula:
Message_lable_id % total_thread_count == thread_id. In this formula, "Message_lable_id" identifies an ordered message subset: the messages of the same user are assigned to the same ordered subset so that they are processed in order, while the messages of different users need not be ordered relative to one another. "total_thread_count" is the thread count of each node, and "thread_id" is a thread number. That is, if the identifier of an ordered subset, taken modulo the total thread count, equals a thread number, the messages of that subset are fetched and processed by the thread with that number. In a concrete implementation the message allocation rule can take many forms; any scheme that distributes the messages uniformly may be chosen.
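The hash-modulo formula can be written as a one-line function; the sketch below uses a lowercased identifier for readability and assumes, as the embodiments later do, that all messages of one user carry the same subset identifier.

```python
def owning_thread(message_label_id: int, total_thread_count: int) -> int:
    """Return the thread_id satisfying
    message_label_id % total_thread_count == thread_id."""
    return message_label_id % total_thread_count

# All messages of one user carry the same label id, so they always map
# to the same thread and are therefore processed in generation order.
```

Because the mapping is deterministic, no coordination is needed between threads to agree on who owns which subset.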
Step 102: create, according to the thread allocation rule, the multiple threads corresponding to each processing node.
At initialization, each processing node in the cluster creates a number of threads according to the thread allocation rule, for example 50 threads per processing node; the created threads subsequently fetch and process messages from the message source according to the message allocation rule.
Step 103: trigger the created threads to fetch messages from the message source according to the message allocation rule and process them.
The message source mentioned in this embodiment is the set of all messages whose processing users have requested of the server cluster; every thread fetches its messages from this data source.
Referring to Fig. 2, the step in which one of the threads fetches its corresponding messages from the message source according to the message allocation rule may specifically comprise:
Step 201: perform a modulo operation on the user identity number of the message-processing request with respect to the thread count.
In this embodiment, because the messages requested by the same user belong to the same ordered subset while the messages requested by different users belong to different ordered subsets, the user identity number of the request can be taken modulo the thread count. User identity numbers can increase naturally (1, 2, 3, ...), typically implemented with an auto-increment primary key in a database, and can thus serve to identify the different ordered subsets.
Step 202: the thread whose thread number matches the operation result fetches, from the message source, the messages triggered by that user identity.
If the result of the operation in step 201 equals some thread number, the thread with that number is responsible for processing the messages requested by the user with that identity number. By associating each user's own message order with the modulo operation, message fetching is parallelized: different threads fetch messages from the same message source according to their modulo results.
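Steps 201 and 202 can be sketched as follows. The list-based message source and the per-thread result lists are illustrative stand-ins (a real source would be a shared queue or table), assuming each entry pairs a user identity number with a message payload.

```python
import threading

def run_workers(messages, total_threads):
    """Each worker scans the shared message source and claims only the
    messages whose user id, modulo the thread count, equals its own
    thread number; per-user order is thus preserved within one thread."""
    processed = {tid: [] for tid in range(total_threads)}

    def worker(tid):
        for user_id, payload in messages:
            if user_id % total_threads == tid:
                processed[tid].append((user_id, payload))

    threads = [threading.Thread(target=worker, args=(tid,))
               for tid in range(total_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed
```

Because no two workers ever match the same user identity number, the per-thread lists need no locking in this sketch; contention arises only at the message source itself.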
With this embodiment of the invention, an association is established between each thread and the ordered subsets of messages it must process, so that message fetching itself becomes concurrent; this improves the real-time performance of message processing at each node and thereby raises the throughput and responsiveness of the server cluster.
Referring to Fig. 3, a flowchart of method embodiment 2 of a parallel message processing method of the present application is shown; the method may comprise the following steps:
Step 301: set the thread allocation rule according to the CPU count and/or memory parameters of the processing nodes in the server cluster.
In this embodiment, the thread allocation rule is first set according to actual conditions such as the CPU count and/or memory parameters of the processing nodes; for the concrete setting, refer to the description in embodiment 1, which is not repeated here.
Step 302: set the message allocation rule, and store the thread allocation rule and the message allocation rule into the configuration database.
Meanwhile, the message allocation rule that every thread must follow is set, and the configured thread allocation rule and message allocation rule are stored into the configuration database.
Step 303: obtain the thread allocation rule and the message allocation rule from the configuration database; the thread allocation rule indicates the number of threads owned by each processing node in the cluster, and the message allocation rule indicates which thread processes each pending message.
Step 304: create, according to the thread allocation rule, the multiple threads corresponding to each processing node.
Suppose the thread allocation rule assigns 80 threads per node; then 80 threads are configured for each node in the server cluster, and with 10 nodes in total the thread numbers can run from 1 to 800.
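The numbering in this example can be sketched as a simple mapping from a node index and a local thread index to a cluster-wide thread number; the 1-based convention and the 80-threads-per-node default follow the text's range of 1 to 800, while the 0-based inputs are an assumption of this sketch.

```python
def global_thread_number(node_index: int, local_index: int,
                         threads_per_node: int = 80) -> int:
    """Cluster-wide 1-based thread number for local thread `local_index`
    (0-based) on node `node_index` (0-based)."""
    return node_index * threads_per_node + local_index + 1
```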
Step 305: the created threads fetch messages from the message source according to the message allocation rule and process them.
When the threads fetch messages, the user identity number is taken modulo the total thread count; the messages whose result equals a given thread number form an ordered subset, which the thread with that number fetches and processes in order. For example, the thread numbered 5 on node A fetches the messages in the ordered subset of user identity numbers satisfying Cust_id % 800 = 5, where "Cust_id" denotes a user identity number.
Step 306: update the thread allocation rule and the message allocation rule.
It should be noted that, in subsequent use, the pre-configured thread allocation rule and message allocation rule can also be updated. For example, when the message volume grows so fast that the server cluster must be expanded, the thread allocation rule and message allocation rule in the preset configuration database can be modified. One implementation of this dynamic update is: invoke an instruction deployed on each node in advance, direct every node in the cluster to stop fetching messages once it has finished processing its current ones, and then reinitialize, in order, each node, the thread allocation rule, and the message allocation rule. In practice a node may suspend service for a short period (1 to 5 minutes), but in applications whose real-time requirement is at the minute level (1 to 5 minutes) this brief interruption has no impact.
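The dynamic update described above can be sketched against a dictionary standing in for the configuration database; draining and restart are represented only by a status field, and every key and name here is illustrative rather than taken from the patent.

```python
def expand_cluster(config: dict, new_node_count: int) -> dict:
    """Sketch of the rule update during cluster expansion: nodes drain
    in-flight messages, both allocation rules are rewritten in the
    config store, and nodes then reinitialize against the new rules."""
    config["status"] = "draining"            # nodes finish current messages
    config["node_count"] = new_node_count    # thread allocation rule update
    config["total_threads"] = new_node_count * config["threads_per_node"]
    config["status"] = "running"             # nodes reinitialize and resume
    return config
```

After the update, every thread recomputes its modulo routing against the new total thread count, which is why all nodes must pause and reinitialize together rather than one at a time.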
This embodiment not only solves the technical problem that single-threaded message fetching harms the real-time performance of message processing and the throughput of the server, but also enables more convenient and efficient expansion of the server cluster, making it better suited to current network application scenarios.
Referring to Fig. 4, a schematic diagram of the parallel message processing method disclosed in this embodiment in a practical application is shown: a server cluster contains n processing nodes, each processing node creates M threads according to the thread allocation rule in the preset configuration database, and the threads simultaneously fetch messages from the message source according to the message allocation rule and process them.
For simplicity of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Moreover, those skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
Corresponding to the method provided by method embodiment 1 above, and referring to Fig. 5, the present application also provides device embodiment 1 of a parallel message processing device; in this embodiment, the device may comprise:
an acquisition module 501, configured to obtain a thread allocation rule and a message allocation rule from a preset configuration database, where the thread allocation rule indicates the number of threads owned by each processing node in the cluster, and the message allocation rule indicates which thread processes each pending message;
a creation module 502, configured to create, according to the thread allocation rule, the multiple threads corresponding to each processing node; and
a trigger module 503, configured to trigger the created threads to fetch messages from a message source according to the message allocation rule and process them.
Referring to Fig. 6, the trigger module 503 may specifically comprise an operation submodule 601 and a triggering submodule 602. The operation submodule 601 may be used to perform a modulo operation on the user identity number of a message-processing request with respect to the thread count; the triggering submodule 602 may be used to trigger the thread whose thread number matches the operation result to fetch, from the message source, the messages triggered by that user identity. With the device of this embodiment of the invention, an association can be established between each thread and the ordered subsets of messages it must process, so that message fetching becomes concurrent, the real-time performance of message processing at each node improves, and the throughput and responsiveness of the server cluster rise accordingly.
Corresponding to the method provided by method embodiment 2 above, and referring to Fig. 7, the present application also provides device embodiment 2 of a parallel message processing device; in this embodiment, the device may comprise:
a setting module 701, configured to set the thread allocation rule according to the CPU count and/or memory parameters of the processing node;
an acquisition module 501, configured to obtain a thread allocation rule and a message allocation rule from a preset configuration database, where the thread allocation rule indicates the number of threads owned by each processing node in the cluster, and the message allocation rule indicates which thread processes each pending message;
a creation module 502, configured to create, according to the thread allocation rule, the multiple threads corresponding to each processing node;
a trigger module 503, configured to trigger the created threads to fetch messages from a message source according to the message allocation rule and process them; and
an update module 702, configured to update the thread allocation rule and the message allocation rule.
It should be noted that, when the method described in the present application is implemented in software, it may be added to a node as a new function or written as a separate program; the present application does not restrict the implementation of the above device.
The device disclosed in this embodiment not only solves the technical problem that single-threaded message fetching harms the real-time performance of message processing and the throughput of the server, but also enables more convenient and efficient expansion of the server cluster, making it better suited to current network application scenarios.
In addition, this embodiment of the application also discloses a node for parallel message processing, which may specifically comprise the device described in device embodiment 1 or device embodiment 2 above. For details of the device, refer to the foregoing method and device embodiments, which are not repeated here.
This embodiment of the application also discloses a server cluster, which may specifically comprise at least two of the nodes for parallel message processing disclosed in the present application. For details of the nodes, refer to the foregoing method and device embodiments, which are not repeated here.
It should be noted that the embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and for the identical or similar parts the embodiments may refer to one another. The device-type embodiments are described relatively simply because they are essentially similar to the method embodiments; for relevant details, refer to the corresponding parts of the method embodiments.
Finally, it should also be noted that the terms "comprise", "include", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that comprises a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that comprises the element.
The method, device, node, and server cluster for parallel message processing provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present application; the description of the embodiments above is intended only to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, both the specific embodiments and the scope of application may vary according to the idea of the present application. In summary, the contents of this specification shall not be construed as limiting the present application.

Claims (8)

1. A method of parallel message processing, characterized in that the method comprises:
obtaining a thread allocation rule and a message allocation rule from a preset configuration database, where the thread allocation rule indicates the number of threads owned by each processing node in the cluster, and the message allocation rule indicates which thread processes each pending message;
creating, according to the thread allocation rule, the multiple threads corresponding to each processing node; and
triggering the created threads to fetch messages from a message source according to the message allocation rule and process them, where a thread fetching its corresponding messages from the message source according to the message allocation rule comprises: performing a modulo operation on the user identity number of a message-processing request with respect to the thread count; and the thread whose thread number matches the operation result fetching, from the message source, the messages triggered by that user identity, the messages triggered by the same user identity being in the same ordered subset.
2. The method according to claim 1, characterized by further comprising:
updating the thread allocation rule and the message allocation rule.
3. The method according to claim 1, characterized by further comprising:
setting the thread allocation rule according to the CPU count and/or memory parameters of the processing node.
4. A device for parallel message processing, characterized in that the device comprises:
an acquisition module, configured to obtain a thread allocation rule and a message allocation rule from a preset configuration database, where the thread allocation rule indicates the number of threads owned by each processing node in the cluster, and the message allocation rule indicates which thread processes each pending message;
a creation module, configured to create, according to the thread allocation rule, the multiple threads corresponding to each processing node; and
a trigger module, configured to trigger the created threads to fetch messages from a message source according to the message allocation rule and process them, where the trigger module comprises an operation submodule and a triggering submodule, the operation submodule being configured to perform a modulo operation on the user identity number of a message-processing request with respect to the thread count, and the triggering submodule being configured to trigger the thread whose thread number matches the operation result to fetch, from the message source, the messages triggered by that user identity, the messages triggered by the same user identity being in the same ordered subset.
5. The device according to claim 4, characterized by further comprising:
an update module, configured to update the thread allocation rule and the message allocation rule.
6. The device according to claim 4, characterized by further comprising:
a setting module, configured to set the thread allocation rule according to the CPU quantity and/or memory parameters of the processing node.
7. A node for parallel message processing, characterized in that the node comprises the device according to any one of claims 4 to 6.
8. A server cluster, characterized in that the cluster comprises at least two nodes for parallel message processing according to claim 7.
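The allocation step of claim 1 (take the triggering user's ID modulo the per-node thread count, then hand the message to the thread whose number matches the result) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; all names (`NUM_THREADS`, `queues`, `dispatch`) are assumptions.

```python
# Sketch of the modulo-based message allocation described in claim 1.
import queue

NUM_THREADS = 4  # per-node thread count, taken from the thread allocation rule

# One FIFO queue per thread: each queue holds one "ordered subset" of messages.
queues = [queue.Queue() for _ in range(NUM_THREADS)]

def dispatch(user_id: int, message: str) -> int:
    """Modulo the user ID by the thread count and enqueue the message
    for the thread whose number matches the result."""
    thread_no = user_id % NUM_THREADS
    queues[thread_no].put(message)
    return thread_no
```

Because every message from a given user ID maps to the same thread number, that thread's FIFO queue preserves the per-user message order, while messages from different users are processed in parallel across threads.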
CN201110131543.3A 2011-05-19 2011-05-19 Method, device and nodes for parallelly processing information and server cluster Active CN102789394B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201110131543.3A CN102789394B (en) 2011-05-19 2011-05-19 Method, device and nodes for parallelly processing information and server cluster
HK12113233.6A HK1172436A1 (en) 2011-05-19 2012-12-21 Method, device, node and server cluster for parallel message processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110131543.3A CN102789394B (en) 2011-05-19 2011-05-19 Method, device and nodes for parallelly processing information and server cluster

Publications (2)

Publication Number Publication Date
CN102789394A CN102789394A (en) 2012-11-21
CN102789394B true CN102789394B (en) 2014-12-24

Family

ID=47154802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110131543.3A Active CN102789394B (en) 2011-05-19 2011-05-19 Method, device and nodes for parallelly processing information and server cluster

Country Status (2)

Country Link
CN (1) CN102789394B (en)
HK (1) HK1172436A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104468330B (en) * 2014-12-03 2018-09-18 北京国双科技有限公司 The data processing method and device of Distributed Message Queue system
CN104767753A (en) * 2015-04-08 2015-07-08 无锡天脉聚源传媒科技有限公司 Method and device for processing message requests through server
CN107193539B (en) * 2016-03-14 2020-11-24 北京京东尚科信息技术有限公司 Multithreading concurrent processing method and multithreading concurrent processing system
CN105701257B (en) * 2016-03-31 2019-05-21 北京奇虎科技有限公司 Data processing method and device
CN109428901B (en) * 2017-08-22 2021-07-30 中国电信股份有限公司 Message processing method and message processing device
CN111490963B (en) * 2019-01-25 2022-07-29 上海哔哩哔哩科技有限公司 Data processing method, system, equipment and storage medium based on QUIC protocol stack
CN113014528B (en) * 2019-12-19 2022-12-09 厦门网宿有限公司 Message processing method, processing unit and virtual private network server
CN114036031B (en) * 2022-01-05 2022-06-24 阿里云计算有限公司 Scheduling system and method for resource service application in enterprise digital middleboxes

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1859122A (en) * 2006-02-23 2006-11-08 华为技术有限公司 Method and device for realizing classified service to business provider
CN1874538A (en) * 2005-07-20 2006-12-06 华为技术有限公司 Concurrent method for treating calling events
US7877573B1 (en) * 2007-08-08 2011-01-25 Nvidia Corporation Work-efficient parallel prefix sum algorithm for graphics processing units

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1874538A (en) * 2005-07-20 2006-12-06 华为技术有限公司 Concurrent method for treating calling events
CN1859122A (en) * 2006-02-23 2006-11-08 华为技术有限公司 Method and device for realizing classified service to business provider
US7877573B1 (en) * 2007-08-08 2011-01-25 Nvidia Corporation Work-efficient parallel prefix sum algorithm for graphics processing units

Also Published As

Publication number Publication date
CN102789394A (en) 2012-11-21
HK1172436A1 (en) 2013-04-19

Similar Documents

Publication Publication Date Title
CN102789394B (en) Method, device and nodes for parallelly processing information and server cluster
Yan et al. Blogel: A block-centric framework for distributed computation on real-world graphs
US10558664B2 (en) Structured cluster execution for data streams
Li et al. Traffic-aware geo-distributed big data analytics with predictable job completion time
Gu et al. Memory or time: Performance evaluation for iterative operation on hadoop and spark
CN104506620A (en) Extensible automatic computing service platform and construction method for same
CN103347055B (en) Task processing system in cloud computing platform, Apparatus and method for
CN105068874A (en) Resource on-demand dynamic allocation method combining with Docker technology
CN103237037A (en) Media format conversion method and system based on cloud computing architecture
CN107729138B (en) Method and device for analyzing high-performance distributed vector space data
CN103873587B (en) A kind of method and device that scheduling is realized based on cloud platform
CN112114950A (en) Task scheduling method and device and cluster management system
Sethi et al. P-FHM+: Parallel high utility itemset mining algorithm for big data processing
CN107291536B (en) Application task flow scheduling method in cloud computing environment
CN109614227A (en) Task resource concocting method, device, electronic equipment and computer-readable medium
CN103414767A (en) Method and device for deploying application software on cloud computing platform
Li et al. Performance model for parallel matrix multiplication with dryad: Dataflow graph runtime
CN103257852A (en) Method and device for building development environment of distributed application system
CN105338045A (en) Cloud computing resource processing device, method and cloud computing system
Kijsipongse et al. A hybrid GPU cluster and volunteer computing platform for scalable deep learning
CN115495221A (en) Data processing system and method
CN104753706A (en) Distributed cluster configuration management method and distributed cluster configuration management device
Li et al. Wide-area spark streaming: Automated routing and batch sizing
CN111240822B (en) Task scheduling method, device, system and storage medium
CN102158545A (en) Resource pool management method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1172436

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1172436

Country of ref document: HK

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240221

Address after: # 01-21, Lai Zan Da Building 1, 51 Belarusian Road, Singapore

Patentee after: Alibaba Singapore Holdings Ltd.

Country or region after: Singapore

Address before: Cayman Islands Grand Cayman capital building, a four storey No. 847 mailbox

Patentee before: ALIBABA GROUP HOLDING Ltd.

Country or region before: Cayman Islands