CN104363300B - Distributed scheduling apparatus for computing tasks in a server cluster - Google Patents
Distributed scheduling apparatus for computing tasks in a server cluster
- Publication number
- CN104363300B CN104363300B CN201410690581.6A CN201410690581A CN104363300B CN 104363300 B CN104363300 B CN 104363300B CN 201410690581 A CN201410690581 A CN 201410690581A CN 104363300 B CN104363300 B CN 104363300B
- Authority
- CN
- China
- Prior art keywords
- processing
- server
- processing server
- front-end device
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1073—Registration or de-registration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Multimedia (AREA)
- Computer And Data Communications (AREA)
Abstract
The invention discloses a distributed scheduling apparatus for computing tasks in a server cluster, which dispatches computing tasks originating from front-end devices to the processing servers in the server cluster for computation. The apparatus includes a registration management module that receives the registration of each processing server together with the computing resource information and load information it reports; a distribution module that allocates a bound processing server to each front-end device and issues the binding to that device; and a coordination module that receives the assistance request sent when a registered processing server is overloaded and, according to the processing capacity and load information of the other processing servers, coordinates processing servers with spare capacity to help complete the computing tasks contained in the assistance request. The scheduling apparatus of the invention reduces the operating pressure on the management server, automatically adjusts for processing server overload, and ensures load balancing across the server cluster.
Description
Technical field
The invention belongs to the technical field of data services, and in particular relates to a distributed scheduling apparatus for computing tasks in a server cluster.
Background technology
In a city-level intelligent traffic management system, the massive volume of vehicle-passing pictures generated by front-end devices such as high-definition checkpoints and electronic police cameras makes analyzing and processing them an increasingly urgent demand. Intelligent analysis of vehicle-passing pictures extracts structured information about each passing vehicle, such as license plate, vehicle model, logo, and body color. On the one hand, this enables real-time surveillance and capture of suspect vehicles. At the same time, in-depth analysis at the monitoring center of the vehicle-passing information produced by picture analysis enables data applications for the transportation industry such as cloned-plate analysis, vehicle tracking, and vehicle correlation analysis.
For a large-scale intelligent traffic management system, the total number of vehicle-passing pictures generated per second can reach several thousand or even tens of thousands, so a single computing node cannot complete the massive picture-processing workload; multiple computing devices must be deployed to process the pictures. With many computing devices deployed, if the massive picture stream cannot be distributed to the devices sensibly, device resources are wasted on the one hand, while on the other a large number of picture-processing computing tasks cannot be completed in time, degrading the responsiveness of the monitoring system to public security incidents and traffic accidents.
A relatively simple allocation strategy in the prior art is as follows: the administrator of the intelligent traffic management system, through the system's management configuration interface, binds a roughly equal number of checkpoints to each computing device according to a load balancing policy. For example, in an intelligent traffic management system composed of 1000 checkpoints and 50 processing servers, the binding is configured so that each processing server handles the pictures of 20 checkpoints: checkpoints 1 to 20 are configured to processing server 1, checkpoints 21 to 40 to processing server 2, and so on. This configuration is issued to the checkpoints. When a checkpoint generates a vehicle-passing picture and stores it to the central storage device, it sends a picture-processing request message to the corresponding processing server, carrying the URL routing information of the picture. On receiving the request message, the processing server first reads the picture file from the central storage device based on the picture index information, then runs license plate recognition and similar algorithms on the picture file.
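The static binding rule of this example can be sketched as follows; the function name and the 1-based numbering are illustrative, not part of the patent:

```python
# Sketch of the prior-art static binding: checkpoints are partitioned into
# equal-sized groups, one group per processing server.

def static_bind(num_checkpoints, num_servers):
    """Map each checkpoint id to a processing server id, in equal-sized groups."""
    per_server = num_checkpoints // num_servers  # 1000 // 50 = 20
    return {
        cp: min((cp - 1) // per_server + 1, num_servers)
        for cp in range(1, num_checkpoints + 1)
    }

binding = static_bind(1000, 50)
# Checkpoints 1-20 -> server 1, 21-40 -> server 2, and so on.
```

Because the mapping is fixed at configuration time, it cannot react to the flow variations discussed next.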
However, the vehicle-passing flow at a checkpoint varies markedly by region and by time. For example, checkpoints at different locations in a city see very different flows, and the flow during daytime rush hours differs greatly from the flow late at night. Even the flow at the same checkpoint can change substantially under various influences, such as bad weather, road maintenance, shifts in urban hotspot areas, and expansion at the urban fringe, all of which can cause large changes in the vehicle-passing flow at the checkpoints of certain roads. Because of these factors, the number of vehicle-passing pictures a checkpoint generates in practice is dynamic, so the picture computation load is dynamic as well and cannot be estimated accurately at configuration time.
This static task scheduling scheme, in which checkpoint bindings to picture-processing devices are configured once and for all, is therefore inflexible: it cannot dynamically adjust which computing device processes the pictures according to the actual vehicle flow at each checkpoint, and is likely to leave some computing devices busy while others sit idle, failing to make full use of the computing resources of the entire cluster.
Another prior-art technical solution does not bind checkpoints to processing servers at all; the processing server for every checkpoint picture is dynamically assigned by the cluster's management node. All processing servers in the cluster are managed centrally by the management node, which periodically collects the latest computing resource status of each processing server (such as CPU usage and remaining memory) and its device state (working normally, or faulted). After a checkpoint generates a vehicle-passing picture and writes it to the storage device, it sends a picture-processing request message to the management node, carrying the URL path of the picture file and asking for the picture to be processed. Based on the collected resource and status information of the computing devices and following the principle of load balancing, the management node selects the processing server with the most abundant computing resources and forwards the picture-processing request message to it. On receiving the request message, the processing server first reads the picture file from the storage device using the picture's URL address, then runs license plate recognition and similar algorithms on the picture file.
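The per-request selection the management node performs in this scheme might look like the following minimal sketch. The patent names only CPU usage, remaining memory, and device state; the field names and figures here are assumptions:

```python
# Sketch of centralized per-picture server selection by the management node.

def select_server(servers):
    """Pick the working server with the lowest CPU usage (ties: more free memory)."""
    working = [s for s in servers if s["state"] == "ok"]
    return min(working, key=lambda s: (s["cpu_usage"], -s["free_mem_gb"]))

servers = [
    {"id": 1, "state": "ok", "cpu_usage": 0.80, "free_mem_gb": 2.0},
    {"id": 2, "state": "ok", "cpu_usage": 0.35, "free_mem_gb": 4.0},
    {"id": 3, "state": "fault", "cpu_usage": 0.00, "free_mem_gb": 8.0},
]
chosen = select_server(servers)  # server 2: lowest CPU usage among working servers
```

Running this selection once per picture is precisely what concentrates the load on the management node.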
The shortcomings of this solution are also obvious: before any picture from any checkpoint is processed, its request must first converge on the management node to be assigned a processing server. A large-scale intelligent traffic system deploys thousands or even tens of thousands of high-definition checkpoints and electronic police devices, and the total number of vehicle-passing pictures per second can reach tens of thousands at peak. Since the processing request for every picture must funnel through the management node for assignment, the management node easily becomes the processing bottleneck of the entire system. Meanwhile, city high-definition checkpoint systems keep expanding in scale and the number of city vehicles keeps rising, so this scheme is limited by the performance bottleneck of the management node and scales poorly. And if the problem is attacked by increasing the number of management nodes, say expanding from 1 to n, then how to synchronize cluster information among the multiple management nodes and how to coordinate their scheduling strategies become very thorny technical difficulties.
Therefore, how to distribute checkpoint pictures sensibly to the devices of the cluster for processing, making full use of each device's computing resources, is one of the key problems an intelligent traffic management system must solve.
Content of the invention
The present invention provides a distributed scheduling apparatus for computing tasks in a server processing cluster that combines static binding with dynamic adjustment. This mechanism both prevents the management server from becoming the processing bottleneck of the whole system and avoids the problem of static binding, in which the processing tasks of the processing servers are unbalanced and the cluster's computing resources cannot be fully utilized.
To achieve these goals, the technical solution of the present invention is as follows:
A distributed scheduling apparatus for computing tasks in a server cluster, applied to the management server in the server cluster, dispatches computing tasks originating from front-end devices to the processing servers in the server cluster for computation. The scheduling apparatus includes a registration management module, a distribution module, and a coordination module, wherein:
the registration management module receives the registration of each processing server, and receives the computing resource information and load information the processing server reports;
the distribution module estimates the processing capacity of each registered processing server from its reported computing resource information, estimates the number of computing tasks each front-end device will generate, allocates a bound processing server to each front-end device, and issues the binding to the front-end device;
the coordination module receives the assistance request sent when a registered processing server is overloaded and, according to the processing capacity and load information of the other processing servers, coordinates processing servers with spare capacity to help complete the computing tasks contained in the assistance request.
Further, the coordination module is also configured to return a response message indicating coordination success or failure to the processing server that sent the assistance request.
In the present invention, the load information includes the processing capacity, the total maximum input rate, and the maximum input rate of each front-end device. The registration management module is further configured to receive the load information reported by each processing server and, combined with the number of assistance requests that server has sent and its bound front-end device information, form a load and processing capacity table.
Further, according to the load and processing capacity table formed by the registration management module, the distribution module looks up the processing server with the greatest spare capacity and binds a newly added front-end device to it. Also according to that table, the distribution module identifies, from the number of assistance requests each processing server has sent, the overloaded processing servers that need adjustment, and rebinds the front-end devices on such a server that exceed its processing capacity to other processing servers with spare capacity.
By establishing the load and processing capacity table, the dynamic adjustment capability of the server cluster is further improved: new bindings are made both for newly added front-end devices and for overloaded processing servers.
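As an illustration only, the load and processing capacity table and the two lookups described above (greatest spare capacity for new devices, assist-request count for overload detection) could be represented like this; every field name and number is an assumption, not taken from the patent:

```python
# Hypothetical in-memory shape of the load and processing capacity table.

capacity_table = {
    "server-1": {
        "processing_capacity": 300,    # max tasks/second
        "total_input_max_rate": 310,   # tasks/second from all bound devices
        "assist_requests": 5,          # assist requests sent recently
        "bound_devices": ["cp-1", "cp-2"],
    },
    "server-2": {
        "processing_capacity": 500,
        "total_input_max_rate": 420,
        "assist_requests": 0,
        "bound_devices": ["cp-3", "cp-4", "cp-5"],
    },
}

def spare_capacity(entry):
    """Tasks/second a server could still absorb."""
    return entry["processing_capacity"] - entry["total_input_max_rate"]

def best_for_new_device(table):
    """Server with the greatest spare capacity, for binding a new device."""
    return max(table, key=lambda name: spare_capacity(table[name]))

def needs_adjustment(table, min_assists=1):
    """Overloaded servers, judged by the number of assist requests sent."""
    return [n for n, e in table.items() if e["assist_requests"] >= min_assists]
```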
The invention also provides a distributed scheduling apparatus for computing tasks in a server cluster, applied to a processing server in the server cluster, to which computing tasks originating from front-end devices are dispatched for computation; the server cluster further includes a management server. The scheduling apparatus includes a registration and resource reporting module, a computing task processing module, and a request assistance module, wherein:
the registration and resource reporting module registers with the management server, periodically collects the processing server's own computing resource information and load information, and reports them to the management server;
the computing task processing module receives computing tasks from front-end devices, caches them in a request queue, and processes the tasks in the queue in order;
the request assistance module monitors the request queue of the computing task processing module and, when the length of the queue exceeds a specified threshold, sends an assistance request to the management server; the assistance request contains the computing tasks ranked beyond the threshold in the request queue.
Further, after receiving a coordination-success response message returned by the management server, the request assistance module notifies the computing task processing module to delete the computing tasks carried in the assistance request from the request queue; after receiving a coordination-failure response message, it notifies the computing task processing module to keep those tasks in their positions in the request queue and continue processing them in order.
In the present invention, a computing task from a front-end device contains the storage location index information of the computing task data; during computation, the computing task processing module first reads the task data according to this index information and then processes it. Carrying the storage index of the task data in the computing task, rather than the data itself, reduces the amount of data a front-end device sends to a processing server; the processing server fetches the task data only when it actually performs the computation, which lightens the network transmission burden.
The distributed scheduling apparatus for computing tasks in a server processing cluster proposed by the present invention first statically binds front-end devices to corresponding processing servers, and then, according to the computing task load information of each processing server, coordinates other processing servers to assist through a dynamic adjustment mechanism. The scheduling apparatus of the invention reduces the operating pressure on the management server, automatically adjusts for processing server overload, and ensures load balancing across the server cluster.
Description of the drawings
Fig. 1 is a networking schematic diagram of an intelligent traffic management system according to an embodiment of the present invention;
Fig. 2 is a structural diagram of embodiment one of the scheduling apparatus of the present invention;
Fig. 3 is a structural diagram of embodiment two of the scheduling apparatus of the present invention.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments; the following embodiments do not constitute a limitation of the invention.
For massive numbers of computing tasks from front-end devices, a server cluster is often used to complete the task processing in a distributed manner. Taking the intelligent traffic management system shown in Fig. 1 as an example, this embodiment specifically describes a distributed scheduling apparatus for computing tasks in a server cluster. The intelligent traffic management system includes front-end devices, a storage device, a management server, and at least one processing server, all connected over a wide area network. Multiple processing servers form the server cluster responsible for processing the computing tasks from the front-end devices, while the management server coordinates the processing servers in the cluster. The management server is usually located at the administrative center of the intelligent traffic management system; the storage device and processing servers may be placed together at the administrative center, distributed among the administrative centers of the system's edge regions, or co-located with the front-end devices. The invention is not restricted to a specific network topology.
The front-end devices are located at the checkpoints or electronic-police intersections of the intelligent traffic management system and generally include cameras, encoders, and so on; in this embodiment all the devices of one checkpoint are treated as one front-end device. A front-end device captures video pictures of passing vehicles, stores the captured pictures in the storage device, and sends processing requests to a processing server. In the intelligent traffic management system, these processing requests from front-end devices are precisely the computing tasks to be scheduled in a distributed manner.
Embodiment one is a distributed scheduling apparatus for computing tasks in a server cluster, applied to the management server. The apparatus includes a registration management module, a distribution module, and a coordination module, each described below.
The registration management module receives the registration of each processing server, and receives the computing resource information and load information the processing server reports.
Taking the intelligent traffic management system as an example, each processing server registers with the registration management module of the management server on startup and sends keep-alive messages; the registered processing servers together form the server cluster. Each processing server periodically reports its own computing resource information and load information to the registration management module. The management server thus holds the IP address of every processing server, knows from the keep-alive messages whether each server is online or offline, and learns each server's computing resource information and load information from the periodic reports. The computing resource information of a processing server refers to its CPU and memory, and may further include CPU and memory occupancy rates. The load information includes the processing capacity, the total maximum input rate, and the maximum input rate of each front-end device.
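A periodic report might be shaped as follows. Only the three load figures come from the text; every field name and value here is an illustrative assumption:

```python
import json

# Hypothetical shape of the periodic report a processing server sends to the
# registration management module, alongside keep-alive messages.
report = {
    "server_id": "server-1",
    "cpu": {"cores": 4, "usage": 0.55},        # computing resource information
    "memory": {"total_gb": 8, "usage": 0.40},
    "load": {
        "processing_capacity": 500,            # max tasks/second
        "total_input_max_rate": 430,           # all bound devices combined
        "per_device_input_max_rate": {"cp-13": 24, "cp-14": 26},
    },
}
payload = json.dumps(report)  # serialized for the periodic report
```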
When no backlog of computing tasks has occurred on a processing server, its processing capacity is the maximum number of computing tasks it can process per second, i.e. its per-second maximum while the length of the cached task request queue does not exceed the specified threshold. When a backlog has occurred, i.e. the length of the cached task request queue exceeds the specified threshold, the processing capacity is the server's maximum processing capability. The total maximum input rate is the sum of the maximum rates at which all front-end devices bound to the processing server feed computing tasks into it, i.e. the total number of computing tasks per second the server receives from its bound front-end devices. The maximum input rate of each front-end device is the maximum rate at which that bound device feeds computing tasks into the processing server, i.e. the number of computing tasks per second the server receives from that device.
The distribution module estimates the processing capacity of each registered processing server from its computing resource information, estimates the number of computing tasks each front-end device will generate, allocates a bound processing server to each front-end device, and issues the binding to the front-end device.
In this embodiment, a front-end device monitors passing vehicles, captures video pictures, stores them in the storage device, and sends processing requests to its bound processing server. These processing requests from front-end devices are the computing tasks to be scheduled in a distributed manner. First, the vehicle flow at a front-end device's location, and hence the corresponding number of processing requests, is estimated from that location and historical data. Then the estimated processing capacity of each registered processing server, i.e. the number of computing tasks it can handle per second, is derived from its computing resource information. On this basis, bound processing servers are allocated to front-end devices according to the principle of load balancing, with each device's bound processing server specified in a configuration file. It should be noted that the load balancing used here when allocating bound processing servers means keeping the number of computing tasks handled by the processing servers relatively balanced: a server with greater processing capacity handles more computing tasks, so that the CPU and memory usage of the processing servers stay relatively balanced at runtime, avoiding the situation where one processing server is extremely busy while others are largely idle.
For example, suppose the intelligent traffic management system has 2 processing servers and 30 front-end devices. The computing resource information of processing server 1 is an Intel E3 2-core CPU with 4 GB of memory, estimated to handle 300 pictures/second; that of processing server 2 is an Intel I7 4-core CPU with 8 GB of memory, estimated to handle 500 pictures/second. The total number of processing requests from front-end devices 1-12 is estimated at no more than 300 pictures/second, and the total from front-end devices 13-30 at no more than 500 pictures/second. Front-end devices 1-12 are then allocated to processing server 1, and front-end devices 13-30 to processing server 2.
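The capacity-aware binding of this example can be sketched with a first-fit rule: a device goes to the first server whose estimated load stays within its estimated capacity. The per-device rates used below (25/s and 27/s) are assumptions; the patent gives only the group totals (no more than 300/s and 500/s):

```python
# Illustrative first-fit binding of front-end devices to processing servers.

servers = {"server-1": 300, "server-2": 500}  # estimated pictures/second
device_rates = {f"cp-{i}": 25 for i in range(1, 13)}          # devices 1-12
device_rates.update({f"cp-{i}": 27 for i in range(13, 31)})   # devices 13-30

def bind_devices(servers, device_rates):
    load = {s: 0 for s in servers}
    binding = {}
    for dev, rate in device_rates.items():
        for s in servers:  # first server with room for this device's rate
            if load[s] + rate <= servers[s]:
                binding[dev] = s
                load[s] += rate
                break
    return binding

binding = bind_devices(servers, device_rates)
# cp-1..cp-12 land on server-1, cp-13..cp-30 on server-2, as in the example.
```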
Since a front-end device registers with the video management server of the intelligent traffic management system when it comes online, the distribution module can obtain front-end device information from that video management server, so that the management server can issue to each device the information of its bound processing server; the front-end device then sends its processing requests to the bound processing server whenever it generates a computing task. The distribution module is not limited to obtaining front-end device information from the video management server; it may also retrieve it from a capture database or be configured directly at the administrative center.
Specifically, in the intelligent traffic management system, the processing request sent by a front-end device carries the picture index information of the stored video picture and asks the processing server to recognize the picture and extract its structured information, for example license plate recognition, logo recognition, and vehicle model recognition. The picture index information here refers to the storage location index generated when the video picture captured by the front-end device is stored to the storage device, generally expressed as a URL path.
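Such a request might be shaped as follows; the patent specifies only that a URL-style index is carried, so every field name here is an assumption:

```python
# Illustrative processing request from a front-end device: it carries only the
# picture's storage location (a URL path) and the recognitions to perform,
# never the picture data itself.
request = {
    "device_id": "cp-7",
    "picture_url": "http://storage.example/pictures/2014/11/25/0001.jpg",
    "recognitions": ["plate", "logo", "model"],
}
```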
Thus, after a front-end device registers and comes online, the distribution module reads the device's bound processing server information from the configuration file and issues it to the device in a configuration message. Having received its bound processing server information, the front-end device can store its captured video pictures in the storage device and send processing requests, i.e. computing tasks, to its bound processing server.
The coordination module receives the assistance request sent when a registered processing server is overloaded and, according to the processing capacity and load information of the other processing servers, coordinates processing servers with spare capacity to help complete the computing tasks contained in the assistance request.
In this embodiment, a processing server receives the processing request messages sent by front-end devices, parses the picture index information in each request message, reads the corresponding video picture from the storage device according to its URL path, and performs picture recognition to extract the structured information in the picture, such as license plate, logo, and vehicle model.
However, because bad weather, road maintenance, shifts in urban hotspot areas, urban fringe expansion, and the like can cause the vehicle-passing flow on some roads to change greatly, the number of processing requests sent by a front-end device can change abruptly when such conditions occur, and the bound processing server may become too overloaded to keep up. When a processing server is overloaded, too many processing request messages are pending; once the request message queue has built up to a certain degree, the processing server sends an assistance request to the management server. After the coordination module of the management server receives the assistance request sent by a processing server, it selects, according to the load and capacity of each processing server (excluding the one requesting assistance), the processing server with the lightest load and the most spare capacity; the selected processing server then reads the video pictures from the storage device according to the picture index information carried in the assistance request and performs picture recognition on them.
Specifically, the coordination module compares the load information of the other processing servers (excluding the one that sent the assistance request) and looks in the server cluster for processing servers whose per-second processing capability exceeds their own input plus the number of computing tasks contained in the assistance request, choosing among them the processing server currently receiving the fewest computing tasks per second to handle the tasks in the assistance request. If such a server is found, the computing tasks in the assistance request are issued to it and a coordination-success response message is returned to the processing server that sent the assistance request; otherwise a coordination-failure response message is returned to that server.
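This selection rule can be sketched as follows; field names and figures are assumptions:

```python
# Minimal sketch of the coordination module's assistant-selection rule.

def pick_assistant(table, requester, assist_count):
    """Return the name of the server to assist, or None on coordination failure."""
    candidates = [
        (name, entry) for name, entry in table.items()
        if name != requester
        and entry["processing_capacity"] >= entry["total_input_rate"] + assist_count
    ]
    if not candidates:
        return None  # reply to the requester with a coordination-failure message
    # among qualified servers, prefer the one currently receiving the fewest tasks
    return min(candidates, key=lambda c: c[1]["total_input_rate"])[0]

table = {
    "server-1": {"processing_capacity": 300, "total_input_rate": 310},  # overloaded
    "server-2": {"processing_capacity": 500, "total_input_rate": 420},
    "server-3": {"processing_capacity": 400, "total_input_rate": 250},
}
helper = pick_assistant(table, "server-1", 32)  # both qualify; server-3 has less input
```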
Embodiment two is a distributed scheduling apparatus for computing tasks in a server cluster, applied to a processing server. The apparatus includes:
a registration and resource reporting module, which registers with the management server, periodically collects the processing server's own computing resource information and load information, and reports them to the management server.
Taking the intelligent traffic management system as an example, the registration and resource reporting module of a processing server in the server cluster registers with the management server on startup and sends keep-alive messages; it periodically collects the server's own computing resource information and load information and reports them to the management server. The management server thus keeps track of the computing resource information and load information of every processing server, and when it receives an assistance request from a processing server it coordinates according to the load information of the other processing servers.
The calculating task processing module receives calculating tasks from front-end devices, caches them in a request queue, and processes the tasks in the queue in order.
In the present embodiment, the calculating tasks from the front-end devices are first queued in the cache to form a request queue, so that whether the processing server is overloaded can subsequently be judged from the length of the request queue.
When processing a request, the calculating task processing module first parses the picture index information carried in the request, reads the corresponding picture data according to that index, and performs picture recognition. Concretely, it parses the URL path of the video picture carried in the request, reads the picture data from the storage device according to that URL path, and performs picture recognition.
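This processing step can be sketched as below. The request format (a dict with a `picture_url` field) and the `read_picture`/`recognize` callables are assumptions made for illustration; the patent does not specify a request encoding.

```python
# Illustrative sketch of one task-processing step: parse the picture index
# (a URL path) from the request, fetch the picture, run recognition.
from urllib.parse import urlparse

def handle_request(request, read_picture, recognize):
    """Parse the picture URL path, read the picture from storage, recognize it."""
    path = urlparse(request["picture_url"]).path   # picture index information
    picture = read_picture(path)                   # read from the storage device
    return recognize(picture)                      # picture recognition result
```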
The assistance request module monitors the request queue of the calculating task processing module; when the queue length exceeds a prescribed threshold, it sends an assistance request to the management server. The assistance request contains the calculating tasks queued beyond the threshold.
For example, when the request queue of calculating tasks exceeds a threshold of 128, the calculating tasks beyond the threshold (for example 32 of them) are copied into an assistance request and sent to the management server.
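A minimal sketch of this overflow check follows. Note that the overflow tasks are copied, not removed: as described next, they are deleted from the queue only after a coordination-success response arrives. The threshold value 128 follows the example above.

```python
# Sketch of the overload check: tasks queued beyond the threshold are copied
# into an assistance request; the queue itself is left untouched until a
# success response is received from the management server.
def build_assist_request(queue, threshold=128):
    """Return the list of tasks to forward, or None if the queue is short enough."""
    if len(queue) <= threshold:
        return None
    return list(queue[threshold:])  # a copy of the tasks beyond the threshold
```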
The assistance request module also notifies the calculating task processing module according to the response received from the management server, as follows:
if a success response is received, the calculating tasks carried in the assistance request are deleted from the request queue;
if a failure response is received, the calculating tasks carried in the assistance request keep their positions in the request queue and continue to be processed in order.
In the present embodiment, the number of calculating tasks generated by each front-end device may change continually; in particular, when it keeps growing, the bound processing server may remain overloaded, and the binding relation between processing servers and front-end devices must then be adjusted dynamically. Likewise, when a front-end device is newly added, a processing server must be bound to it. Both situations require knowledge of each processing server's load, so each processing server periodically (for example, hourly) collects its own load information. The calculating task processing capacity of a processing server, its total input maximum rate, and the input maximum rate of each front-end device are all measured in calculating tasks per second, which in this embodiment means processing requests per second, i.e. pictures recognized per second. At fixed intervals (for example, every hour) the processing server reports the statistics of the interval to the management server.
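The per-interval statistics can be sketched as follows, with all rates in calculating tasks (recognized pictures) per second as defined above. The exact report fields are an assumption; the patent only names the quantities, not a wire format.

```python
# Hedged sketch of the load statistics a processing server reports each interval.
def load_report(pictures_processed, interval_seconds, per_device_counts):
    """Summarize one reporting interval as rates in tasks per second."""
    per_device_rate = {dev: n / interval_seconds for dev, n in per_device_counts.items()}
    return {
        "processing_rate": pictures_processed / interval_seconds,   # tasks handled/sec
        "total_input_max_rate": sum(per_device_rate.values()),      # all bound devices
        "device_input_max_rate": per_device_rate,                   # per front-end device
    }
```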
After receiving the data reported by the processing servers, the management server builds a load and processing capacity table for the processing servers, as shown in Table 1. Table 1 also records the number of assistance requests sent by each processing server.
Table 1
After the registration management module of the present embodiment receives the load information reported by a processing server, it combines that information with the number of assistance requests the server has sent and the information of the front-end devices bound to it, forming the load and processing capacity table. The distribution module obtains the data of Table 1 from the registration management module and dynamically adjusts the bindings for newly added front-end devices and for overloaded processing servers; the two adjustment processes are described separately below.
For a newly added front-end device: when a front-end device is newly added to the intelligent traffic management system, the distribution module of the management server consults Table 1 and automatically binds the new front-end device to the processing server with the largest spare processing capacity. For example, in Table 1, processing server 3 has spare processing capacity, so the new front-end device is bound to processing server 3.
It should be noted that the spare processing capacity mentioned in this embodiment is the difference between a processing server's maximum processing capacity and its total input maximum rate. If, when a front-end device is newly added, every processing server is already heavily loaded (for example, running at full load with no spare processing capacity, or with spare capacity insufficient for the estimated calculating tasks of the new front-end device), the distribution module must send an alarm so that administrators add a new processing server.
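The binding decision above can be sketched as follows, with spare capacity computed exactly as just defined: maximum processing capacity minus total input maximum rate. The table row fields are illustrative names, not from the patent.

```python
# Sketch of new-device binding against the load and processing capacity table.
def bind_new_device(table, estimated_rate):
    """Return the server id to bind the new device to, or None to raise an alarm."""
    # spare capacity = maximum processing capacity - total input maximum rate
    best = max(table, key=lambda s: s["max_capacity"] - s["total_input_rate"])
    spare = best["max_capacity"] - best["total_input_rate"]
    if spare < estimated_rate:
        return None  # no server can absorb the new device: notify administrators
    return best["id"]
```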
For an overloaded processing server: the distribution module dynamically adjusts the binding relation between front-end devices and processing servers according to the data of Table 1. Two adjustment strategies are available:
In the first, a judgment is made once per period (for example, monthly): a processing server that has sent no assistance requests is handling all of its calculating tasks within its normal processing capacity and needs no adjustment; a processing server that has sent assistance requests has calculating tasks exceeding its processing capacity, and the adjustment process must be triggered for it.
In the second, the adjustment process is triggered whenever a processing server's number of assistance requests exceeds a certain threshold (for example, 10).
Specifically, during adjustment, for each processing server that has sent assistance requests, the front-end devices whose input exceeds that server's processing capacity are redistributed to other processing servers that have spare processing capacity.
For the processing server receiving a new binding, the sum of the maximum input rate of the newly bound front-end device's calculating tasks and the server's existing total maximum input rate must not exceed the server's maximum processing capacity. When a processing server has never sent an assistance request, its maximum processing capacity cannot be assessed; in that case, the present embodiment takes its maximum capacity to be a default value, namely the minimum of the maximum capacity values of the processing servers that have sent assistance requests.
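This capacity-estimation rule can be sketched as below. The `assist_requests` and `max_capacity` field names are assumptions; the rule itself (default to the minimum capacity measured on servers that have asked for help) follows the paragraph above.

```python
# Sketch of the capacity-estimation rule: a server that has never sent an
# assistance request has not revealed its true limit, so its maximum capacity
# defaults to the minimum capacity among servers that have sent such requests.
def effective_capacity(server, all_servers):
    """Return the capacity to use for 'server' when planning a rebinding."""
    if server["assist_requests"] > 0:
        return server["max_capacity"]  # this server's limit has been observed
    measured = [s["max_capacity"] for s in all_servers if s["assist_requests"] > 0]
    return min(measured) if measured else server["max_capacity"]
```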
The process is described below using Table 1 as an example.
Since processing server 1 has sent assistance requests, it needs adjustment. In Table 1, processing server 3 is the processing server with the lightest calculating task input, and after front-end device 3 is bound to processing server 3, the calculating task input maximum rate of processing server 3 does not exceed its maximum processing capacity (320/second). Front-end device 3 is therefore bound to processing server 3, and Table 1 is updated accordingly.
After the adjustment, the management server sends the information that front-end device 3 is bound to processing server 3 down to front-end device 3 in a configuration message; once front-end device 3 receives it, subsequent processing requests are sent to processing server 3 for handling. The updated load and processing capacity table is shown in Table 2:
Table 2
In summary, the distribution module first uses the number of assistance requests sent by each processing server to identify the overloaded processing servers that need adjustment, then binds the front-end devices exceeding an overloaded server's processing capacity to other processing servers with spare processing capacity.
Likewise, if the spare processing capacity of the other processing servers is insufficient for the maximum input rate of the front-end devices that exceed the overloaded server's processing capacity, the distribution module must send an alarm so that administrators add a new processing server.
The above embodiments merely illustrate rather than limit the technical solutions of the present invention. Without departing from the spirit and essence of the invention, those skilled in the art may make various corresponding changes and variations according to the present invention, but all such changes and variations shall fall within the protection scope of the appended claims of the invention.
Claims (3)
1. A distributed scheduling apparatus for calculating tasks in a server cluster, applied to a management server in the server cluster, which dispatches calculating tasks from front-end devices to processing servers in the server cluster for calculation processing, characterized in that the scheduling apparatus comprises a registration management module, a distribution module and a coordination module, wherein:
the registration management module is configured to receive the registration of processing servers, and to receive the computing resource information and load information reported by the processing servers;
the distribution module is configured to estimate the processing capacity of each registered processing server from its reported computing resource information, estimate the number of calculating tasks generated by each front-end device, allocate and bind a processing server to each front-end device, and send the bound processing server's information down to the front-end device;
the coordination module is configured to receive an assistance request sent by a registered processing server when overloaded, and, according to the processing capacity and load information of the other processing servers, coordinate another processing server with spare processing capacity to assist in completing the calculating tasks contained in the assistance request;
the registration management module is further configured to receive the load information reported by each processing server and, combining it with the number of assistance requests the processing server has sent and the information of the front-end devices bound to it, form a load and processing capacity table;
the distribution module is further configured to search the load and processing capacity table formed by the registration management module for the processing server with the largest spare processing capacity, and bind a newly added front-end device to that processing server;
the distribution module is further configured, according to the load and processing capacity table formed by the registration management module, to identify from the number of assistance requests sent by each processing server the overloaded processing server that needs adjustment, and to bind the front-end devices exceeding the overloaded server's processing capacity to other processing servers with spare processing capacity.
2. The scheduling apparatus according to claim 1, characterized in that the coordination module is further configured to return a coordination success or failure response to the processing server that sent the assistance request.
3. The scheduling apparatus according to claim 1, characterized in that the load information comprises the processing capacity, the total input maximum rate, and the input maximum rate of each front-end device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410690581.6A CN104363300B (en) | 2014-11-26 | 2014-11-26 | Task distribution formula dispatching device is calculated in a kind of server cluster |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410690581.6A CN104363300B (en) | 2014-11-26 | 2014-11-26 | Task distribution formula dispatching device is calculated in a kind of server cluster |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104363300A CN104363300A (en) | 2015-02-18 |
CN104363300B true CN104363300B (en) | 2018-06-05 |
Family
ID=52530526
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410690581.6A Active CN104363300B (en) | 2014-11-26 | 2014-11-26 | Task distribution formula dispatching device is calculated in a kind of server cluster |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104363300B (en) |
Families Citing this family (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106302608B (en) | 2015-06-08 | 2020-02-04 | 阿里巴巴集团控股有限公司 | Information processing method and device |
CN104933167A (en) * | 2015-06-30 | 2015-09-23 | 深圳走天下科技有限公司 | Picture processing system, device and method |
CN105007336B (en) * | 2015-08-14 | 2018-06-29 | 深圳市云舒网络技术有限公司 | The load-balancing method and its system of server |
CN106557310B (en) * | 2015-09-30 | 2021-08-20 | 北京奇虎科技有限公司 | Remote desktop management method and system |
CN106559467B (en) * | 2015-09-30 | 2021-02-05 | 北京奇虎科技有限公司 | Remote desktop management method and system |
CN106657191B (en) * | 2015-11-02 | 2020-10-16 | 杭州华为企业通信技术有限公司 | Load balancing method and related device and system |
CN106302734B (en) * | 2016-08-16 | 2019-03-26 | 北京控制工程研究所 | A kind of autonomous evolution implementation method of satellite counting system |
CN107885594B (en) * | 2016-09-30 | 2020-06-12 | 腾讯科技(深圳)有限公司 | Distributed resource scheduling method, scheduling node and access node |
CN106603695B (en) * | 2016-12-28 | 2020-10-02 | 北京奇艺世纪科技有限公司 | Method and device for adjusting query rate per second |
WO2018195899A1 (en) | 2017-04-28 | 2018-11-01 | Beijing Didi Infinity Technology And Development Co., Ltd. | System and method for task scheduling and device management |
CN107622117B (en) * | 2017-09-15 | 2020-05-12 | Oppo广东移动通信有限公司 | Image processing method and apparatus, computer device, computer-readable storage medium |
JP6427697B1 (en) * | 2018-01-22 | 2018-11-21 | 株式会社Triart | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM |
CN110417831B (en) * | 2018-04-27 | 2022-07-29 | 杭州海康威视数字技术股份有限公司 | Intelligent equipment computing resource allocation method, device and system |
CN108776934B (en) * | 2018-05-15 | 2022-06-07 | 中国平安人寿保险股份有限公司 | Distributed data calculation method and device, computer equipment and readable storage medium |
CN113852787A (en) * | 2018-08-09 | 2021-12-28 | 华为技术有限公司 | Intelligent application deployment method, device and system |
CN110889569A (en) * | 2018-09-10 | 2020-03-17 | 杭州萤石软件有限公司 | Computing power allocation method and device |
CN110944146B (en) | 2018-09-21 | 2022-04-12 | 华为技术有限公司 | Intelligent analysis equipment resource adjusting method and device |
CN109194976A (en) * | 2018-10-22 | 2019-01-11 | 网宿科技股份有限公司 | Video processing, dissemination method, storage management, Content Management Platform and system |
CN109800204B (en) * | 2018-12-27 | 2021-03-05 | 深圳云天励飞技术有限公司 | Data distribution method and related product |
CN109873858B (en) * | 2018-12-27 | 2021-03-30 | 中科曙光南京研究院有限公司 | Service data distributed monitoring method and distributed monitoring cluster |
CN110062038A (en) * | 2019-04-09 | 2019-07-26 | 网宿科技股份有限公司 | A kind of data transmission scheduling method and system |
CN112218251B (en) * | 2019-07-09 | 2022-01-07 | 普天信息技术有限公司 | Method and device for processing broadband cluster concurrent service |
CN112422598A (en) * | 2019-08-22 | 2021-02-26 | 中兴通讯股份有限公司 | Resource scheduling method, intelligent front-end equipment, intelligent gateway and distributed system |
CN110659180A (en) * | 2019-09-05 | 2020-01-07 | 国家计算机网络与信息安全管理中心 | Data center infrastructure management system based on cluster technology |
CN112954264B (en) * | 2019-12-10 | 2023-04-18 | 浙江宇视科技有限公司 | Platform backup protection method and device |
CN111106971B (en) * | 2019-12-31 | 2023-04-18 | 深圳市九洲电器有限公司 | Device registration management method, device and computer-readable storage medium |
CN111447113B (en) * | 2020-03-25 | 2021-08-27 | 中国建设银行股份有限公司 | System monitoring method and device |
CN113556372B (en) * | 2020-04-26 | 2024-02-20 | 浙江宇视科技有限公司 | Data transmission method, device, equipment and storage medium |
CN111885350A (en) * | 2020-06-10 | 2020-11-03 | 北京旷视科技有限公司 | Image processing method, system, server and storage medium |
CN113992493B (en) * | 2020-07-08 | 2024-09-06 | 阿里巴巴集团控股有限公司 | Video processing method, system, equipment and storage medium |
CN112383585A (en) * | 2020-10-12 | 2021-02-19 | 广州市百果园网络科技有限公司 | Message processing system and method and electronic equipment |
CN113055480B (en) * | 2021-03-17 | 2023-05-23 | 网宿科技股份有限公司 | Scheduling method and device |
CN113687947A (en) * | 2021-08-25 | 2021-11-23 | 京东方科技集团股份有限公司 | Edge box optimization method and device, storage medium and electronic equipment |
CN114070728B (en) * | 2021-11-12 | 2024-04-09 | 上海华信长安网络科技有限公司 | Method and device for grading configuration of telephone |
CN116389502B (en) * | 2023-02-28 | 2024-02-23 | 港珠澳大桥管理局 | Cross-cluster scheduling system, method, device, computer equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1315018A (en) * | 1997-07-29 | 2001-09-26 | 凯萨罗恩产品公司 | Computerized system and method for optimally controlling storage and transfer of computer programs on a computer network |
CN1738244A (en) * | 2004-08-17 | 2006-02-22 | 北京亿阳巨龙智能网技术有限公司 | Method for setting application server by proxy server in soft switching system |
CN1863202A (en) * | 2005-10-18 | 2006-11-15 | 华为技术有限公司 | Method for improving load balance apparatus and servicer processing performance |
CN1873613A (en) * | 2005-05-30 | 2006-12-06 | 英业达股份有限公司 | Load balanced system and method of preloading files |
CN101534244A (en) * | 2009-02-09 | 2009-09-16 | 华为技术有限公司 | Method, device and system for load distribution |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020103886A1 (en) * | 2000-12-04 | 2002-08-01 | International Business Machines Corporation | Non-local aggregation of system management data |
US20060059251A1 (en) * | 2002-05-01 | 2006-03-16 | Cunha Gary D | Method and system for request management processing |
-
2014
- 2014-11-26 CN CN201410690581.6A patent/CN104363300B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1315018A (en) * | 1997-07-29 | 2001-09-26 | 凯萨罗恩产品公司 | Computerized system and method for optimally controlling storage and transfer of computer programs on a computer network |
CN1738244A (en) * | 2004-08-17 | 2006-02-22 | 北京亿阳巨龙智能网技术有限公司 | Method for setting application server by proxy server in soft switching system |
CN1873613A (en) * | 2005-05-30 | 2006-12-06 | 英业达股份有限公司 | Load balanced system and method of preloading files |
CN1863202A (en) * | 2005-10-18 | 2006-11-15 | 华为技术有限公司 | Method for improving load balance apparatus and servicer processing performance |
CN101534244A (en) * | 2009-02-09 | 2009-09-16 | 华为技术有限公司 | Method, device and system for load distribution |
Also Published As
Publication number | Publication date |
---|---|
CN104363300A (en) | 2015-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104363300B (en) | Task distribution formula dispatching device is calculated in a kind of server cluster | |
CN104798356B (en) | Method and apparatus for the utilization rate in controlled level expanding software application | |
CN103532873B (en) | flow control policy applied to distributed file system | |
CN107395739A (en) | A kind of data exchange shared platform | |
CN109213792A (en) | Method, server-side, client, device and the readable storage medium storing program for executing of data processing | |
CN109067835A (en) | Casualty data processing method based on block chain | |
CN101246646A (en) | E-Police system structure based on Web service | |
JP2014102691A (en) | Information processing device, camera with communication function, and information processing method | |
CN106453460A (en) | File distributing method, apparatus and system | |
CN106537824A (en) | Method and apparatus for reducing response time in information-centric networks | |
CN110188872A (en) | A kind of isomery cooperative system and its communication means | |
CN107995017A (en) | A kind of uplink bandwidth allocation method, apparatus and system | |
CN108495096A (en) | A kind of mobile law enforcement system and method | |
CN113840330B (en) | Connection establishment method, gateway equipment, network system and dispatching center | |
CN103236168B (en) | Traffic data on-line acquisition system and method | |
CN104766494B (en) | Distributed type time-staggered parking system | |
WO2021017968A1 (en) | Method, apparatus and system for processing access request in content delivery system | |
CN104871499A (en) | Communication node, control device, method for managing control information entries, and program | |
CN110908939B (en) | Message processing method and device and network chip | |
CN106506072B (en) | A kind of collecting method and device | |
CN109922313B (en) | Image processing method, mobile terminal and cloud server | |
CN106789179A (en) | A kind of resource allocation methods based on SDN frameworks | |
JP5271737B2 (en) | Data collection system and transmission control device | |
CN111988348A (en) | Data acquisition method and Internet of vehicles control system | |
CN103945004A (en) | Dispatching method and system for data between data centers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |