CN105049485B - Load-aware cloud computing system for real-time video processing - Google Patents
Load-aware cloud computing system for real-time video processing
- Publication number
- CN105049485B (application CN201510330962.8A)
- Authority
- CN
- China
- Prior art keywords
- load
- worker
- cpu
- gpu
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
- H04L43/0882—Utilisation of link capacity
Abstract
The present invention proposes a load-aware cloud computing system for real-time video processing, comprising: a Storm cluster, which provides the infrastructure service; a video stream generator, responsible for generating, receiving and sending video streams; a streaming server, which buffers video data and provides a unified message interface to reduce coupling between components; a load detector, which binds a load-awareness algorithm to perceive the computational load of a task and, by analyzing the CPU, GPU and memory usage of the node hosting the task, notifies the Storm cluster which processor type to select; a parameter controller, for analyzing and evaluating cluster performance; a video processor, which provides CPU-based and GPU-based processing interfaces; and a protocol provider, which supplies various protocols and through which the parameter controller interacts with the Storm cluster, realizing message exchange under different protocols while decoupling the modules.
Description
Technical field
The present invention relates to the fields of cloud computing, big data and video processing, and in particular to a load-aware cloud computing system for real-time video processing.
Background technology
With the booming of business models and research directions such as the Internet, mobile devices, smart homes and intelligent transportation, massive real-time data needs to be used effectively; among such data, video data is particularly important because of its high real-time requirements and large volume. The International Data Corporation (IDC) report "The Digital Universe in 2020" points out that half of the world's big data in 2012 was video data, and that by 2015 this proportion would reach 65%.
Compared with a CPU, processing video with a GPU can greatly improve computing speed. However, the drawback of current GPU processing platforms is that, although GPU-based tasks consume less CPU, they occupy a large amount of memory. Especially when memory is insufficient, this bottleneck significantly limits processing efficiency.
Therefore, how to efficiently use massive data and fully mine the information within it is an urgent problem in this field.
Summary of the invention
To solve the problems of high resource consumption and large data volume caused by intelligent information mining over live video streams in a cloud environment, the present invention proposes, on the basis of cloud computing and big data platforms, a load-aware cloud computing system for real-time video processing, which can maximize the use of cloud node resources and improve computing speed.
The technical solution of the invention is realized as follows:
A load-aware cloud computing system for real-time video processing, comprising:
a Storm cluster, which provides the infrastructure service;
a video stream generator, responsible for generating, receiving and sending video streams;
a streaming server, which buffers video data and provides a unified message interface, reducing the coupling between components;
a load detector, which binds the load-awareness algorithm to perceive the computational load of a task and, by analyzing the CPU, GPU and memory usage of the node hosting the task, notifies the Storm cluster which processor type to select;
a parameter controller, for analyzing and evaluating cluster performance;
a video processor, which provides CPU-based and GPU-based processing interfaces;
a protocol provider, which supplies various protocols and through which the parameter controller interacts with the Storm cluster, realizing message exchange under different protocols while decoupling the modules.
Optionally, the protocol provider supplies the AOP, RMI and HTTP protocols.
Optionally, the dynamic load-awareness algorithm first counts the computing resources of each compute node in the cloud environment before a topology job is submitted. After the topology is submitted, the algorithm obtains the resource request of each worker of the topology and compares the request with the available resources of each cloud node; when the available resources of a cloud node exceed the worker's request, the worker is planned onto that node. Then, by comparing the worker's different requests for CPU and GPU resources, the algorithm calculates separately how many such workers could be accepted when using the CPU and when using the GPU, and selects the processor that can accommodate the larger number.
The beneficial effects of the invention are as follows:
(1) an efficient cloud computing system for massive real-time video processing is constructed, realizing intelligent processing of massive real-time video;
(2) a dynamic load-awareness algorithm combining CPU and GPU is proposed, maximizing the use of cloud node computing resources.
Description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is the functional block diagram of the load-aware cloud computing system for real-time video processing of the invention;
Fig. 2 is the operational flow chart of the load-aware cloud computing system for real-time video processing of the invention;
Fig. 3 is the flow chart of the CPU-GPU combined dynamic load-awareness algorithm of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the invention.
As shown in Fig. 1, the load-aware cloud computing system for real-time video processing of the invention comprises: a Storm cluster, a video stream generator, a streaming server, a load detector, a parameter controller, a video processor and a protocol provider.
The Storm cluster, as the core component of the invention, provides the infrastructure service.
The video stream generator is responsible for generating, receiving and sending video streams.
The streaming server buffers video data and provides a unified message interface, reducing the coupling between components.
The load detector binds the dynamic load-awareness algorithm to perceive the computational load of a task. By analyzing the CPU, GPU and memory usage of the node hosting the task, it notifies the Storm cluster which processor type to select.
The parameter controller analyzes and evaluates cluster performance.
The video processor provides CPU-based and GPU-based processing interfaces.
The protocol provider supplies various protocols, including AOP, RMI, HTTP, etc.
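The decoupling role of the protocol provider can be illustrated with a minimal sketch. The patent specifies only that multiple protocols sit behind one interface so the parameter controller never talks to the Storm cluster directly; the names here (`ProtocolProvider`, `register`, `send`) are hypothetical, not from the patent.

```python
class ProtocolProvider:
    """Hypothetical registry that hides the concrete protocol behind one send() call."""

    def __init__(self):
        self._handlers = {}

    def register(self, protocol, handler):
        # handler: callable(message) -> reply; could wrap HTTP, RMI, etc.
        self._handlers[protocol] = handler

    def send(self, protocol, message):
        if protocol not in self._handlers:
            raise ValueError(f"unsupported protocol: {protocol}")
        return self._handlers[protocol](message)


# The parameter controller issues requests only through the provider, so
# swapping HTTP for RMI changes a registration, not the controller's code.
provider = ProtocolProvider()
provider.register("http", lambda msg: {"status": "ok", "echo": msg})
reply = provider.send("http", {"query": "cluster-performance"})
print(reply["status"])  # prints "ok"
```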
Fig. 2 is the operational flow chart of the load-aware cloud computing system for real-time video processing of the invention.
After a new job is submitted to the system of the invention, the system first checks whether the Storm platform still has enough virtual machine processes to run the job.
If so, the system satisfies all the workers requested by the task; once all workers have been assigned, each worker starts its own operation.
It should be noted that not every worker carries a heavy computing task, so the system of the invention also includes the CPU-GPU combined dynamic load-awareness algorithm, which first checks whether a given worker contains a video computing task. If it does, the load-awareness algorithm is executed to decide whether the task is assigned to the CPU or to the GPU for computation.
After all workers have been traversed, the whole job starts to execute.
Meanwhile, to detect whether the newly uploaded job affects the normal operation of other jobs, the system of the invention also monitors the performance of the whole Storm platform; if the overall platform performance is affected, an alarm is raised.
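The job-submission flow above can be sketched as short Python. This is an illustrative reconstruction of the Fig. 2 logic, not the patented implementation; the stub classes and method names (`has_free_vm_processes`, `contains_video_task`, `performance_degraded`) are assumptions standing in for the real platform interfaces.

```python
class Worker:
    def __init__(self, video=False):
        self.contains_video_task = video
        self.processor = None          # set to "CPU" or "GPU" for video workers

    def start(self):
        self.started = True


class Platform:
    # Stubs for the Storm-platform checks described in the text.
    def has_free_vm_processes(self, job):
        return True

    def performance_degraded(self):
        return False

    def raise_alarm(self):
        print("alarm: platform performance affected")


class Job:
    def __init__(self, workers):
        self.workers = workers


def submit_job(platform, job, load_aware_choose):
    """Sketch of the Fig. 2 flow: admit the job, place its workers,
    pick CPU or GPU per video worker, then watch overall performance."""
    if not platform.has_free_vm_processes(job):
        return "rejected: no spare virtual machine processes"
    for worker in job.workers:                        # satisfy all requested workers
        if worker.contains_video_task:
            worker.processor = load_aware_choose(worker)   # "CPU" or "GPU"
        worker.start()
    if platform.performance_degraded():               # new job must not hurt others
        platform.raise_alarm()
    return "running"


job = Job([Worker(video=True), Worker()])
print(submit_job(Platform(), job, lambda w: "GPU"))   # prints "running"
```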
Different computing tasks may be suited to CPU processing or to GPU processing, so the present invention proposes a CPU-GPU combined dynamic load-awareness algorithm that realizes coordinated CPU-GPU processing. Excessive occupation of a cloud computing node's RAM or CPU causes its performance to decline; to avoid this, two thresholds are set: a CPU utilization threshold α and a RAM utilization threshold β. As shown in Fig. 3, the algorithm for flexible CPU-GPU selection according to node load can be defined as follows:
N = {n_i}: the set of nodes;
TC_i, UC_i: the total CPU of node i and the CPU already used;
TR_i, UR_i: the total RAM of node i and the RAM already used;
AC, AR: the CPU and RAM available on a node;
Topo = {t_j}: the set of deployed topologies;
W_j = {w_{j,k}}: the workers to be allocated for the j-th topology;
grc_{j,k}, grr_{j,k}: the CPU and RAM needed by worker w_{j,k} when using the GPU;
crc_{j,k}, crr_{j,k}: the CPU and RAM needed by worker w_{j,k} when using the CPU.
Before a topology job is submitted, the CPU-GPU combined dynamic load-awareness algorithm of the invention first counts the computing resources of each compute node in the cloud environment. After the topology is submitted, the algorithm obtains the resource request of each worker of the topology and compares the request with the available resources of each cloud node; when the available resources of a cloud node exceed the worker's request, the worker is planned onto that node. Then, by comparing the worker's different requests for CPU and GPU resources, the algorithm calculates separately how many such workers could be accepted using the CPU and using the GPU, and selects the processor that can accommodate the larger number.
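Using the notation above, the selection step can be sketched as follows. The patent gives the algorithm only as a flow chart (Fig. 3), so this sketch makes one assumption explicit: "how many such workers can be accepted" is taken to mean floor division of the node's available CPU/RAM, capped by the thresholds α and β, by the worker's per-processor demand. Default threshold values of 0.8 are illustrative, not from the patent.

```python
def choose_processor(node, worker, alpha=0.8, beta=0.8):
    """Pick "CPU" or "GPU" for one worker on one node.

    node:   dict with TC/UC (total/used CPU) and TR/UR (total/used RAM)
    worker: dict with GPU-path demand (grc, grr) and CPU-path demand (crc, crr)
    alpha, beta: CPU and RAM utilization thresholds from the description
    """
    # Available resources, capped so utilization never exceeds the thresholds.
    ac = max(0.0, alpha * node["TC"] - node["UC"])
    ar = max(0.0, beta * node["TR"] - node["UR"])

    def capacity(cpu_need, ram_need):
        # How many copies of this worker the node could still accept.
        if cpu_need <= 0 and ram_need <= 0:
            return 0
        by_cpu = ac // cpu_need if cpu_need > 0 else float("inf")
        by_ram = ar // ram_need if ram_need > 0 else float("inf")
        return int(min(by_cpu, by_ram))

    n_gpu = capacity(worker["grc"], worker["grr"])  # GPU path: little CPU, much RAM
    n_cpu = capacity(worker["crc"], worker["crr"])  # CPU path: much CPU, little RAM
    return "GPU" if n_gpu >= n_cpu else "CPU"


# Illustrative node: 16 CPU cores (4 used), 32 GB RAM (8 GB used).
node = {"TC": 16, "UC": 4, "TR": 32, "UR": 8}
# RAM-light worker: the GPU path fits more copies, so GPU wins.
print(choose_processor(node, {"grc": 0.5, "grr": 2, "crc": 4, "crr": 1}))   # prints "GPU"
# RAM-heavy worker: the GPU path fits zero copies, so CPU wins.
print(choose_processor(node, {"grc": 0.5, "grr": 20, "crc": 4, "crr": 1}))  # prints "CPU"
```

This mirrors the patent's stated motivation: GPU-based tasks consume little CPU but much memory, so on a memory-starved node the count of GPU-path workers collapses and the algorithm falls back to the CPU.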
The load-aware cloud computing system for real-time video processing of the invention realizes intelligent processing of massive real-time video, and proposes a CPU-GPU combined dynamic load-awareness algorithm that maximizes the use of cloud node computing resources.
The above are only preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent replacement, improvement and the like made within the spirit and principle of the invention shall be included in the protection scope of the invention.
Claims (2)
1. A load-aware cloud computing system for real-time video processing, characterized by comprising:
a Storm cluster, which provides the infrastructure service;
a video stream generator, for generating and sending video streams;
a streaming server, which buffers video data and provides a unified message interface, reducing the coupling between components;
a load detector, which binds a dynamic load-awareness algorithm to perceive the computational load of a task and, by analyzing the CPU, GPU and memory usage of the node hosting the task, notifies the Storm cluster which processor type to select; the dynamic load-awareness algorithm comprising the following steps:
before a topology job is submitted, first counting the computing resources of each compute node in the cloud environment;
after the topology is submitted, obtaining the resource request of each worker of the topology and comparing the request with the available resources of each cloud node; when the available resources of a cloud node exceed the worker's request, planning the worker onto that node;
then, by comparing the worker's different requests for CPU and GPU resources, calculating separately how many such workers could be accepted when using the CPU and when using the GPU, and selecting the processor that can accommodate the larger number;
a parameter controller, for analyzing and evaluating cluster performance;
a video processor, which provides CPU-based and GPU-based processing interfaces; and
a protocol provider, which supplies various protocols and through which the parameter controller interacts with the Storm cluster, realizing message exchange under different protocols while decoupling the modules;
wherein, after a new job is submitted to the system, the system first checks whether the Storm platform still has enough virtual machine processes to run the job; if so, the system satisfies all the workers requested by the task, and once all workers have been assigned, each worker starts its own operation; the dynamic load-awareness algorithm first checks whether a given worker contains a video computing task and, if it does, is executed to decide whether the task is assigned to the CPU or to the GPU for computation; after all workers have been traversed, the whole job starts to execute.
2. The load-aware cloud computing system for real-time video processing according to claim 1, characterized in that the protocol provider supplies the AOP, RMI and HTTP protocols.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510330962.8A CN105049485B (en) | 2015-06-09 | 2015-06-09 | Load-aware cloud computing system for real-time video processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105049485A CN105049485A (en) | 2015-11-11 |
CN105049485B true CN105049485B (en) | 2018-10-16 |
Family
ID=54455688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510330962.8A Expired - Fee Related CN105049485B (en) | 2015-06-09 | 2015-06-09 | Load-aware cloud computing system for real-time video processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105049485B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106095573B (en) * | 2016-06-08 | 2019-10-22 | 北京大学 | A kind of Storm platform operations of work nest perception divide equally dispatching method |
CN106878671B (en) * | 2016-12-29 | 2019-07-26 | 中国农业大学 | A kind of farm's multiple target video analysis method and its system |
CN108037995A (en) * | 2017-11-22 | 2018-05-15 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Distributed electromagnetic situation simulation computing system based on GPU |
CN109857560A (en) * | 2019-01-28 | 2019-06-07 | 中国石油大学(华东) | A kind of collaboration parallelization mechanism based on CPU/GPU isomerous environment |
CN110035297B (en) * | 2019-03-08 | 2021-05-14 | 视联动力信息技术股份有限公司 | Video processing method and device |
CN112346863B (en) * | 2020-10-28 | 2024-06-07 | 河北冀联人力资源服务集团有限公司 | Method and system for processing dynamic adjustment data of computing resources |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1677190A2 (en) * | 2004-12-30 | 2006-07-05 | Microsoft Corporation | Systems and methods for virtualizing graphics subsystems |
CN103699656A (en) * | 2013-12-27 | 2014-04-02 | 同济大学 | GPU-based mass-multimedia-data-oriented MapReduce platform |
CN104125165A (en) * | 2014-08-18 | 2014-10-29 | 浪潮电子信息产业股份有限公司 | Job scheduling system and method based on heterogeneous cluster |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20181016 |