CN114897663A - Method, system and storage medium for separating CPU and GPU processing video stream - Google Patents


Info

Publication number
CN114897663A
CN114897663A (application CN202210432296.9A)
Authority
CN
China
Prior art keywords
cluster
video stream
processing
cpu
gpu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210432296.9A
Other languages
Chinese (zh)
Inventor
苗炜
李东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huachuang Future Suzhou Technology Co ltd
Original Assignee
Huachuang Future Suzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huachuang Future Suzhou Technology Co ltd filed Critical Huachuang Future Suzhou Technology Co ltd
Priority to CN202210432296.9A priority Critical patent/CN114897663A/en
Publication of CN114897663A publication Critical patent/CN114897663A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5044Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a method, a system, and a storage medium for processing video streams with separated CPU and GPU clusters. The method comprises the following steps: a classification processing step, in which video stream files in the easy-to-process group are routed to a CPU cluster for processing and video stream files in the difficult-to-process group are routed to a GPU cluster for processing, with K8S allocating servers with free computing capacity within each cluster; a CPU-cluster/GPU-cluster communication step, in which Kafka serves as the communication mechanism between the two clusters; and a loop processing step, in which processing finishes when the number of pending tasks reaches 0, and otherwise returns to the video stream classification step. By handling simple tasks on the CPU cluster and complex tasks on the GPU cluster, the method and system greatly increase background processing capacity and reduce server cost.

Description

Method, system and storage medium for separating CPU and GPU processing video stream
Technical Field
The present application relates to the field of video stream processing technologies, and in particular to a method, a system, and a storage medium for processing video streams with separated CPU and GPU clusters.
Background
In video stream processing, both simple image processing (e.g., with OpenCV) and complex deep learning models (e.g., CNN-based image processing models) are often required. CNN image processing models are composed of multiple computation layers, among which convolution layers are essential; although the specific layer architectures of these models differ, they all target image processing problems and are all trained with methods such as back propagation, so they are collectively called Convolutional Neural Networks (CNNs). When the computation scale is small, both types of computation are generally sent to a GPU server for processing.
However, when such demands grow substantially in production, costs rise sharply if both types of computation are handled by GPU servers. A GPU server is a newer class of server mainly used for complex artificial intelligence models, video streams, and natural language processing, and it is expensive; a CPU server is a traditional, inexpensive server that generally processes text and numeric information. Processing speed therefore needs to be improved by allocating work between CPU and GPU servers reasonably. Such allocation, however, raises two problems: first, communication between the different clusters; second, balancing the number of servers between the clusters.
Disclosure of Invention
The embodiments of the present application provide a method, a system, and a storage medium for processing video streams with separated CPU and GPU clusters, which solve the communication problem when a CPU and a GPU cooperatively process video streams, achieve reasonable allocation of processing tasks between the CPU cluster and the GPU cluster, greatly increase background processing capacity, and reduce server cost.
An embodiment of the present application provides a method for processing video streams with separated CPU and GPU clusters, comprising the following steps:
a classification processing step, in which video stream files in the easy-to-process group are routed to a CPU cluster for processing, and video stream files in the difficult-to-process group are routed to a GPU cluster for processing;
a CPU-cluster/GPU-cluster communication step, in which Kafka is used as the communication mechanism between the CPU cluster and the GPU cluster, and the two clusters notify each other once after each processing task is completed; and
a loop processing step, in which it is checked whether the number of pending tasks is 0; if so, the video stream files are fully processed, and if not, the method returns to the video stream classification step.
In some embodiments, before the classification processing step, the method further comprises a video stream classification step, in which each video stream file is assigned to the easy-to-process group or the difficult-to-process group according to the computing resources required by its processing task.
In some embodiments, in the video stream classification step, a classification threshold is set according to computing-resource consumption: when the video memory, RAM, or other computing resources required by a video stream file's processing task are less than or equal to the classification threshold, the file is assigned to the easy-to-process group; when they exceed the classification threshold, the file is assigned to the difficult-to-process group.
In some embodiments, in the classification processing step, the field result files of the videos processed by the CPU cluster and the GPU cluster are stored in a MySQL database.
In some embodiments, in the CPU-cluster/GPU-cluster communication step, the communication between the clusters contains only the number of the corresponding video stream file, the number of the processing task, and the processing-result field; the video stream files themselves are stored in NFS, and the corresponding file can be fetched from NFS by its number.
The present application further provides a system comprising:
a classification processing unit, configured to route video stream files in the easy-to-process group to a CPU cluster for processing and video stream files in the difficult-to-process group to a GPU cluster for processing; after the CPU cluster or the GPU cluster receives a video stream file, if computing resources need to be allocated, K8S allocates a server with free computing capacity within the respective cluster; the CPU cluster comprises a plurality of CPU servers, and the GPU cluster comprises a plurality of GPU servers;
a CPU-cluster/GPU-cluster communication unit, configured to use Kafka as the communication mechanism between the CPU cluster and the GPU cluster, the two clusters notifying each other once after each processing task is completed; and
a loop processing unit, configured to check whether the number of pending tasks is 0; if so, the video stream files are fully processed, and if not, control returns to the video stream classification unit.
In some embodiments, the system further comprises a video stream classification unit, configured to assign each video stream file to the easy-to-process group or the difficult-to-process group according to the computing resources required by its processing task.
In some embodiments, the system further includes the CPU cluster and the GPU cluster: after a video stream file is received, if the CPU cluster or the GPU cluster needs to allocate computing resources, K8S allocates a server with free computing capacity within the respective cluster. The system further includes a MySQL database, which stores the field result files of the videos processed by the CPU cluster and the GPU cluster, and a resource scheduling and communication device, which hosts the K8S and Kafka services and is independent of the CPU cluster and the GPU cluster.
The present application further provides a system, comprising: a memory and a processor; the memory stores a computer program for execution by the processor for implementing the steps of the method for processing a video stream by separating a CPU and a GPU as described in any of the preceding.
The present application also provides a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the method for processing a video stream by separating a CPU and a GPU as described in any of the above.
The advantages of the present application are as follows: Kafka and Kubernetes are used to build the divided clusters and to support communication between different clusters, with the CPU cluster processing simple tasks and the GPU cluster processing complex tasks. The method and system can be widely applied to large-scale live-streaming and video service architectures and suit multiple industries such as security, transportation, entertainment, and industrial manufacturing. The scheme greatly increases background processing capacity and reduces server cost.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for processing a video stream with separated CPU and GPU clusters according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a system according to an embodiment of the present application.
Fig. 3 is a schematic diagram of processing a video stream by separating a CPU and a GPU according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Specifically, referring to fig. 1, the present embodiment provides a method for processing a video stream with separated CPU and GPU clusters, comprising the following steps:
s1, video stream classification step, dividing the video stream file into easy-processing groups or difficult-processing groups according to each processing task and the corresponding required computing resources;
s2, classification processing, namely, putting the video stream files of the easy-to-process group into a CPU cluster for processing; classifying video stream files of the difficult-to-process group into a GPU cluster for processing; after the CPU cluster or the GPU cluster receives the video stream file, if respective computing resources need to be allocated, a server of a computing space is allocated in each cluster by using K8S;
s3, a step of communication between the CPU cluster and the GPU cluster, which is to use Kafka as a communication mechanism between the CPU cluster and the GPU cluster, and the CPU cluster and the GPU cluster communicate with each other once after each processing task is completed; and
and S4, circulating processing, namely judging whether the number of the current task to be processed is 0, if so, finishing processing the video stream file, and if not, returning to the video stream classification step.
Here, K8S is short for Kubernetes, a portable container orchestration and management tool for container services; a K8S cluster is composed of Master nodes and worker (Node) nodes.
Kafka is a message queue system with a distributed architecture that supports high-throughput message transmission.
Kubernetes is an open-source system, developed in the Go language, for automatically deploying, scaling, and managing containerized applications.
The method and system use Kafka and Kubernetes to build the divided clusters and to support communication between different clusters, with the CPU cluster processing simple tasks and the GPU cluster processing complex tasks. This greatly increases background processing capacity and reduces server cost.
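As one illustrative reading of steps S1-S4, the dispatch loop can be sketched in Python with stub functions standing in for the CPU cluster, the GPU cluster, Kafka, and K8S; all names and the threshold value below are assumptions for illustration, not part of the patent:

```python
# Minimal sketch of steps S1-S4. In-process stubs stand in for the CPU
# cluster, the GPU cluster, Kafka, and K8S; names are illustrative only.

EASY_THRESHOLD_MB = 100  # assumed classification threshold (video memory)

def classify(task):
    """S1: assign a task to the easy or difficult group by resource need."""
    return "cpu" if task["memory_mb"] <= EASY_THRESHOLD_MB else "gpu"

def process_on_cpu(task):   # stand-in for the CPU cluster
    return {"video_id": task["video_id"], "result": "preprocessed"}

def process_on_gpu(task):   # stand-in for the GPU cluster
    return {"video_id": task["video_id"], "result": "cnn-analyzed"}

def run(tasks):
    """S2-S4: route each task by group, loop until no tasks remain."""
    results = []
    while tasks:                      # S4: stop when pending count is 0
        task = tasks.pop(0)
        if classify(task) == "cpu":   # S1 + S2: classify and route
            results.append(process_on_cpu(task))
        else:
            results.append(process_on_gpu(task))
        # S3 would publish a Kafka notification here; omitted in this sketch.
    return results

tasks = [{"video_id": 1, "memory_mb": 40}, {"video_id": 2, "memory_mb": 800}]
print(run(tasks))
```

In a real deployment the stubs would be replaced by Kafka producers/consumers and K8S-scheduled workers, as described in the detailed description below.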
In some embodiments, in the video stream classification step S1, a classification threshold is set according to the consumption of a computing resource (such as video memory or RAM occupancy): when the video memory, RAM, or other computing resources required by a video stream file's processing task are less than or equal to the classification threshold, the file is assigned to the easy-to-process group; when they exceed the threshold, the file is assigned to the difficult-to-process group.
Preferably, in the video stream classification step S1, when the classification threshold is set according to video memory occupancy, the threshold is 100 MB.
In some embodiments, in the classification processing step S2, the field result files of the videos processed by the CPU cluster and the GPU cluster are stored in a MySQL database.
In some embodiments, in the communication step S3, the communication between the CPU cluster and the GPU cluster contains only the number of the corresponding video stream file, the number of the processing task, and the processing-result field; the video stream file itself is stored in NFS, and the corresponding file can be fetched from NFS by its number.
NFS (Network File System) mainly allows files or directories to be shared between different host systems over a local area network; it is one of the file systems supported by FreeBSD and allows resources to be shared among computers in a network. With NFS, a local client application can transparently read and write files located on a remote NFS server, just as if it were accessing local files.
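The ID-only message format described above can be sketched as a small serializable record; the field names below are illustrative assumptions, not specified by the patent:

```python
import json
from dataclasses import dataclass, asdict

# Illustrative sketch of the inter-cluster message: only identifiers and
# the processing-result field travel over Kafka, while the video itself
# stays in NFS and is fetched by its number. Field names are assumptions.

@dataclass
class ClusterMessage:
    video_id: int        # number of the video stream file (NFS lookup key)
    task_id: int         # number of the processing task
    result_field: str    # processing-result field

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @staticmethod
    def from_json(payload: str) -> "ClusterMessage":
        return ClusterMessage(**json.loads(payload))

msg = ClusterMessage(video_id=42, task_id=7, result_field="preprocessed")
wire = msg.to_json()
print(ClusterMessage.from_json(wire) == msg)  # round-trips losslessly
```

Keeping messages this small is what lets Kafka carry only coordination traffic while NFS carries the bulk data.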
The video stream files in the embodiments of the present application are SRS videos. SRS (Simple RTMP Server) can be used in scenarios such as live broadcast, recorded broadcast, and video customer service, and is positioned as an operation-grade Internet live-streaming server cluster. The method and system can be widely applied to large-scale live-streaming and video service architectures and suit multiple industries such as security, transportation, entertainment, and industrial manufacturing.
Referring to fig. 2, the present application further provides a system 10, comprising:
the video stream classification unit 1 is used for dividing the video stream files into an easy-processing group or a difficult-processing group according to the corresponding required computing resources according to each processing task;
the classification processing unit 2 is used for classifying the video stream files of the easy-to-process group into a CPU cluster for processing; classifying video stream files of the difficult-to-process group into a GPU cluster for processing; after the CPU cluster or the GPU cluster receives the video stream file, if respective computing resources need to be allocated, a server of a computing space is allocated in each cluster by using K8S; the CPU cluster comprises a plurality of CPU servers, and the GPU cluster comprises a plurality of GPU servers;
the CPU cluster and GPU cluster communication unit 3 is used for utilizing Kafka as a communication mechanism between the CPU cluster and the GPU cluster, and the CPU cluster and the GPU cluster are communicated with each other once after each processing task is completed; and
and the cyclic processing unit 4 is used for judging whether the number of the current task to be processed is 0, if so, finishing processing the video stream file, and if not, returning to the video stream classification unit for continuous processing.
In some embodiments, the system 10 further comprises a MySQL database 5, in which the field result files of the videos processed by the CPU cluster and the GPU cluster are stored. A database is a repository that organizes, stores, and manages data according to a data structure, and each database has one or more APIs for creating, accessing, managing, searching, and copying the stored data. MySQL is a relational database management system; "relational" can be understood through the concept of tables, a relational database consisting of one or more tables.
In some embodiments, the system 10 further comprises a resource scheduling and communication device 6, which hosts the K8S and Kafka services. This device is independent of the CPU cluster and the GPU cluster and is used only for computing-resource scheduling and communication, so as to avoid interference with other traffic services.
It can be understood that both the CPU cluster and the GPU cluster belong to the system 10; fig. 3 shows the two clusters. After the CPU cluster or the GPU cluster receives a video stream file, if computing resources need to be allocated, K8S allocates a server with free computing capacity within the respective cluster, and Kafka serves as the communication mechanism between the CPU cluster and the GPU cluster.
In another embodiment, the present application further provides a system comprising: a memory and a processor; the memory stores a computer program for execution by the processor for implementing the steps of the method for processing a video stream by separating a CPU and a GPU as described in any of the preceding.
The present application also provides a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the method for processing a video stream by separating a CPU and a GPU as described in any of the above.
As illustrated by the schematic diagram of CPU and GPU video stream processing in fig. 3, in actual application each stage of video stream processing is divided manually: preprocessing that consumes few computing resources (such as video memory and RAM), for example video tone adjustment, is merged into the tasks to be processed by the CPU, while algorithmic processing that consumes more computing resources, such as CNN image processing models, is assigned to the GPU task queue.
The division criterion is a threshold set flexibly by the architect based on experience; for example, computations occupying less than 100 MB of video memory are assigned to the CPU for processing, and the rest to the GPU. The threshold can also be adjusted at any time according to the overall load of the servers.
The K8S and Kafka services are hosted on a resource scheduling and communication device. This service is independent of the CPU and GPU cluster processing tasks and is used only for computing-resource scheduling and communication; it is kept separate to prevent interference with other traffic services.
Kafka is used as the communication mechanism between the CPU cluster and the GPU cluster: the two clusters notify each other, one party synchronizing the other after completing a task, which guarantees that no information is lost. After a cluster receives a message, if computing resources are needed, K8S allocates devices with free computing capacity within that cluster. For example, after the CPU cluster completes the simple processing of a video, it sends information such as the processed video ID and the video-analysis field to the GPU cluster through Kafka. On receiving this message, the GPU cluster knows that the corresponding video has been processed by the CPU; if K8S determines that the GPU cluster has spare computing resources, a spare GPU is assigned, fetches the video and the CPU's result via the video ID and result fields carried in the Kafka message, performs further algorithm-model processing, and publishes the processed video ID and result through Kafka again. If the processed video needs to be handled by the CPU cluster once more, the above process repeats. In this mode, Kafka is the intermediary for communication between the computing clusters, and K8S allocates computing resources within each cluster.
Meanwhile, no message carries the audio/video files themselves; messages record only the files' numbers. The audio/video files are stored in NFS and are fetched from NFS by number when computing resources are assigned.
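The round-trip described above can be simulated with in-memory queues standing in for the Kafka topics and a dictionary standing in for NFS; this is an illustrative sketch only, and a real deployment would use a Kafka client and an NFS mount:

```python
from collections import deque

# In-memory stand-ins: two "topics" (CPU->GPU and GPU->CPU) and an "NFS"
# keyed by video number. All names here are illustrative assumptions.
nfs = {42: "raw-video-bytes"}
to_gpu, to_cpu = deque(), deque()

def cpu_stage(video_id):
    video = nfs[video_id]                      # fetch from "NFS" by number
    nfs[video_id] = video + "|tone-adjusted"   # simple preprocessing
    to_gpu.append({"video_id": video_id, "task_id": 1,
                   "result_field": "preprocessed"})  # ID-only message

def gpu_stage():
    msg = to_gpu.popleft()                     # consume the CPU's message
    video = nfs[msg["video_id"]]               # fetch CPU-processed video
    to_cpu.append({"video_id": msg["video_id"], "task_id": 2,
                   "result_field": "cnn-analyzed"})

cpu_stage(42)
gpu_stage()
print(to_cpu[0]["result_field"])   # cnn-analyzed
```

Note that only identifiers and result fields pass through the queues, matching the message-content constraint stated above.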
In addition, the sizes of the CPU and GPU computing clusters need to be balanced according to the tasks at hand; the CPU cluster is typically larger than the GPU cluster.
Finally, the video results processed by the CPU and GPU clusters are stored in a MySQL database. The processed video results may differ from one processing task to another: for a recognition algorithm, the result is the target information and the time point at which the target appears, while detection tasks additionally record the coordinates of the target. Moreover, the processing result of one cluster may serve as input to the other for further, more detailed processing.
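A minimal sketch of such a result table follows, using SQLite as a stand-in for MySQL; the schema and column names are assumptions, since the patent only specifies that result fields are stored in a MySQL database:

```python
import sqlite3

# SQLite stands in for MySQL here; the DDL and sample row are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE video_results (
        video_id   INTEGER,
        task_id    INTEGER,
        target     TEXT,     -- recognized target information
        appears_at REAL,     -- time point (seconds) the target appears
        coord_x    REAL,     -- detection-only: target coordinates
        coord_y    REAL
    )
""")
conn.execute(
    "INSERT INTO video_results VALUES (?, ?, ?, ?, ?, ?)",
    (42, 2, "pedestrian", 12.5, 640.0, 360.0),
)
row = conn.execute(
    "SELECT target, appears_at FROM video_results WHERE video_id = 42"
).fetchone()
print(row)  # ('pedestrian', 12.5)
```

Keying rows by video and task number mirrors the ID-only messages, so either cluster can look up the other's prior result before further processing.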
The advantages of the present application are as follows: Kafka and Kubernetes are used to build the divided clusters and to support communication between different clusters, with the CPU cluster processing simple tasks and the GPU cluster processing complex tasks. The method and system can be widely applied to large-scale live-streaming and video service architectures and suit multiple industries such as security, transportation, entertainment, and industrial manufacturing. The scheme greatly increases background processing capacity and reduces server cost.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The above embodiments of the present application are described in detail, and specific examples are applied in the present application to explain the principles and implementations of the present application, and the description of the above embodiments is only used to help understand the technical solutions and core ideas of the present application; those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (10)

1. A method for processing video streams with separated CPU and GPU clusters, characterized by comprising the following steps:
a classification processing step, in which video stream files in the easy-to-process group are routed to a CPU cluster for processing, and video stream files in the difficult-to-process group are routed to a GPU cluster for processing;
a CPU-cluster/GPU-cluster communication step, in which Kafka is used as the communication mechanism between the CPU cluster and the GPU cluster, and the two clusters notify each other once after each processing task is completed; and
a loop processing step, in which it is checked whether the number of pending tasks is 0; if so, the video stream files are fully processed, and if not, the method returns to the video stream classification step.
2. The method for processing video streams with separated CPU and GPU clusters according to claim 1, further comprising, before the classification processing step:
a video stream classification step, in which each video stream file is assigned to the easy-to-process group or the difficult-to-process group according to the computing resources required by its processing task.
3. The method for processing video streams with separated CPU and GPU clusters according to claim 2, wherein
in the video stream classification step, a classification threshold is set according to the consumption of computing resources;
when the computing resources required by a video stream file's processing task are less than or equal to the classification threshold, the video stream file is assigned to the easy-to-process group; and
when the computing resources required by a video stream file's processing task exceed the classification threshold, the video stream file is assigned to the difficult-to-process group.
4. The method for processing video streams with separated CPU and GPU clusters according to claim 1, wherein in the classification processing step, the field result files of the videos processed by the CPU cluster and the GPU cluster are stored in a MySQL database.
5. The method for processing video streams with separated CPU and GPU clusters according to claim 1, wherein in the classification processing step, after the CPU cluster or the GPU cluster receives a video stream file, if computing resources need to be allocated, K8S allocates a server with free computing capacity within the respective cluster.
6. The method for separating CPU and GPU processing of a video stream according to claim 1, wherein, in the CPU-cluster and GPU-cluster communication step, the content communicated between the CPU cluster and the GPU cluster includes only the number of the corresponding video stream file, the number of the processing task, and the processing-result field; the video stream files themselves are all stored in NFS, and the corresponding video stream file can be retrieved from NFS by its file number.
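Claim 6 keeps the Kafka payload small by sending only identifiers plus the result field, with the bulky video data staying on NFS. A minimal sketch of that payload and lookup (field names, mount point, and file layout are hypothetical):

```python
import json

def make_cluster_message(file_no: str, task_no: int,
                         result_field: str) -> bytes:
    """Only identifiers and the result field travel over Kafka; the
    video file itself stays on NFS, addressable by file_no."""
    return json.dumps({"file_no": file_no,
                       "task_no": task_no,
                       "result": result_field}).encode("utf-8")

def nfs_path(file_no: str, mount: str = "/mnt/nfs") -> str:
    # Hypothetical layout: video files stored under the NFS mount,
    # named by their file number.
    return f"{mount}/{file_no}.mp4"
```

The receiving cluster decodes the message, resolves the file number to its NFS path, and reads the video directly, so Kafka never carries video bytes.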
7. A system for separating CPU and GPU processing of a video stream, comprising:
a classification processing unit, configured to assign video stream files of the easy-to-process group to a CPU cluster for processing, and video stream files of the difficult-to-process group to a GPU cluster for processing;
a CPU-cluster and GPU-cluster communication unit, configured to use Kafka as the communication mechanism between the CPU cluster and the GPU cluster, the two clusters communicating with each other once after each processing task is completed; and
a loop processing unit, configured to judge whether the number of tasks currently to be processed is 0; if so, processing of the video stream files is finished, and if not, control returns to the video stream classification unit for further processing.
8. The system of claim 7, further comprising a video stream classification unit configured to divide each video stream file into an easy-to-process group or a difficult-to-process group according to the computing resources required by its corresponding processing task.
9. The system of claim 7, wherein,
after the CPU cluster or the GPU cluster receives a video stream file, if computing resources need to be allocated, K8S is used to allocate a server with available computing capacity within the respective cluster;
the system further comprises a MySQL database, which stores the result-field files of the videos processed by the CPU cluster and the GPU cluster; and
a resource scheduling and communication device, which hosts the K8S and Kafka services and is independent of both the CPU cluster and the GPU cluster.
10. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, causes the processor to carry out the steps of the method for separating CPU and GPU processing of a video stream according to any one of claims 1-6.
CN202210432296.9A 2022-04-22 2022-04-22 Method, system and storage medium for separating CPU and GPU processing video stream Pending CN114897663A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210432296.9A CN114897663A (en) 2022-04-22 2022-04-22 Method, system and storage medium for separating CPU and GPU processing video stream


Publications (1)

Publication Number Publication Date
CN114897663A true CN114897663A (en) 2022-08-12

Family

ID=82717487




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination