WO2021036784A1 - Media data processing method and device, media server, and computer-readable storage medium - Google Patents
Media data processing method and device, media server, and computer-readable storage medium
- Publication number
- WO2021036784A1 (PCT/CN2020/108467)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- media data
- data service
- cpu
- gpu
- video
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/127—Prioritisation of hardware or computational resources
Definitions
- The embodiments of the present invention relate to the field of communications, and in particular to a media data processing method and device, a media server, and a computer-readable storage medium.
- The embodiments of the present invention provide a media data processing method, device, media server, and computer-readable storage medium, which are intended to solve, at least to a certain extent, the technical problem that processing video media data on the CPU consumes a large amount of CPU resources.
- An embodiment of the present invention provides a media data processing method applied to a media server, including: determining, by a central processing unit (CPU), the service type of a service to be processed, where the service type includes an audio media data service and a video media data service; allocating the audio media data service to a CPU for processing, and allocating the video media data service to a graphics processing unit (GPU) for processing; where the CPU is provided on the media server or deployed on a physical machine in a distributed manner, and/or the GPU is provided on the media server or deployed on a physical machine in a distributed manner.
- An embodiment of the present invention also provides a media data processing device, including: a media control unit configured to determine, through the CPU, the service type of the service to be processed, where the service type includes an audio media data service and a video media data service; and a resource scheduling unit configured to allocate the audio media data service to a CPU for processing and allocate the video media data service to a graphics processing unit (GPU) for processing; where the CPU is provided on the media server or deployed on a physical machine in a distributed manner, and/or the GPU is provided on the media server or deployed on a physical machine in a distributed manner.
- a media control unit configured to determine, through the CPU, the service type of the service to be processed, where the service type includes an audio media data service and a video media data service;
- a resource scheduling unit configured to allocate the audio media data service to a CPU for processing and allocate the video media data service to a graphics processing unit (GPU) for processing;
- the CPU is provided on the media server or deployed on a physical machine in a distributed manner;
- the GPU is provided on the media server or deployed on a physical machine in a distributed manner.
- An embodiment of the present invention also provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of any one of the media data processing methods described above.
- An embodiment of the present invention also provides a media server, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of any one of the methods described above.
- FIG. 1 is a flowchart of a media data processing method in the first embodiment of the present invention;
- FIG. 2 is a schematic diagram of the external network architecture of the media server in the first embodiment of the present invention;
- FIG. 3 is a schematic diagram of the maximum load interference algorithm for resource scheduling in the first embodiment of the present invention;
- FIG. 4 is a schematic diagram of CPU+GPU media processing in the first embodiment of the present invention;
- FIG. 5 is a schematic diagram of a transcoding playback flow in the first embodiment of the present invention;
- FIG. 6 is a schematic diagram of a transcoding video recording flow in the first embodiment of the present invention;
- FIG. 7 is a schematic diagram of a conference flow in the first embodiment of the present invention;
- FIG. 8 is a schematic structural diagram of a media data processing device in the second embodiment of the present invention;
- FIG. 9 is a schematic diagram of a first media server architecture in the second embodiment of the present invention;
- FIG. 10 is a schematic diagram of a second media server architecture in the second embodiment of the present invention;
- FIG. 11 is a schematic diagram of a third media server architecture in the second embodiment of the present invention;
- FIG. 12 is a schematic diagram of a fourth media server architecture in the second embodiment of the present invention;
- FIG. 13 is a schematic diagram of a media server according to an embodiment of the present invention.
- Embodiments of the present invention provide a media data processing method.
- The following describes the embodiments of the present invention in further detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention and do not limit the present invention.
- The first embodiment of the present invention provides a media data processing method, which is applied to a media server.
- The flow of the method is shown in FIG. 1 and includes steps S101 to S102.
- S101: Determine, by the CPU, the service type of the service to be processed, where the service type includes an audio media data service and a video media data service.
- The received message is parsed by the CPU to obtain the service to be processed corresponding to the message, and the service type of the service to be processed is determined.
- The service type includes an audio media data service and a video media data service.
- The CPU in this embodiment of the present invention receives a message sent by a user terminal via an application server (AS) or a proxy server (SIP PROXY), and the service to be processed can be obtained by parsing the message.
- AS: application server.
- SIP PROXY: proxy server.
- FIG. 2 is a schematic diagram of the external network architecture of the media server according to the embodiment of the present invention. As can be seen from FIG. 2, the message of the user terminal is transmitted to the media server via the core network and the application server, and the CPU of the media server parses the message after receiving it.
- The user terminal can be a mobile phone, a computer, a hard terminal, or other user terminal equipment. The user terminal accesses the core network through the network, and the core network is the core bearer for receiving and sending messages. The application server is the application logic control unit in front of the media server and is responsible for coordinating service signaling control and call logic control between the various user terminals and the media server.
- S102: Allocate the audio media data service to a CPU for processing, and allocate the video media data service to a graphics processing unit (GPU) for processing.
- According to the service type of the service to be processed, the CPU allocates the audio media data service to a CPU for processing and allocates the video media data service to a graphics processing unit (GPU) for processing. That is, the processing of signaling services and audio media data services is completed by the CPU, and the processing of video media data services is completed by the GPU, so as to give full play to the advantages of the GPU in processing video media data services and effectively avoid the excessive CPU consumption caused by processing video media data services on the CPU (a minimal sketch of this dispatch step is given after the list of service types below).
- The GPU performs video media data processing on the video media data service and sends the processing result to the CPU to which the video media data service is allocated, so as to give full play to the GPU's ability to process video media data services.
- The video media data processing includes one or more of the following: encoding, decoding, scaling, and synthesis.
- The audio media data service includes one or more of the following: audio playback service, audio conference service, and audio recording service.
- The video media data service includes one or more of the following: video playback service, video conference service, and video recording service.
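- The dispatch step above maps directly onto a small routing function. The following Python sketch is illustrative only (the patent does not define an API); the service, cpu_pool, and gpu_pool objects and their allocate method are assumptions made for the example:

```python
from enum import Enum

class ServiceType(Enum):
    AUDIO = "audio"   # audio playback, audio conference, audio recording services
    VIDEO = "video"   # video playback, video conference, video recording services

def dispatch(service, cpu_pool, gpu_pool):
    """Route a parsed service: signaling and audio stay on the CPU pool, video goes to the GPU pool."""
    if service.service_type is ServiceType.AUDIO:
        return cpu_pool.allocate(service)
    if service.service_type is ServiceType.VIDEO:
        return gpu_pool.allocate(service)
    raise ValueError(f"unknown service type: {service.service_type!r}")
```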
- The CPU is provided on the media server, or is deployed on a physical machine in a distributed manner.
- The GPU in the embodiment of the present invention may also be provided on the media server, or deployed on a physical machine in a distributed manner.
- The CPU and GPU in the embodiment of the present invention may be provided directly on the media server, or may be provided on other physical machines, with the physical machine connected to the media server so that the media server can control the CPU and GPU.
- Multiple CPUs and GPUs may be provided in the embodiment of the present invention to better meet the needs of different users.
- GPUs are designed to perform complex mathematical and geometric calculations. In floating-point and parallel computation, a GPU can provide tens or even hundreds of times the performance of a CPU. Based on these features, the embodiments of the present invention use the GPU to improve the capability and quality of video media processing, and common GPU manufacturers generally provide SDK tools related to media processing. However, devices with GPU hardware resources are relatively precious, and GPU resources will be wasted if they are used unreasonably.
- The core of the embodiments of the present invention is to process signaling services and low-load audio streams in the CPU resource pool, and to process high-load video streams in the GPU resource pool, achieving the separation of service processing and video media processing. This not only takes full advantage of the CPU's precise scheduling of resources on the virtualization platform, but also gives full play to the advantages of the GPU in video media processing, avoids the deficiencies of the GPU on the virtualization platform, and improves the service capability and service quality of the software media server.
- The embodiment of the present invention adopts a CPU+GPU heterogeneous media data processing method, completely abandoning the traditional approach of handling services and video processing in the same resource pool (that is, a resource pool including CPU, memory, I/O, and so on on the same server). The local CPU resource pool is used to handle lightweight tasks such as signaling services and audio processing, while a media processing module with GPU hardware resources is deployed to process video media streams. This realizes the separation of service control and video media processing, reduces the task load on the local CPU resource pool so that it avoids reaching performance bottlenecks, and gives full play to the advantages of the GPU in video image processing.
- Allocating the audio media data service to a CPU for processing includes: calculating the remaining resources of each CPU, and allocating the audio media data service according to the remaining resources of each CPU.
- The embodiment of the present invention allocates the audio media data service based on the calculated remaining resources of each CPU.
- The embodiment of the present invention obtains the resource capacity consumed by processing the audio media data service by performing resource quantification calculation on the audio media data service.
- The embodiment of the present invention allocates the audio media data service according to the remaining resources of each CPU and the resource capacity consumed by processing the audio media data service.
- Calculating the remaining resources of each CPU and allocating the audio media data service according to the remaining resources of each CPU and the resource capacity consumed by the audio media data service includes: calculating the remaining resources of each CPU, storing the remaining resources of each CPU in a first resource scheduling management list, and allocating the audio media data service according to a preset allocation rule based on the first resource scheduling management list and the resource capacity consumed by processing the audio media data service.
- That is, the embodiment of the present invention calculates the remaining resource status of each CPU in real time, stores the remaining resource status of each CPU in the first resource scheduling management list, and subsequently allocates the audio media data service according to the first resource scheduling management list and the resource capacity consumed by processing the audio media data service.
- The first resource scheduling management list in the embodiment of the present invention also needs to record the online and offline status of each CPU in real time.
- The embodiment of the present invention queries the CPU nodes in real time or periodically; whenever a CPU goes online (or offline), the change of this node is detected in time and the node is added to (or deleted from) the first resource scheduling management list. A sketch of such a list follows.
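- A minimal sketch of such a resource scheduling management list, assuming a simple in-memory structure (the class, field, and method names below are illustrative and not specified by the patent):

```python
import time

class ResourceSchedulingList:
    """Management list tracking the remaining capacity and online state of CPU (or GPU) nodes."""

    def __init__(self):
        self.nodes = {}  # node_id -> {"remaining": float, "online": bool, "updated": float}

    def node_online(self, node_id, remaining_capacity):
        # A node that comes online is added to the list with its current remaining capacity.
        self.nodes[node_id] = {"remaining": remaining_capacity, "online": True, "updated": time.time()}

    def node_offline(self, node_id):
        # A node that goes offline is removed so it is never selected for new services.
        self.nodes.pop(node_id, None)

    def update_remaining(self, node_id, remaining_capacity):
        # Called in real time or periodically with the node's freshly queried remaining capacity.
        if node_id in self.nodes:
            self.nodes[node_id].update(remaining=remaining_capacity, updated=time.time())

    def best_node(self):
        # Return the online node with the largest remaining capacity, or None if the list is empty.
        online = [(nid, n["remaining"]) for nid, n in self.nodes.items() if n["online"]]
        return max(online, key=lambda item: item[1], default=(None, 0.0))[0]
```

- The same structure can be instantiated twice, once as the first (CPU) resource scheduling management list and once as the second (GPU) resource scheduling management list described below.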
- Allocating the video media data service to a GPU for processing includes: calculating the remaining resources of each GPU, and allocating the video media data service according to the remaining resources of each GPU.
- The method described in the embodiment of the present invention further includes: performing resource quantification calculation on the video media data service to obtain the resource capacity actually consumed by processing the video media data service.
- The embodiment of the present invention allocates the video media data service based on the remaining resources of each GPU and the resource capacity consumed by processing the video media data service.
- The embodiment of the present invention calculates the remaining resources of each GPU, stores the remaining resources of each GPU in a second resource scheduling management list, and allocates the video media data service according to a preset allocation rule based on the second resource scheduling management list and the resource capacity actually consumed by processing the video media data service.
- That is, the embodiment of the present invention sets up a second resource scheduling management list, stores the remaining resources of each GPU in the second resource scheduling management list, and then allocates the video media data service according to a preset allocation rule based on the second resource scheduling management list and the resource capacity actually consumed by the video media data service.
- The second resource scheduling management list of the embodiment of the present invention also needs to record the online and offline status of each GPU in real time.
- The embodiment of the present invention queries the GPU nodes in real time or periodically; whenever a GPU goes online (or offline), the change of this node is detected in time and the node is added to (or deleted from) the second resource scheduling management list.
- The preset allocation rule includes: based on the maximum load interference algorithm, selecting the CPU with the largest remaining capacity to process the audio media data service to be processed, so as to ensure faster processing of the audio media data service and better processing results; and, based on the maximum load interference algorithm, selecting the GPU with the largest remaining capacity to process the video media data service to be processed, and, when the remaining capacity of that GPU cannot satisfy the processing of the video media data service, reselecting the CPU with the largest remaining capacity to process the video media data service to be processed.
- The resource quantification calculation rules include one or more of the following (a code sketch of these rules follows the list):
- the capacity consumed by 1 channel of H264hp@720P HD video is equal to the capacity of 1 CPU;
- the capacity consumed by 1 channel of H264hp@1080P ultra-high-definition video is equal to the capacity consumed by 2 channels of H264hp@720P HD video;
- the capacity consumed by video below H264hp@720P is equal to the capacity consumed by 1/2 channel of H264hp@720P HD video;
- the capacity consumed by 1 channel of H265@720P HD video is equal to the capacity consumed by 2 channels of H264hp@720P HD video;
- the capacity consumed by 1 channel of H265@1080P ultra-high-definition video is equal to the capacity consumed by 2 channels of H265@720P HD video;
- the capacity consumed by video below H265@720P is equal to the capacity consumed by 1/2 channel of H265@720P HD video.
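- Expressed in code, the rules above reduce to a lookup of capacity units, with one channel of H264hp@720P HD video as the base unit (equal to the capacity of one CPU). This is a minimal sketch; the function name and the way codecs and resolutions are keyed are illustrative:

```python
# Capacity consumed per channel, in units of "one channel of H264hp@720P HD video".
CAPACITY_UNITS = {
    ("h264hp", "720p"): 1.0,     # base unit, equal to the capacity of 1 CPU
    ("h264hp", "1080p"): 2.0,    # 1080P consumes as much as 2 channels of H264hp@720P
    ("h264hp", "sub720p"): 0.5,  # below 720P consumes 1/2 channel of H264hp@720P
    ("h265", "720p"): 2.0,       # H265@720P consumes as much as 2 channels of H264hp@720P
    ("h265", "1080p"): 4.0,      # H265@1080P consumes as much as 2 channels of H265@720P
    ("h265", "sub720p"): 1.0,    # below H265@720P consumes 1/2 channel of H265@720P
}

def quantify(codec: str, resolution: str, channels: int = 1) -> float:
    """Return the capacity a service will consume, for comparison with a node's remaining capacity."""
    return CAPACITY_UNITS[(codec.lower(), resolution.lower())] * channels
```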
- The embodiment of the present invention uses the foregoing calculation method to perform resource quantification calculation on user terminal capabilities, and manages in real time the remaining capacity of the local virtualized CPU resource pool and of the resource pool with GPU hardware, so as to implement load balancing when new services are subsequently admitted. On the basis of this load balancing, newly added (or deleted) CPU resource nodes and GPU nodes are monitored in real time, which increases the dynamic scalability of the system.
- FIG. 3 is a schematic diagram of the maximum load interference algorithm for resource scheduling according to an embodiment of the present invention.
- The maximum load interference algorithm selects, for each accessing user terminal, the resource node with the largest remaining capacity for processing.
- The maximum load interference algorithm is described in detail below in conjunction with FIG. 3.
- The media server needs to query each deployed CPU resource node and GPU node and save it in a management list (specifically, the remaining resources of each CPU are stored in the first resource scheduling management list, and the remaining resources of each GPU are stored in the second resource scheduling management list), obtain the remaining capacity value of each node in real time, and register it in the corresponding list.
- The media server determines whether the user terminal is a video terminal or a pure audio terminal based on the media content negotiated with the user terminal, that is, determines from the received user terminal message whether the service to be processed for the user terminal is an audio media data service or a video media data service.
- For a pure audio terminal, the CPU resource node with the largest remaining capacity is selected from the CPU management list (i.e., the first resource scheduling management list); if its remaining capacity is sufficient, the selected CPU resource node saves the relevant parameter information of the user terminal, otherwise the user is prompted that computing resources are insufficient and access is denied.
- A video terminal involves both audio stream and video stream processing.
- The audio stream follows the above-mentioned processing mode for the pure audio terminal; for video stream processing, the GPU media processing node with the largest remaining capacity is selected from the GPU management list (i.e., the second resource scheduling management list), and it is determined whether the remaining capacity of this node allows the video terminal to access.
- If so, the selected GPU node saves the relevant parameter information of the user terminal; if not, it means that the GPU hardware resource pool does not have a node with sufficient resources, and the CPU resource node with the largest remaining capacity is then selected from the CPU management list (i.e., the first resource scheduling management list) and it is judged whether the remaining capacity of this node is sufficient. If so, the selected CPU resource node saves the relevant parameter information of the user terminal; otherwise the user is prompted that computing resources are insufficient and access is denied. A sketch of this selection logic is given below.
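- Combining the admission flow of FIG. 3 with the list and quantification sketches above, the selection logic could look like the following. All names are illustrative and the capacity bookkeeping is simplified:

```python
def admit_terminal(service, cpu_list, gpu_list):
    """Pick the node with the largest remaining capacity; video falls back from the GPU pool to the CPU pool."""
    needed = quantify(service.codec, service.resolution)

    if service.service_type is ServiceType.AUDIO:
        node = cpu_list.best_node()
        if node is not None and cpu_list.nodes[node]["remaining"] >= needed:
            cpu_list.nodes[node]["remaining"] -= needed  # reserve capacity; terminal parameters saved on this node
            return ("cpu", node)
        return None  # computing resources insufficient: deny access

    # Video terminal: the audio stream still follows the pure-audio path above;
    # the video stream prefers the GPU node with the largest remaining capacity.
    node = gpu_list.best_node()
    if node is not None and gpu_list.nodes[node]["remaining"] >= needed:
        gpu_list.nodes[node]["remaining"] -= needed
        return ("gpu", node)

    # GPU pool has no node with sufficient capacity: fall back to the best CPU node.
    node = cpu_list.best_node()
    if node is not None and cpu_list.nodes[node]["remaining"] >= needed:
        cpu_list.nodes[node]["remaining"] -= needed
        return ("cpu", node)
    return None  # deny access
```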
- FIG. 4 is a schematic diagram of the processing flow of a video media data service by the CPU+GPU according to an embodiment of the present invention, which is described below with reference to FIG. 4.
- The decoding, scaling, synthesis, and encoding of the video media data service are carried out in the GPU, and the data is stored in the video memory.
- The embodiment of the present invention utilizes the advantages of the GPU in processing video media data services, and uses the GPU to perform the decoding, scaling, synthesis, and encoding of video media data services, thereby avoiding the excessive CPU consumption that would be caused by processing video media data services on the CPU.
- FIG. 5 is a schematic diagram of a transcoding playback process according to an embodiment of the present invention. The transcoding playback process of an embodiment of the present invention is described in detail below with reference to FIG. 5.
- Audio and video channel resources are allocated, and the capacity consumption of the service to be processed for the user terminal is calculated. For a pure audio terminal, only local CPU resources are selected for processing; for a video terminal, local CPU resources are selected to process the audio stream while a GPU is also selected to process the video stream, and the channel parameters are sent to the GPU.
- The audio and video format of the file to be played is obtained. If it is consistent with the audio and video format negotiated with the user terminal, no transcoding is required; otherwise, transcoding is required for playback.
- After the GPU receives the video playback instruction and the related parameters, it reads the video data from the playback file. If transcoding is not required, the video data is directly packaged and sent to the user terminal; otherwise, the video data is first decoded into YUV data, then scaled, and finally encoded, packaged, and sent to the user terminal. The encoding, decoding, scaling, and other operations must be performed in the allocated GPU.
- After the terminal playback end instruction is received, the GPU is notified to end playback, the allocated audio and video channel resources are released, and the resource scheduling unit updates the capacity consumption at the same time.
- In this way, the GPU is used to complete the encoding processing of the file to be played for playback on the user terminal. A sketch of the playback branch is given below.
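- The playback branch can be summarised as follows. This is a sketch only: the gpu.decode, gpu.scale, and gpu.encode calls stand in for whatever primitives the GPU vendor's SDK exposes (the patent does not name a specific SDK), and read_video_packets and rtp_send are placeholders for file reading and RTP packaging/sending:

```python
def transcode_and_play(gpu, media_file, terminal, needs_transcoding: bool):
    """Transcoding playback: read file data, optionally decode/scale/encode on the GPU, send over RTP."""
    for packet in read_video_packets(media_file):
        if not needs_transcoding:
            rtp_send(terminal, packet)                 # formats match: package and send directly
            continue
        yuv = gpu.decode(packet)                       # decode to YUV data in GPU video memory
        yuv = gpu.scale(yuv, terminal.resolution)      # scale to the resolution negotiated with the terminal
        encoded = gpu.encode(yuv, terminal.codec)      # re-encode with the negotiated codec
        rtp_send(terminal, encoded)                    # package and send to the user terminal
```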
- FIG. 6 is a schematic diagram of a transcoding video recording process according to an embodiment of the present invention, which is described in detail below with reference to the drawing.
- After the terminal recording request is received, the audio and video format of the recording file is compared with the audio and video format of the user terminal. If they are consistent, no transcoding is required; otherwise, transcoding is required.
- After the GPU receives the recording instruction and the related parameters, it receives the video data from the user terminal. If transcoding is not required, it writes the video data directly into the recording file; otherwise, it first decodes the video data into YUV data, then scales it, and finally re-encodes it and writes it into the recording file. The encoding, decoding, scaling, and other operations must be performed in the allocated GPU.
- After the terminal recording end instruction is received, the GPU is notified to end recording, the allocated audio and video channel resources are released, and the capacity consumption is updated at the same time.
- FIG. 7 is a schematic diagram of a conference process according to an embodiment of the present invention. The conference process of an embodiment of the present invention is described in detail below with reference to FIG. 7.
- After a terminal access request is received, audio and video channel resources are allocated and the capacity consumption of the user terminal is calculated. For a pure audio terminal, only local CPU resources are selected for processing; for a video terminal, local CPU resources are selected to process the audio stream while a GPU is also selected to process the video stream, and the channel parameters are sent to the GPU.
- When the terminal's request to join the conference is received, since the conference scene needs to be synthesized, the video data received from the user terminal first needs to be decoded into YUV data, then the decoded YUV data is synthesized, and finally the synthesized YUV data is re-encoded and sent to the user terminals.
- After the request to join the conference is received, the user terminal is added to the designated conference and the video data sent by the user terminal is received. The video data is first decoded into YUV data, then scaled, then synthesized according to certain rules, and finally encoded and sent to the user terminals. The encoding, decoding, scaling, and synthesis must be performed in the allocated GPU.
- When a terminal exits, the GPU is notified to exit the conference, the allocated audio and video channel resources and computing resources are released, and the capacity consumption is updated. A sketch of the per-participant media path is given below.
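- The per-participant media path of the conference flow can be sketched as below; gpu.decode, gpu.scale, gpu.compose, and gpu.encode are placeholders for the GPU primitives (not named in the patent), and each participant's decoded picture is composed into one conference scene before re-encoding:

```python
def conference_mix(gpu, participants, layout):
    """Conference flow: decode each participant's video, scale, compose one scene, re-encode per receiver."""
    pictures = []
    for p in participants:
        packet = p.receive_video()               # video data sent by the user terminal
        yuv = gpu.decode(packet)                 # decode to YUV data
        yuv = gpu.scale(yuv, layout.tile_size)   # scale to the tile size of the conference layout
        pictures.append(yuv)

    scene = gpu.compose(pictures, layout)        # synthesize the conference scene from all pictures

    for p in participants:
        encoded = gpu.encode(scene, p.codec)     # encode with the codec negotiated with this terminal
        p.send_video(encoded)                    # package and send back to the user terminal
```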
- The second embodiment of the present invention provides a media data processing device.
- The device is provided on a media server, and the device includes:
- a media control unit configured to determine, through the CPU, the service type of the service to be processed, where the service type includes an audio media data service and a video media data service; and
- a resource scheduling unit configured to allocate the audio media data service to a CPU for processing and allocate the video media data service to a graphics processing unit (GPU) for processing.
- The media control unit uses the CPU to determine the service type of the service to be processed, and the resource scheduling unit allocates the audio media data service to a CPU for processing and the video media data service to a graphics processing unit (GPU) for processing, so as to give full play to the advantages of the GPU in processing video media data services, thereby effectively avoiding the excessive CPU consumption caused by processing video media data services on the CPU.
- The CPU is provided on the media server, or is deployed on a physical machine in a distributed manner.
- The GPU is provided on the media server, or is deployed on a physical machine in a distributed manner.
- The CPU and GPU in the embodiment of the present invention may be provided directly on the media server, or may be provided on other physical machines, with the physical machine connected to the media server so that the media server can control the CPU and GPU.
- The media control unit described in the embodiment of the present invention is further configured to parse the received message to obtain the service to be processed corresponding to the message.
- The resource scheduling unit is further configured to calculate the remaining resources of each CPU and allocate the audio media data service according to the remaining resources of each CPU.
- The embodiment of the present invention uses the media control unit to perform resource quantification calculation on the audio media data service to obtain the resource capacity consumed by processing the audio media data service; the resource scheduling unit calculates the remaining resources of each CPU and allocates the audio media data service according to the remaining resources of each CPU and the resource capacity consumed by processing the audio media data service.
- The resource scheduling unit in the embodiment of the present invention calculates the remaining resources of each CPU, stores the remaining resources of each CPU in the first resource scheduling management list, and allocates the audio media data service according to a preset allocation rule based on the first resource scheduling management list and the resource capacity consumed by processing the audio media data service.
- The embodiment of the present invention allocates the audio media data service according to the remaining resources of each CPU and the resource capacity consumed by processing the audio media data service.
- The resource scheduling unit is also used to calculate the remaining resources of each GPU and allocate the video media data service according to the remaining resources of each GPU.
- The media control unit first performs resource quantification calculation on the video media data service to obtain the resource capacity consumed by processing the video media data service; the resource scheduling unit calculates the remaining resources of each GPU and allocates the video media data service according to the remaining resources of each GPU and the resource capacity consumed by processing the video media data service.
- The resource scheduling unit in the embodiment of the present invention calculates the remaining resources of each GPU, stores the remaining resources of each GPU in the second resource scheduling management list, and allocates the video media data service according to a preset allocation rule based on the second resource scheduling management list and the resource capacity consumed by processing the video media data service.
- The embodiment of the present invention uses the resource scheduling unit to uniformly manage the deployed GPU hardware resources, schedule them reasonably, select an appropriate GPU to process the video media stream, and complete the decoding, encoding, scaling, synthesis, and so on of the video media stream.
- FIG. 9 is a schematic diagram of the first media server architecture of the embodiment of the present invention.
- The media control unit is the module that interacts with the external application server (AS), and is responsible for parsing the SIP signaling issued by the AS into internal specific service commands, and for converting internal responses or requests into SIP signaling and sending them to the AS.
- The media control unit in the embodiment of the present invention performs logical control of the service according to the specific service command (for example, video services such as playback, recording, and conference), calculates the resource capacity actually consumed by this service, and thereby provides the conditions for the resource scheduling unit to subsequently find the corresponding CPU resource node or GPU node.
- The resource scheduling unit manages all CPU resource pools and GPU nodes. Through a periodic node query function, whenever a GPU node goes online (or offline), the resource scheduling unit detects this node in time and adds it to (or deletes it from) the resource scheduling management list. In addition, if the CPU resources are deployed as multiple nodes in a distributed manner, the CPU resource nodes also need to be added, deleted, and scheduled in real time.
- The resource scheduling unit thus implements the unified management of all nodes, and adjusts and uses the GPU nodes reasonably and effectively in accordance with a certain scheduling algorithm.
- GPU nodes: multiple GPU nodes are deployed in a distributed manner (these can be various physical machines carrying GPU graphics processors, that is, various hardware devices); one or more nodes are selected through the resource scheduling unit, and the SDK tools provided by the GPU manufacturer are called to process the video media streams (including video decoding, encoding, scaling, synthesis, and so on), finally completing various video media forwarding, transcoding, recording, and conference functions.
- Transcoding video playback function: find the media file under the path indicated in the play signaling and read the video data from the file. If transcoding is not needed, directly package the video data and send it to the terminal through the RTP protocol; otherwise, first decode the video data into YUV data, then scale it, and finally encode, package, and send it to the terminal.
- The encoding, decoding, scaling, and other operations must be performed in the GPU node allocated by the resource scheduling unit.
- Recording function: receive the video data from the terminal. If transcoding is not required, write the video data directly into the recording file; otherwise, first decode the video data into YUV data, then scale it, and finally encode it and write it into the recording file. Operations such as encoding, decoding, and scaling must be performed in the GPU node allocated by the resource scheduling unit.
- Conference function: the service processing module receives the request to join the conference, joins the video terminal to the specified conference, and receives the video data sent by the terminal. The video data is first decoded into YUV data, then scaled, then synthesized according to certain rules, and finally encoded and sent to the terminals. The encoding, decoding, scaling, and other operations must be performed in the GPU node allocated by the resource scheduling unit.
- FIG. 10 is a schematic diagram of a second media server architecture according to an embodiment of the present invention.
- In the first media server architecture there is one media control unit and one resource scheduling unit, and the external network element only needs to interact with that single media control unit. The second media server architecture, by contrast, consists of several separate media server systems, and which media control unit is used is controlled by the external network element, which may be a proxy server (SIP PROXY) or an application server (AS).
- According to FIG. 10, multiple media control units and scheduling units are provided, the media control units correspond to the scheduling units one by one to form groups, and each group is provided with a GPU uniquely corresponding to that group.
- The second media server architecture is often used when the number of users is particularly large. Since the GPU media processing resource pool in this architecture is subordinate only to a certain media control unit, GPU resources may be wasted.
- FIG. 11 is a schematic diagram of a third media server architecture according to an embodiment of the present invention.
- There are multiple media control units and a single scheduling unit, and each media control unit is connected to the scheduling unit.
- The third media server architecture realizes the separation of service processing and computing power.
- The GPU hardware is not simply subordinate to a certain media control unit; instead, it can be connected to multiple media control units, and the final GPU node is selected by deploying a separate resource scheduling unit that schedules the GPU node resources reasonably.
- The GPU hardware resources can thus work for multiple media control units at the same time. Considering how precious GPU resources are, the GPU hardware resources in the same area can be deployed and managed centrally to maximize their utilization rate and make them used efficiently and reasonably.
- In the third media server architecture, the upper-level media control units are deployed as virtualized "clusters", but these "clusters" can be independent of one another, or even deployed remotely, and they only process the service logic and the audio code streams, which have low performance consumption.
- The centralized GPU hardware resources serve these virtualized "clusters" but only process the video media streams (including video decoding, encoding, scaling, synthesis, and so on). This not only realizes the separation of service processing and GPU media computing, but also avoids the disadvantages of the GPU under virtualization technology, and finally effectively improves the service capability and service quality of the media server.
- FIG. 12 is a schematic diagram of the fourth media server architecture in the embodiment of the present invention.
- This architecture is mainly based on the consideration that, with the continuous development of network technology, GPU virtualization is also becoming achievable, and a cloud-based operating system can schedule the virtualized GPU resources according to certain algorithms.
- This media server architecture can not only use the GPU to process video media data, but also makes it lighter and simpler to deploy and implement the media data processing system.
- The GPU in the embodiment of the present invention mainly performs video media data processing on the video media data service and feeds the processing result back to the CPU to which the video media data service is allocated, where the video media data processing includes one or more of the following: encoding, decoding, scaling, and synthesis.
- The third embodiment of the present invention provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of any one of the media data processing methods in the first embodiment of the present invention.
- The specific content can be understood with reference to the first embodiment of the present invention and will not be discussed in detail here.
- The fourth embodiment of the present invention provides a media server, including: a memory 1301, a processor 1302, and a computer program stored on the memory 1301 and executable on the processor 1302, where the computer program, when executed by the processor 1302, implements the steps of any one of the methods in the first embodiment of the present invention.
- the specific content can be understood with reference to the first embodiment of the present invention, and will not be discussed in detail here.
- The CPU allocates the audio media data service to a CPU for processing and allocates the video media data service to a graphics processing unit (GPU) for processing according to the service type of the service to be processed. That is, the embodiment of the present invention uses the CPU to process the signaling service and the audio media data service, and uses the GPU to complete the processing of the video media data service, so as to give full play to the advantages of the GPU in processing video media data services, thereby effectively avoiding the excessive CPU consumption caused by processing video media data services on the CPU.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Claims (13)
- A media data processing method, comprising: determining, by a central processing unit (CPU), the service type of a service to be processed, the service type comprising an audio media data service and a video media data service; and allocating the audio media data service to a CPU for processing and allocating the video media data service to a graphics processing unit (GPU) for processing; wherein the CPU is provided on the media server or deployed on a physical machine in a distributed manner, and/or the GPU is provided on the media server or deployed on a physical machine in a distributed manner.
- The method according to claim 1, further comprising, before determining the service type of the service to be processed by the CPU: parsing a received message to obtain the service to be processed corresponding to the message.
- The method according to claim 1, wherein, when the media server comprises a plurality of CPUs, allocating the audio media data service to a CPU for processing comprises: calculating the remaining resources of each CPU, and allocating the audio media data service according to the remaining resources of each CPU.
- The method according to claim 3, further comprising: performing resource quantification calculation on the audio media data service to obtain the resource capacity consumed by processing the audio media data service; wherein calculating the remaining resources of each CPU and allocating the audio media data service according to the remaining resources of each CPU comprises: calculating the remaining resources of each CPU, and allocating the audio media data service according to the remaining resources of each CPU and the resource capacity consumed by processing the audio media data service.
- The method according to claim 4, wherein calculating the remaining resources of each CPU and allocating the audio media data service according to the remaining resources of each CPU and the resource capacity consumed by processing the audio media data service comprises: calculating the remaining resources of each CPU, storing the remaining resources of each CPU in a first resource scheduling management list, and allocating the audio media data service according to a preset allocation rule based on the first resource scheduling management list and the resource capacity consumed by processing the audio media data service.
- The method according to claim 1, wherein, when the media server comprises a plurality of GPUs, allocating the video media data service to a GPU for processing comprises: calculating the remaining resources of each GPU, and allocating the video media data service according to the remaining resources of each GPU.
- The method according to claim 6, further comprising: performing resource quantification calculation on the video media data service to obtain the resource capacity consumed by processing the video media data service; wherein calculating the remaining resources of each GPU and allocating the video media data service according to the remaining resources of each GPU comprises: calculating the remaining resources of each GPU, and allocating the video media data service according to the remaining resources of each GPU and the resource capacity consumed by processing the video media data service.
- The method according to claim 7, wherein calculating the remaining resources of each GPU and allocating the video media data service according to the remaining resources of each GPU and the resource capacity consumed by processing the video media data service comprises: calculating the remaining resources of each GPU, storing the remaining resources of each GPU in a second resource scheduling management list, and allocating the video media data service according to a preset allocation rule based on the second resource scheduling management list and the resource capacity consumed by processing the video media data service.
- The method according to claim 5 or 8, wherein the preset allocation rule comprises: selecting, based on a maximum load interference algorithm, the CPU with the largest remaining capacity to process the audio media data service to be processed; and selecting, based on the maximum load interference algorithm, the GPU with the largest remaining capacity to process the video media data service to be processed, and, when the remaining capacity of that GPU cannot satisfy the processing of the video media data service, reselecting the CPU with the largest remaining capacity to process the video media data service to be processed.
- The method according to any one of claims 1 to 8, further comprising: performing video media data processing on the video media data service by the GPU, and sending the processing result to the CPU to which the video media data service is allocated, wherein the video media data processing comprises one or more of the following: encoding, decoding, scaling, and synthesis.
- A media data processing device, comprising: a media control unit configured to determine, through a CPU, the service type of a service to be processed, the service type comprising an audio media data service and a video media data service; and a resource scheduling unit configured to allocate the audio media data service to a CPU for processing and allocate the video media data service to a graphics processing unit (GPU) for processing; wherein the CPU is provided on the media server or deployed on a physical machine in a distributed manner, and/or the GPU is provided on the media server or deployed on a physical machine in a distributed manner.
- A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the media data processing method according to any one of claims 1 to 10.
- A media server, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 10.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910810921.7A CN112445605A (zh) | 2019-08-30 | 2019-08-30 | Media data processing method and device, and media server |
CN201910810921.7 | 2019-08-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021036784A1 (zh) | 2021-03-04 |
Family
ID=74685086
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/108467 WO2021036784A1 (zh) | Media data processing method and device, media server, and computer-readable storage medium | 2019-08-30 | 2020-08-11 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112445605A (zh) |
WO (1) | WO2021036784A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115134628A (zh) * | 2022-06-27 | 2022-09-30 | 深圳市欢太科技有限公司 | Streaming media transmission method and apparatus, terminal device, and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114035935B (zh) * | 2021-10-13 | 2024-07-19 | 上海交通大学 | High-throughput heterogeneous resource management method and device for multi-stage AI cloud services |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080270767A1 (en) * | 2007-04-26 | 2008-10-30 | Kabushiki Kaisha Toshiba | Information processing apparatus and program execution control method |
CN102143386A (zh) * | 2010-01-28 | 2011-08-03 | 复旦大学 | Streaming media server acceleration method based on a graphics processing unit |
CN104952096A (zh) * | 2014-03-31 | 2015-09-30 | 中国电信股份有限公司 | CPU and GPU hybrid cloud rendering method, device, and system |
CN106210036A (zh) * | 2016-07-08 | 2016-12-07 | 中霆云计算科技(上海)有限公司 | Video display acceleration method in a virtual desktop presentation device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855218A (zh) * | 2012-05-14 | 2013-01-02 | 中兴通讯股份有限公司 | Data processing system, method, and device |
JP2017069744A (ja) * | 2015-09-30 | 2017-04-06 | 沖電気工業株式会社 | Communication control device, program, recording medium, and communication control method |
US11475112B1 (en) * | 2016-09-12 | 2022-10-18 | Verint Americas Inc. | Virtual communications identification system with integral archiving protocol |
CN106534287B (zh) * | 2016-10-27 | 2019-11-08 | 杭州迪普科技股份有限公司 | Session table entry management method and device |
- 2019-08-30: CN CN201910810921.7A patent/CN112445605A (zh), active, Pending
- 2020-08-11: WO PCT/CN2020/108467 patent/WO2021036784A1 (zh), active, Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080270767A1 (en) * | 2007-04-26 | 2008-10-30 | Kabushiki Kaisha Toshiba | Information processing apparatus and program execution control method |
CN102143386A (zh) * | 2010-01-28 | 2011-08-03 | 复旦大学 | Streaming media server acceleration method based on a graphics processing unit |
CN104952096A (zh) * | 2014-03-31 | 2015-09-30 | 中国电信股份有限公司 | CPU and GPU hybrid cloud rendering method, device, and system |
CN106210036A (zh) * | 2016-07-08 | 2016-12-07 | 中霆云计算科技(上海)有限公司 | Video display acceleration method in a virtual desktop presentation device |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115134628A (zh) * | 2022-06-27 | 2022-09-30 | 深圳市欢太科技有限公司 | Streaming media transmission method and apparatus, terminal device, and storage medium |
CN115134628B (zh) * | 2022-06-27 | 2024-08-20 | 深圳市欢太科技有限公司 | Streaming media transmission method and apparatus, terminal device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112445605A (zh) | 2021-03-05 |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20856811; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20856811; Country of ref document: EP; Kind code of ref document: A1
 | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 080822)
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20856811; Country of ref document: EP; Kind code of ref document: A1