CN106791957B - Video live broadcast processing method and device - Google Patents

Video live broadcast processing method and device

Info

Publication number
CN106791957B
CN106791957B CN201611116392.3A
Authority
CN
China
Prior art keywords
live broadcast
scheduling
video
memory
live
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611116392.3A
Other languages
Chinese (zh)
Other versions
CN106791957A (en)
Inventor
郭兴宝
单衍景
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING HUAXIA DENTSU TECHNOLOGY Co.,Ltd.
Original Assignee
BEIJING HUAXIA DIANTONG TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING HUAXIA DIANTONG TECHNOLOGY Co Ltd filed Critical BEIJING HUAXIA DIANTONG TECHNOLOGY Co Ltd
Priority to CN201611116392.3A priority Critical patent/CN106791957B/en
Publication of CN106791957A publication Critical patent/CN106791957A/en
Application granted granted Critical
Publication of CN106791957B publication Critical patent/CN106791957B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2405Monitoring of the internal components or processes of the server, e.g. server load
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80Responding to QoS
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2385Channel allocation; Bandwidth allocation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2407Monitoring of transmitted content, e.g. distribution time, number of downloads

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a live video processing method and device. The method comprises: after live video broadcasting is started, starting a scheduling thread, a detection thread, and a switching thread; the detection thread detects the I/O utilization, the available memory, and the concurrency of each video stream in the live video server; the scheduling thread schedules the scheduling queue of live video channels according to the detection data of the detection thread; and the switching thread switches the live broadcast type according to the scheduling result of the scheduling thread. The invention guarantees both live video quality and the number of concurrent video streams.

Description

Video live broadcast processing method and device
Technical Field
The invention relates to the technical field of live video, in particular to a live video processing method and device.
Background
In live video streaming, the commonly used streaming media approaches are HTTP (Hypertext Transfer Protocol) progressive download and real-time streaming based on RTSP (Real Time Streaming Protocol)/RTP (Real-time Transport Protocol). The two are completely different in principle; on the public network, HTTP progressive download is currently the more convenient and widely applicable choice, and Apple's HTTP Live Streaming (hereinafter HLS) dynamic-bitrate adaptive technology is representative of this approach. HLS was originally developed by Apple for mobile devices such as the iPhone, iPod touch, and iPad, and it now has many desktop applications as well; for example, HTML5 can support the technology directly.
The original HLS protocol is implemented with small slices: a large number of TS (MPEG Transport Stream) files are generated on the hard disk, and storing and reading these files produces a large number of I/O (input/output) operations. Limited by I/O performance, responsiveness during live broadcasting drops and request speed is affected. The advantage of the original HLS, however, is that it supports high concurrency and is not limited by memory size, which made it a good fit in the early days of server development, when memory was not particularly plentiful. The original HLS protocol therefore places heavy demands on storage I/O. HLS live broadcasting whose slices are stored on disk is referred to here as disk-based (landed) live broadcast; its drawback is a heavy dependence on I/O, which affects video quality.
The latest real-time slicing technology based on in-memory caching ensures that the slicing and packaging capacity of a single server is no longer the bottleneck. Instead of writing TS slice files to disk, it keeps them in memory, which greatly reduces disk I/O and extends the life of the server's disks. When a client requests data, it is served directly from the server's memory, which greatly improves response speed and the viewing experience. The disadvantages are a very high memory requirement and strong limits on concurrency and throughput. HLS live broadcasting whose slices are stored in memory is referred to here as memory live broadcast; its drawback is a strong dependence on memory, so concurrency cannot be guaranteed.
However, the prior art provides no effective solution for guaranteeing both video quality and the number of concurrent video streams.
Disclosure of Invention
An embodiment of the invention provides a live video processing method for guaranteeing both video quality and the number of concurrent video streams, the method comprising:
after live video broadcasting is started, starting a scheduling thread, a detection thread, and a switching thread; the detection thread detecting the I/O utilization, the available memory, and the concurrency of each video stream in the live video server; the scheduling thread scheduling a scheduling queue of live video channels according to the detection data of the detection thread; and the switching thread switching the live broadcast type according to the scheduling result of the scheduling thread;
the scheduling thread being further configured to:
judge whether the available memory of the live video server is greater than a second set value;
if the available memory of the live video server is greater than the second set value, take a disk-based live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching thread to switch it from disk-based live broadcast to memory live broadcast, and delete the channel from the scheduling queue;
if the available memory of the live video server is not greater than the second set value, take a memory live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching thread to switch it from memory live broadcast to disk-based live broadcast, and delete the channel from the scheduling queue;
wherein the disk-based live broadcast channel eligible for scheduling is the one with the largest concurrency among the disk-based channels in the scheduling queue, more than a set time having elapsed since it was last scheduled; and/or the memory live broadcast channel eligible for scheduling is the one with the smallest concurrency among the memory channels in the scheduling queue, more than the set time having elapsed since it was last scheduled.
An embodiment of the invention further provides a live video processing apparatus for guaranteeing both video quality and the number of concurrent video streams, the apparatus comprising:
a detection module, configured to detect the I/O utilization, the available memory, and the concurrency of each video stream in the live video server;
a scheduling module, configured to schedule a scheduling queue of live video channels according to the detection data of the detection module;
a switching module, configured to switch the live broadcast type according to the scheduling result of the scheduling module;
the scheduling module being further configured to:
judge whether the available memory of the live video server is greater than a second set value;
if the available memory of the live video server is greater than the second set value, take a disk-based live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching module to switch it from disk-based live broadcast to memory live broadcast, and delete the channel from the scheduling queue;
if the available memory of the live video server is not greater than the second set value, take a memory live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching module to switch it from memory live broadcast to disk-based live broadcast, and delete the channel from the scheduling queue;
wherein the disk-based live broadcast channel eligible for scheduling is the one with the largest concurrency among the disk-based channels in the scheduling queue, more than a set time having elapsed since it was last scheduled; and/or the memory live broadcast channel eligible for scheduling is the one with the smallest concurrency among the memory channels in the scheduling queue, more than the set time having elapsed since it was last scheduled.
In the embodiments of the invention, the input/output (I/O) utilization, the available memory, and the concurrency of each video stream in the live video server are detected; the scheduling queue of live video channels is scheduled according to the detection data; and the live broadcast type is switched according to the scheduling result. This achieves dynamic control of the live video channels and optimizes both video quality and concurrency by switching the storage state of the video.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort. In the drawings:
fig. 1 is a schematic diagram of a video live broadcast processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a video live broadcast processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
The embodiments of the invention provide a live broadcast technique that sits between disk-based live broadcast and memory live broadcast, so that video quality and concurrency can be reconciled. In the embodiments, disk-based live broadcast and memory live broadcast are switched dynamically according to conditions such as the I/O utilization, the available memory, and the concurrency of each video stream, thereby keeping the live broadcast process smooth and stable.
Fig. 1 is a schematic diagram of the live video processing method in an embodiment of the present invention. As shown in Fig. 1, the method may include:
after live video broadcasting is started, starting a scheduling thread, a detection thread, and a switching thread; the detection thread detecting the I/O utilization, the available memory, and the concurrency of each video stream in the live video server; the scheduling thread scheduling the scheduling queue of live video channels according to the detection data of the detection thread; and the switching thread switching the live broadcast type according to the scheduling result of the scheduling thread.
As can be seen from Fig. 1, the live video processing method of this embodiment addresses both the fluency of live video and the reuse of the live video server. Without adding servers, it automatically adjusts between memory live broadcast and disk-based live broadcast according to the number of live channels and the server's performance, and provides a stable live broadcast service to the outside. In implementation, the live broadcast type is switched and allocated automatically, mainly on the basis of HLS memory slices and physical (disk) slices combined with conditions such as bandwidth, memory, and I/O storage, so as to deliver a complete, reliable live video broadcast.
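As a minimal illustration of this three-thread structure (not an interface defined by this disclosure), the sketch below starts a detection worker, a scheduling worker, and a switching worker; detect_loop, schedule_loop, and switch_loop are placeholder callables standing in for the loops described in the following paragraphs.

```python
import threading

def start_live_processing(detect_loop, schedule_loop, switch_loop):
    """Start the three cooperating worker threads described above.

    The three callables are placeholders for the detection, scheduling,
    and switching loops; daemon threads are used so the workers stop
    when the live-broadcast server process exits.
    """
    threads = [
        threading.Thread(target=detect_loop, name="detection", daemon=True),
        threading.Thread(target=schedule_loop, name="scheduling", daemon=True),
        threading.Thread(target=switch_loop, name="switching", daemon=True),
    ]
    for t in threads:
        t.start()
    return threads
```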
In a specific implementation, after live video broadcasting is started, a scheduling thread, a detection thread, and a switching thread are started. The detection thread detects the I/O utilization, the available memory, and the concurrency of each video stream in the live video server; the concurrency of a video stream may be the number of viewers of that stream. In one embodiment, the detection thread obtains the I/O utilization of the live video server by executing the iostat command. iostat is designed to monitor the I/O load of system devices: its first report shows statistics accumulated since system startup, while each subsequent report covers the interval since the previous one, and the number and interval of reports can be specified. The I/O utilization can therefore be obtained by invoking the system command iostat and taking %util as the utilization, which indicates what percentage of the time in one second was spent on I/O operations.
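As a rough sketch of how a detection thread could read %util via iostat, the example below shells out to the command and parses the last sample; the command-line options, the default device name, and the helper name read_io_utilization are illustrative assumptions, not part of this disclosure.

```python
import subprocess

def read_io_utilization(device="sda"):
    """Return the %util value reported by iostat for one block device.

    Runs `iostat -dx <device> 1 2` so that the second sample reflects the
    interval since the previous report rather than statistics since boot,
    and returns the %util column of that second sample.
    """
    output = subprocess.run(
        ["iostat", "-dx", device, "1", "2"],
        capture_output=True, text=True, check=True,
    ).stdout
    util = 0.0
    for line in output.splitlines():
        fields = line.split()
        # Device rows start with the device name; %util is the last column.
        if fields and fields[0] == device:
            util = float(fields[-1])
    return util  # e.g. 83.5 means 83.5% of the time was spent on I/O

if __name__ == "__main__":
    print("disk %util:", read_io_utilization())
```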
In one embodiment, the detection thread obtains the available memory of the live video server by calling a system interface of the server; calling the system interface returns the size of the available memory. For example, the live video server can obtain memory information through the sysinfo interface (int sysinfo(struct sysinfo *info)); the system's available memory can be regarded as the maximum memory the service program could request, and, allowing for the system's own usage, the memory available to the program can be taken as 80% of the system's available memory.
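The sketch below illustrates this 80% rule, using the psutil library as a stand-in for the sysinfo-style system interface mentioned above; the function name and the reserve_ratio parameter are assumptions made for this example.

```python
import psutil  # stand-in for the sysinfo()-style system interface

def usable_memory_bytes(reserve_ratio=0.8):
    """Return the memory the live-broadcast service may treat as available.

    Following the 80% rule described above, only reserve_ratio of the
    system's currently available memory is considered usable, leaving the
    remainder for the operating system's own needs.
    """
    system_available = psutil.virtual_memory().available
    return int(system_available * reserve_ratio)

if __name__ == "__main__":
    print("usable memory (MB):", usable_memory_bytes() // (1024 * 1024))
```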
In one embodiment, the detection thread obtains the concurrency of each video stream in the live video server by counting socket connections; when clients share the same IP address, the concurrency is counted by the number of times the same data slice is sent.
Two programs on a network exchange data over a bidirectional communication link, and each end of that link is called a socket. Establishing a network connection requires at least one pair of sockets (port numbers). A socket is essentially a programming interface (API); it describes an IP address and port and acts as the handle of a communication link, enabling communication between different virtual machines or different computers. A host on the Internet typically runs multiple pieces of service software and provides several services at once; each service opens a socket and binds it to a port, with different ports corresponding to different services.
The concurrency of each video stream in the live video server can therefore be obtained by keeping statistics on socket connections. Because HLS downloads are segmented, counting purely by client IP would introduce errors: many viewers may sit behind the same IP, so when the client IP is the same, the final concurrency is counted by the number of times the same data slice is sent.
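One possible way to count concurrency in this spirit is sketched below: each distinct client IP contributes directly, while viewers hidden behind a shared IP are estimated from how many times the same TS segment was sent to that IP. The class name, method names, and segment labels are assumptions for illustration only.

```python
from collections import defaultdict

class ConcurrencyCounter:
    """Estimate per-channel concurrency for HLS segment downloads."""

    def __init__(self):
        # channel -> client IP -> segment name -> number of times sent
        self._sends = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

    def record_send(self, channel, client_ip, segment):
        """Record that one segment of one channel was sent to one client IP."""
        self._sends[channel][client_ip][segment] += 1

    def concurrency(self, channel):
        """Sum the estimated viewers across all client IPs of a channel."""
        total = 0
        for per_segment in self._sends[channel].values():
            # Viewers behind one IP are approximated by the maximum number
            # of times any single segment was delivered to that IP.
            total += max(per_segment.values(), default=0)
        return total

if __name__ == "__main__":
    counter = ConcurrencyCounter()
    counter.record_send("channel-1", "10.0.0.5", "seg_100.ts")
    counter.record_send("channel-1", "10.0.0.5", "seg_100.ts")  # 2 viewers share one IP
    counter.record_send("channel-1", "10.0.0.9", "seg_100.ts")
    print(counter.concurrency("channel-1"))  # prints 3
```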
The scheduling thread schedules the scheduling queue of live video channels according to the detection data of the detection thread; it can modify the channel queue parameters based on that data and notify the switching thread to switch. In a specific implementation, the scheduling thread may further be configured to: after receiving a channel-open instruction, judge whether the I/O utilization of the live video server exceeds a first set value; if it does, refuse to add the new channel to the scheduling queue; if it does not, add the new channel to the scheduling queue. The first set value can be set and adjusted according to actual conditions, for example 80%. For instance, after receiving the channel-open command, the scheduling thread checks whether system I/O utilization exceeds 80%. If it does, adding a new channel is forbidden, because when the disk %util exceeds 80% reads incur extra delay (wait), indicating that too many disk-based live channels are already in use; otherwise the new channel is added to the scheduling queue for scheduling.
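A minimal sketch of this channel-open check follows, assuming the 80% first set value mentioned above; the function name on_channel_open and the plain-list queue are illustrative placeholders.

```python
IO_UTIL_LIMIT = 80.0  # the first set value from the text (80%)

def on_channel_open(channel, schedule_queue, io_utilization):
    """Decide whether a newly opened channel may join the scheduling queue.

    If disk %util already exceeds the first set value, the new channel is
    refused, since a saturated disk indicates too many disk-based live
    channels are in use; otherwise the channel is added to the queue.
    """
    if io_utilization > IO_UTIL_LIMIT:
        return False  # refuse to add the new channel
    schedule_queue.append(channel)
    return True
```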
In a particular embodiment, the scheduling thread may further be configured to: judge whether the available memory of the live video server is greater than a second set value; if it is, take a disk-based live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching thread to switch it from disk-based live broadcast to memory live broadcast, and delete the channel from the scheduling queue; if it is not, take a memory live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching thread to switch it from memory live broadcast to disk-based live broadcast, and delete the channel from the scheduling queue. The second set value can be set and adjusted according to actual conditions, for example 100 MB.
For example, when scheduling channels, the scheduling thread continually processes the task states in the scheduling queue. The processing procedure may include:
1. Sort the queue by concurrency from largest to smallest, and reset all switching factors to 0.
2. Prefer memory live broadcast: judge whether the available memory is greater than 100 MB, keeping 100 MB in reserve for adding new channels and other overhead (the 100 MB figure is an empirical value).
3. If the available memory is greater than 100 MB, take a disk-based live broadcast channel that is eligible for scheduling out of the scheduling queue, hand it to the switching thread to switch from disk-based live broadcast to memory live broadcast, and delete it from the scheduling queue.
4. If the available memory is less than or equal to 100 MB, take a memory live broadcast channel that is eligible for scheduling out of the scheduling queue, hand it to the switching thread to switch from memory live broadcast to disk-based live broadcast, and delete it from the scheduling queue.
In a specific embodiment, the disk-based live broadcast channel eligible for scheduling is the one with the largest concurrency among the disk-based channels in the scheduling queue, with more than a set time having elapsed since it was last scheduled; and/or the memory live broadcast channel eligible for scheduling is the one with the smallest concurrency among the memory channels in the scheduling queue, with more than the set time having elapsed since it was last scheduled.
The set time can be set and adjusted according to actual conditions, for example 30 seconds. A disk-based channel is eligible only if it has the largest concurrency and more than 30 seconds have passed since it was last scheduled; a memory channel is eligible only if it has the smallest concurrency and more than 30 seconds have passed since it was last scheduled. The 30-second limit avoids switching the scheduling mode too frequently, which would increase CPU and memory load; it is an empirical value tuned during actual testing.
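Putting steps 1 to 4 together, the sketch below performs one scheduling pass with the 100 MB and 30-second values discussed above; the channel attributes (mode, concurrency, last_scheduled) and the switch_thread.switch() call are interfaces invented for this illustration, not the disclosure's own.

```python
import time

MEMORY_RESERVE = 100 * 1024 * 1024   # second set value: keep 100 MB free
MIN_SCHEDULE_GAP = 30                # seconds since a channel was last scheduled

def schedule_once(queue, available_memory, switch_thread, now=None):
    """Run one pass of the scheduling procedure sketched above."""
    now = time.time() if now is None else now
    # Step 1: sort the queue by concurrency, largest first.
    queue.sort(key=lambda ch: ch.concurrency, reverse=True)

    def eligible(ch, mode):
        return ch.mode == mode and now - ch.last_scheduled > MIN_SCHEDULE_GAP

    if available_memory > MEMORY_RESERVE:
        # Steps 2-3: memory is preferred, so promote the busiest eligible
        # disk-based channel to memory live broadcast.
        candidates = [ch for ch in queue if eligible(ch, "disk")]
        if not candidates:
            return None
        chosen, target = candidates[0], "memory"    # largest concurrency
    else:
        # Step 4: memory is tight, so demote the least-watched eligible
        # memory channel to disk-based live broadcast.
        candidates = [ch for ch in queue if eligible(ch, "memory")]
        if not candidates:
            return None
        chosen, target = candidates[-1], "disk"     # smallest concurrency

    queue.remove(chosen)
    switch_thread.switch(chosen, target)  # the channel is re-queued after switching
    return chosen
```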
The switching thread switches the live broadcast type according to the scheduling result of the scheduling thread. In a specific embodiment, the switching thread switches the live broadcast type when an I-frame of the video data arrives. After the scheduling thread completes one round of scheduling, the switching thread must switch the live channel, and fluency has to be preserved during the switch, so the thread waits for the arrival of a video I-frame; because the I-frame interval differs between systems, the speed of switching the channel mode also differs.
In a specific embodiment, the switching thread may further be configured to: when switching from memory live broadcast to disk-based live broadcast, create a new file, write the new data to disk, and notify the data-sending thread to read the data from disk and send it to the clients; when switching from disk-based live broadcast to memory live broadcast, stop writing to the file, keep the new data in memory, and notify the data-sending thread to read the data from memory and send it to the clients.
For example, when switching from memory live broadcast to disk-based live broadcast, a new file is created, the new data is written to disk, and the data-sending thread is notified to read data from disk, starting from that slice, and send it to the clients; the switching time of the task is updated. Once the data-sending thread reaches the specified time slice, it reads the data from disk and the data from before the switch is deleted. When switching from disk-based live broadcast to memory live broadcast, writing to the file stops, the data is kept in memory, and the data-sending thread is notified to read data from memory, starting from that slice, and send it to the clients; once it reaches the specified time slice, it reads the data from memory and the previous disk data is deleted, and the switching time of the task is updated. After the switch is completed, the task is added back to the scheduling queue for the scheduling thread to schedule.
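The sketch below outlines one way such a switch could proceed, waiting for an I-frame before redirecting writes and the data-sending side; wait_for_i_frame(), read_from(), and the channel attributes are hypothetical placeholders rather than interfaces defined by this disclosure.

```python
import io
import os
import time

def switch_channel(channel, target_mode, sender, queue):
    """Switch one channel between memory and disk-based live broadcast."""
    channel.wait_for_i_frame()        # switch only on a GOP boundary for fluency

    if target_mode == "disk":
        # Memory -> disk: create a new slice file and write new data to it.
        path = os.path.join(channel.slice_dir,
                            f"{channel.name}_{int(time.time())}.ts")
        channel.sink = open(path, "wb")
        sender.read_from("disk", start_slice=channel.current_slice)
    else:
        # Disk -> memory: stop writing files and keep new slices in RAM.
        if channel.sink is not None:
            channel.sink.close()
            channel.sink = None
        channel.memory_buffer = io.BytesIO()
        sender.read_from("memory", start_slice=channel.current_slice)

    channel.mode = target_mode
    channel.last_scheduled = time.time()  # record the switching time of the task
    queue.append(channel)                 # hand back for future scheduling
```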
As the foregoing embodiments show, the live video processing method of the embodiments of the present invention provides a public-network live broadcast technique that adapts to server performance, dynamically adjusting between memory live broadcast and disk-based live broadcast according to actual memory, network bandwidth, I/O performance, channel access volume, and so on, and can therefore suit a variety of live broadcast applications. For example, the method can solve the current problems of external live broadcast fluency and server reuse in a court public-network live broadcast system. Without adding servers, it automatically adjusts between memory and disk according to the number of live channels and server performance, and provides a stable live broadcast service to the outside. Based mainly on HLS memory slices and physical slices, and combining conditions such as bandwidth, memory, and I/O storage, it switches and allocates memory and disk-based live broadcast automatically so as to deliver a complete live broadcast of a court trial. Public court trials are key to judicial openness, and once the trial process is made public, guaranteeing quality and performance during the trial becomes an essential part of that openness. By applying the live video processing method of the embodiments of the present invention, video concurrency can be increased while video quality is guaranteed, providing a strong guarantee for public court trials. Obviously, the method is not limited to court business and can be extended to all live broadcast platforms.
Based on the same inventive concept, an embodiment of the present invention further provides a live video processing apparatus, described in the following embodiments. Because the principle by which the apparatus solves the problem is similar to that of the live video processing method, its implementation can refer to the implementation of the method, and repeated details are not described again.
Fig. 2 is a schematic diagram of the live video processing apparatus in an embodiment of the present invention. As shown in Fig. 2, the apparatus may include:
a detection module 201, configured to detect the I/O utilization, the available memory, and the concurrency of each video stream in the live video server;
a scheduling module 202, configured to schedule the scheduling queue of live video channels according to the detection data of the detection module 201;
a switching module 203, configured to switch the live broadcast type according to the scheduling result of the scheduling module 202.
In one embodiment, the detection module 201 may further be configured to:
obtain the I/O utilization of the live video server by executing the iostat command;
obtain the available memory of the live video server by calling a system interface of the live video server;
obtain the concurrency of each video stream in the live video server by counting socket connections, where, when clients share the same IP address, the concurrency is counted by the number of times the same data slice is sent.
In one embodiment, the scheduling module 202 may further be configured to:
after receiving a channel-open instruction, judge whether the I/O utilization of the live video server exceeds a first set value;
if the I/O utilization of the live video server exceeds the first set value, refuse to add the new channel to the scheduling queue;
if the I/O utilization of the live video server does not exceed the first set value, add the new channel to the scheduling queue.
In one embodiment, the scheduling module 202 may further be configured to:
judge whether the available memory of the live video server is greater than a second set value;
if the available memory of the live video server is greater than the second set value, take a disk-based live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching module to switch it from disk-based live broadcast to memory live broadcast, and delete the channel from the scheduling queue;
if the available memory of the live video server is not greater than the second set value, take a memory live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching module to switch it from memory live broadcast to disk-based live broadcast, and delete the channel from the scheduling queue.
In one embodiment, the disk-based live broadcast channel eligible for scheduling is the one with the largest concurrency among the disk-based channels in the scheduling queue, with more than a set time having elapsed since it was last scheduled; and/or the memory live broadcast channel eligible for scheduling is the one with the smallest concurrency among the memory channels in the scheduling queue, with more than the set time having elapsed since it was last scheduled.
In one embodiment, the switching module 203 may further be configured to switch the live broadcast type when an I-frame of the video data arrives.
In one embodiment, the switching module 203 may further be configured to:
when switching from memory live broadcast to disk-based live broadcast, create a new file, write the new data to disk, and read the data from disk and send it to the clients;
when switching from disk-based live broadcast to memory live broadcast, stop writing to the file, keep the new data in memory, and read the data from memory and send it to the clients.
In summary, in view of the respective advantages and disadvantages of memory live broadcast and disk-based live broadcast, the embodiments of the present invention provide a technique that addresses both the quality and the concurrency of live video. In the embodiments, the input/output (I/O) utilization, the available memory, and the concurrency of each video stream in the live video server are detected; the scheduling queue of live video channels is scheduled according to the detection data; and the live broadcast type is switched according to the scheduling result. This achieves dynamic control of the live video channels and optimizes both video quality and concurrency by switching the storage state of the video.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A live video processing method, characterized by comprising:
after live video broadcasting is started, starting a scheduling thread, a detection thread, and a switching thread; the detection thread detecting the input/output (I/O) utilization, the available memory, and the concurrency of each video stream in the live video server; the scheduling thread scheduling a scheduling queue of live video channels according to the detection data of the detection thread; and the switching thread switching the live broadcast type according to the scheduling result of the scheduling thread;
the detection thread being further configured to:
obtain the I/O utilization of the live video server by executing the iostat command;
obtain the available memory of the live video server by calling a system interface of the live video server;
obtain the concurrency of each video stream in the live video server by counting socket connections, wherein, when clients share the same IP address, the concurrency is counted by the number of times the same data slice is sent;
the scheduling thread being further configured to:
judge whether the available memory of the live video server is greater than a second set value;
if the available memory of the live video server is greater than the second set value, take a disk-based live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching thread to switch it from disk-based live broadcast to memory live broadcast, and delete the channel from the scheduling queue;
if the available memory of the live video server is not greater than the second set value, take a memory live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching thread to switch it from memory live broadcast to disk-based live broadcast, and delete the channel from the scheduling queue;
wherein the disk-based live broadcast channel eligible for scheduling is the one with the largest concurrency among the disk-based channels in the scheduling queue, more than a set time having elapsed since it was last scheduled; and/or the memory live broadcast channel eligible for scheduling is the one with the smallest concurrency among the memory channels in the scheduling queue, more than the set time having elapsed since it was last scheduled.
2. The method of claim 1, wherein the scheduling thread is further configured to:
after receiving a channel-open instruction, judge whether the I/O utilization of the live video server exceeds a first set value;
if the I/O utilization of the live video server exceeds the first set value, refuse to add the new channel to the scheduling queue;
if the I/O utilization of the live video server does not exceed the first set value, add the new channel to the scheduling queue.
3. The method of claim 1, wherein the switching thread is further configured to switch the live broadcast type when an I-frame of the video data arrives.
4. The method of claim 1, wherein the switching thread is further configured to:
when switching from memory live broadcast to disk-based live broadcast, create a new file, write the new data to disk, and notify the data-sending thread to read the data from disk and send it to the clients;
when switching from disk-based live broadcast to memory live broadcast, stop writing to the file, keep the new data in memory, and notify the data-sending thread to read the data from memory and send it to the clients.
5. A live video processing apparatus, characterized by comprising:
a detection module, configured to detect the I/O utilization, the available memory, and the concurrency of each video stream in the live video server;
a scheduling module, configured to schedule a scheduling queue of live video channels according to the detection data of the detection module;
a switching module, configured to switch the live broadcast type according to the scheduling result of the scheduling module;
the detection module being further configured to:
obtain the I/O utilization of the live video server by executing the iostat command;
obtain the available memory of the live video server by calling a system interface of the live video server;
obtain the concurrency of each video stream in the live video server by counting socket connections, wherein, when clients share the same IP address, the concurrency is counted by the number of times the same data slice is sent;
the scheduling module being further configured to:
judge whether the available memory of the live video server is greater than a second set value;
if the available memory of the live video server is greater than the second set value, take a disk-based live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching module to switch it from disk-based live broadcast to memory live broadcast, and delete the channel from the scheduling queue;
if the available memory of the live video server is not greater than the second set value, take a memory live broadcast channel that is eligible for scheduling out of the scheduling queue, hand the channel to the switching module to switch it from memory live broadcast to disk-based live broadcast, and delete the channel from the scheduling queue;
wherein the disk-based live broadcast channel eligible for scheduling is the one with the largest concurrency among the disk-based channels in the scheduling queue, more than a set time having elapsed since it was last scheduled; and/or the memory live broadcast channel eligible for scheduling is the one with the smallest concurrency among the memory channels in the scheduling queue, more than the set time having elapsed since it was last scheduled.
6. The apparatus of claim 5, wherein the scheduling module is further configured to:
after receiving a channel-open instruction, judge whether the I/O utilization of the live video server exceeds a first set value;
if the I/O utilization of the live video server exceeds the first set value, refuse to add the new channel to the scheduling queue;
if the I/O utilization of the live video server does not exceed the first set value, add the new channel to the scheduling queue.
7. The apparatus of claim 5, wherein the switching module is further configured to switch the live broadcast type when an I-frame of the video data arrives.
8. The apparatus of claim 5, wherein the switching module is further configured to:
when switching from memory live broadcast to disk-based live broadcast, create a new file, write the new data to disk, and read the data from disk and send it to the clients;
when switching from disk-based live broadcast to memory live broadcast, stop writing to the file, keep the new data in memory, and read the data from memory and send it to the clients.
CN201611116392.3A 2016-12-07 2016-12-07 Video live broadcast processing method and device Active CN106791957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611116392.3A CN106791957B (en) 2016-12-07 2016-12-07 Video live broadcast processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611116392.3A CN106791957B (en) 2016-12-07 2016-12-07 Video live broadcast processing method and device

Publications (2)

Publication Number Publication Date
CN106791957A CN106791957A (en) 2017-05-31
CN106791957B true CN106791957B (en) 2020-02-14

Family

ID=58882089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611116392.3A Active CN106791957B (en) 2016-12-07 2016-12-07 Video live broadcast processing method and device

Country Status (1)

Country Link
CN (1) CN106791957B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110875951B (en) * 2018-09-04 2022-07-01 北京奇虎科技有限公司 Statistical method and device for concurrency of call messages
CN112788352B (en) * 2019-11-01 2023-04-25 Vidaa(荷兰)国际控股有限公司 Live broadcast time shifting method, terminal and storage medium
CN111093107A (en) * 2019-12-18 2020-05-01 深圳市麦谷科技有限公司 Method and device for playing real-time live stream
CN111831432B (en) * 2020-07-01 2023-06-16 Oppo广东移动通信有限公司 IO request scheduling method and device, storage medium and electronic equipment
CN112199250B (en) * 2020-09-15 2024-05-14 北京达佳互联信息技术有限公司 Picture monitoring method and device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005063189A (en) * 2003-08-14 2005-03-10 Fujitsu Ltd Electronic equipment and processing method, and program

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7110026B2 (en) * 2001-07-03 2006-09-19 Logitech Europe S.A. Image tagging for post processing
CN1624669A (en) * 2003-12-02 2005-06-08 陈凯 Method of raising magnetic disc read speed of docament service device
CN101287107B (en) * 2008-05-29 2010-10-13 腾讯科技(深圳)有限公司 Demand method, system and device of media file
CN102710969B (en) * 2012-05-31 2015-02-11 北京冠华天视数码科技有限公司 Method and system for transmitting live broadcast data through wireless network
CN105635811A (en) * 2014-11-06 2016-06-01 中广美意文化传播控股有限公司 Advertisement playing method and device based on broadcast and TV wireless live broadcast signal
CN105187848B (en) * 2015-08-18 2018-06-29 浪潮软件集团有限公司 Content distribution network system and method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005063189A (en) * 2003-08-14 2005-03-10 Fujitsu Ltd Electronic equipment and processing method, and program

Also Published As

Publication number Publication date
CN106791957A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106791957B (en) Video live broadcast processing method and device
US11228630B2 (en) Adaptive bit rate media streaming based on network conditions received via a network monitor
US10631024B2 (en) Intelligent video streaming system
US11005702B2 (en) Live media encoding failover system
WO2017101488A1 (en) Real-time transcoding monitoring method and real-time transcoding system
US9998915B2 (en) Wireless communication device
CN108243222A (en) Server network architecture method and device
KR101774983B1 (en) Method, apparatus, and system for monitoring quality of ott video
US20150134846A1 (en) Method and apparatus for media segment request retry control
CN106412630B (en) Video list switching control method and device
US9363199B1 (en) Bandwidth management for data services operating on a local network
US9813321B2 (en) Hybrid content delivery system
US9607002B2 (en) File retrieval from multiple storage locations
US10681398B1 (en) Video encoding based on viewer feedback
US10986156B1 (en) Quality prediction apparatus, quality prediction method and program
CN106330548B (en) Flow statistical method, device and system
CN113055493A (en) Data packet processing method, device, system, scheduling device and storage medium
US11985072B2 (en) Multimedia data stream processing method, electronic device, and storage medium
US9813316B1 (en) Implementing scalable throttled poller
CN108040261B (en) Network live broadcast management method and device and storage medium
US20240214643A1 (en) Bullet-screen comment data processing
CN111885198B (en) Message processing method, system and device and electronic setting
JP2018082241A (en) Moving image reproduction apparatus, moving image reproduction method and program
CN118337764A (en) Video stream processing method and device, nonvolatile storage medium and electronic equipment
CN116743944A (en) High-performance video data distribution method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 101, 5 / F, building 6, yard 3, fengxiu Middle Road, Haidian District, Beijing 100085

Patentee after: BEIJING HUAXIA DENTSU TECHNOLOGY Co.,Ltd.

Address before: 100085 A, Ka Wah building, No. 9, 3rd Street, Beijing, Haidian District, A301

Patentee before: BEIJING CHINASYS TECHNOLOGIES Co.,Ltd.