CN115988230B - Method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual file - Google Patents


Info

Publication number
CN115988230B
Authority
CN
China
Prior art keywords
gop
transcoding
video
data
web
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211615128.XA
Other languages
Chinese (zh)
Other versions
CN115988230A (en)
Inventor
黄剑
李鹏
李滨
孔令宇
郭磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijixing Cloud Space Technology Co ltd
Original Assignee
Beijixing Cloud Space Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijixing Cloud Space Technology Co ltd filed Critical Beijixing Cloud Space Technology Co ltd
Priority to CN202211615128.XA
Publication of CN115988230A
Application granted
Publication of CN115988230B
Legal status: Active (current)
Anticipated expiration


Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files, and belongs to the technical field of security video live broadcast. The method comprises the following steps: acquiring an original video, in which the server accesses a front-end camera and acquires original video frames; transcoding the original video, in which the server transcodes the original video frames based on a minimal GOP; and encapsulating the video stream, in which the server performs MPEG2-TS encapsulation on the transcoded video stream. In the invention, the server transcodes the original video frames based on the minimal GOP, converting the original video stream into a minimal-GOP video stream with hardware acceleration or half acceleration; within the HLS framework, the video stream is sent to the Web end in the form of virtual files through a reconstructed Http service, and the playing rate is dynamically adjusted at the Web end to maintain low-delay playback.

Description

Method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual file
Technical Field
The invention relates to the technical field of security video live broadcasting, in particular to a method for realizing low-delay HLS live broadcasting based on GOP transcoding and virtual files.
Background
Security video is applied by users more and more widely: users wish to view video not only through C/S clients but also through the cloud and the browser. However, IE no longer supports Flash, and Microsoft has abandoned IE, making ActiveX a thing of the past. The conventional approach of viewing video in the browser through plug-ins therefore faces serious challenges. How to change the technology so that users can freely view video through a browser is a new subject worth sustained exploration.
The currently available schemes include Http-FLV, WebSocket, HLS, and so on. Among them, HLS (Http Live Streaming) has the best universality: it is applicable not only to the PC end but also seamlessly to the mobile end, whether Android or Apple. However, when HLS is implemented according to the working principle proposed by Apple, it suffers from an intolerably large delay, exceeding 30 seconds.
Among the various current schemes, some can even reduce the delay to about 1 second.
However, each has one or several of the following drawbacks:
1. In the existing schemes, the server must first receive an RTMP stream and then convert it into HLS. As is well known, Flash is no longer supported, and RTMP is the streaming technology on which Flash was built. Since Flash is no longer needed at the Web end, RTMP is effectively useless there and survives only at the server end; yet for the security monitoring industry, the industry standards are GB28181, RTSP, and the manufacturers' SDKs, so RTMP is superfluous for a security monitoring server.
2. Existing solutions are based on file cutting. Whether hard-disk files are split, memory files are split, or a large file is split, the essence is the same: files must be relied upon. Both file generation and file cutting require I/O, which affects system performance and introduces theoretical waiting time, increasing the playing delay at the Web end; managing an excessive number of slice files also degrades server performance and creates an access bottleneck.
3. In the existing schemes, the m3u8 file and the video files delivered to the Web end are real files. The m3u8 is a list of files that either have already been generated or are yet to be generated, or, if the list contains only file names, it is overly convoluted and rigid; in all cases the m3u8 is bound to specific files, so whether the files are already generated or yet to be generated, the room for reducing Web-end delay is limited.
Chinese patent publication No. CN114727130A discloses a method and system for virtually cutting TS files to provide an HLS on-demand service, in which the file list uses virtual TS slice information and two layers of m3u8 are used; in the end, however, it still relies on locating real files and re-cutting them, so nothing is fundamentally improved.
4. Existing schemes do not adjust the coding parameters of the original video stream, especially the GOP. The GOP is the key factor that makes the file-cutting granularity too large, and an overly large granularity increases the Web-end delay like falling dominoes.
Disclosure of Invention
The invention aims to provide a method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files, which reduces the HLS delay at the Web end to 500-600 milliseconds, is more competitive in the field of real-time security playback, and becomes an option for playing real-time security video at the user's Web end, so as to solve the problems raised in the background art.
In order to achieve the above purpose, the present invention provides the following technical solution: a method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files, comprising the following steps:
Acquiring an original video: the server accesses the front-end camera and acquires original video frames;
Original video transcoding: the server transcodes the original video frames based on a minimal GOP;
Encapsulating the video stream: the server performs MPEG2-TS encapsulation on the transcoded video stream and sends it to the message queue publishing end;
Receiving a request: the server receives an Http long-connection request from the Web end, maintains the session, and keeps it alive;
Feedback file: for an m3u8 request from the Web end within a session, the server feeds back a virtual file list conforming to the m3u8 specification, the list containing 3 virtual files;
Sending subscription data: for a ts video file request from the Web end within a session, the server parses the request, acquires all data from the message queue subscription end, sends the data to the Web end, and empties the subscription end;
Processing the cache: the Web end judges the continuous caching trigger condition and handles it accordingly;
Adjusting the playing rate: the Web end dynamically adjusts the playing rate.
Preferably, the precondition for acquiring the original video is to query the system's discrete-GPU and integrated-GPU information and summarize it into a GPU encoding/decoding resource pool and a CPU decoding resource pool.
Preferably, the video access modes for obtaining the original video frames include, but are not limited to, GB28181, RTSP/ONVIF, and the SDKs of various manufacturers.
Preferably, the original video transcoding specifically includes the following steps:
querying the GPU encoding/decoding resource pool and the CPU decoding resource pool;
if the GPU encoding/decoding resource pool has both an available discrete-GPU decoding unit and an available discrete-GPU encoding unit, selecting the discrete-GPU decoding unit and the discrete-GPU encoding unit and performing hardware-acceleration initialization, which is the hardware acceleration mode;
if an available integrated-GPU encoding unit exists in the GPU encoding/decoding resource pool and an available CPU decoding unit exists in the CPU decoding resource pool, selecting the CPU decoding unit and the integrated-GPU encoding unit, which is the half acceleration mode;
transcoding to H264 BaseLine using the transcoding mode selected by the above procedure, with the coding parameter set to the minimal GOP=2, i.e. containing 1 I frame and 1 P frame.
Preferably, the encapsulating of the video stream specifically includes the following steps:
obtaining the transcoded video stream frame by frame;
encapsulating each frame into MPEG2-TS and sending it, in frame units, to the publishing end of the message queue.
Preferably, the feedback of a file specifically includes the following steps:
parsing the Web end's "video number.m3u8" request to obtain the video number;
for each long-connection session, independently generating, according to the video number, a virtual video file list string (containing 3 virtual files) that conforms to the m3u8 standard, where the sequence numbers contained in the virtual file names increase automatically from 1 by default;
sending the virtual video file list string to the Web end.
Preferably, the sending of subscription data specifically includes the following steps:
parsing the Web end's "video number_sequence number.ts" request to obtain the video number;
for each long-connection session, acquiring the corresponding message queue subscription end according to the video number;
acquiring data from the subscription end and sending it to the Web end.
Preferably, the acquiring of data from the subscription end and sending it to the Web end specifically includes the following steps:
if the subscription end contains more than one GOP of data (at least 1 I frame and 2 P frames), extracting all the subscription-end data and sending it to the Web end;
if the subscription end contains exactly one GOP of data (containing 1 I frame and 1 P frame), extracting the GOP data and sending it to the Web end;
if the subscription end contains less than one GOP of data (containing 1 I frame or 1 P frame), extracting the data and sending it to the Web end;
if the subscription end contains no data, waiting for data to arrive and then repeating the above process.
Preferably, the processing of the cache specifically includes the following steps:
setting the trigger condition of the continuous caching rule to be that the Web end's buffered duration is not more than 80 milliseconds;
when the Web end's buffered duration is not more than 80 milliseconds, triggering the continuous caching rule: the Web end stores the received video data in the file cache, and requests the server to download the next virtual video file in the m3u8 virtual file list and to update the m3u8 virtual file list, until the buffered duration exceeds 240 milliseconds;
when the Web end does not trigger the continuous caching rule: the Web end takes the buffered head file, decodes and plays it, and requests the server to download the next virtual video file in the m3u8 virtual file list and to update the m3u8 virtual file list.
Preferably, the adjusting of the playing rate specifically includes the following steps:
if the Web end's buffered duration is longer than 4 seconds, the playing rate is set to 2× speed;
if the Web end's buffered duration is longer than 2 seconds, the playing rate is set to 1.2× speed;
if the Web end's buffered duration is longer than 0.48 seconds, the playing rate is set to 1.04× speed;
if the Web end's buffered duration is not longer than 0.48 seconds, the Web end plays at 1× speed.
Compared with the prior art, the invention has the following beneficial effects: in this method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files, the server transcodes the original video frames based on the minimal GOP, converting the original video stream into a minimal-GOP video stream with hardware acceleration or half acceleration; within the HLS framework, the video stream is sent to the Web end in the form of virtual files through a reconstructed Http service, and the playing rate is dynamically adjusted at the Web end to maintain low-delay playback. By setting the trigger condition of the continuous caching rule to a Web-end buffered duration of not more than 80 milliseconds, the buffered duration is kept fluctuating around 320 milliseconds, the HLS delay is controlled to the 500-600 millisecond level, the delay of HLS live broadcast is reduced, and the first-frame start-up of the Web end is accelerated. The method breaks through the limitation of the HLS framework, resolves the contradiction that the HLS delay is unsuitable for live security broadcast, and avoids both RTMP module development and slice-file reading, writing, and management, so that HLS achieves an extremely low delay, is more competitive in the field of real-time security playback, and becomes an option for playing real-time security video at the user's Web end.
Drawings
FIG. 1 is a schematic overall flow chart of the present invention;
FIG. 2 is a diagram illustrating a transcoding process according to the present invention;
FIG. 3 is a schematic diagram of an HTTP service flow according to the present invention;
FIG. 4 is a flow chart of adjusting the playing rate according to the present invention;
FIG. 5 is a schematic diagram of a virtual file list (including 3 virtual files) fed back to the Web end and conforming to the m3u8 specification according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1-4, the method for implementing low-latency HLS live broadcast based on GOP transcoding and virtual files includes the following steps:
S1, acquiring an original video: the server accesses the front-end camera and acquires original video frames;
The precondition for acquiring the original video is to query the system's discrete-GPU and integrated-GPU information and summarize it into a GPU encoding/decoding resource pool and a CPU decoding resource pool. The video access modes for obtaining the original video frames include, but are not limited to, GB28181, RTSP/ONVIF, and the SDKs of various manufacturers.
Presets are made as follows:
assume the server has an Intel integrated GPU gpu_intel or a discrete GPU gpu_nvidia;
let the upper limit of the server's discrete-GPU decoding unit set be gpu_nvidia_decs4limit, and the available discrete-GPU decoding unit set be gpu_nvidia_decs4available;
let the upper limit of the server's discrete-GPU encoding unit set be gpu_nvidia_encs4limit, and the available discrete-GPU encoding unit set be gpu_nvidia_encs4available;
let the upper limit of the server's integrated-GPU decoding unit set be gpu_intel_decs4limit, and the available integrated-GPU decoding unit set be gpu_intel_decs4available;
let the upper limit of the server's integrated-GPU encoding unit set be gpu_intel_encs4limit, and the available integrated-GPU encoding unit set be gpu_intel_encs4available;
let the upper limit of the server's CPU decoding unit set be cpu_intel_decs4limit, and the available CPU decoding unit set be cpu_intel_decs4available;
let the video stream accessed from the front-end camera be src_es, with corresponding video number videoid;
let the H264 BaseLine video stream transcoded with the minimal GOP=2 (containing 1 I frame and 1 P frame) be trans_es;
let the video stream obtained by MPEG2-TS encapsulation of trans_es be trans_ts;
let the video number requested by the Web end be req_videoid.
The front-end camera is accessed by means of GB28181, RTSP/ONVIF, SDK and the like, and the original video stream src_es is acquired.
S2, original video transcoding: the server performs minimal-GOP-based transcoding on the original video frames, referring to FIG. 2, which specifically includes the following steps:
Step one: querying the GPU encoding/decoding resource pool and the CPU decoding resource pool;
Step two: if the GPU encoding/decoding resource pool has both an available discrete-GPU decoding unit and an available discrete-GPU encoding unit, selecting the discrete-GPU decoding unit and the discrete-GPU encoding unit and performing hardware-acceleration initialization, which is the hardware acceleration mode;
Step three: if an available integrated-GPU encoding unit exists in the GPU encoding/decoding resource pool and an available CPU decoding unit exists in the CPU decoding resource pool, selecting the CPU decoding unit and the integrated-GPU encoding unit, which is the half acceleration mode;
Step four: transcoding to H264 BaseLine using the transcoding mode selected by the above procedure, with the coding parameter set to the minimal GOP=2, i.e. containing 1 I frame and 1 P frame.
An example is as follows. Hardware acceleration initialization: summarize the system's encoding and decoding resources.
Query the system's discrete-GPU and integrated-GPU resource information, initialize the GPU encoding/decoding resource pool and the CPU decoding resource pool, and let:
gpu_nvidia_decs4available=gpu_nvidia_decs4limit,
gpu_nvidia_encs4available=gpu_nvidia_encs4limit,
gpu_intel_decs4available=gpu_intel_decs4limit,
gpu_intel_encs4available=gpu_intel_encs4limit,
cpu_intel_decs4available=cpu_intel_decs4limit.
Select a transcoding mode and perform transcoding.
Query the GPU encoding/decoding resource pool and the CPU decoding resource pool to obtain the latest state of the available resource pools, including:
gpu_nvidia_decs4available,
gpu_nvidia_encs4available,
gpu_intel_decs4available,
gpu_intel_encs4available,
cpu_intel_decs4available。
If gpu_nvidia_decs4available > 0 and gpu_nvidia_encs4available > 0,
take the longest-unused decoder unit dec out of gpu_nvidia_decs4available,
take the longest-unused encoder unit enc out of gpu_nvidia_encs4available,
associate the two and perform hardware-acceleration initialization; this is the hardware acceleration mode.
Otherwise, if gpu_intel_encs4available > 0 and cpu_intel_decs4available > 0,
take the longest-unused decoder unit dec out of cpu_intel_decs4available,
take the longest-unused encoder unit enc out of gpu_intel_encs4available,
and associate the two; this is the half acceleration mode.
Bind the transcoding mode to the original video stream src_es and transcode out the minimal-GOP-based video stream trans_es.
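For illustration only, the following is a minimal Python sketch of this mode selection and of one possible way to drive the GOP=2, H264 BaseLine transcode. The pool names follow the embodiment above, while the function names and the FFmpeg flags are assumptions of this sketch and are not prescribed by the patent:

from typing import List, Optional

def select_transcode_mode(pools: dict) -> Optional[str]:
    # Return "hardware", "half", or None, reserving one unit from each chosen pool.
    if pools["gpu_nvidia_decs4available"] > 0 and pools["gpu_nvidia_encs4available"] > 0:
        pools["gpu_nvidia_decs4available"] -= 1   # reserve a discrete-GPU decoder
        pools["gpu_nvidia_encs4available"] -= 1   # reserve a discrete-GPU encoder
        return "hardware"
    if pools["gpu_intel_encs4available"] > 0 and pools["cpu_intel_decs4available"] > 0:
        pools["cpu_intel_decs4available"] -= 1    # reserve a CPU decoder
        pools["gpu_intel_encs4available"] -= 1    # reserve an integrated-GPU encoder
        return "half"
    return None

def build_ffmpeg_args(mode: str, src_url: str) -> List[str]:
    # Re-encode to H.264 Baseline with GOP = 2 (1 I frame + 1 P frame, no B frames).
    common = ["-profile:v", "baseline", "-g", "2", "-bf", "0", "-f", "mpegts", "pipe:1"]
    if mode == "hardware":
        return ["ffmpeg", "-hwaccel", "cuda", "-i", src_url, "-c:v", "h264_nvenc"] + common
    return ["ffmpeg", "-i", src_url, "-c:v", "h264_qsv"] + common   # CPU decode + integrated-GPU encode

# Example use: subprocess.Popen(build_ffmpeg_args("half", "rtsp://camera/stream"), stdout=subprocess.PIPE)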
S3, encapsulating the video stream: the server performs MPEG2-TS encapsulation on the transcoded video stream and sends it to the message queue publishing end, which specifically includes the following steps:
obtaining the transcoded video stream frame by frame;
encapsulating each frame into MPEG2-TS and sending it, in frame units, to the publishing end of the message queue.
For example, the transcoded video stream trans_es is encapsulated frame by frame into MPEG2-TS and then sent to the publishing end pub of the message queue msgque corresponding to videoid.
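A minimal sketch of this per-frame publishing step is given below; the in-process queue stands in for the patent's message queue msgque, and the assumption is that each frame already arrives as an MPEG2-TS chunk (trans_ts) from the transcoder:

import queue
from typing import Dict, Iterator

publishers: Dict[str, queue.Queue] = {}   # videoid -> in-process stand-in for msgque

def publish_stream(videoid: str, ts_frames: Iterator[bytes]) -> None:
    # Push every TS-encapsulated frame, in frame units, to the queue for this video.
    pub = publishers.setdefault(videoid, queue.Queue())
    for frame in ts_frames:
        pub.put(frame)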
S4, receiving a request: the server receives the Http long-connection request from the Web end, maintains the session, and keeps it alive. An Http service supporting virtual files is constructed, the specified Http port is monitored, the Web end's long-connection requests are filtered out, and sessions based on the Http long connections are generated and kept alive, see FIG. 3;
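As a non-authoritative sketch of such a reconstructed Http service, the skeleton below answers ".m3u8" and ".ts" requests over HTTP/1.1 keep-alive connections with in-memory (virtual) content; the handler class and the two placeholder helpers are assumptions of this sketch, to be filled in by the S5 and S6 sketches further down:

from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def handle_m3u8_request(videoid: str) -> bytes:
    return b""   # placeholder: return the per-session virtual playlist (see the S5 sketch)

def handle_ts_request(videoid: str, seq: str) -> bytes:
    return b""   # placeholder: drain the subscription for this video (see the S6 sketch)

class VirtualHLSHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"             # long connection: the session is kept alive

    def do_GET(self):
        name = self.path.rsplit("/", 1)[-1]
        if name.endswith(".m3u8"):
            body = handle_m3u8_request(name[:-len(".m3u8")])
        elif name.endswith(".ts"):
            videoid, _, seq = name[:-len(".ts")].partition("-")
            body = handle_ts_request(videoid, seq)
        else:
            self.send_error(404)               # unknown path: not-found exception
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# ThreadingHTTPServer(("0.0.0.0", 8080), VirtualHLSHandler).serve_forever()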
S5, feeding back a file: for an m3u8 request from the Web end within a session, the server feeds back a virtual file list conforming to the m3u8 specification, the list containing 3 virtual files, which specifically includes the following steps:
parsing the Web end's "video number.m3u8" request to obtain the video number;
for each long-connection session, independently generating, according to the video number, a virtual video file list string (containing 3 virtual files) that conforms to the m3u8 standard, where the sequence numbers contained in the virtual file names increase automatically from 1 by default;
sending the virtual video file list string to the Web end.
For example, listen for long-connection session messages and filter out m3u8 file requests. When the Web end requests a "video number.m3u8" file, extract this video number req_videoid;
when req_videoid exists in the transcoding service, independently generate for the session a virtual video file list string (containing 3 virtual files) that conforms to the m3u8 standard, where the sequence numbers contained in the virtual file names increase automatically from 1 by default, and send the virtual video file list string to the Web end;
when req_videoid does not exist in the transcoding service, return a not-found exception to the Web end.
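A possible per-session playlist builder, matching the layout of FIG. 5, is sketched below; the function name is an assumption of this sketch, while the 3-entry window and the nominal 0.080-second segment duration follow the description:

def make_virtual_playlist(videoid: str, first_seq: int) -> str:
    # Build a 3-entry virtual playlist; no real .ts files exist behind these names.
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-MEDIA-SEQUENCE:%d" % first_seq,    # smallest file sequence number in the list
        "#EXT-X-TARGETDURATION:1",
    ]
    for seq in range(first_seq, first_seq + 3):
        lines.append("#EXTINF:0.080,no desc")       # nominal 80 ms; real data duration may float
        lines.append("%s-%d.ts" % (videoid, seq))   # virtual file: "video number-sequence number.ts"
    return "\n".join(lines) + "\n"

# make_virtual_playlist("01234567890123456001", 48) reproduces the FIG. 5 example.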
S6, sending subscription data: for a ts video file request from the Web end within a session, the server parses the request, acquires all data from the message queue subscription end, sends the data to the Web end, and empties the subscription end, which specifically includes the following steps:
parsing the Web end's "video number_sequence number.ts" request to obtain the video number;
for each long-connection session, acquiring the corresponding message queue subscription end according to the video number;
acquiring data from the subscription end and sending it to the Web end, specifically:
if the subscription end contains more than one GOP of data (at least 1 I frame and 2 P frames), extracting all the subscription-end data and sending it to the Web end;
if the subscription end contains exactly one GOP of data (containing 1 I frame and 1 P frame), extracting the GOP data and sending it to the Web end;
if the subscription end contains less than one GOP of data (containing 1 I frame or 1 P frame), extracting the data and sending it to the Web end;
if the subscription end contains no data, waiting for data to arrive and then repeating the above process.
For example, when the Web end requests a "video number_sequence number.ts" file, extract this video number req_videoid;
when req_videoid exists in the transcoding service, locate the subscription end sub of the message queue msgque corresponding to req_videoid, acquire trans_ts data from sub, and send it to the Web end:
if sub contains more than one GOP of data (at least 1 I frame and 2 P frames), extract all the sub data and send it to the Web end;
if sub contains exactly one GOP of data (containing 1 I frame and 1 P frame), extract the GOP data and send it to the Web end;
if sub contains less than one GOP of data (containing 1 I frame or 1 P frame), extract the data and send it to the Web end;
if sub contains no data, wait for data to arrive and then repeat the above three steps;
when req_videoid does not exist in the transcoding service, return a not-found exception to the Web end.
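In effect, the three non-empty cases collapse into "send whatever the subscription currently holds", while the empty case waits until at least one frame arrives. A minimal sketch of this drain step, reusing the in-process queue stand-in from the S3 sketch, might look as follows (the blocking behaviour is an assumption of the sketch):

import queue

def drain_subscription(sub: "queue.Queue[bytes]") -> bytes:
    # Block until at least one TS chunk is available, then take everything queued,
    # so the subscription end is emptied and all its data goes to the Web end.
    chunks = [sub.get()]
    while True:
        try:
            chunks.append(sub.get_nowait())
        except queue.Empty:
            break
    return b"".join(chunks)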
S7, processing the cache: the Web end judges the continuous caching trigger condition and handles it accordingly, which specifically includes the following steps:
setting the trigger condition of the continuous caching rule to be that the Web end's buffered duration is not more than 80 milliseconds;
when the Web end's buffered duration is not more than 80 milliseconds, triggering the continuous caching rule: the Web end stores the received video data in the file cache, and requests the server to download the next virtual video file in the m3u8 virtual file list and to update the m3u8 virtual file list, until the buffered duration exceeds 240 milliseconds;
when the Web end does not trigger the continuous caching rule: the Web end takes the buffered head file, decodes and plays it, and requests the server to download the next virtual video file in the m3u8 virtual file list and to update the m3u8 virtual file list.
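The decision part of this rule can be summarized by the small sketch below; in a real deployment the logic runs in the browser player, and the class name and return values are assumptions of the sketch, while the 80 ms and 240 ms thresholds follow the description:

LOW_WATER_MS = 80      # entering threshold of the continuous caching rule
HIGH_WATER_MS = 240    # leaving threshold: stop pure caching once buffered past this

class WebEndBufferRule:
    # Stateful form of the rule: once triggered at <= 80 ms of buffer, keep caching
    # (without playing) until the buffer exceeds 240 ms, then return to normal play.
    def __init__(self):
        self.continuous_caching = False

    def on_segment_received(self, buffered_ms: float) -> str:
        if buffered_ms <= LOW_WATER_MS:
            self.continuous_caching = True
        elif buffered_ms > HIGH_WATER_MS:
            self.continuous_caching = False
        if self.continuous_caching:
            return "cache_and_fetch_next"   # store data, request next virtual file + updated playlist
        return "play_and_fetch_next"        # decode/play buffered head, then fetch next virtual file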
S8, adjusting the playing rate: the Web end dynamically adjusts the playing rate, which specifically includes the following steps:
if the Web end's buffered duration is longer than 4 seconds, the playing rate is set to 2× speed;
if the Web end's buffered duration is longer than 2 seconds, the playing rate is set to 1.2× speed;
if the Web end's buffered duration is longer than 0.48 seconds, the playing rate is set to 1.04× speed;
if the Web end's buffered duration is not longer than 0.48 seconds, the Web end plays at 1× speed.
This keeps the buffered duration fluctuating around 320 milliseconds.
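The mapping can be expressed compactly as below; applying the chosen rate to the player (for example, to the playbackRate of an HTML video element) is left to the browser side and is outside this sketch:

def playback_rate(buffered_seconds: float) -> float:
    # Map the Web end's buffered duration to a playing rate, holding the buffer
    # near the 320 ms operating point described above.
    if buffered_seconds > 4.0:
        return 2.0      # large backlog: drain quickly
    if buffered_seconds > 2.0:
        return 1.2
    if buffered_seconds > 0.48:
        return 1.04     # gentle catch-up
    return 1.0          # small buffer: play at normal speed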
Referring to FIG. 5, the virtual video file list string conforming to the m3u8 specification is as follows:
"#EXTM3U\n"
"#EXT-X-VERSION:3\n"
"#EXT-X-MEDIA-SEQUENCE: 48/n"// 48 denotes the smallest file number in the current list
"#EXT-X-TARGETDURATION:1\n"
"#EXTINF:0.080, no desc/n"// 0.080 indicates that the duration of real data may float assuming that the virtual file duration is 80 milliseconds
"01234567890123456001-48. Ts/n"// 01234567890123456001-48.Ts means virtual files
"#EXTINF:0.080,no desc\n"
"01234567890123456001-49. Ts/n"// 01234567890123456001 denotes the video number, 49 denotes the serial number of this virtual file
"#EXTINF:0.080,no desc\n"
"01234567890123456001-50.ts\n"
Through the above steps, the HLS delay is controlled to a level of 500-600 milliseconds.
To sum up: in this method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files, the server transcodes the original video frames based on the minimal GOP, converting the original video stream into a minimal-GOP video stream with hardware acceleration or half acceleration; within the HLS framework, the video stream is sent to the Web end in the form of virtual files through a reconstructed Http service, and the playing rate is dynamically adjusted at the Web end to maintain low-delay playback. By setting the trigger condition of the continuous caching rule to a Web-end buffered duration of not more than 80 milliseconds, the buffered duration is kept fluctuating around 320 milliseconds, the HLS delay is controlled to the 500-600 millisecond level, the delay of HLS live broadcast is reduced, and the first-frame start-up of the Web end is accelerated. The method breaks through the limitation of the HLS framework, resolves the contradiction that the HLS delay is unsuitable for live security broadcast, and avoids both RTMP module development and slice-file reading, writing, and management, so that HLS achieves an extremely low delay, is more competitive in the field of real-time security playback, and becomes an option for playing real-time security video at the user's Web end.
The foregoing is only a preferred embodiment of the present invention, and the scope of the present invention is not limited thereto. Any equivalent substitution or modification of the technical solution and its inventive concept made by a person skilled in the art within the scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. A method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files, characterized by comprising the following steps: acquiring an original video, in which the server accesses a front-end camera and acquires original video frames; original video transcoding, in which the server transcodes the original video frames based on GOP=2; encapsulating the video stream, in which the server performs MPEG2-TS encapsulation on the transcoded video stream and sends it to the message queue publishing end; receiving a request, in which the server receives an Http long-connection request from the Web end, maintains the session, and keeps it alive; feeding back a file, in which, for an m3u8 request from the Web end within a session, the server feeds back a virtual file list conforming to the m3u8 specification, the list containing 3 virtual files; sending subscription data, in which, for a ts video file request from the Web end within a session, the server parses the request, acquires all data from the message queue subscription end, sends the data to the Web end, and empties the subscription end; processing the cache, in which the Web end judges the continuous caching trigger condition and handles it accordingly; and adjusting the playing rate, in which the Web end dynamically adjusts the playing rate;
the original video transcoding specifically comprises the following steps: querying a GPU encoding/decoding resource pool and a CPU decoding resource pool; if the GPU encoding/decoding resource pool has both an available discrete-GPU decoding unit and an available discrete-GPU encoding unit, selecting the discrete-GPU decoding unit and the discrete-GPU encoding unit and performing hardware-acceleration initialization, which is the hardware acceleration mode; if an available integrated-GPU encoding unit exists in the GPU encoding/decoding resource pool and an available CPU decoding unit exists in the CPU decoding resource pool, selecting the CPU decoding unit and the integrated-GPU encoding unit, which is the half acceleration mode; and transcoding to H264 BaseLine using the transcoding mode selected by the above procedure, with the coding parameter set to the minimal GOP=2, comprising 1 I frame and 1 P frame;
the processing of the cache specifically comprises the following steps: setting the trigger condition of the continuous caching rule to be that the Web end's buffered duration is not more than 80 milliseconds; when the Web end's buffered duration is not more than 80 milliseconds, triggering the continuous caching rule: the Web end stores the received video data in the file cache, and requests the server to download the next virtual video file in the m3u8 virtual file list and to update the m3u8 virtual file list, until the buffered duration exceeds 240 milliseconds; when the Web end does not trigger the continuous caching rule: the Web end takes the buffered head file, decodes and plays it, and requests the server to download the next virtual video file in the m3u8 virtual file list and to update the m3u8 virtual file list.
2. The method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files according to claim 1, characterized in that: the precondition for acquiring the original video is to query the system's discrete-GPU and integrated-GPU information and summarize it into the GPU encoding/decoding resource pool and the CPU decoding resource pool.
3. The method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files according to claim 2, characterized in that: the video access modes for obtaining the original video frames include, but are not limited to, GB28181, RTSP/ONVIF, and the SDKs of various manufacturers.
4. The method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files according to claim 1, characterized in that: the encapsulating of the video stream specifically comprises the following steps: obtaining the transcoded video stream frame by frame; encapsulating each frame into MPEG2-TS and sending it, in frame units, to the publishing end of the message queue.
5. The method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files according to claim 1, characterized in that: the feeding back of a file specifically comprises the following steps: parsing the Web end's "video number.m3u8" request to obtain the video number; for each long-connection session, independently generating, according to the video number, a virtual video file list string that conforms to the m3u8 standard, where the sequence numbers contained in the virtual file names increase automatically from 1 by default; and sending the virtual video file list string to the Web end.
6. The method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files according to claim 1, characterized in that: the sending of subscription data specifically comprises the following steps: parsing the Web end's "video number_sequence number.ts" request to obtain the video number; for each long-connection session, acquiring the corresponding message queue subscription end according to the video number; and acquiring data from the subscription end and sending it to the Web end.
7. The method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files according to claim 6, characterized in that: the acquiring of data from the subscription end and sending it to the Web end specifically comprises the following steps: if the subscription end contains more than one GOP of data, extracting all the subscription-end data and sending it to the Web end; if the subscription end contains exactly one GOP of data, extracting the GOP data and sending it to the Web end; if the subscription end contains less than one GOP of data, extracting the data and sending it to the Web end; and if the subscription end contains no data, waiting for data to arrive and repeating the above process.
8. The method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual files according to claim 1, characterized in that: the adjusting of the playing rate specifically comprises the following steps: if the Web end's buffered duration is longer than 4 seconds, the playing rate is 2× speed; if the Web end's buffered duration is longer than 2 seconds, the playing rate is 1.2× speed; if the Web end's buffered duration is longer than 0.48 seconds, the playing rate is 1.04× speed; if the Web end's buffered duration is not longer than 0.48 seconds, the Web end plays at 1× speed.
CN202211615128.XA 2022-12-15 2022-12-15 Method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual file Active CN115988230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211615128.XA CN115988230B (en) 2022-12-15 2022-12-15 Method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual file


Publications (2)

Publication Number Publication Date
CN115988230A CN115988230A (en) 2023-04-18
CN115988230B true CN115988230B (en) 2024-04-30

Family

ID=85975139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211615128.XA Active CN115988230B (en) 2022-12-15 2022-12-15 Method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual file

Country Status (1)

Country Link
CN (1) CN115988230B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101047848A (en) * 2007-04-27 2007-10-03 北京大学 Network flow media player and method for support multi-viewpoint vedio composition
WO2016199098A1 (en) * 2015-06-12 2016-12-15 Ericsson Ab System and method for managing abr bitrate delivery responsive to video buffer characteristics of a client
CN107801051A (en) * 2017-10-27 2018-03-13 广东省南方数字电视无线传播有限公司 Virtual sliced sheet information transferring method and device, video server
CN107809684A (en) * 2017-10-27 2018-03-16 广东省南方数字电视无线传播有限公司 Video segment generation method and device, caching server
CN110933467A (en) * 2019-12-02 2020-03-27 腾讯科技(深圳)有限公司 Live broadcast data processing method and device and computer readable storage medium
CN111726651A (en) * 2020-07-03 2020-09-29 浪潮云信息技术股份公司 Audio and video stream live broadcasting method and system based on HILS protocol
CN113727114A (en) * 2021-07-21 2021-11-30 天津津航计算技术研究所 Transcoding video decoding method
CN114827670A (en) * 2022-06-30 2022-07-29 椭圆方程(深圳)信息技术有限公司 Video playing method and device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030167337A1 (en) * 2001-05-31 2003-09-04 Liew William Jia Multimedia virtual streaming for narrowband broadband IP-access network


Also Published As

Publication number Publication date
CN115988230A (en) 2023-04-18

Similar Documents

Publication Publication Date Title
US9787747B2 (en) Optimizing video clarity
US10263875B2 (en) Real-time processing capability based quality adaptation
US9351020B2 (en) On the fly transcoding of video on demand content for adaptive streaming
US6496980B1 (en) Method of providing replay on demand for streaming digital multimedia
CN106060102B (en) Media providing method and terminal
CN108848060B (en) Multimedia file processing method, processing system and computer readable storage medium
US20070028278A1 (en) System and method for providing pre-encoded audio content to a television in a communications network
WO2019019370A1 (en) Processing method for live broadcast of audio and video, storage medium and mobile terminal
WO2019024919A1 (en) Video transcoding method and apparatus, server, and readable storage medium
US9596522B2 (en) Fragmented file structure for live media stream delivery
US20110145878A1 (en) Video decomposition and recomposition
KR20090061914A (en) Adaptive multimedia system for providing multimedia contents and codec to user terminal and method thereof
US20190166395A1 (en) Fast Channel Change In A Video Delivery Network
GB2548789A (en) Dynamically adaptive bitrate streaming
CN107743252A (en) A kind of method for reducing live delay
WO2015120766A1 (en) Video optimisation system and method
CN108494792A (en) A kind of flash player plays the converting system and its working method of hls video flowings
US9338204B2 (en) Prioritized side channel delivery for download and store media
CN113938470A (en) Method and device for playing RTSP data source by browser and streaming media server
CN113824925A (en) WEB plug-in-free video monitoring system and method
CN114501052B (en) Live broadcast data processing method, cloud platform, computer equipment and storage medium
CN102439935B (en) Media adaptation method and apparatus
CN115988230B (en) Method for realizing low-delay HLS live broadcast based on GOP transcoding and virtual file
WO2021017958A1 (en) Video transcoding method and apparatus
Zeng et al. TVSR‐OR: Tile‐based 360‐degree video streaming over real time streaming protocol with optimized read

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant