CN115209208A - Processing method and device for video loop playback - Google Patents
Processing method and device for video loop playback
- Publication number
- CN115209208A (application CN202110374967.6A)
- Authority
- CN
- China
- Prior art keywords
- multimedia data
- video
- data
- decoded
- time axis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/434—Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
- H04N21/4341—Demultiplexing of audio and video streams
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44004—Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
- H04N21/44008—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The application provides a processing method and device for video loop playback, which solve the stuttering caused by re-downloading a video during loop playback. The method comprises the following steps: acquiring a multimedia data frame, the multimedia data frame belonging to a video stream to be looped; decapsulating the multimedia data frame to obtain first multimedia data, and storing the first multimedia data in a first buffer queue; before decoding the first multimedia data, when it is determined that the first multimedia data belongs to the end point of the video stream to be looped and the loop count of the video stream has not reached the set number of times, starting to download the video stream to be looped again; acquiring the first multimedia data from the first buffer queue, decoding it, and storing the decoded first multimedia data in a second buffer queue; and obtaining the decoded first multimedia data from the second buffer queue and rendering it.
Description
Technical Field
The present application relates to the field of video processing, and in particular, to a processing method and apparatus for video loop playback.
Background
Existing video players meet basic needs such as playback, fast-forward, rewind, and variable-speed playback. However, when a user wants to play a segment of video in a loop, the user must rewind to the beginning after the entire segment has finished playing. On rewinding, the player discards all buffered video data and downloads it again; because the re-download takes time, playback stalls when the user rewinds to the beginning to continue playing.
Disclosure of Invention
Embodiments of the present application provide a processing method and apparatus for video loop playback, which solve the stuttering caused by having to re-download a video during loop playback.
In a first aspect, the present application provides a display device comprising:
an input interface, configured to receive a multimedia data frame, the multimedia data frame belonging to a video stream to be looped;
a controller, configured to decapsulate the multimedia data frame to obtain first multimedia data and store the first multimedia data in a first buffer queue;
the controller is further configured to, before the first multimedia data is decoded, start downloading the video stream to be looped again and buffer it into the first buffer queue when it is determined that the first multimedia data belongs to the end point of the video stream to be looped and the loop count of the video stream has not reached the set number of times;
the controller is further configured to acquire the first multimedia data from the first buffer queue, decode it, and store the decoded first multimedia data in a second buffer queue;
the controller is further configured to obtain the decoded first multimedia data from the second buffer queue and render it to a display screen; and
the display screen is configured to display the first multimedia data.
With this scheme, the controller starts downloading the video stream to be looped again as soon as the received first multimedia data is determined to be the end point of the stream. The re-download does not wait until playback of the whole stream finishes; the next iteration can be downloaded while the current one is still playing. This removes the delay of downloading only after playback completes and thus eliminates stalls during loop playback.
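The decision above is made on demultiplexed data before it reaches the decoder, which is what lets the next iteration's download overlap with playback. A minimal sketch of that predicate follows; the function name and the PTS-based comparison are illustrative assumptions, since the patent does not prescribe a concrete API:

```python
def should_restart_download(frame_pts: float, loop_end_pts: float,
                            loops_done: int, loop_count: int) -> bool:
    """Return True when a demuxed (not yet decoded) frame reaches the
    end point of the looped stream while fewer than the set number of
    loops have been played, i.e. when the next iteration's download
    should start in parallel with ongoing playback."""
    return frame_pts >= loop_end_pts and loops_done < loop_count
```

Because the check runs on undecoded data in the first buffer queue, the re-download begins one decode-and-render pipeline's worth of time before the last frame actually appears on screen.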
In some embodiments, before the input interface receives the frame of multimedia data, the controller is further configured to:
determine, in response to a control operation of the user, the end point of the video stream to be looped and the set number of times.
With this scheme, the controller has already determined the end point and the set number of times before the input interface receives the first multimedia data. It can therefore accurately decide whether the first multimedia data is the end point of the video stream to be looped and whether the loop count has reached the set number of times.
In some embodiments, the controller is further configured to determine a starting point of the video stream to be looped in response to the control operation, and when the controller starts to download the video stream to be looped again, the controller is specifically configured to:
download the video stream to be looped again from its starting point, according to the starting point and the end point of the video stream to be looped.
With this scheme, the controller determines the starting point and the end point of the video stream to be looped from the user's control operation and re-downloads the stream accordingly. This downloads exactly the segment to be looped and avoids fetching redundant data.
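For segmented streaming formats, re-downloading only the looped interval amounts to mapping the start and end points onto media-segment indices. The fixed-duration segmentation below is purely an assumption (HLS/DASH-style); the patent does not name a container or transport format:

```python
import math

def segments_for_loop(segment_duration: float, loop_start: float,
                      loop_end: float) -> list:
    """Indices of the fixed-duration media segments that cover the
    interval [loop_start, loop_end), so that only those segments are
    re-fetched when the loop restarts."""
    first = int(loop_start // segment_duration)          # segment containing the start point
    last = math.ceil(loop_end / segment_duration)        # first segment past the end point
    return list(range(first, last))
```

For example, with 10-second segments a loop over [12 s, 35 s) needs only segments 1, 2, and 3, rather than the whole stream.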
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the controller is further configured to:
when the master timeline currently used for rendering is the video timeline and the video timeline of the currently rendered multimedia data is ahead of its audio timeline, determine the delay of the audio data and delete, from the second buffer queue, the audio data corresponding to that delay.
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the controller is further configured to:
when the master timeline currently used for rendering is the video timeline and the audio timeline of the currently rendered multimedia data is ahead of its video timeline, determine the delay of the video data, derive a first duration from that delay, and stop outputting audio data for the first duration.
With this scheme, when the master timeline used for rendering is the video timeline: if the video timeline is ahead of the audio timeline, the delay of the audio data is determined and the corresponding audio data is deleted; if the audio timeline is ahead of the video timeline, a first duration is derived from the delay of the video data and audio output is suspended for that duration. This keeps audio and video output synchronized and improves the viewing experience.
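Both branches can be sketched together: with the video timeline as master clock, lagging audio is dropped and leading audio is muted. Millisecond-integer timestamps and a fixed per-frame audio duration are simplifying assumptions, not details from the patent:

```python
from collections import deque

def sync_on_video_clock(video_pts_ms, audio_pts_ms, audio_queue,
                        audio_frame_ms):
    """Arbitrate A/V sync with the video timeline as the master clock.

    If video leads, delete queued audio covering the delay (the
    'delete the audio data corresponding to the delay amount' step);
    if audio leads, report how long audio output should be suspended."""
    if video_pts_ms > audio_pts_ms:
        delay = video_pts_ms - audio_pts_ms
        dropped = 0
        while audio_queue and dropped < delay:
            audio_queue.popleft()          # discard stale audio frames
            dropped += audio_frame_ms
        return ("drop_audio", delay)
    if audio_pts_ms > video_pts_ms:
        # the "first duration" during which audio output is stopped
        return ("mute_audio", audio_pts_ms - video_pts_ms)
    return ("in_sync", 0)
```

A video clock at 1000 ms against an audio clock at 700 ms, with 100 ms audio frames, drops three queued audio frames; the reverse gap instead mutes audio for 300 ms.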
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the controller is further configured to:
when the master timeline currently used for rendering is the audio timeline, switch the master timeline to the video timeline; and when the video timeline of the currently rendered multimedia data is ahead of its audio timeline, determine the delay of the audio data and delete, from the second buffer queue, the audio data corresponding to that delay.
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the controller is further configured to:
when the master timeline currently used for rendering is the audio timeline, switch the master timeline to the video timeline; and when the audio timeline of the currently rendered multimedia data is ahead of its video timeline, determine the delay of the video data, derive a first duration from that delay, and stop outputting audio data for the first duration.
With this scheme, when the master timeline used for rendering is the audio timeline, the master timeline is first switched to the video timeline. Then, if the video timeline of the currently rendered multimedia data is ahead of the audio timeline, the delay of the audio data is determined and the corresponding audio data is deleted; if the audio timeline is ahead of the video timeline, a first duration is derived from the delay of the video data and audio output is suspended for that duration. This keeps audio and video output synchronized and improves the viewing experience.
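The switch-then-arbitrate behaviour can be condensed into one step: at the loop boundary an audio master clock is first handed over to the video clock, then the same lead/lag check applies. This is a hypothetical sketch; names and the tuple-based result are not from the patent:

```python
def resync_at_loop_boundary(master: str, video_pts_ms: int,
                            audio_pts_ms: int):
    """At the boundary between two loop iterations, force the video
    timeline to be master (an audio master clock is switched to video
    here), then decide which sync action to take."""
    if master == "audio":
        master = "video"   # hand the clock over before arbitrating
    if video_pts_ms > audio_pts_ms:
        action = ("drop_audio", video_pts_ms - audio_pts_ms)
    elif audio_pts_ms > video_pts_ms:
        action = ("mute_audio", audio_pts_ms - video_pts_ms)
    else:
        action = ("in_sync", 0)
    return master, action
```

Switching to the video clock at the boundary matters because the new iteration's frames restart their timestamps; an audio master clock straddling that jump could stall video output indefinitely.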
In a second aspect, an embodiment of the present application provides a processing method for loop playing of a video, including:
acquiring a multimedia data frame, the multimedia data frame belonging to a video stream to be looped;
decapsulating the multimedia data frame to obtain first multimedia data, and storing the first multimedia data in a first buffer queue;
before the first multimedia data is decoded, when it is determined that the first multimedia data belongs to the end point of the video stream to be looped and the loop count of the video stream has not reached the set number of times, starting to download the video stream to be looped again and buffering it into the first buffer queue;
acquiring the first multimedia data from the first buffer queue, decoding it, and storing the decoded first multimedia data in a second buffer queue; and
obtaining the decoded first multimedia data from the second buffer queue and rendering it.
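The five steps above can be sketched end-to-end with the two queues. Everything here is a stand-in under stated assumptions — a list of frames replaces the network download, a tagged tuple replaces a real decoder — but the control flow (re-download triggered at the end point, before decoding) mirrors the method:

```python
from collections import deque

class LoopPlayer:
    """Two-queue loop-playback pipeline: demuxed frames wait in
    first_queue, decoded frames in second_queue (hypothetical sketch)."""

    def __init__(self, segment, loop_count):
        self.segment = list(segment)   # demuxed frames of the looped stream
        self.loop_count = loop_count   # set number of times to play
        self.first_queue = deque()     # demuxed, not yet decoded
        self.second_queue = deque()    # decoded, ready to render

    def run(self):
        rendered = []
        loops_done = 0
        self.first_queue.extend(self.segment)      # initial download
        while self.first_queue:
            frame = self.first_queue.popleft()
            # Before decoding: at the end point, with loops remaining,
            # "download" the next iteration into the first queue now.
            if frame == self.segment[-1] and loops_done + 1 < self.loop_count:
                loops_done += 1
                self.first_queue.extend(self.segment)
            self.second_queue.append(("decoded", frame))   # decode step
            rendered.append(self.second_queue.popleft())   # render step
        return rendered
```

With `loop_count=3`, a three-frame segment renders nine frames in total, and each next iteration is already queued before the previous iteration's last frame is decoded, so no stall occurs between iterations.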
In some embodiments, the method further comprises:
before the multimedia data frame is acquired, receiving, through an application program interface, the end point of the video stream to be looped and the set number of times sent by an upper-layer application.
In some embodiments, the application program interface is further configured to receive the starting point of the video stream to be looped sent by the upper-layer application, and starting to download the video stream to be looped again comprises:
downloading the video stream to be looped again from its starting point, according to the starting point and the end point of the video stream to be looped.
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the method further comprises:
when the master timeline currently used for rendering is the video timeline and the video timeline of the currently rendered multimedia data is ahead of its audio timeline, determining the delay of the audio data and deleting, from the second buffer queue, the audio data corresponding to that delay.
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the method further comprises:
when the master timeline currently used for rendering is the video timeline and the audio timeline of the currently rendered multimedia data is ahead of its video timeline, determining the delay of the video data, deriving a first duration from that delay, and stopping the output of audio data for the first duration.
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the method further comprises:
when the master timeline currently used for rendering is the audio timeline, switching the master timeline to the video timeline; and when the video timeline of the currently rendered multimedia data is ahead of its audio timeline, determining the delay of the audio data and deleting, from the second buffer queue, the audio data corresponding to that delay.
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the method further comprises:
when the master timeline currently used for rendering is the audio timeline, switching the master timeline to the video timeline; and when the audio timeline of the currently rendered multimedia data is ahead of its video timeline, determining the delay of the video data, deriving a first duration from that delay, and stopping the output of audio data for the first duration.
In a third aspect, an embodiment of the present application provides a processing apparatus for video loop playback, comprising:
an input unit, configured to receive a multimedia data frame, the multimedia data frame belonging to a video stream to be looped;
a control unit, configured to decapsulate the multimedia data frame to obtain first multimedia data and store the first multimedia data in a first buffer queue;
the control unit is further configured to, before the first multimedia data is decoded, start downloading the video stream to be looped again and buffer it into the first buffer queue when it is determined that the first multimedia data belongs to the end point of the video stream to be looped and the loop count of the video stream has not reached the set number of times;
the control unit is further configured to acquire the first multimedia data from the first buffer queue, decode it, and store the decoded first multimedia data in a second buffer queue;
the control unit is further configured to obtain the decoded first multimedia data from the second buffer queue and render it to a display unit; and
the display unit is configured to display the first multimedia data.
In some embodiments, before the input unit receives the multimedia data frame, the control unit is further configured to:
determine, in response to a control operation of the user, the end point of the video stream to be looped and the set number of times.
In some embodiments, the control unit is further configured to determine the starting point of the video stream to be looped in response to the control operation, and when starting to download the video stream to be looped again, the control unit is specifically configured to:
download the video stream to be looped again from its starting point, according to the starting point and the end point of the video stream to be looped.
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the control unit is further configured to:
when the master timeline currently used for rendering is the video timeline and the video timeline of the currently rendered multimedia data is ahead of its audio timeline, determine the delay of the audio data and delete, from the second buffer queue, the audio data corresponding to that delay.
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the control unit is further configured to:
when the master timeline currently used for rendering is the video timeline and the audio timeline of the currently rendered multimedia data is ahead of its video timeline, determine the delay of the video data, derive a first duration from that delay, and stop outputting audio data for the first duration.
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the control unit is further configured to:
when the master timeline currently used for rendering is the audio timeline, switch the master timeline to the video timeline; and when the video timeline of the currently rendered multimedia data is ahead of its audio timeline, determine the delay of the audio data and delete, from the second buffer queue, the audio data corresponding to that delay.
In some embodiments, the second buffer queue further contains decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the control unit is further configured to:
when the master timeline currently used for rendering is the audio timeline, switch the master timeline to the video timeline; and when the audio timeline of the currently rendered multimedia data is ahead of its video timeline, determine the delay of the video data, derive a first duration from that delay, and stop outputting audio data for the first duration.
In a fourth aspect, the present application further provides a computer storage medium storing computer program instructions which, when run on a computer, cause the computer to execute the processing method for video loop playback described in the second aspect.
For the technical effects of any implementation of the second to fourth aspects, refer to the technical effects of the corresponding implementation of the first aspect; details are not repeated here.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a prior-art method for video loop playback, as described in an embodiment of the present application;
Fig. 2A is a block diagram of a hardware configuration of a terminal device according to an embodiment of the present application;
Fig. 2B is a block diagram of a configuration of the control device 100 according to an embodiment of the present application;
Fig. 2C is a block diagram of a software structure of a terminal according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of a processing method for video loop playback according to an embodiment of the present application;
Fig. 4A is an interface diagram for setting video loop playback according to an embodiment of the present application;
Fig. 4B is an interface diagram for setting the video loop type according to an embodiment of the present application;
Fig. 4C is an interface diagram for setting the number of video loops according to an embodiment of the present application;
Fig. 5 is a flowchart of a method for synchronizing the timelines of audio data and video data in multimedia data according to an embodiment of the present application;
Fig. 6 is a schematic flowchart of a processing method for whole-video loop playback according to an embodiment of the present application;
Fig. 7A is an interface diagram for setting the number of video loops, the loop start point, and the loop end point according to an embodiment of the present application;
Fig. 7B is an interface diagram for setting the position of the loop start point according to an embodiment of the present application;
Fig. 7C is an interface diagram for setting the position of the loop end point according to an embodiment of the present application;
Fig. 8 is a flowchart of a method for determining the start frame closest to the loop start point according to an embodiment of the present application;
Fig. 9 is a schematic flowchart of another processing method for partial loop playback of a video according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a display device according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a processing apparatus for video loop playback according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in detail with reference to the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiment of the present application, the term "and/or" describes an association relationship of associated objects, and means that there may be three relationships, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
When an existing video application implements the video loop function, if the whole video needs to be looped, referring to flow A shown in fig. 1, after the video player finishes playing for the first time, the video application issues a loop-playing instruction to the video player according to the user's setting or rewind operation, instructing the video player to download the video from the network again and play it again. Since the video player needs a certain time to download the video from the network, a stutter occurs each time playback loops back to the beginning of the video. If only a small segment of the whole video needs to be looped, referring to flow B shown in fig. 1, the video application instructs the video player, according to the user's operation of adjusting the progress bar, to download the video again from the starting point of the segment and play that segment again. Likewise, since downloading the video takes a certain amount of time, a stutter occurs each time the beginning of the segment is played. Moreover, the user must manually drag the progress bar back to the starting point of the segment each time, which is inconvenient. Based on this, the embodiments of the present application provide a processing method and an apparatus for video loop playing, in which the video player can directly start downloading the video stream to be looped again as soon as that stream finishes downloading. The video stream to be looped does not need to finish playing before downloading restarts, so no stutter occurs when the starting point of the video stream to be looped is played.
In the following, to facilitate understanding of the scheme of the present application, a plurality of embodiments in different scenarios will be described. It should be noted that the scheme provided by the application can be applied to video applications installed on various terminal devices such as a computer, a television, a smart phone or a tablet computer.
As an example, the configuration of the terminal device according to the present application will be described in detail below. Referring to fig. 2A, a schematic diagram of a possible hardware configuration of the terminal device 200 is shown. In some embodiments, the terminal device includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a processor 250, a display unit 260, an audio output interface 270, a memory, a power supply, and a user interface 280.
In some embodiments, the display unit 260 includes a display screen component for displaying images and a driving component for driving image display. It receives image signals output from the processor and displays video content, image content, menu manipulation interfaces, user interface (UI) elements, and the like.
In some embodiments, the display unit 260 may be at least one of a liquid crystal display, an organic light-emitting diode (OLED) display, and a projection display.
In some embodiments, the tuner demodulator 210 receives broadcast television signals via wired or wireless reception, and demodulates audio/video signals from a plurality of wireless or wired broadcast television signals.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. The terminal 200 may perform data transmission with the target health detection device or the peer device 300 through the communicator 220.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, the detector 230 includes a sound collector, such as a microphone, etc., for receiving external sound.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: any one or more of a High Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, camera interface, and the like. The interface may be a composite input/output interface formed by the plurality of interfaces.
In some embodiments, the processor 250 and the tuner demodulator 210 may be located in different separate devices; that is, the tuner demodulator 210 may also be located in a device external to the main device where the processor 250 is located, such as an external set-top box.
In some embodiments, the processor 250 includes at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphics Processing Unit (GPU), a Random Access Memory (RAM), and a Read-Only Memory (ROM).
In some embodiments, the CPU is configured to execute operating-system and application instructions stored in the memory, and to run various applications, data, and content according to interactive instructions received from external input, so as to finally display and play various audio-visual content. The CPU may include a plurality of processors, for example a main processor and one or more sub-processors.
In some embodiments, the graphics processor is configured to generate various graphics objects, such as at least one of icons, operation menus, and graphics displayed in response to user input instructions. The graphics processor includes an arithmetic unit, which performs operations according to the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which renders the objects produced by the arithmetic unit for display on the display unit.
In some embodiments, the video processor is configured to receive an external video signal, and perform at least one of video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the terminal 200.
In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform at least one of noise reduction, digital-to-analog conversion, and amplification processing to obtain a sound signal that can be played in the speaker.
In some embodiments, the user may input a user command through a graphical user interface (GUI) displayed on the display unit 260, and the user input interface receives the command through the GUI.
In some embodiments, a "display interface" is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form that is acceptable to the user. A commonly used presentation form of the User Interface is a Graphical User Interface (GUI), which refers to a User Interface related to computer operations and displayed in a graphical manner.
In some embodiments, user interface 280 is an interface that can be used to receive control inputs (e.g., physical buttons on the body of a peer device, or the like).
In some embodiments, the system of the peer device may include a Kernel (Kernel), a command parser (shell), a file system, and an application. The kernel, shell, and file system together form the basic operating system structure that allows users to manage files, run programs, and use the system. After power-on, the kernel is started, kernel space is activated, hardware is abstracted, hardware parameters are initialized, and virtual memory, a scheduler, signals and interprocess communication (IPC) are operated and maintained. And after the kernel is started, loading the Shell and the user application program. The application program is compiled into machine code after being started, and a process is formed.
In some embodiments, the present application may further include a control apparatus 100, where the control apparatus 100 is configured to control a terminal device 200 shown in fig. 2A, and may receive an operation instruction input by a user, convert the operation instruction into an instruction recognizable and responsive by the terminal device 200, and play an intermediary role in interaction between the user and the terminal device 200.
The control device 100 may be a remote controller 100A, which may communicate with the terminal device 200 through infrared protocol communication, Bluetooth protocol communication, or other short-distance communication methods, and controls the terminal device 200 wirelessly or by other wired means. The user can input a user instruction through keys on the remote controller, voice input, control panel input, or the like to control the terminal device 200.
The control device 100 may also be a smart device, such as a mobile terminal 100B, a tablet computer, a notebook computer, and so on. The terminal device 200 is controlled using, for example, an application program running on the smart device. The application program may provide various controls to a user through an intuitive User Interface (UI) on a screen associated with the smart device through configuration.
Fig. 2B is a block diagram illustrating the configuration of the control device 100. As shown in fig. 2B, the control device 100 may include a controller 110, a memory 120, a communicator 130, a user input interface 140, an output interface 150, and a power supply 160. It should be understood that fig. 2B is only an example; the control device 100 may include more or fewer components than those shown in fig. 2B, and the present application is not limited thereto.
The controller 110 includes a random access memory (RAM) 111, a read-only memory (ROM) 112, a processor 113, a communication interface, and a communication bus. The controller 110 is used to control the running and operation of the control device 100, the communication and cooperation among its internal components, and its external and internal data processing functions.
Illustratively, when an interaction of a user pressing a key disposed on the remote controller 100A or an interaction of touching a touch panel disposed on the remote controller 100A is detected, the controller 110 may control to generate a signal corresponding to the detected interaction and transmit the signal to the terminal apparatus 200.
The memory 120 is configured to store various operation programs, data, and applications for driving and controlling the control device 100 under the control of the controller 110.
The communicator 130 enables communication of control signals and data signals with the terminal device 200 under the control of the controller 110. Such as: the control apparatus 100 transmits a control signal (e.g., a touch signal or a button signal) to the terminal device 200 via the communicator 130, and the control apparatus 100 may receive the signal transmitted by the terminal device 200 via the communicator 130. The communicator 130 may include an infrared signal interface 131 and a radio frequency signal interface 132.
The user input interface 140 may include at least one of a microphone 141, a touch pad 142, a sensor 143, a key 144, and the like, so that a user can input a user instruction regarding the control of the terminal device 200 to the control apparatus 100 through voice, touch, gesture, press, and the like.
The output interface 150 outputs a user instruction received by the user input interface 140 to the terminal device 200, or outputs an image or voice signal received by the terminal device 200. Here, the output interface 150 may include an LED interface 151, a vibration interface 152 generating vibration, a sound output interface 153 outputting sound, a display 154 outputting an image, and the like.
The power supply 160 provides operating power support for each element of the control device 100 under the control of the controller 110.
Referring to fig. 2C, a block diagram of an architecture configuration of a terminal device operating system is exemplarily shown. The operating system architecture comprises an application layer, a middleware layer and a kernel layer from top to bottom.
The application layer: applications built into the system and non-system-level applications belong to the application layer, which is responsible for direct interaction with the user. The application layer may include a plurality of applications, such as a settings application, a media center application, and the like.
The middleware layer can provide some standardized interfaces to support the operation of various environments and systems. For example, the middleware layer may implement multimedia and hypermedia information coding experts group (MHEG) functions as middleware related to data broadcasting.
The kernel layer provides core system services, such as a kernel based on the Linux operating system.
The kernel layer also provides communication between system software and hardware, and supplies device driver services for various hardware.
The hardware configuration and the software structure of different terminals may differ, so figs. 2A to 2C are merely exemplary illustrations.
In order to facilitate understanding of the scheme of the present application, first, a processing method of video loop playing proposed by the present application is described below with a specific embodiment, and referring to fig. 3, a flow diagram of the processing method of video loop playing is provided.
301, the video player obtains a multimedia data frame, which is a data frame of the video stream to be looped.
As an example, the terminal device may determine a start point, an end point, and a number of cycles of the video stream to be circulated in response to an operation of a user in a display interface of the video application, and send the start point, the end point, and the number of cycles of the video stream to be circulated to the video application. The video application further sends the start point, the end point and the cycle number of the video stream to be cycled to the video player through the setting interface, and sends the film source address of the video stream to be cycled to the video player. The setting interface is used for realizing data transmission between the upper layer video application and the bottom layer video player.
The video player can then download the video stream to be looped according to its start point, end point, and film source address, obtaining the multimedia data frames of the stream. Illustratively, the video player may download the video stream to be looped from the network, or read it from local storage, via the HyperText Transfer Protocol (HTTP), the HTTP Live Streaming (HLS) protocol, a file descriptor (FD), or the like, and store the video stream to be looped in the buffer. The multimedia data may include video data, audio data, and subtitle data.
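The downloading step above can be sketched as follows. This is a minimal sketch, not the patent's actual implementation: `fetch_frame` is a hypothetical stand-in for the real HTTP/HLS/FD read, and frames are indexed by an integer presentation timestamp for simplicity.

```python
from collections import deque


def download_loop_segment(fetch_frame, start_pts, end_pts):
    """Download the multimedia data frames of the segment to be looped
    into a buffer, covering [start_pts, end_pts] inclusive.

    fetch_frame(pts) is a hypothetical network/local-storage read.
    Returns a deque of (pts, payload) pairs, i.e. the player's buffer.
    """
    buffer = deque()
    pts = start_pts
    while pts <= end_pts:
        buffer.append((pts, fetch_frame(pts)))
        pts += 1
    return buffer
```

In a real player the loop would be driven by the demuxer and network stack rather than a simple counter, but the buffering structure is the same.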
302, the video player decapsulates the multimedia data frame to obtain first multimedia data, and stores the first multimedia data in a first buffer queue.
It should be noted that the first buffer queue is used for storing the multimedia data obtained after decapsulating the multimedia data frame. The decapsulated multimedia data may include one or more of audio data, video data, or subtitle data. As an example, when the decapsulated multimedia data includes different data types, the data belonging to the different data types may be buffered in different buffer queues. For example, the decapsulated audio data is cached in an audio cache queue, the decapsulated video data is cached in a video cache queue, and the decapsulated subtitle data is cached in a subtitle cache queue. The first buffer queue mentioned above may include a first audio buffer queue, a first video buffer queue, or a first subtitle buffer queue.
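The per-type routing into first buffer queues described above can be sketched as follows, under the assumption that a decapsulated frame exposes its media type; the dict-based `decapsulate` helper is hypothetical, standing in for the real demuxer.

```python
from collections import defaultdict, deque


def decapsulate(frame):
    # Hypothetical stand-in for the real demuxer: here a frame is a dict
    # carrying its media type and its elementary-stream payload.
    return frame["type"], frame["payload"]


def route_to_first_queues(frames):
    """Store decapsulated data in per-type first buffer queues:
    audio data, video data, and subtitle data each get their own queue."""
    queues = defaultdict(deque)  # keys: "audio", "video", "subtitle"
    for frame in frames:
        media_type, payload = decapsulate(frame)
        queues[media_type].append(payload)
    return queues
```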
303, before the first multimedia data is decoded, the video player starts to download the video stream to be looped again when it determines that the first multimedia data belongs to the end point of the video stream to be looped and the number of completed loops has not reached the set number.
In some embodiments, the video player may compare the timestamp of the multimedia data corresponding to the end point of the video stream to be looped, received from the video application, with the timestamp of the first multimedia data; if the two timestamps are the same, it can determine that the first multimedia data is the end point of the video stream to be looped.
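The check in step 303 reduces to two comparisons, which can be sketched as follows (a simplified sketch assuming timestamps are directly comparable values):

```python
def is_loop_end(data_pts, end_pts):
    """The decapsulated data belongs to the end point of the video stream
    to be looped when its timestamp equals the end-point timestamp that
    the video application sent through the setting interface."""
    return data_pts == end_pts


def should_restart_download(data_pts, end_pts, loops_done, loops_set):
    # Re-downloading starts only while the number of completed loops is
    # still below the user-configured set number.
    return is_loop_end(data_pts, end_pts) and loops_done < loops_set
```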
304, the video player acquires the first multimedia data from the first buffer queue, decodes the acquired first multimedia data, and stores the decoded first multimedia data into the second buffer queue.
The second buffer queue is configured to store decoded multimedia data, where the decoded multimedia data may include one or more of audio data, video data, or subtitle data. As an example, when the decoded multimedia data includes different data types, the data belonging to the different data types may be buffered in different buffer queues. For example, the decoded audio data is cached in the second audio cache queue, the decoded video data is cached in the second video cache queue, and the decoded caption data is cached in the caption cache queue. The above-mentioned second buffer queue may include a second audio buffer queue, a second video buffer queue, or a second subtitle buffer queue.
It should be noted that, after fetching the first multimedia data from the first buffer queue for decoding, the video player may delete the first multimedia data from the first buffer queue.
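Step 304 together with the deletion note above can be sketched as a simple queue-to-queue stage; the `decode` helper is a hypothetical stand-in for the real audio/video decoder.

```python
from collections import deque


def decode(payload):
    # Hypothetical decoder stand-in: in a real player this would invoke
    # the hardware or software codec.
    return ("decoded", payload)


def decode_stage(first_queue, second_queue):
    """Drain the first buffer queue, decode each entry, and store the
    result in the second buffer queue. popleft() both fetches the data
    and removes it from the first queue, matching the deletion described
    in the text."""
    while first_queue:
        payload = first_queue.popleft()
        second_queue.append(decode(payload))
```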
305, the video player obtains the decoded first multimedia data from the second buffer queue, and renders the decoded first multimedia data.
Generally, the downloading, decapsulating, and decoding rate is higher than the playing speed, so that the video player does not stall; therefore, there may be a certain time difference between steps 304 and 305. That is, the rendering of the multimedia data in the video stream to be looped lags behind its downloading and decapsulating. For example, when the nth multimedia data frame is being rendered, the most recently downloaded frame is the (n + m)th multimedia data frame.
In addition, it should be noted that, after the video player extracts the first multimedia data from the second buffer queue, it may also delete that data from the second buffer queue, so as to avoid the resource waste caused by keeping already-used data in the second buffer queue.
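Step 305 and the deletion just described can be sketched the same way as the decode stage; `render` here is a hypothetical callback standing in for the actual display/audio-output path.

```python
from collections import deque


def render_stage(second_queue, render):
    """Fetch decoded data from the second buffer queue, render it, and
    delete it from the queue so that already-used data does not
    accumulate and waste resources."""
    while second_queue:
        frame = second_queue.popleft()  # fetch and delete in one step
        render(frame)
```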
The processing method for video loop playing provided by the embodiments of the present application is as follows: a setting interface is established for transmitting data between the upper-layer video application and the underlying video player, and the video application sends the start point and end point of the video stream to be looped, together with the set number of loops, to the video player through the setting interface. The set number of loops is hereinafter simply referred to as the set number. The video player modifies the existing loop-playing flow accordingly; for the existing flow, refer to the flowchart shown in fig. 1. In the present application, the video player does not wait until the video stream to be looped finishes playing, notify the video application of the completion, and then receive from the video application an instruction to download and play the stream again. Instead, according to the start point, end point, and set number received through the setting interface, it starts downloading the next round of the video stream to be looped before the current round finishes playing, thereby achieving seamless loop playback. For example, when the video player determines that the multimedia data frame at the end point of the video stream to be looped has been downloaded, it does not need to determine whether that frame, or the frames before it, have been played; it can directly start downloading the video stream to be looped again.
Next, a processing method of video loop playback proposed in the present application will be described by taking a specific scene as an example.
Scene one: and (4) integrally circulating the whole video.
In this scenario, the video stream to be circulated is the whole video, that is, the starting point of the video stream to be circulated mentioned in the above embodiments is the head of the whole video, and the ending point of the video stream to be circulated is the tail of the whole video. For convenience of description, a video requiring a whole loop in this scene is referred to as video a.
In some embodiments, while playing video A, the terminal device may determine to play video A in a loop in response to a user operation in the display interface of the video application. For example, during playback the video application may provide a display interface as shown in fig. 4A, which includes an option 401 for loop playing. In response to the user's touch or remote-control operation on option 401 in the interface shown in fig. 4A, the terminal device may determine that the user wants to play video A in a loop, and may transmit an instruction for loop playing of video A to the video application. Upon receiving the instruction, the video application may present a loop-type option in the display interface, for example as shown in fig. 4B. The loop type can be divided into whole loop and partial loop. In this scenario, taking the whole loop of video A as an example, the terminal device may determine that the user wants to play video A in a whole loop in response to the user selecting the whole-loop type in the interface shown in fig. 4B, and may transmit a corresponding instruction to the video application. The selection operation in fig. 4B may be a touch operation or a remote-control operation. After receiving the instruction, the video application may further display an option for selecting the set number of times, for example as shown in fig. 4C. In some embodiments, the terminal device determines the number of times the user wants to loop video A in response to the user selecting a set number of times in the interface shown in fig. 4C.
The operation of selecting the set number of times may be entering the number into the corresponding selection box in the display interface shown in fig. 4C, or operating the up and down options next to the number, where the up option increments the set number by one and the down option decrements it by one; the embodiments of the present application are not specifically limited here. After determining the set number in response to the user's operation, the terminal device transmits it to the video application, and the video application further transmits the loop type and the set number to the video player through the setting interface. After receiving the loop type and the set number through the setting interface, the video player determines that the currently played video A needs to be played in a loop. When it determines that the first multimedia data among the decapsulated multimedia data of video A belongs to the tail of video A, and the number of completed loops has not reached the set number, it starts to download video A again, that is, from the head of the video. Since the downloading, decapsulating, and decoding rate is generally higher than the rendering speed, with this scheme the video player starts to re-download video A from its head as soon as it determines that the decapsulated first multimedia data belongs to the tail of video A. This ensures that the multimedia data at the head of video A for the next round has finished downloading before the rendering of the multimedia data at the tail of video A has finished.
Therefore, when the rendering of the multimedia data at the tail of video A finishes, the multimedia data at the head of video A can be rendered directly without stutter. It should be noted that figs. 4A-4C provided in this embodiment are only an example; alternative methods may be used to set the loop type and the number of loops of video A.
In other embodiments, the user may set the loop type and the number of loops before initiating playback of video A. In this case, the video application sends the loop type and the number of loops to the video player before video A starts playing. After video A starts downloading, the video player determines whether the decapsulated multimedia data belongs to the tail of video A and whether the loop count has reached the set number; if the data belongs to the tail of video A and the loop count has not reached the set number, the video player starts to download video A again from its head. In some embodiments, the decapsulated multimedia data may be stored in the first buffer queue, and the video player may fetch the multimedia data from the first buffer queue both to determine whether it belongs to the tail of video A and to perform decoding.
In the above two embodiments, after determining that the multimedia data in the first buffer queue belongs to the tail of video A and that the loop count has not reached the set number, the video player may further set the header offset to 0; this step instructs the player to download video A from its head. Here, the header offset refers to the offset between the current multimedia data and the multimedia data belonging to the head of video A.
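The offset-reset decision above can be sketched as follows — a simplified sketch in which the offset is a plain number rather than a demuxer seek position:

```python
def next_download_offset(is_tail, loops_done, loops_set, current_offset):
    """When the buffered data belongs to the tail of video A and the loop
    count has not reached the set number, reset the header offset to 0 so
    that downloading restarts from the head of the video; otherwise keep
    the current offset and continue downloading sequentially."""
    if is_tail and loops_done < loops_set:
        return 0
    return current_offset
```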
In some embodiments, after decoding the multimedia data in the first buffer queue, the video player may store the decoded multimedia data in the second buffer queue. In some embodiments, before rendering a piece of decoded multimedia data from the second buffer queue, the video player may first determine whether the decoded data belongs to the head of video A and whether it is being rendered for the first time. For convenience of description, this decoded multimedia data is referred to as the decoded second multimedia data. If the video player determines that the decoded second multimedia data belongs to the head of video A and is not being rendered for the first time, it synchronizes the time axis of the video data in the decoded second multimedia data with the time axis of the audio data in the decoded second multimedia data. As an example, the method of synchronizing the time axes of the audio data and the video data in the decoded second multimedia data may be as shown in fig. 5:
the video player determines 501 whether the currently used primary timeline is the timeline of the video data.
The main time axis may also be referred to as the play time axis or the reference time axis. If the main time axis is the time axis of the video data, it can indicate the time information corresponding to the video data, such as the total duration, start time, end time, and current playing time of the video data. That is, the main time axis is the timeline of video playback and increases linearly. Similarly, if the main time axis is the time axis of the audio data, it can indicate the time information corresponding to the audio data, such as the total duration, start time, and end time of the audio data.
If yes, go to step 503.
If not, go to step 502.
502, the video player switches the main time axis to the time axis of the video data.
503, the video player calculates the delay amount of the audio data in the currently rendered multimedia data.
In some embodiments, the delay amount of the audio data may be calculated as follows: the time axis of the video data is subtracted from the time axis of the audio data, and the obtained difference is the delay amount of the audio data. For example, if the time axis of audio data in the currently rendered multimedia data is 00.
504, the video player determines whether the delay amount of the audio data is greater than 0.
If the delay amount of the audio data is less than 0, this may be because, during the previous playing pass, the video data finished playing while the audio data did not, so some remaining audio data still exists in the second buffer queue; the remaining audio data corresponds to the delay amount. Step 505 is then performed.
If the delay amount of the audio data is greater than 0, this may be because, during the previous playing pass, the audio data finished playing while the video data did not, so some remaining video data still exists in the second buffer queue. Step 507 is then performed.
505, the video player calculates the data amount of the remaining audio data.
In some embodiments, the calculation may be performed as follows:
the data amount of the remaining audio data = |delay amount of the audio data| × data amount of audio data per second.
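The formula above can be sketched in code. This is a minimal illustration, not part of the patent; the function name and byte rate are assumptions (for uncompressed PCM, the per-second data amount would be sample rate × channels × bytes per sample):

```python
def remaining_audio_bytes(delay_seconds: float, bytes_per_second: int) -> int:
    # remaining audio = |delay amount of the audio data| x data amount per second
    return int(abs(delay_seconds) * bytes_per_second)

# Hypothetical stream: 48 kHz, stereo, 16-bit PCM -> 192000 bytes per second
rate = 48000 * 2 * 2
print(remaining_audio_bytes(-0.5, rate))  # 0.5 s of leftover audio -> 96000
```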
506, the video player discards the remaining audio data according to the calculated data amount.
507, the video player determines a first time length according to the delay amount of the audio data, and stops outputting the audio data within the first time length.
For example, if the time axis of the audio data in the currently rendered multimedia data is 00. The currently used main time axis is 00. When the main time axis reaches 00.
508, after the time axes of the audio data and the video data are synchronized, the main time axis is switched back to the original time axis.
If, before step 502 was executed, the video player used the time axis of the audio data as the main time axis, the main time axis is now switched from the time axis of the video data back to the time axis of the audio data.
If, before step 502 was executed, the video player already used the time axis of the video data as the main time axis, the video player continues to use the time axis of the video data as the main time axis.
It should be noted that other methods may be used to correct the time axes of the audio data and the video data in the second multimedia data, and fig. 5 is only an example.
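The flow of steps 501 through 508 can be sketched as follows. This is a hedged illustration assuming a simple player state object with `main_axis`, `video_pts`, and `audio_pts` attributes; all names are hypothetical, and the discard/hold actions are only recorded here rather than performed on real buffers:

```python
class PlayerState:
    def __init__(self, main_axis: str, video_pts: float, audio_pts: float):
        self.main_axis = main_axis      # "video" or "audio"
        self.video_pts = video_pts      # position on the video time axis, seconds
        self.audio_pts = audio_pts      # position on the audio time axis, seconds
        self.discarded_audio = 0.0      # seconds of leftover audio dropped (506)
        self.audio_hold = 0.0           # "first time length" the audio output is held (507)

def sync_time_axes(p: PlayerState) -> None:
    original_axis = p.main_axis
    if p.main_axis != "video":          # 501/502: make the video axis the main axis
        p.main_axis = "video"
    delay = p.audio_pts - p.video_pts   # 503: delay amount of the audio data
    if delay < 0:                       # 504/505/506: leftover audio -> discard it
        p.discarded_audio = -delay
    elif delay > 0:                     # 507: audio is ahead -> hold output for `delay`
        p.audio_hold = delay
    p.main_axis = original_axis         # 508: switch back to the original main axis
```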
Next, the processing method of video loop playing in this scenario will be described with a specific embodiment, with reference to the flowchart shown in fig. 6. The flowchart in fig. 6 takes as an example the case where the set number of times and the loop type are set before video A starts playing.
601, the video player starts downloading the video A.
Specifically, when the video application determines to play video A, it sends the source address of video A to the video player, and the video player can download video A from the network according to that source address.
602, the video player decapsulates video A to obtain the multimedia data of video A.
In some embodiments, after decapsulating video A, the video player may store the decapsulated multimedia data in the first buffer queue.
603, the video player determines whether the decapsulated multimedia data belongs to the trailer of video A.
If so, go to step 604.
If not, go to step 607.
The video player determines 604 if it needs to loop through video a.
If so, go to step 605.
If not, go to step 607.
605, the video player sets the slice header offset to 0.
The slice header offset refers to an offset between the current multimedia data and the multimedia data of the slice header.
606, the video player starts to download video A again.
According to the slice header offset of 0, the video player starts to download video A from the slice header.
After the download is completed, steps 602 to 603 continue to be executed.
607, the video player obtains the multimedia data in the first buffer queue and performs a decoding operation on the multimedia data.
In some embodiments, the video player may store the decoded multimedia data in the second buffer queue after performing a decoding operation on the multimedia data in the first buffer queue.
608, the video player determines whether the decoded multimedia data belongs to the slice header of video A.
If so, the decoded multimedia data belonging to the slice header of video A is referred to as the decoded second multimedia data, and step 609 continues to be executed.
If not, go to step 611.
609, the video player determines whether the decoded second multimedia data is being rendered for the first time.
In some embodiments, the video player may include a counter: the video player starts the counter when it begins rendering multimedia data belonging to the slice header of video A, and increments the counter by one when it begins rendering multimedia data belonging to the trailer of video A. Therefore, if the decoded second multimedia data is being rendered for the first time, the counter reads 0. The video player may determine whether the decoded second multimedia data is being rendered for the first time according to the value shown by the counter.
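The counter described above might be sketched as follows; the class and method names are illustrative assumptions, not from the patent:

```python
class FirstRenderCounter:
    """Reads 0 until one full pass of video A has been rendered."""
    def __init__(self):
        self.count = 0

    def on_trailer_render(self):
        # rendering has reached the trailer of video A: one pass finished
        self.count += 1

    def is_first_render(self) -> bool:
        # step 609: a counter value of 0 means this is the first render
        return self.count == 0
```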
If so, go to step 610.
If not, step 611 is performed.
610, the video player synchronizes the time axes of the audio data and the video data in the decoded second multimedia data.
As an example, the time axes of the audio data and the video data in the decoded second multimedia data may be synchronized in the manner shown in fig. 5.
611, the video player renders the multimedia data in the second buffer queue.
612, the video player determines whether the number of loops has reached the set number of times.
If the set number of times is reached, the loop ends and playback exits.
If the set number of times is not reached, continue to step 608.
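The overall flow of fig. 6 can be modeled as a toy pipeline. The two deques stand in for the first and second buffer queues; downloading, decapsulation, decoding, and rendering are reduced to list operations, so every name here is an assumption for illustration only:

```python
from collections import deque

def loop_play(packets, set_times):
    """Toy model of fig. 6: `packets` is demuxed video A, with the last
    item treated as the trailer; one pass over it equals one loop."""
    first_queue, second_queue, rendered = deque(), deque(), []
    loops = 0
    while loops < set_times:                     # 612: stop at the set number of times
        first_queue.extend(packets)              # 601/602: download and decapsulate
        while first_queue:                       # 607: decode into the second queue
            second_queue.append(("decoded", first_queue.popleft()))
        while second_queue:                      # 611: render from the second queue
            rendered.append(second_queue.popleft())
        loops += 1                               # trailer reached: one loop finished
    return rendered

frames = loop_play(["head", "middle", "tail"], 2)
```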
Scene two: partially cyclic scenarios.
In this scenario, the video stream to be circulated is a part of a video segment, and for convenience of description, the video stream to be circulated in this scenario is referred to as video B, the start point of the video stream to be circulated is referred to as a circulation start point, and the end point of the video stream to be circulated is referred to as a circulation end point.
In some embodiments, the terminal device may determine to play the video in a loop in response to a user operation in a display interface of the video application during playback. For example, during playback a display interface such as the one shown in fig. 4A in scene one may be provided; the display interface in fig. 4A includes an option 401 for loop playing. In response to a touch or remote-control operation of the user on option 401 in the display interface shown in fig. 4A, the terminal device may determine that the user needs to play the video in a loop, and may transmit an instruction for loop playing to the video application. Upon receiving the instruction, the video application may present an option for selecting the loop type in a display interface; for example, see the display interface shown in fig. 4B in scene one. In this scenario, the video is partially looped, and the looped part is video B. In some embodiments, in response to the user selecting the loop type as partial loop in the display interface shown in fig. 4B, the terminal device may transmit an instruction to execute the partial loop to the video application; after receiving the instruction, the video application may present operations for setting a loop start point, setting a loop end point, and setting the set number of times in the display interface. For example, see the display interface shown in fig. 7A. The operation of setting the set number of times may follow the manner provided in scene one and is not described again here.
The display interface shown in fig. 7A further includes an option for setting the loop start point. In response to a touch or remote-control operation of the user on this option, the terminal device sends an instruction that the user wants to set the loop start point to the video application; after receiving the instruction, the video application may display the selectable loop start points in the display interface. For example, see the interface shown in fig. 7B, in which the progress bar below the video is in a blurred state, indicating that the user may click any position on the progress bar as the loop start point.
In some embodiments, the terminal device determines the loop start point in response to the user's operation in the display interface shown in fig. 7B, and transmits the loop start point to the video application. In some embodiments, the display interface shown in fig. 7A also includes an option for setting the loop end point. In response to a touch or remote-control operation of the user on this option, the terminal device may send an instruction that the user wants to set the loop end point to the video application; after receiving the instruction, the video application may present the selectable loop end points in the display interface. For example, see the interface shown in fig. 7C, in which the progress bar below the video is in a blurred state, indicating that the user may click any position on the progress bar as the loop end point. In some embodiments, the terminal device determines the loop end point in response to the user's operation in the display interface shown in fig. 7C and sends the loop end point to the video application. After the video application receives the set number of times, the loop start point, and the loop end point, it can send them to the video player through the set interface. After receiving the set number of times, the loop start point, and the loop end point, the video player downloads video B from the network according to the loop start point, the loop end point, and the film source address of the complete video containing video B; for convenience of description, the complete video containing video B is subsequently referred to as video C. The video player may further decapsulate the downloaded video B to obtain the multimedia data of video B, and may store the multimedia data of video B in the first buffer queue.
When the video player determines, according to the loop end point from the video application, that the first multimedia data in the decapsulated multimedia data belongs to the loop end point, the video player starts to download video B again.
As an alternative, after determining that the first multimedia data in the decapsulated multimedia data belongs to the loop end point and before starting to download video B again, the video player may set the start offset to 0; this operation instructs the player to download video B from the loop start point. The start offset refers to the offset between the current multimedia data and the multimedia data at the loop start point.
In addition, it should be noted that the loop start point is not necessarily a start frame for decoding; for example, the time corresponding to the loop start point is 3. That is, after setting the start offset, the video player needs to determine the start frame closest to the loop start point, set the offset of that closest start frame to 0, and start downloading from the start frame closest to the loop start point.
As an example, the method shown in fig. 8 may be adopted for determining the start frame. It should be noted that, when downloading the video B for the first time, the video player cannot know the start frame closest to the cycle start point, so that when downloading the video B for the first time, it is necessary to download all the data packets including the cycle start point, and determine the start frame closest to the cycle start point after decapsulating the data packets. As shown in fig. 8:
801, the video player determines the loop start point.
802, the video player downloads a data packet containing the loop start point.
Specifically, the video player can download the data packet containing the loop start point according to the film source address together with the loop start point and the loop end point. For example, if the time corresponding to the loop start point is 3. For convenience of description, the data packet including the loop start point is referred to as data packet P.
803, the video player performs a decapsulation process on the packet P.
804, the video player records the start frame contained in data packet P that is closest to the loop start point.
805, before starting to download video B again, the video player first sets the start offset to 0, and then further sets the offset of the closest start frame to 0.
806, the video player starts to download video B from the start frame closest to the loop start point.
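Step 804 can be sketched as picking, among the start frames found in data packet P, the latest one at or before the loop start point, since decoding has to begin at a start frame. The function name and the keyframe list are illustrative assumptions:

```python
def nearest_start_frame(keyframe_times, loop_start):
    """Return the time of the start frame closest to the loop start point,
    i.e. the latest decodable start frame not after it."""
    candidates = [t for t in keyframe_times if t <= loop_start]
    return max(candidates)

# Hypothetical start frames every 2 s; loop start point at 3.5 s
print(nearest_start_frame([0.0, 2.0, 4.0, 6.0], 3.5))  # -> 2.0
```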
In some embodiments, after the video player downloads and decapsulates video B, the multimedia data of video B obtained after decapsulation may be stored in the first buffer queue. After the video player decodes the multimedia data in the first buffer queue, the decoded multimedia data may be stored in the second buffer queue. In some embodiments, before the video player renders a piece of decoded multimedia data in the second buffer queue, it may further determine whether the decoded multimedia data belongs to the loop start point and whether the decoded multimedia data is being rendered for the first time. For convenience of description, this decoded multimedia data is referred to as the decoded second multimedia data. If the video player determines that the decoded second multimedia data belongs to the loop start point and determines that the decoded second multimedia data is not being rendered for the first time, it discards the data from the start frame to the loop start point in the decoded second multimedia data, and synchronizes the time axis of the video data in the decoded second multimedia data with the time axis of the audio data in the decoded second multimedia data. As an example, the method shown in fig. 5 in scene one may be adopted to synchronize the time axes of the audio data and the video data in the decoded second multimedia data; details are not repeated here.
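The discard step just described can be sketched as filtering out decoded frames whose presentation time falls before the loop start point. Representing a frame as a `(pts, payload)` pair is a hypothetical simplification:

```python
def trim_to_loop_start(decoded_frames, loop_start):
    """Drop frames decoded from the nearest start frame up to (but not
    including) the loop start point, keeping only what should be rendered."""
    return [(pts, payload) for (pts, payload) in decoded_frames if pts >= loop_start]

# Start frame at 2.0 s, loop start point at 3.0 s: the 2.0-2.5 s frames are dropped
frames = [(2.0, "a"), (2.5, "b"), (3.0, "c"), (3.5, "d")]
print(trim_to_loop_start(frames, 3.0))  # -> [(3.0, 'c'), (3.5, 'd')]
```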
Next, a processing method of video loop playing in this scenario will be described with a specific embodiment. Reference may be made in particular to the flow chart shown in fig. 9.
901, the video player starts playing the video C.
Where video C refers to the complete video containing video B.
902, the video player downloads video B.
Specifically, while the video player is playing video C, the terminal device sends an instruction to loop video B to the video application in response to an operation of the user, and the video application sends the loop start point, the loop end point, and the set number of times to the video player. The video player starts to download video B after receiving the loop start point, the loop end point, and the set number of times.
903, the video player decapsulates video B to obtain the multimedia data of video B.
As an alternative, after decapsulating video B, the video player may store the decapsulated multimedia data in the first buffer queue.
904, the video player determines whether the decapsulated multimedia data belongs to the loop end point.
If so, go to step 905.
If not, go to step 908.
905, the video player determines whether video B needs to be played in a loop.
If so, go to step 906.
If not, step 908 is performed.
906, the video player sets the start offset to 0 and sets the offset of the closest start frame to 0.
The start offset refers to the offset between the current multimedia data and the multimedia data at the loop start point; the offset of the closest start frame refers to the offset between the current multimedia data and the start frame closest to the loop start point.
907, the video player starts to download video B again.
According to the offset of the closest start frame being 0, the video player starts to download video B from the start frame closest to the loop start point.
After the download is completed, steps 903 to 904 continue to be executed.
And 908, the video player acquires the multimedia data in the first buffer queue and performs decoding operation on the multimedia data.
In some embodiments, the video player may store the decoded multimedia data in the second buffer queue after performing a decoding operation on the multimedia data in the first buffer queue.
909, the video player determines whether the decoded multimedia data belongs to the loop start point.
If so, the decoded multimedia data belonging to the loop start point is referred to as the decoded second multimedia data, and step 910 continues to be executed.
If not, go to step 912.
910, the video player determines whether the decoded second multimedia data is being rendered for the first time.
If so, go to step 911.
If not, step 913 is performed.
911, the video player synchronizes the time axes of the audio data and the video data in the decoded second multimedia data.
As an example, the time axes of the audio data and the video data in the decoded second multimedia data may be synchronized in the manner shown in fig. 5 in scene one.
912, the video player determines whether the decoded multimedia data belongs to multimedia data between the start frame closest to the loop start point and the loop start point.
If so, the multimedia data between the start frame closest to the loop start point and the loop start point is discarded.
If not, step 913 is performed.
913, the video player renders the multimedia data in the second buffer queue.
914, the video player determines whether the set number of times is reached.
If the set number of times is reached, the loop ends, the multimedia data of video B remaining in the first buffer queue is deleted, the decoded multimedia data of video B in the second buffer queue is deleted, and video C continues to play. For example, the duration of video C is 0-10, video B is 3.
If the set number of times is not reached, execution continues at step 909.
Based on the same concept as the above method, as shown in fig. 10, a display apparatus 1000 is provided. The display device 1000 is capable of performing the various steps of the above-described method, and will not be described in detail herein to avoid repetition. The display device 1000 includes: input interface 1001, controller 1002, display 1003.
An input interface 1001 for receiving a multimedia data frame belonging to a data frame in a video stream to be circulated;
the controller 1002 is configured to decapsulate the multimedia data frame to obtain first multimedia data, and store the first multimedia data in a first buffer queue;
the controller 1002 is further configured to, before decoding the first multimedia data, start to download the video stream to be circulated again and buffer the video stream to be circulated to the first buffer queue when it is determined that the first multimedia data belongs to an end point of the video stream to be circulated and the number of times of circulation of the video stream to be circulated does not reach a set number of times;
the controller 1002 is further configured to obtain the first multimedia data from the first buffer queue, decode the obtained first multimedia data, and store the decoded first multimedia data in a second buffer queue;
the controller 1002 is further configured to obtain the decoded first multimedia data from the second buffer queue, and render the decoded first multimedia data to the display 1003.
The display screen 1003 is configured to display the first multimedia data.
In some embodiments, before the input interface 1001 receives the multimedia data frame, the controller 1002 is further configured to:
and determining the end point of the video stream to be circulated and the set number of times in response to the control operation of the user.
In some embodiments, the controller 1002 is further configured to determine a starting point of the video stream to be circulated in response to the control operation, and when the controller 1002 starts to download the video stream to be circulated again, the controller is specifically configured to:
and according to the starting point of the video stream to be circulated and the end point of the video stream to be circulated, downloading the video stream to be circulated again from the starting point of the video stream to be circulated.
In some embodiments, the second buffer queue further includes decoded second multimedia data, where the decoded second multimedia data includes audio data and video data, and the decoded second multimedia data belongs to a starting point of the video stream to be looped, and the controller 1002 is further configured to:
when a main time axis used by the current rendering multimedia data is a video time axis, determining that the video time axis in the current rendering multimedia data is larger than an audio time axis in the current rendering multimedia data, determining the delay amount of the audio data in the current rendering multimedia data, and deleting the audio data in the current rendering multimedia data corresponding to the delay amount of the audio data in the current rendering multimedia data from the second cache queue.
In some embodiments, the second buffer queue further includes decoded second multimedia data, where the decoded second multimedia data includes audio data and video data, and the decoded second multimedia data belongs to a starting point of the video stream to be looped, and the controller 1002 is further configured to:
when a main time axis used by the current rendering multimedia data is a video time axis, determining that an audio time axis in the current rendering multimedia data is larger than a video time axis in the current rendering multimedia data, and determining the delay amount of the video data in the current rendering multimedia data; and determining a first time length according to the delay amount of the video data in the current rendering multimedia data, and stopping outputting the audio data within the first time length.
In some embodiments, the second buffer queue further includes decoded second multimedia data, where the decoded second multimedia data includes audio data and video data, and the decoded second multimedia data belongs to a starting point of the video stream to be looped, and the controller 1002 is further configured to:
when a main time axis used by the current rendering multimedia data is an audio time axis, switching the main time axis used by the current rendering multimedia data to a video time axis, determining that the video time axis in the current rendering multimedia data is larger than the audio time axis in the current rendering multimedia data, determining the delay amount of the audio data in the current rendering multimedia data, and deleting the audio data in the current rendering multimedia data corresponding to the delay amount of the audio data in the current rendering multimedia data from the second cache queue.
In some embodiments, the second buffer queue further includes decoded second multimedia data, where the decoded second multimedia data includes audio data and video data, and the decoded second multimedia data belongs to a starting point of the video stream to be looped, and the controller 1002 is further configured to:
when a main time axis used for rendering the multimedia data currently is an audio time axis, switching the main time axis used for rendering the multimedia data currently to a video time axis, determining that the audio time axis in the multimedia data currently rendered is larger than the video time axis in the multimedia data currently rendered, and determining the delay amount of the video data in the multimedia data currently rendered; and determining a first time length according to the delay amount of the video data in the current rendering multimedia data, and stopping outputting the audio data within the first time length.
Based on the same concept as the method, as shown in fig. 11, a processing apparatus 1100 for video loop playing is provided. The apparatus 1100 is capable of performing the various steps of the above-described method, and details are not repeated here to avoid repetition. The apparatus 1100 comprises: an input unit 1101, a control unit 1102, and a display unit 1103.
An input unit 1101 for receiving a multimedia data frame belonging to a data frame in a video stream to be circulated;
a control unit 1102, configured to decapsulate the multimedia data frame to obtain first multimedia data, and store the first multimedia data in a first buffer queue;
the control unit 1102 is further configured to, before the first multimedia data is decoded, start to download the to-be-circulated video stream again and buffer the to-be-circulated video stream into the first buffer queue when it is determined that the first multimedia data belongs to an end point of the to-be-circulated video stream and the number of times of circulation of the to-be-circulated video stream does not reach a set number of times;
the control unit 1102 is further configured to acquire the first multimedia data from the first buffer queue, decode the acquired first multimedia data, and store the decoded first multimedia data into a second buffer queue;
the control unit 1102 is further configured to obtain the decoded first multimedia data from the second buffer queue, and render the decoded first multimedia data to the display unit 1103.
The display unit 1103 is configured to display the first multimedia data.
Embodiments of the present application also provide a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the steps of any of the methods described above.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
While specific embodiments of the present application have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and that the scope of the present application is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and principles of this application, and these changes and modifications are intended to be included within the scope of this application. While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A display device, comprising:
an input interface for receiving a multimedia data frame belonging to a data frame in a video stream to be circulated;
a controller configured to perform:
decapsulating the multimedia data frame to obtain first multimedia data, and storing the first multimedia data into a first buffer queue; before the first multimedia data are decoded, when the first multimedia data are determined to belong to the end point of the video stream to be circulated and the circulation frequency of the video stream to be circulated does not reach the set frequency, starting to download the video stream to be circulated again and caching the video stream to be circulated to the first cache queue; acquiring the first multimedia data from the first cache queue, decoding the acquired first multimedia data, and storing the decoded first multimedia data into a second cache queue; acquiring the decoded first multimedia data from the second cache queue, and rendering the decoded first multimedia data to a display screen;
the display screen is used for displaying the first multimedia data.
2. The display device of claim 1, wherein prior to the input interface receiving the frame of multimedia data, the controller is further to:
and determining the end point of the video stream to be circulated and the set times in response to the control operation of a user.
3. The display device according to claim 1 or 2, wherein the controller is further configured to determine a starting point of the video stream to be looped in response to the control operation, and the controller, when initiating a re-download of the video stream to be looped, is specifically configured to:
and according to the starting point of the video stream to be circulated and the end point of the video stream to be circulated, downloading the video stream to be circulated again from the starting point of the video stream to be circulated.
4. The display device of claim 1, wherein the second buffer queue further comprises decoded second multimedia data, the decoded second multimedia data comprising audio data and video data, the decoded second multimedia data belonging to a starting point of the video stream to be looped, and the controller is further configured to:
when a main time axis used by the current rendering multimedia data is a video time axis, determining that the video time axis in the current rendering multimedia data is larger than an audio time axis in the current rendering multimedia data, determining the delay amount of the audio data in the current rendering multimedia data, and deleting the audio data in the current rendering multimedia data corresponding to the delay amount of the audio data in the current rendering multimedia data from the second cache queue.
5. The display device of claim 1, wherein the second buffer queue further comprises decoded second multimedia data, the decoded second multimedia data comprising audio data and video data, the decoded second multimedia data belonging to a starting point of the video stream to be looped, and the controller is further configured to:
when a main time axis used by the current rendering multimedia data is a video time axis, determining that an audio time axis in the current rendering multimedia data is larger than a video time axis in the current rendering multimedia data, and determining the delay amount of the video data in the current rendering multimedia data; and determining a first time length according to the delay amount of the video data in the current rendering multimedia data, and stopping outputting the audio data in the first time length.
6. The display device of claim 1, 4 or 5, wherein the second buffer queue further comprises decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the controller is further configured to:
when the main time axis used by the currently rendered multimedia data is the audio time axis, switch the main time axis to the video time axis, determine that the video time axis of the currently rendered multimedia data is ahead of its audio time axis, determine the delay amount of the audio data in the currently rendered multimedia data, and delete from the second buffer queue the audio data corresponding to that delay amount.
7. The display device of claim 1, 4 or 5, wherein the second buffer queue further comprises decoded second multimedia data, the decoded second multimedia data comprising audio data and video data and belonging to the starting point of the video stream to be looped, and the controller is further configured to:
when the main time axis used by the currently rendered multimedia data is the audio time axis, switch the main time axis to the video time axis, determine that the audio time axis of the currently rendered multimedia data is ahead of its video time axis, and determine the delay amount of the video data in the currently rendered multimedia data; and determine a first duration according to that delay amount, and stop outputting audio data within the first duration.
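Claims 4 through 7 describe one resynchronization rule applied at the loop start point: make video the master time axis, then either drop the audio that lags behind video, or mute audio for as long as video lags behind audio. A minimal sketch under those assumptions; the function name, the millisecond units, and the fixed per-frame audio duration are illustrative, not from the patent:

```python
from collections import deque

def resync_at_loop_start(master: str, video_pts: int, audio_pts: int,
                         audio_queue: deque, frame_ms: int = 10) -> int:
    """Realign audio to video at the loop start point (claims 4-7).

    master:      current master time axis, "audio" or "video".
    video_pts /
    audio_pts:   current render timestamps in milliseconds.
    audio_queue: decoded audio frames (the 'second buffer queue'),
                 each assumed to cover frame_ms of playback.
    Returns the duration (ms) for which audio output should be stopped,
    or 0 if lagging audio frames were dropped instead.
    """
    if master == "audio":
        # Claims 6/7: first switch the master time axis to video.
        master = "video"
    if video_pts > audio_pts:
        # Claims 4/6: audio lags video -> delete the lagging audio
        # frames from the second buffer queue.
        delay = video_pts - audio_pts
        frames_to_drop = delay // frame_ms
        for _ in range(min(frames_to_drop, len(audio_queue))):
            audio_queue.popleft()
        return 0
    if audio_pts > video_pts:
        # Claims 5/7: video lags audio -> stop audio output for the
        # first duration derived from the video delay amount.
        return audio_pts - video_pts
    return 0  # already in sync
```

Deleting late audio is cheap because dropped audio is barely perceptible at the loop seam, whereas video frames are kept and audio is muted in the opposite case, which is consistent with the claims always converging on the video time axis as master.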
8. A processing method for video loop playing, characterized by comprising:
acquiring a multimedia data frame, the multimedia data frame being a data frame of a video stream to be looped;
decapsulating the multimedia data frame to obtain first multimedia data, and storing the first multimedia data in a first buffer queue;
before the first multimedia data is decoded, when the first multimedia data is determined to belong to the end point of the video stream to be looped and the number of completed loops has not reached the set number of times, starting to re-download the video stream to be looped and buffering it into the first buffer queue;
acquiring the first multimedia data from the first buffer queue, decoding it, and storing the decoded first multimedia data in a second buffer queue;
and acquiring the decoded first multimedia data from the second buffer queue, and rendering it.
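The point of claim 8 is that the re-download is triggered at demux time, before the end-point data reaches the decoder, so the next pass of the stream is already queued when rendering hits the loop boundary. A minimal sketch of that pipeline; the `demux`/`decode`/`render`/`download_again` callables are placeholders for the real stages, not APIs named by the patent:

```python
import threading
from queue import Queue

class LoopPlayer:
    """Sketch of the two-queue loop pipeline of claim 8."""

    def __init__(self, loop_count, demux, decode, render, download_again):
        self.first_queue = Queue()    # demuxed, not-yet-decoded multimedia data
        self.second_queue = Queue()   # decoded multimedia data
        self.loop_count = loop_count  # the set number of times to play
        self.loops_started = 1
        self.demux = demux
        self.decode = decode
        self.render = render
        self.download_again = download_again

    def on_frame(self, frame, is_end_point):
        # Decapsulate the frame and store it in the first buffer queue.
        self.first_queue.put(self.demux(frame))
        # Before decoding: if this data belongs to the loop end point and
        # the set count is not yet reached, start re-downloading so the
        # next pass is buffered before playback reaches the boundary.
        if is_end_point and self.loops_started < self.loop_count:
            self.loops_started += 1
            threading.Thread(target=self.download_again, daemon=True).start()

    def decode_one(self):
        # Decode from the first queue into the second queue.
        self.second_queue.put(self.decode(self.first_queue.get()))

    def render_one(self):
        # Render decoded data taken from the second queue.
        self.render(self.second_queue.get())
```

Because the re-downloaded stream lands in the same first buffer queue, the decoder and renderer never drain, which is what makes the loop gapless from the renderer's point of view.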
9. The method of claim 8, further comprising:
before the multimedia data frame is acquired, receiving, through an application program interface, the end point of the video stream to be looped and the set number of times, both sent by an upper-layer application.
10. A computer-readable storage medium having stored thereon computer-executable instructions which, when invoked by a computer, cause the computer to perform the method of any one of claims 8 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110374967.6A CN115209208B (en) | 2021-04-08 | 2021-04-08 | Video cyclic playing processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115209208A true CN115209208A (en) | 2022-10-18 |
CN115209208B CN115209208B (en) | 2024-07-23 |
Family
ID=83571067
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110374967.6A Active CN115209208B (en) | 2021-04-08 | 2021-04-08 | Video cyclic playing processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115209208B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1472960A (en) * | 2003-06-13 | 2004-02-04 | 天津大学 | Method for controlling and adjusting for circulating play transfer flow |
CN102084652A (en) * | 2008-06-04 | 2011-06-01 | 高通股份有限公司 | Method and apparatus for selective caching of burst stream transmission |
CN102421034A (en) * | 2011-12-19 | 2012-04-18 | 中山爱科数字科技股份有限公司 | Video playing method formed by video live-broadcasting or video monitoring |
CN102724584A (en) * | 2012-06-18 | 2012-10-10 | Tcl集团股份有限公司 | Method and device for playing network videos online and smart television |
CN105187895A (en) * | 2015-09-17 | 2015-12-23 | 北京暴风科技股份有限公司 | Data-caching method and system for playing videos on mobile platform by means of hardware acceleration |
WO2017059450A1 (en) * | 2015-10-02 | 2017-04-06 | Twitter, Inc. | Gapless video looping |
WO2018195461A1 (en) * | 2017-04-21 | 2018-10-25 | Zenimax Media Inc. | Player input motion compensation by anticipating motion vectors |
CN109447048A (en) * | 2018-12-25 | 2019-03-08 | 苏州闪驰数控系统集成有限公司 | A kind of artificial intelligence early warning system |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116193185A (en) * | 2022-12-26 | 2023-05-30 | 北京仁光科技有限公司 | Method, device, apparatus and medium for multi-window playing of video stream |
CN118018795A (en) * | 2024-01-31 | 2024-05-10 | 书行科技(北京)有限公司 | Video playing method, device, electronic equipment and computer readable storage medium |
CN118018795B (en) * | 2024-01-31 | 2024-09-27 | 书行科技(北京)有限公司 | Video playing method, device, electronic equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN115209208B (en) | 2024-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111741372B (en) | Screen projection method for video call, display device and terminal device | |
WO2020098504A1 (en) | Video switching control method and display device | |
CN111601135B (en) | Method for synchronously injecting audio and video elementary streams and display equipment | |
CN112367543A (en) | Display device, mobile terminal, screen projection method and screen projection system | |
CN115209208B (en) | Video cyclic playing processing method and device | |
WO2021217435A1 (en) | Streaming media synchronization method and display device | |
CN112153447A (en) | Display device and sound and picture synchronous control method | |
CN113507638A (en) | Display device and screen projection method | |
CN112153406A (en) | Live broadcast data generation method, display equipment and server | |
CN111935510B (en) | Double-browser application loading method and display equipment | |
CN113473194B (en) | Intelligent device and response method | |
CN114095769A (en) | Live broadcast low-delay processing method of application-level player and display equipment | |
CN111741314A (en) | Video playing method and display equipment | |
CN115379277B (en) | VR panoramic video playing method and system based on IPTV service | |
CN113596546B (en) | Multi-stream program playing method and display device | |
CN113453063B (en) | Resource playing method and display equipment | |
CN115623275A (en) | Subtitle display method and display equipment | |
CN111343498B (en) | Mute control method and device and smart television | |
CN114040258A (en) | Display method and display equipment for switching digital television program from time shift to recording | |
CN115119030A (en) | Subtitle processing method and device | |
CN111629250A (en) | Display device and video playing method | |
CN115134644B (en) | Live broadcast data processing method and device | |
CN115174991B (en) | Display equipment and video playing method | |
CN113873335B (en) | Program time-shifting positioning playing method and display equipment | |
CN111601158B (en) | Method for optimizing audio track cutting of streaming media pipeline and display equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||