WO2022260423A1 - Audio or video playing method and apparatus - Google Patents

Audio or video playing method and apparatus Download PDF

Info

Publication number
WO2022260423A1
Authority
WO
WIPO (PCT)
Prior art keywords
time
clock
current
ctr
counted
Prior art date
Application number
PCT/KR2022/008070
Other languages
French (fr)
Inventor
Xianghu CHEN
Yong Zhang
Ye Sun
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Publication of WO2022260423A1 publication Critical patent/WO2022260423A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 - Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4305 - Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44012 - Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 - Assembly of content; Generation of multimedia applications
    • H04N 21/854 - Content authoring
    • H04N 21/8547 - Content authoring involving timestamps for synchronizing content

Definitions

  • the present disclosure relates to multimedia, and in particular relates to an audio or video playing method and apparatus.
  • the inventor finds that, when the ATSC3.0 standard is adopted and a UE uses an existing audio or video playing method, audio or video cannot be played smoothly. After careful analysis, the inventor finds that the reasons for this issue are as follows:
  • in the existing audio or video playing method, after a broadcast operator transmits audio or video signals to user equipment (UE), the UE is required to recover a clock of the same frequency as that of the broadcast operator, an initial value of the clock being the first program reference time received by the UE, and the UE processes data in a decoding stage based on the clock.
  • in an existing TV standard (e.g., the MPEG-2 International Standard ISO/IEC 13818), it is clearly specified that the audio or video source clock frequency is 27MHz.
  • thus, the UE may recover a clock having a frequency totally identical to that at the audio or video source.
  • however, in other systems such as ATSC3.0, the value of the audio or video source clock frequency is not specified.
  • in this case, the UE will not be able to recover a clock having the same frequency as that at the audio or video source, which results in a difference between a reference time at the UE and a coding clock at the audio or video source.
  • this difference, after a few hours or an even longer period of continuous accumulation, will eventually lead to starvation of data at the UE (that is, since the reference clock is faster than the coding clock at the audio or video source, the playing progress is faster than the generation of data at the source, and the UE needs to wait for the arrival of audio or video data during playback) or a backlog of data at the UE (that is, since the reference clock is slower than the coding clock at the source, the playing progress is slower than the generation of data at the source, and audio or video data piles up at the UE), and causes the audio or the video to be unable to be played smoothly.
  • an object of the present disclosure is to provide an audio or video playing method and apparatus, to enable audio or video to be smoothly played.
  • An audio or video playing method includes:
  • in a process of audio or video playback, performing, by a playback terminal, rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame, and performing synchronization adjustment control on a dedicated clock of the playback terminal based on a source time, so as to match the dedicated clock with the source time; in which the elapsed playback time is obtained based on the dedicated clock.
  • performing the rendering synchronization control includes:
  • Preferably, obtaining the elapsed playback time includes:
  • the first initial time is a counted time obtained when a first data frame of the audio or video playback arrived at the renderer.
  • performing the synchronization adjustment control on the dedicated clock of the playback terminal includes:
  • if the time error Timeerror is larger than a preset maximum error threshold, then reducing an output frequency of the dedicated clock according to a preset first frequency refining step size; in which the preset maximum error threshold is larger than zero;
  • if the time error Timeerror is smaller than a preset minimum error threshold, then increasing the output frequency of the dedicated clock according to a preset second frequency refining step size; in which the preset minimum error threshold is smaller than zero.
  • obtaining the current counted time of the dedicated clock includes:
  • calculating a clock accumulated count Wctr_accu according to Wctr_accu=Wctr_curr+N×Wctr_max; in which N is the current number of rotations; and updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr; in which Wctr_max is a maximum clock counted value of the dedicated clock; and
  • Embodiments of the present disclosure provide an audio or video playing apparatus, including: a dedicated clock, a rendering control unit and a clock synchronization unit; in which
  • the rendering control unit is configured to, in a process of audio or video playback, perform rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame, in which the elapsed playback time is obtained based on the dedicated clock of a playback terminal;
  • the clock synchronization unit is configured to, in the process of audio or video playback, perform synchronization adjustment control on the dedicated clock based on a source time, so as to match the dedicated clock with the source time.
  • the rendering control unit is configured to perform the rendering synchronization control, including:
  • the rendering control unit is configured to obtain the elapsed playback time, including:
  • the first initial time is a counted time obtained when a first data frame of the audio or video playback arrived at the renderer.
  • the clock synchronization unit is configured to perform the synchronization adjustment control on the dedicated clock of the playback terminal, including:
  • if the time error Timeerror is larger than a preset maximum error threshold, then reducing an output frequency of the dedicated clock according to a preset first frequency refining step size; in which the preset maximum error threshold is larger than zero;
  • if the time error Timeerror is smaller than a preset minimum error threshold, then increasing the output frequency of the dedicated clock according to a preset second frequency refining step size; in which the preset minimum error threshold is smaller than zero.
  • the apparatus further includes a dedicated timing unit;
  • the timing unit is configured to determine the current counted time of the dedicated clock according to a time obtaining instruction from the rendering control unit or the clock synchronization unit, and return the current counted time of the dedicated clock to the rendering control unit or the clock synchronization unit; in which determining the current counted time of the dedicated clock includes:
  • calculating a clock accumulated count Wctr_accu according to Wctr_accu=Wctr_curr+N×Wctr_max; in which N is the current number of rotations; and updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr; in which Wctr_max is a maximum clock counted value of the dedicated clock; and
  • Embodiments of the present disclosure further provide an audio or video playing device, including a processor and a memory;
  • the memory stores an application program executable by the processor, and configured to cause the processor to perform the audio or video playing method as mentioned above.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, storing computer-readable instructions which are configured to perform the audio or video playing method as mentioned above.
  • the synchronization adjustment control is performed on the dedicated clock of the playback terminal based on the source time, which can ensure that the dedicated clock of the playback terminal matches the coding clock at the source in the audio or video playback process. Further in the rendering stage, the rendering synchronization control is performed based on the representation timestamp of the to-be-rendered data frame and the elapsed playback time obtained based on the dedicated clock of the playback terminal.
  • performing synchronization in the rendering stage can meet the needs of audio or video playback with a high real-time requirement such as live broadcasting.
  • performing the synchronization adjustment control on the dedicated clock of the playback terminal based on the source time can realize the decoupling of the clock at the source and the clock at the playback terminal.
  • the rendering stage has a clock precision requirement lower than a clock precision requirement for the coding clock at the source, which allows a tolerable error in the audio or video rendering, and therefore, the clock frequency at the playback terminal does not need to be as high as the source clock (e.g., up to 1GHz). With the decrease of the clock frequency, the power consumption is reduced too, and thus the implementation cost and power consumption of the dedicated clock at the playback terminal can be reduced.
  • FIG.1 is a schematic diagram of a flow of an audio or video playing method according to an embodiment of the present disclosure
  • FIG.2 is a schematic diagram of a flow of a method for obtaining a counted time of a dedicated clock according to an embodiment of the present disclosure
  • FIG.3 is a schematic diagram of a flow of a method for controlling synchronization adjustment of a dedicated clock at a playback terminal according to an embodiment of the present disclosure.
  • FIG.4 is a diagram of a structure of an audio or video playing apparatus according to an embodiment of the present disclosure.
  • the present disclosure performs rendering control in a rendering stage of audio or video playback based on a dedicated clock of UE, and periodically performs synchronization adjustment on the dedicated clock based on source times of a source during the playback process, which enables the speed of playing audio or video at the receiver to be consistent with the speed of generating the audio or the video, so as to achieve the effect of smooth audio or video playback.
  • Fig. 1 is a schematic diagram of a flow of an audio or video playing method according to an embodiment of the present disclosure. As shown in Fig.1, the embodiment includes the following:
  • Step 101, in a process of audio or video playback, a playback terminal performs rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame.
  • the elapsed playback time is obtained based on a dedicated clock of the playback terminal.
  • after the playback terminal starts the audio or video playback procedure, it performs rendering synchronization control on each data frame which enters into a rendering stage after being de-multiplexed and processed by an audio or video decoder, to achieve the effect of smooth playback.
  • the present disclosure adopts synchronization control in the rendering stage, which on the one hand can satisfy the real-time property of audio or video playback, and on the other hand, can achieve the effect of smooth audio or video playback.
  • the elapsed playback time indicates how long the corresponding type of data has been playing, recorded based on the dedicated clock, after a first audio frame or a first video frame arrives at the renderer of the playback terminal and is rendered.
  • the elapsed playback time may be obtained as follows:
  • the first initial time is a counted time obtained when the first data frame of the same type as the to-be-rendered data frame arrives at the renderer in the audio or video playback process.
  • an initial value of the elapsed playback time is zero. Since the counted times of the dedicated clock are progressively increasing time values, the elapsed playback time is a time increasing from zero.
  • the counted time may be obtained as follows:
  • Step a1, obtain a current clock counted value Wctr_curr from the dedicated clock of the playback terminal.
  • the dedicated clock generates pulses through hardware and counts the pulses, and divides a counted value by a frequency value to obtain a counted time.
  • a register with a limited number of bits is used for the clock hardware to store clock counts. Assuming that the number of bits of the register is M, and the clock frequency is ClockFreq, then the maximum time that can be indicated is (2^M - 1)/ClockFreq. When the maximum value is reached, it will rotate back and restart from 0. For example, for a register of 33 bits and 90KHz, the maximum value is (2^33 - 1)/90000, approximating 95,443 seconds or 26.5 hours. In order to make the counted times of the dedicated clock monotonously increase, the above-mentioned rotating back situation needs to be handled when counting the times, that is, the number of rotations needs to be considered when counting the times.
  • Step a2, if currently a previous clock counted value Wctr_pre is a preset initial value, then update the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr.
  • This step is used to update the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr when a counted time is obtained for the first time.
  • the initial value of Wctr_pre may be set by a person skilled in the art according to actual needs, e.g., zero, but not limited thereto.
  • Step a3, if Wctr_curr<Wctr_pre is satisfied, then add a current number of rotations by one.
  • Step a4, calculate a clock accumulated count Wctr_accu according to Wctr_accu=Wctr_curr+N×Wctr_max; N is the current number of rotations; and the previous clock counted value Wctr_pre is updated to the current clock counted value Wctr_curr; Wctr_max is a maximum clock counted value of the dedicated clock.
  • Step a5, divide the clock accumulated count Wctr_accu by a current clock frequency of the dedicated clock to obtain the current counted time of the dedicated clock.
  • the presentation timestamp (PTS) of a data frame may be calculated using an existing method.
  • a first PTS of a first video frame is 0, and PTSs of subsequent video frames are accumulated using Frameduration.
  • the Framepts of the first video frame is 0ms
  • the Framepts of the second video frame is 16.667ms
  • the Framepts of the Nth frame is (N-1)×16.667ms.
  • the Framepts is a time value increasing from 0. In this way, by comparing the presentation timestamp Framepts and the elapsed playback time which also increases from zero, whether the time when a to-be-rendered data frame arrives at the renderer matches the playback speed can be known.
  • in step 101, the presentation timestamp of the to-be-rendered data frame and the elapsed playback time of a type of data corresponding to the to-be-rendered data frame are compared, and the rendering synchronization control is performed according to the compared result.
  • the rendering synchronization control may be performed as follows:
  • using the above methods, the smooth playback of audio frame data and the smooth playback of video frame data can be realized respectively. If Framepts≤STCelapsed≤Framepts+Dmax is satisfied, it indicates that the current time meets the requirement of smooth playback, and that it is suitable to render the to-be-rendered data frame. Therefore, the to-be-rendered data frame is rendered immediately.
  • Dmax satisfies 0≤Dmax≤Frameduration
  • a suitable value of Dmax may be configured by a person skilled in the art according to an actual smoothness requirement.
  • Dmax=Frameduration may be configured.
  • in the above method, the time when Framepts≤STCelapsed≤Framepts+Dmax is satisfied is regarded as timing suitable for immediate rendering, so that a certain tolerable error between the dedicated clock and the source clock is allowed, and it is unnecessary for the precision of the dedicated clock to be identical to the precision of the source clock; with a precision of milliseconds or above, i.e., a clock frequency larger than or equal to 1KHz, smooth audio or video playback can be realized, which reduces the requirement for the precision of the dedicated clock, further reduces the power consumption and the production cost of the dedicated clock, and improves the applicability of the method.
  • Step 102, in the process of audio or video playback, the playback terminal performs synchronization adjustment control on the dedicated clock based on a source time, so as to match the dedicated clock with the source time.
  • the synchronization adjustment control is performed on the dedicated clock, to match the dedicated clock with the source time, which realizes matching between the dedicated clock and the coding clock at the audio or video source.
  • the dedicated clock of the playback terminal is a reference clock synchronized in the rendering stage, and each data frame has presentation duration, thus, the playback times of data frames have a tolerable floating range.
  • the dedicated clock of the playback terminal is unnecessary to have a precision requirement as high as that of the coding clock at the source, as long as a time error between the dedicated clock and the source clock does not affect the viewing experience of the human eye.
  • the playback terminal and the source clock can be decoupled, which in turn, can reduce the cost of realizing the dedicated clock of the playback terminal.
  • the synchronization adjustment control on the dedicated clock of the playback terminal may be performed periodically as follows:
  • Step 1021, when a preset synchronization adjustment time is reached, the playback terminal obtains a current source time SrcTimecurr and a current counted time STCcurr of the dedicated clock.
  • Both the counted time STCcurr and the source time SrcTimecurr are monotone increasing values, so that relative times calculated respectively based on the two parameters in the subsequent steps are progressively increasing values too.
  • a current source time SrcTimecurr and a current counted time of the dedicated clock need to be obtained at the same time, so as to determine a time difference between the current dedicated clock and the audio or video source clock, and further to determine whether to perform synchronization adjustment on the dedicated clock based on the time difference, so as to make the time of the dedicated clock match with the source time.
  • a person skilled in the art may configure a synchronization adjustment time according to an actual periodical adjusting policy, for example, determining the synchronization adjustment time according to a preset synchronization adjustment period.
  • for the synchronization adjustment period, if it is configured to be too long, then it is unable to meet the synchronization adjustment timeliness requirement, and if it is configured to be too short, then it will result in too much control cost.
  • the period, e.g., 10s, may be adaptively configured by a person skilled in the art according to a policy that meets the synchronization adjustment timeliness requirement and that reduces the control cost as much as possible, in consideration of the speed of accumulation of the error time between the dedicated clock and the source clock.
  • the present disclosure is not limited to this.
  • the current counted time of the dedicated clock may be obtained in the same way as that in step 101, which will not be elaborated herein.
  • source times are periodically sent from the source to the playback terminal.
  • the playback terminal may obtain the source times by parsing source signals.
  • Step 1022, calculate a difference between the counted time STCcurr and a second initial time STCinit to obtain a first relative time STCdiff; and calculate a difference between the source time SrcTimecurr and a third initial time SrcTimeinit to obtain a second relative time SrcTimediff.
  • the second initial time STCinit is a counted time obtained when a first synchronization adjustment time is reached; and the third initial time SrcTimeinit is a source time obtained when the first synchronization adjustment time is reached.
  • the first relative time STCdiff indicates a time interval between the counted time of the dedicated clock currently obtained and a counted time obtained when the first synchronization adjustment time is reached.
  • the second relative time SrcTimediff indicates a time interval between the source time currently obtained and the source time obtained when the first synchronization adjustment time is reached. The difference between the first relative time and the second relative time can reflect whether the dedicated clock matches the source clock.
  • Step 1023, calculate a difference between the first relative time STCdiff and the second relative time SrcTimediff to obtain a time error Timeerror between the dedicated clock and the audio or video source clock.
  • if the dedicated clock matches the source clock, the difference between the first relative time and the second relative time (i.e., the time error Timeerror) will be very small, or otherwise, the difference will be quite large.
  • Step 1024, if the time error Timeerror is larger than a preset maximum error threshold, then reduce an output frequency of the dedicated clock according to a preset first frequency refining step size; the preset maximum error threshold is larger than zero; and
  • if the time error Timeerror is smaller than a preset minimum error threshold, then increase the output frequency of the dedicated clock according to a preset second frequency refining step size; the preset minimum error threshold is smaller than zero.
  • if the time error Timeerror is larger than the preset maximum error threshold, and the maximum error threshold is larger than zero, then it indicates that the dedicated clock is faster than the source clock. In this case, it is necessary to reduce the output frequency of the dedicated clock so as to slow down the time counting of the dedicated clock, so that the dedicated clock matches the source clock. If the time error Timeerror is smaller than the preset minimum error threshold, and the minimum error threshold is smaller than zero, then it indicates that the dedicated clock is slower than the source clock. In this case, it is necessary to increase the output frequency of the dedicated clock so as to speed up the time counting of the dedicated clock, so that the dedicated clock matches the source clock. If the time error Timeerror is between the minimum error threshold and the maximum error threshold, then it indicates that the dedicated clock matches the source clock, and it is unnecessary to adjust the frequency of the dedicated clock.
  • the first frequency refining step size is used to control the magnitude of reduction in the output frequency of the dedicated clock each time
  • the second frequency refining step size is used to control the magnitude of an increase in the output frequency of the dedicated clock each time.
  • a person skilled in the art may set the values of the first frequency refining step size and the second frequency refining step size according to the actual requirement of convergence speed of the adjusting.
  • a person skilled in the art may set the maximum error threshold and the minimum error threshold according to a matching degree requirement of the dedicated clock and the source clock in an implementation.
  • the absolute values of the maximum error threshold and the minimum error threshold are the same, but the present disclosure is not limited thereto.
  • performing the synchronization adjustment control on the dedicated clock of the playback terminal based on the source time can ensure that the dedicated clock of the playback terminal matches the coding clock at the source, and in the rendering stage, the rendering synchronization control is performed based on the representation timestamp of the to-be-rendered data frame and the elapsed playback time of a type of data corresponding to the to-be-rendered data frame obtained based on the dedicated clock of the playback terminal.
  • performing synchronization in the rendering stage can meet the needs of audio or video playback with a high real-time requirement such as live broadcasting.
  • performing the synchronization adjustment control on the dedicated clock of the playback terminal based on the source time can realize the decoupling of the clock at the source and the clock at the playback terminal.
  • the rendering stage has a clock precision requirement lower than a clock precision requirement for the coding clock at the source, which allows a tolerable error in the audio or video rendering, and therefore, the clock frequency at the playback terminal does not need to be as high as the source clock, and thus the implementation cost and power consumption of the dedicated clock at the playback terminal can be reduced.
  • the embodiments of the present disclosure further provide an audio or video playing apparatus.
  • the apparatus at least includes: a rendering control unit 401, a dedicated clock 402, and a clock synchronization unit 403.
  • the rendering control unit 401 is configured to, in a process of audio or video playback, perform rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame, in which the elapsed playback time is obtained based on the dedicated clock of a playback terminal.
  • the clock synchronization unit 403 is configured to, in the process of audio or video playback, perform synchronization adjustment control on the dedicated clock based on a source time, so as to match the dedicated clock with the source time.
  • the rendering control unit 401 is configured to perform the rendering synchronization control, including:
  • the rendering control unit 401 is configured to obtain the elapsed playback time, including:
  • the first initial time is a counted time obtained when a first data frame of the audio or video playback arrived at the renderer.
  • the clock synchronization unit 403 is configured to perform the synchronization adjustment control on the dedicated clock, including:
  • if the time error Timeerror is larger than a preset maximum error threshold, then reducing an output frequency of the dedicated clock according to a preset first frequency refining step size; in which the preset maximum error threshold is larger than zero;
  • if the time error Timeerror is smaller than a preset minimum error threshold, then increasing the output frequency of the dedicated clock according to a preset second frequency refining step size; in which the preset minimum error threshold is smaller than zero.
  • the apparatus further includes a dedicated timing unit 404.
  • the timing unit 404 is configured to determine the current counted time of the dedicated clock according to a time obtaining instruction from the rendering control unit or the clock synchronization unit, and return the current counted time of the dedicated clock to the rendering control unit or the clock synchronization unit; in which determining the current counted time of the dedicated clock includes:
  • calculating a clock accumulated count Wctr_accu according to Wctr_accu=Wctr_curr+N×Wctr_max; in which N is the current number of rotations; and updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr; in which Wctr_max is a maximum clock counted value of the dedicated clock; and
  • the embodiments of the present disclosure further provide an audio or video playing device, including a processor and a memory;
  • the memory stores an application program executable by the processor, and configured to cause the processor to perform the audio or video playing method described above.
  • the memory may be specifically implemented as a variety of storage media such as electrically erasable programmable read-only memory (EEPROM), flash memory, and programmable read-only memory (PROM).
  • the processor may be implemented to include one or more central processing units or one or more field programmable gate arrays, where the field programmable gate array(s) integrate(s) one or more central processing unit cores.
  • the central processing unit or central processing unit core may be implemented as a CPU or micro control unit (MCU).
  • a hardware module may include specially designed permanent circuits or logic devices (e.g., dedicated processors, such as FPGAs or ASICs) to carry out specific operations.
  • a hardware module may also include programmable logic devices or circuits temporarily configured by software (for example, including general-purpose processors or other programmable processors) to perform specific operations. Whether adopting the mechanical way or adopting the dedicated permanent circuits or adopting the temporarily configured circuits (e.g., configured by software) to implement the hardware module can be determined according to cost and time considerations.
  • the embodiment of the present disclosure further provides a computer-readable storage medium on which computer-readable instructions are stored, and the computer-readable instructions are configured to perform the audio or video playing method mentioned above.
  • a system or an apparatus equipped with a storage medium may be provided, and on the storage medium, software program code for realizing the function of any one of the above-mentioned embodiments is stored, and a computer (or a CPU or an MPU) of the system or the apparatus is able to read and execute the program code stored on the storage medium.
  • an operating system or the like operating on the computer may also be used to carry out part or all of the actual operations through instructions based on the program code.
  • Implementations of storage media used to provide the program code include floppy disks, hard disks, magneto-optical disks, optical disks (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tape, non-transitory memory card and ROM.
  • the program code may be downloaded from a server computer or a cloud via a communication network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present disclosure provides an audio or video playing method and apparatus, in which the method includes: in a process of audio or video playback, performing, by a playback terminal, rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame, and performing synchronization adjustment control on a dedicated clock of the playback terminal based on a source time, so as to match the dedicated clock with the source time; in which the elapsed playback time is obtained based on the dedicated clock. In the present disclosure, the rendering synchronization control is performed based on the dedicated clock of the playback terminal, and the synchronization adjustment control is periodically performed on the dedicated clock based on source times, which enables the playback terminal to play audio or video smoothly.

Description

AUDIO OR VIDEO PLAYING METHOD AND APPARATUS
The present disclosure relates to multimedia, and in particular relates to an audio or video playing method and apparatus.
With more and more TV manufacturers supporting the next-generation TV standard, ATSC3.0 (Advanced Television Systems Committee 3.0), TV programs having a high frame rate and 4K/8K high resolution are becoming the mainstream of future TVs.
In the process of developing the present invention, the inventor finds that, when the ATSC3.0 standard is adopted and a UE uses an existing audio or video playing method, audio or video cannot be played smoothly. After careful analysis, the inventor finds that the reasons for this issue are as follows:
In the existing audio or video playing method, after a broadcast operator transmits audio or video signals to user equipment (UE), the UE is required to recover a clock of the same frequency as that of the broadcast operator, an initial value of the clock being the first program reference time received by the UE, and the UE processes data in a decoding stage based on the clock. In an existing TV standard (e.g., the MPEG-2 International Standard ISO/IEC 13818), it is clearly specified that the audio or video source clock frequency is 27MHz. Thus, the UE may recover a clock having a frequency totally identical to that at the audio or video source. However, in other systems such as ATSC3.0, the value of the audio or video source clock frequency is not specified. In this case, the UE will not be able to recover a clock having the same frequency as that at the audio or video source, which results in a difference between the reference time at the UE and the coding clock at the audio or video source. This difference, after a few hours or an even longer period of continuous accumulation, will eventually lead to starvation of data at the UE (that is, since the reference clock is faster than the coding clock at the audio or video source, the playing progress is faster than the generation of data at the source, and the UE needs to wait for the arrival of audio or video data during playback) or a backlog of data at the UE (that is, since the reference clock is slower than the coding clock at the source, the playing progress is slower than the generation of data at the source, and audio or video data piles up at the UE), and causes the audio or the video to be unable to be played smoothly.
In view of this, an object of the present disclosure is to provide an audio or video playing method and apparatus, to enable audio or video to be smoothly played.
To achieve the above object, the technical solutions provided by the embodiments of the present disclosure are as follows.
An audio or video playing method includes:
in a process of audio or video playback, performing, by a playback terminal, rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame, and performing synchronization adjustment control on a dedicated clock of the playback terminal based on a source time, so as to match the dedicated clock with the source time; in which the elapsed playback time is obtained based on the dedicated clock.
Preferably, performing the rendering synchronization control includes:
when the to-be-rendered data frame arrives at a renderer, if Framepts≤STCelapsed≤Framepts+Dmax is satisfied, then performing rendering based on the current to-be-rendered data frame; in which STCelapsed is currently the elapsed playback time, Framepts is the representation timestamp of the current to-be-rendered data frame, and Dmax is a preset maximum allowable delayed representation time; 0≤Dmax≤Frameduration; in which Frameduration is a representation duration of a single data frame;
if STCelapsed<Framepts is satisfied, then after waiting for a period of time △t, performing the rendering based on the current to-be-rendered data frame, in which △t=Framepts-STCelapsed; and
if STCelapsed>Framepts+ Dmax is satisfied, then discarding the current to-be-rendered data frame.
Preferably, obtaining the elapsed playback time includes:
when the to-be-rendered data frame arrives at a renderer, obtaining a current counted time of the dedicated clock, in which counted times of the dedicated clock are monotone increasing values; and
calculating a difference between the counted time and a first initial time to obtain the elapsed playback time; in which the first initial time is a counted time obtained when a first data frame of the audio or video playback arrived at the renderer.
Preferably, performing the synchronization adjustment control on the dedicated clock of the playback terminal includes:
when a preset synchronization adjustment time is reached, obtaining, by the playback terminal, a current source time SrcTimecurr and a current counted time STCcurr of the dedicated clock, in which both the counted time STCcurr and the source time SrcTimecurr are monotone increasing values;
calculating a difference between the counted time STCcurr and a second initial time STCinit to obtain a first relative time STCdiff; and calculating a difference between the source time SrcTimecurr and a third initial time SrcTimeinit to obtain a second relative time SrcTimediff; in which the second initial time STCinit is a counted time obtained when a first synchronization adjustment time is reached; and the third initial time SrcTimeinit is a source time obtained when the first synchronization adjustment time is reached;
calculating a difference between the first relative time STCdiff and the second relative time SrcTimediff to obtain a time error Timeerror between the dedicated clock and the audio or video source clock;
if the time error Timeerror is larger than a preset maximum error threshold, then reducing an output frequency of the dedicated clock according to a preset first frequency refining step size; in which the preset maximum error threshold is larger than zero; and
if the time error Timeerror is smaller than a preset minimum error threshold, then increasing the output frequency of the dedicated clock according to a preset second frequency refining step size; in which the preset minimum error threshold is smaller than zero.
Preferably, obtaining the current counted time of the dedicated clock includes:
obtaining a current clock counted value Wctr_curr from the dedicated clock;
if currently a previous clock counted value Wctr_pre is a preset initial value, then updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr;
if Wctr_curr<Wctr_pre is satisfied, then adding a current number of rotations by one; in which an initial value of the number of rotations is 0;
calculating a clock accumulated count Wctr_accu according to Wctr_accu=Wctr_curr+N×Wctr_max; in which N is the current number of rotations; and updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr; in which Wctr_max is a maximum clock counted value of the dedicated clock; and
dividing the clock accumulated count Wctr_accu by a current clock frequency of the dedicated clock to obtain the current counted time of the dedicated clock.
Embodiments of the present disclosure provide an audio or video playing apparatus, including: a dedicated clock, a rendering control unit and a clock synchronization unit; in which
the rendering control unit is configured to, in a process of audio or video playback, perform rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame, in which the elapsed playback time is obtained based on the dedicated clock of a playback terminal; and
the clock synchronization unit is configured to, in the process of audio or video playback, perform synchronization adjustment control on the dedicated clock based on a source time, so as to match the dedicated clock with the source time.
Preferably, the rendering control unit is configured to perform the rendering synchronization control, including:
when the to-be-rendered data frame arrives at a renderer, if Framepts≤STCelapsed≤Framepts+Dmax is satisfied, then performing rendering based on the current to-be-rendered data frame; in which STCelapsed is currently the elapsed playback time, Framepts is the representation timestamp of the current to-be-rendered data frame, and Dmax is a preset maximum allowable delayed representation time; 0≤Dmax≤Frameduration; in which Frameduration is a representation duration of a single data frame;
if STCelapsed<Framepts is satisfied, then after waiting for a period of time △t, performing the rendering based on the current to-be-rendered data frame, in which △t=Framepts-STCelapsed; and
if STCelapsed>Framepts+ Dmax is satisfied, then discarding the current to-be-rendered data frame.
Preferably, the rendering control unit is configured to obtain the elapsed playback time, including:
when the to-be-rendered data frame arrives at a renderer, obtaining a current counted time of the dedicated clock, in which counted times of the dedicated clock are monotone increasing values; and
calculating a difference between the counted time and a first initial time to obtain the elapsed playback time; in which the first initial time is a counted time obtained when a first data frame of the audio or video playback arrived at the renderer.
Preferably, the clock synchronization unit is configured to perform the synchronization adjustment control on the dedicated clock of the playback terminal, including:
when a preset synchronization adjustment time is reached, obtaining, by the playback terminal, a current source time SrcTimecurr and a current counted time STCcurr of the dedicated clock, in which both the counted time STCcurr and the source time SrcTimecurr are monotone increasing values;
calculating a difference between the counted time STCcurr and a second initial time STCinit to obtain a first relative time STCdiff; and calculating a difference between the source time SrcTimecurr and a third initial time SrcTimeinit to obtain a second relative time SrcTimediff; in which the second initial time STCinit is a counted time obtained when a first synchronization adjustment time is reached; and the third initial time SrcTimeinit is a source time obtained when the first synchronization adjustment time is reached;
calculating a difference between the first relative time STCdiff and the second relative time SrcTimediff to obtain a time error Timeerror between the dedicated clock and the audio or video source clock;
if the time error Timeerror is larger than a preset maximum error threshold, then reducing an output frequency of the dedicated clock according to a preset first frequency refining step size; in which the preset maximum error threshold is larger than zero; and
if the time error Timeerror is smaller than a preset minimum error threshold, then increasing the output frequency of the dedicated clock according to a preset second frequency refining step size; in which the preset minimum error threshold is smaller than zero.
Preferably, the apparatus further includes a dedicated timing unit;
in which the timing unit is configured to determine the current counted time of the dedicated clock according to a time obtaining instruction from the rendering control unit or the clock synchronization unit, and return the current counted time of the dedicated clock to the rendering control unit or the clock synchronization unit; in which determining the current counted time of the dedicated clock includes:
obtaining the current clock counted value Wctr_curr from the dedicated clock;
if currently a previous clock counted value Wctr_pre is a preset initial value, then updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr;
if Wctr_curr<Wctr_pre is satisfied, then adding a current number of rotations by one; in which an initial value of the number of rotations is 0;
calculating a clock accumulated count Wctr_accu according to Wctr_accu=Wctr_curr+N×Wctr_max; in which N is the current number of rotations; and updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr; in which Wctr_max is a maximum clock counted value of the dedicated clock; and
dividing the clock accumulated count Wctr_accu by a current clock frequency of the dedicated clock to obtain the current counted time of the dedicated clock.
Embodiments of the present disclosure further provide an audio or video playing device, including a processor and a memory;
in which the memory stores an application program executable by the processor, and configured to cause the processor to perform the audio or video playing method as mentioned above.
Embodiments of the present disclosure further provide a computer-readable storage medium, storing computer-readable instructions which are configured to perform the audio or video playing method as mentioned above.
As can be seen from the above technical schemes, according to the audio or video playing schemes provided by the embodiments of the present disclosure, in the audio or video playback process, the synchronization adjustment control is performed on the dedicated clock of the playback terminal based on the source time, which can ensure that the dedicated clock of the playback terminal matches the coding clock at the source in the audio or video playback process. Further in the rendering stage, the rendering synchronization control is performed based on the representation timestamp of the to-be-rendered data frame and the elapsed playback time obtained based on the dedicated clock of the playback terminal. In this way, on the one hand, using the dedicated clock which matches the source time achieves smooth playback, and on the other hand, performing synchronization in the rendering stage can meet the needs of audio or video playback with a high real-time requirement such as live broadcasting. In addition, performing the synchronization adjustment control on the dedicated clock of the playback terminal based on the source time can realize the decoupling of the clock at the source and the clock at the playback terminal. Further, the rendering stage has a clock precision requirement lower than a clock precision requirement for the coding clock at the source, which allows a tolerable error in the audio or video rendering, and therefore, the clock frequency at the playback terminal does not need to be as high as the source clock (e.g., up to 1GHz). With the decrease of the clock frequency, the power consumption is reduced too, and thus the implementation cost and power consumption of the dedicated clock at the playback terminal can be reduced.
FIG.1 is a schematic diagram of a flow of an audio or video playing method according to an embodiment of the present disclosure;
FIG.2 is a schematic diagram of a flow of a method for obtaining a counted time of a dedicated clock according to an embodiment of the present disclosure;
FIG.3 is a schematic diagram of a flow of a method for controlling synchronization adjustment of a dedicated clock at a playback terminal according to an embodiment of the present disclosure; and
FIG.4 is a diagram of a structure of an audio or video playing apparatus according to an embodiment of the present disclosure.
To make the objects, technical schemes and advantages of the present disclosure clearer, the present disclosure will be described in detail hereinafter with reference to accompanying drawings and detailed embodiments.
Considering that many programs broadcast by broadcast operators have a strong real-time property, the conventional way of performing synchronization by means of buffering in a decoding stage will, on the one hand, lead to a long latency of the audio or video playback, so that the real-time property of audio or video playback cannot be guaranteed, and on the other hand, the smoothness of audio or video playback cannot be guaranteed. For this reason, the present disclosure performs rendering control in a rendering stage of audio or video playback based on a dedicated clock of UE, and periodically performs synchronization adjustment on the dedicated clock based on source times of a source during the playback process, which enables the speed of playing audio or video at the receiver to be consistent with the speed of generating the audio or the video, so as to achieve the effect of smooth audio or video playback.
Fig. 1 is a schematic diagram of a flow of an audio or video playing method according to an embodiment of the present disclosure. As shown in Fig.1, the embodiment includes the following:
Step 101, in a process of audio or video playback, a playback terminal performs rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame. The elapsed playback time is obtained based on a dedicated clock of the playback terminal.
In this step, after the playback terminal starts the audio or video playback procedure, it performs rendering synchronization control on each data frame which enters into a rendering stage after being de-multiplexed and processed by an audio or video decoder, to achieve the effect of smooth playback.
Herein, compared to the traditional buffer synchronizing method in the decoding stage, the present disclosure adopts synchronization control in the rendering stage, which on the one hand can satisfy the real-time property of audio or video playback, and on the other hand, can achieve the effect of smooth audio or video playback.
The elapsed playback time indicates how long the corresponding type of data has been playing, recorded based on the dedicated clock, after a first audio frame or a first video frame arrives at the renderer of the playback terminal and is rendered.
Preferably, in a detailed implementation, the elapsed playback time may be obtained as follows:
When the to-be-rendered data frame arrives at the renderer, obtain a current counted time of the dedicated clock, in which counted times of the dedicated clock are monotone increasing values; and calculate a difference between the counted time and a first initial time to obtain the elapsed playback time.
The first initial time is a counted time obtained when the first data frame of the same type as the to-be-rendered data frame arrives at the renderer in the audio or video playback process. Thus, an initial value of the elapsed playback time is zero. Since the counted times of the dedicated clock are progressively increasing time values, the elapsed playback time is a time increasing from zero.
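As a minimal sketch of this bookkeeping (the class name, the clock interface and the per-type keys are illustrative assumptions, not part of the disclosure), the elapsed playback time could be maintained per data type as follows:

```python
class ElapsedPlaybackTime:
    """Sketch: elapsed playback time per data type, derived from the dedicated clock."""

    def __init__(self, dedicated_clock):
        self.clock = dedicated_clock      # assumed to expose counted_time(), monotone increasing
        self.first_initial_time = {}      # per data type: counted time when its first frame arrived

    def get(self, data_type):
        """Return the elapsed playback time for 'audio' or 'video' frames, starting from zero."""
        now = self.clock.counted_time()
        if data_type not in self.first_initial_time:
            # First data frame of this type arriving at the renderer: record the first initial time.
            self.first_initial_time[data_type] = now
        return now - self.first_initial_time[data_type]
```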
Preferably, in an implementation, to accurately obtain a counted time of the dedicated clock, as shown in Fig.2, the counted time may be obtained as follows:
Step a1, obtain a current clock counted value Wctr_curr from the dedicated clock of the playback terminal.
It is to be specified that, in this step, the dedicated clock generates pulses through hardware and counts the pulses, and divides a counted value by a frequency value to obtain a counted time. Usually a register with a limited number of bits is used for the clock hardware to store clock counts. Assuming that the number of bits of the register is M, and the clock frequency is ClockFreq, then the maximum time that can be indicated is (2^M - 1)/ClockFreq. When the maximum value is reached, it will rotate back and restart from 0. For example, for a register of 33 bits and 90KHz, the maximum value is (2^33 - 1)/90000, approximating 95,443 seconds or 26.5 hours. In order to make the counted times of the dedicated clock monotonously increase, the above-mentioned rotating back situation needs to be handled when counting the times, that is, the number of rotations needs to be considered when counting the times.
Step a2, if currently a previous clock counted value Wctr_pre is a preset initial value, then update the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr.
This step is used to update the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr when a counted time is obtained for the first time.
Specifically, the initial value of Wctr_pre may be set by a person skilled in the art according to actual needs, e.g., zero, but not limited thereto.
Step a3, if Wctr_curr<Wctr_pre is satisfied, then add a current number of rotations by one.
An initial value of the number of rotations is 0.
Herein, if Wctr_curr<Wctr_pre, then it indicates that a count rotation has occurred currently, and the number of rotations should be added by one.
Step a4, calculate a clock accumulated count Wctr_accu according to Wctr_accu=Wctr_curr+N×Wctr_max.
N is the current number of rotations; and the previous clock counted value Wctr_pre is updated to the current clock counted value Wctr_curr; Wctr_max is a maximum clock counted value of the dedicated clock.
Herein, when calculating the clock accumulated count Wctr_accu, the current number of rotations is considered, and therefore it is guaranteed that the counted times of the dedicated clock are monotone increasing.
Step a5, divide the clock accumulated count Wctr_accu by a current clock frequency of the dedicated clock to obtain the current counted time of the dedicated clock.
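Steps a1 to a5 can be condensed into the following sketch; the counter read-out callback, the 33-bit width and the 90KHz frequency are only the example figures used above, not mandated values:

```python
class DedicatedClockReader:
    """Sketch of steps a1-a5: turn a wrapping hardware counter into a monotone counted time."""

    def __init__(self, read_counter, clock_freq_hz=90_000, counter_bits=33):
        self.read_counter = read_counter            # assumed callable returning Wctr_curr
        self.clock_freq_hz = clock_freq_hz
        self.w_ctr_max = (1 << counter_bits) - 1    # maximum clock counted value Wctr_max
        self.w_ctr_pre = None                       # previous value starts at a preset initial value
        self.rotations = 0                          # N, initial value 0

    def counted_time(self):
        w_ctr_curr = self.read_counter()            # step a1
        if self.w_ctr_pre is None:                  # step a2: first read-out
            self.w_ctr_pre = w_ctr_curr
        if w_ctr_curr < self.w_ctr_pre:             # step a3: the counter rotated back
            self.rotations += 1
        # step a4: Wctr_accu = Wctr_curr + N x Wctr_max, then update Wctr_pre
        w_ctr_accu = w_ctr_curr + self.rotations * self.w_ctr_max
        self.w_ctr_pre = w_ctr_curr
        # step a5: divide by the clock frequency to obtain the counted time in seconds
        return w_ctr_accu / self.clock_freq_hz
```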
The presentation timestamp (PTS) of a data frame may be calculated using an existing method. The presentation timestamp Framepts of a data frame may be calculated according to Framepts=(n-1)×Frameduration, where n is a number of an audio data frame or a number of a video data frame, and the number is for data frames of the same type, i.e., indicating an nth audio data frame or an nth video data frame. Using video frame data as an example, the PTS of a first video frame is 0, and PTSs of subsequent video frames are accumulated using Frameduration. For example, for a video of 60fps, with Frameduration being 16.667ms, the Framepts of the first video frame is 0ms, the Framepts of the second video frame is 16.667ms, and the Framepts of the Nth frame is (N-1)×16.667ms. Thus, the Framepts is a time value increasing from 0. In this way, by comparing the presentation timestamp Framepts and the elapsed playback time which also increases from zero, whether the time when a to-be-rendered data frame arrives at the renderer matches the playback speed can be known.
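As a small worked illustration of this formula (the function name and millisecond units are assumptions for the sketch):

```python
def frame_pts_ms(n, frame_duration_ms):
    """Framepts = (n-1) x Frameduration, with n counted separately for audio and video frames."""
    return (n - 1) * frame_duration_ms

# Example from the text: 60fps video, Frameduration = 16.667ms
# frame_pts_ms(1, 16.667) -> 0.0; frame_pts_ms(2, 16.667) -> 16.667; frame_pts_ms(10, 16.667) -> 150.003
```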
Specifically, in step 101, the presentation timestamp of the to-be-rendered data frame and the elapsed playback time of a type of data corresponding to the to-be-rendered data frame are compared, and the rendering synchronization control is performed according to a compared result.
To achieve a better smooth playback effect, in an implementation, the rendering synchronization control may be performed as follows:
when the to-be-rendered data frame arrives at the renderer, if Framepts≤STCelapsed≤Framepts+Dmax is satisfied, then perform rendering based on the current to-be-rendered data frame; where STCelapsed is the current elapsed playback time, Framepts is the representation timestamp of the current to-be-rendered data frame, and Dmax is a preset maximum allowable delayed representation time; 0≤Dmax≤Frameduration; in which Frameduration is a representation duration of a single data frame;
if STCelapsed<Framepts is satisfied, then after waiting for a period of time △t, perform the rendering based on the current to-be-rendered data frame, in which △t=Framepts-STCelapsed; and
if STCelapsed>Framepts+Dmax is satisfied, then discard the current to-be-rendered data frame.
Using the above methods, the smooth playback of audio frame data and the smooth playback of video frame data can be realized respectively. If Framepts≤STCelapsed≤Framepts+Dmax is satisfied, it indicates that the current time meets the requirement of smooth playback, and that it is suitable to render the to-be-rendered data frame. Therefore, the to-be-rendered data frame is rendered immediately.
Dmax satisfies 0≤Dmax≤Frameduration, and a suitable value of Dmax may be configured by a person skilled in the art according to an actual smoothness requirement. Preferably, to present pictures smoothly while reducing the number of discarded data frames, Dmax=Frameduration may be configured.
If STCelapsed<Framepts is satisfied, then it indicates that the current to-be-rendered data frame arrives at the renderer earlier than the speed of smooth playback requires, and the rendering may be performed based on the current to-be-rendered data frame after waiting for a certain interval (i.e., Framepts-STCelapsed), so as to achieve the effect of smooth playback.
If STCelapsed>Framepts+Dmax is satisfied, then it indicates that the current to-be-rendered data frame arrives at the renderer later than the speed of smooth playback. Thus, the current to-be-rendered data frame misses a playback time that corresponds to the smooth playback requirement, and the current to-be-rendered data frame should be discarded.
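For illustration only, the three-branch decision described above can be sketched as follows (the function name and return convention are assumptions; STCelapsed, Framepts and Dmax carry the meanings defined above):

def rendering_decision(stc_elapsed_ms, frame_pts_ms, d_max_ms):
    # Framepts <= STCelapsed <= Framepts + Dmax : render the frame now
    # STCelapsed < Framepts                     : wait delta_t = Framepts - STCelapsed, then render
    # STCelapsed > Framepts + Dmax              : the frame is too late, discard it
    if frame_pts_ms <= stc_elapsed_ms <= frame_pts_ms + d_max_ms:
        return ("render", 0.0)
    if stc_elapsed_ms < frame_pts_ms:
        return ("wait", frame_pts_ms - stc_elapsed_ms)
    return ("discard", 0.0)

print(rendering_decision(100.0, 100.0, 16.667))   # ('render', 0.0): frame is on time
print(rendering_decision(90.0, 100.0, 16.667))    # ('wait', 10.0): frame arrived early
print(rendering_decision(120.0, 100.0, 16.667))   # ('discard', 0.0): frame missed its slot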
In the above method, any time satisfying Framepts≤STCelapsed≤Framepts+Dmax is treated as suitable for immediate rendering, so a certain tolerable error between the dedicated clock and the source clock is allowed, and it is unnecessary for the precision of the dedicated clock and the precision of the source clock to be identical. A precision of one millisecond or finer, i.e., a clock frequency larger than or equal to 1 kHz, is sufficient for smooth audio or video playback. This reduces the requirement for the precision of the dedicated clock, which further reduces the power consumption and the production cost of the dedicated clock and improves the applicability of the method.
Step 102, in the process of audio or video playback, the playback terminal performs synchronization adjustment control on the dedicated clock based on a source time, so as to match the dedicated clock with the source time.
In this step, to guarantee the accuracy of performing rendering synchronization control based on the dedicated clock of the playback terminal in step 101, in the process of audio or video playback, the synchronization adjustment control is performed on the dedicated clock, to match the dedicated clock with the source time, which realizes matching between the dedicated clock and the coding clock at the audio or video source.
Herein, since the dedicated clock of the playback terminal is the reference clock used for synchronization in the rendering stage, and each data frame has a presentation duration, the playback times of data frames have a tolerable floating range. Thus, the dedicated clock of the playback terminal does not need a precision requirement as high as that of the coding clock at the source, as long as the time error between the dedicated clock and the source clock does not affect the viewing experience of the human eye. Therefore, by using the dedicated clock of the playback terminal for synchronization in the rendering process, and by performing the synchronization adjustment control on the dedicated clock during playback so that the dedicated clock matches the source clock, the playback terminal can be decoupled from the source clock, which in turn reduces the cost of realizing the dedicated clock of the playback terminal.
Preferably, to enable the dedicated clock to better synchronize with the source clock, in an implementation, as shown in Fig.3, the synchronization adjustment control on the dedicated clock of the playback terminal may be performed periodically as follows:
Step 1021, when a preset synchronization adjustment time is reached, the playback terminal obtains a current source time SrcTimecurr and a current counted time STCcurr of the dedicated clock.
Both the counted time STCcurr and the source time SrcTimecurr are monotone increasing values, so that relative times calculated respectively based on the two parameters in the subsequent steps are progressively increasing values too.
In this step, when each synchronization adjustment time is reached, a current source time SrcTimecurr and a current counted time of the dedicated clock need to be obtained at the same time, so as to determine a time difference between the current dedicated clock and the audio or video source clock, and further to determine whether to perform synchronization adjustment on the dedicated clock based on the time difference, so as to make the time of the dedicated clock match with the source time.
In implementation, a person skilled in the art may configure a synchronization adjustment time according to an actual periodical adjusting policy, for example, determining the synchronization adjustment time according to a preset synchronization adjustment period. For the synchronization adjustment period, if it is configured to be too long, then it is unable to meet the synchronization adjustment timeliness requirement, and if it is configured to be too short, then it will result in too much control cost. The period, e.g., 10s, may be adaptively configured by a person skilled in the art according to a policy that meets the synchronization adjustment timeliness requirement and that reduces the control cost as much as possible, in consideration of the speed of accumulation of the error time between the dedicated clock and the source clock. However, the present disclosure is not limited to this.
Herein, the current counted time of the dedicated clock may be obtained in the same way as that in step 101, which will not be elaborated herein.
It is to be specified that source times are periodically sent from the source to the playback terminal. In this way, the playback terminal may obtain the source times by parsing source signals.
Step 1022, calculate a difference between the counted time STCcurr and a second initial time STCinit to obtain a first relative time STCdiff; and calculate a difference between the source time SrcTimecurr and a third initial time SrcTimeinit to obtain a second relative time SrcTimediff.
The second initial time STCinit is a counted time obtained when a first synchronization adjustment time is reached; and the third initial time SrcTimeinit is a source time obtained when the first synchronization adjustment time is reached.
In this step, the first relative time STCdiff indicates a time interval between the counted time of the dedicated clock currently obtained and a counted time obtained when the first synchronization adjustment time is reached. The second relative time SrcTimediff indicates a time interval between the source time currently obtained and the source time obtained when the first synchronization adjustment time is reached. The difference between the first relative time and the second relative time can reflect whether the dedicated clock matches the source clock.
Step 1023, calculate a difference between the first relative time STCdiff and the second relative time SrcTimediff to obtain a time error Timeerror between the dedicated clock and the audio or video source clock.
Herein, if the dedicated clock matches the source clock, then the difference between the first relative time and the second relative time (i.e., the time error Timeerror) will be very small, or otherwise, the difference will be quite large.
Step 1024, if the time error Timeerror is larger than a preset maximum error threshold, then reduce an output frequency of the dedicated clock according to a preset first frequency refining step size; the preset maximum error threshold is larger than zero; and
if the time error Timeerror is smaller than a preset minimum error threshold, then increase the output frequency of the dedicated clock according to a preset second frequency refining step size; the preset minimum error threshold is smaller than zero.
Herein, if the time error Timeerror is larger than the preset maximum error threshold, and the maximum error threshold is larger than zero, then it indicates that the dedicated clock is faster than the source clock. In this case, it is necessary to reduce the output frequency of the dedicated clock so as to slow down the time counting of the dedicated clock, so that the dedicated clock matches the source clock. If the time error Timeerror is smaller than the preset minimum error threshold, and the minimum error threshold is smaller than zero, then it indicates that the dedicated clock is slower than the source clock. In this case, it is necessary to increase the output frequency of the dedicated clock so as to speed up the time counting of the dedicated clock, so that the dedicated clock matches the source clock. If the time error Timeerror is between the minimum error threshold and the maximum error threshold, then it indicates that the dedicated clock matches the source clock, and it is unnecessary to adjust the frequency of the dedicated clock.
Specifically, the first frequency refining step size is used to control the magnitude of reduction in the output frequency of the dedicated clock each time, and the second frequency refining step size is used to control the magnitude of increase in the output frequency of the dedicated clock each time. A person skilled in the art may set the values of the first frequency refining step size and the second frequency refining step size according to the actual requirement on the convergence speed of the adjustment.
A person skilled in the art may set the maximum error threshold and the minimum error threshold according to a matching degree requirement of the dedicated clock and the source clock in an implementation. Preferably, the absolute values of the maximum error threshold and the minimum error threshold are the same, but the present disclosure is not limited thereto.
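For illustration only, steps 1021 to 1024 may take the following non-limiting software form; set_output_freq stands in for whatever platform-specific interface actually retunes the dedicated clock, and the threshold and step-size values are assumptions chosen for the example rather than prescribed values.

class ClockSynchronizer:
    def __init__(self, set_output_freq, base_freq_hz=90_000.0,
                 max_error_s=0.005, min_error_s=-0.005, freq_step_hz=1.0):
        self.set_output_freq = set_output_freq   # callback that retunes the dedicated clock
        self.freq_hz = base_freq_hz
        self.max_error_s = max_error_s            # preset maximum error threshold (> 0)
        self.min_error_s = min_error_s            # preset minimum error threshold (< 0)
        self.freq_step_hz = freq_step_hz          # first/second frequency refining step size
        self.stc_init = None                      # second initial time STCinit
        self.src_init = None                      # third initial time SrcTimeinit

    def on_adjustment_time(self, stc_curr_s, src_time_curr_s):
        # Called each time the preset synchronization adjustment time (e.g. every 10 s) is reached.
        if self.stc_init is None:                   # first synchronization adjustment time (step 1021)
            self.stc_init, self.src_init = stc_curr_s, src_time_curr_s
            return
        stc_diff = stc_curr_s - self.stc_init       # first relative time STCdiff (step 1022)
        src_diff = src_time_curr_s - self.src_init  # second relative time SrcTimediff
        time_error = stc_diff - src_diff            # Timeerror (step 1023)
        if time_error > self.max_error_s:           # dedicated clock runs fast: slow it down (step 1024)
            self.freq_hz -= self.freq_step_hz
            self.set_output_freq(self.freq_hz)
        elif time_error < self.min_error_s:         # dedicated clock runs slow: speed it up
            self.freq_hz += self.freq_step_hz
            self.set_output_freq(self.freq_hz)
        # otherwise the dedicated clock already matches the source clock closely enough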
As can be seen from the above method embodiment, in the audio or video playing method, performing the synchronization adjustment control on the dedicated clock of the playback terminal based on the source time during audio or video playback ensures that the dedicated clock of the playback terminal matches the coding clock at the source. In the rendering stage, the rendering synchronization control is performed based on the representation timestamp of the to-be-rendered data frame and the elapsed playback time of the type of data corresponding to the to-be-rendered data frame, obtained based on the dedicated clock of the playback terminal. In this way, on the one hand, using the dedicated clock that matches the source time achieves smooth playback; on the other hand, performing synchronization in the rendering stage can meet the needs of audio or video playback with a high real-time requirement, such as live broadcasting. In addition, performing the synchronization adjustment control on the dedicated clock of the playback terminal based on the source time decouples the clock at the source from the clock at the playback terminal. Further, the rendering stage has a lower clock precision requirement than the coding clock at the source, since a tolerable error is allowed in the audio or video rendering; therefore, the clock frequency at the playback terminal does not need to be as high as that of the source clock, and the implementation cost and power consumption of the dedicated clock at the playback terminal can be reduced.
Corresponding to the audio or video playing method embodiments, the embodiments of the present disclosure further provide an audio or video playing apparatus. As shown in Fig.4, the apparatus at least includes: a rendering control unit 401, a dedicated clock 402, and a clock synchronization unit 403.
The rendering control unit 401 is configured to, in a process of audio or video playback, perform rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame, in which the elapsed playback time is obtained based on the dedicated clock of a playback terminal.
The clock synchronization unit 403 is configured to, in the process of audio or video playback, perform synchronization adjustment control on the dedicated clock based on a source time, so as to match the dedicated clock with the source time.
In an implementation, the rendering control unit 401 is configured to perform the rendering synchronization control, including:
when the to-be-rendered data frame arrives at a renderer, if Framepts≤STCelapsed≤Framepts+Dmax is satisfied, then performing rendering based on the current to-be-rendered data frame; where STCelapsed is the current elapsed playback time, Framepts is the representation timestamp of the current to-be-rendered data frame, and Dmax is a preset maximum allowable delayed representation time; 0≤Dmax≤Frameduration; where Frameduration is a representation duration of a single data frame;
if STCelapsed<Framepts is satisfied, then after waiting for a period of time △t, performing the rendering based on the current to-be-rendered data frame, where △t=Framepts-STCelapsed; and
if STCelapsed>Framepts+Dmax is satisfied, then discarding the current to-be-rendered data frame.
In an implementation, the rendering control unit 401 is configured to obtain the elapsed playback time, including:
when the to-be-rendered data frame arrives at a renderer, obtaining a current counted time of the dedicated clock, in which counted times of the dedicated clock are monotone increasing values; and
calculating a difference between the counted time and a first initial time to obtain the elapsed playback time; in which the first initial time is a counted time obtained when a first data frame of the audio or video playback arrived at the renderer.
In an implementation, the clock synchronization unit 403 is configured to perform the synchronization adjustment control on the dedicated clock, including:
when a preset synchronization adjustment time is reached, obtaining, by the playback terminal, a current source time SrcTimecurr and a current counted time STCcurr of the dedicated clock, in which both the counted time STCcurr and the source time SrcTimecurr are monotone increasing values;
calculating a difference between the counted time STCcurr and a second initial time STCinit to obtain a first relative time STCdiff; and calculating a difference between the source time SrcTimecurr and a third initial time SrcTimeinit to obtain a second relative time SrcTimediff; in which the second initial time STCinit is a counted time obtained when a first synchronization adjustment time is reached; and the third initial time SrcTimeinit is a source time obtained when the first synchronization adjustment time is reached;
calculating a difference between the first relative time STCdiff and the second relative time SrcTimediff to obtain a time error Timeerror between the dedicated clock and the audio or video source clock;
if the time error Timeerror is larger than a preset maximum error threshold, then reducing an output frequency of the dedicated clock according to a preset first frequency refining step size; in which the preset maximum error threshold is larger than zero; and
if the time error Timeerror is smaller than a preset minimum error threshold, then increasing the output frequency of the dedicated clock according to a preset second frequency refining step size; in which the preset minimum error threshold is smaller than zero.
In an implementation, the apparatus further includes a dedicated timing unit 404.
The timing unit 404 is configured to determine the current counted time of the dedicated clock according to a time obtaining instruction from the rendering control unit or the clock synchronization unit, and return the current counted time of the dedicated clock to the rendering control unit or the clock synchronization unit; in which determining the current counted time of the dedicated clock includes:
obtaining the current clock counted value Wctr_curr from the dedicated clock;
if currently a previous clock counted value Wctr_pre is a preset initial value, then updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr;
if Wctr_curr<Wctr_pre is satisfied, then adding a current number of rotations by one; in which an initial value of the number of rotations is 0;
calculating a clock accumulated count Wctr_accu according to Wctr_accu=Wctr_curr+N×Wctr_max; in which N is the current number of rotations; and updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr; in which Wctr_max is a maximum clock counted value of the dedicated clock; and
dividing the clock accumulated count Wctr_accu by a current clock frequency of the dedicated clock to obtain the current counted time of the dedicated clock.
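For illustration only, the units of Fig.4 could be wired together as in the following non-limiting sketch, which reuses the DedicatedClockReader, rendering_decision and ClockSynchronizer helpers from the earlier sketches; all class and method names here are illustrative assumptions rather than the claimed implementation.

class TimingUnit:
    # Dedicated timing unit 404: serves counted times to the other units.
    def __init__(self, clock_reader, read_counter):
        self.clock_reader = clock_reader       # e.g. a DedicatedClockReader instance
        self.read_counter = read_counter       # callable returning Wctr_curr from the hardware

    def current_counted_time(self):
        return self.clock_reader.counted_time(self.read_counter())

class RenderingControlUnit:
    # Rendering control unit 401: one instance per data type (audio or video).
    def __init__(self, timing_unit):
        self.timing_unit = timing_unit
        self.first_initial_time_ms = None      # counted time when the first frame arrived

    def on_frame_arrival(self, frame_pts_ms, d_max_ms):
        now_ms = self.timing_unit.current_counted_time() * 1000.0
        if self.first_initial_time_ms is None:
            self.first_initial_time_ms = now_ms
        stc_elapsed_ms = now_ms - self.first_initial_time_ms   # elapsed playback time
        return rendering_decision(stc_elapsed_ms, frame_pts_ms, d_max_ms)

class ClockSynchronizationUnit:
    # Clock synchronization unit 403: periodically matches the dedicated clock to the source time.
    def __init__(self, timing_unit, synchronizer):
        self.timing_unit = timing_unit
        self.synchronizer = synchronizer       # e.g. a ClockSynchronizer instance

    def on_sync_time(self, src_time_s):
        self.synchronizer.on_adjustment_time(self.timing_unit.current_counted_time(), src_time_s)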
Corresponding to the audio or video playing method embodiments, the embodiments of the present disclosure further provide an audio or video playing device, including a processor and a memory;
the memory stores an application program that is executable by the processor and configured to cause the processor to perform the audio or video playing method described above.
The memory may be specifically implemented as a variety of storage media such as electrically erasable programmable read-only memory (EEPROM), flash memory, and programmable read-only memory (PROM). The processor may be implemented to include one or more central processing units or one or more field programmable gate arrays, where the field programmable gate array(s) integrate(s) one or more central processing unit cores. Specifically, the central processing unit or central processing unit core may be implemented as a CPU or micro control unit (MCU).
It should be specified that not all steps and modules in the above-mentioned processes and structural diagrams are necessary, and some steps or modules can be omitted according to actual needs. The order of executing the respective steps is not fixed and can be adjusted as needed. The division into the respective modules is merely a functional division made to facilitate the description. In actual implementation, a module may be implemented by multiple modules, and the functions of multiple modules may also be implemented by a same module. These modules may be located in a same device, or may be located in different devices.
The hardware modules in each embodiment may be implemented in a mechanical way or an electronic way. For example, a hardware module may include specially designed permanent circuits or logic devices (e.g., dedicated processors, such as FPGAs or ASICs) to carry out specific operations. A hardware module may also include programmable logic devices or circuits temporarily configured by software (for example, including general-purpose processors or other programmable processors) to perform specific operations. Whether adopting the mechanical way or adopting the dedicated permanent circuits or adopting the temporarily configured circuits (e.g., configured by software) to implement the hardware module can be determined according to cost and time considerations.
The embodiments of the present disclosure further provide a computer-readable storage medium on which computer-readable instructions are stored, and the computer-readable instructions are configured to perform the audio or video playing method mentioned above.
Specifically, a system or an apparatus equipped with a storage medium may be provided, and on the storage medium, software program code for realizing the function of any one of the above-mentioned embodiments is stored, and a computer (or a CPU or an MPU) of the system or the apparatus is able to read and execute the program code stored on the storage medium. In addition, an operating system or the like operating on the computer may also be used to carry out part or all of the actual operations through instructions based on the program code. It is also possible to write the program code read from the storage medium to a memory set in an expansion board inserted into the computer or to a memory set in an expansion unit connected to the computer, and then a CPU in the expansion board or the expansion unit may be enabled to perform part or all of the actual operations based on the instructions of the program code, so as to realize the function of any one of the above-mentioned embodiments.
Implementations of storage media used to provide the program code include floppy disks, hard disks, magneto-optical disks, optical disks (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tape, non-transitory memory card and ROM. Alternatively, the program code may be downloaded from a server computer or a cloud via a communication network.
Herein, “schematic” means “serving as an example, an instance, or illustration”, and any illustration or embodiment described as “schematic” in the present disclosure should not be construed as a more preferred or advantageous technical solution. In order to make the drawings concise, the respective drawings only schematically show parts relevant to the present disclosure, and do not represent the actual structure when implemented as a product. In addition, in order to make the drawings concise and easy to understand, for components with the same structure or function in some drawings, only one of them is schematically shown or indicated. Herein, “one” does not mean that the number of relevant parts of the present disclosure is limited to “only one”, and “one” does not exclude the situation where the number of relevant parts of the present disclosure is “more than one”. Herein, “above”, “below”, “in front of”, “behind”, “left”, “right”, “inside”, “outside”, etc. are only used to indicate relative positions between related parts, and are not used to limit the absolute positions of these parts.
The above descriptions are merely preferred embodiments of the present disclosure, and are not intended to limit the protection scope of the present disclosure. Any modification, equivalent replacement, or improvement made within the spirit and principle of this application shall fall within the protection scope of the present disclosure.

Claims (12)

  1. An audio or video playing method, characterized by comprising:
    in a process of audio or video playback, performing, by a playback terminal, rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame, and performing synchronization adjustment control on a dedicated clock of the playback terminal based on a source time, so as to match the dedicated clock with the source time; wherein the elapsed playback time is obtained based on the dedicated clock.
  2. The method of claim 1, characterized in that performing the rendering synchronization control comprises:
    when the to-be-rendered data frame arrives at a renderer, if Framepts≤STCelapsed≤Framepts+Dmax is satisfied, then performing rendering based on the current to-be-rendered data frame; wherein STCelapsed is currently the elapsed playback time, Framepts is the representation timestamp of the current to-be-rendered data frame, and Dmax is a preset maximum allowable delayed representation time; 0≤Dmax≤Frameduration; wherein Frameduration is a representation duration of a single data frame;
    if STCelapsed<Framepts is satisfied, then after waiting for a period of time △t, performing the rendering based on the current to-be-rendered data frame, wherein △t=Framepts-STCelapsed; and
    if STCelapsed>Framepts+ Dmax is satisfied, then discarding the current to-be-rendered data frame.
  3. The method of claim 1, characterized in that obtaining the elapsed playback time comprises:
    when the to-be-rendered data frame arrives at a renderer, obtaining a current counted time of the dedicated clock, wherein counted times of the dedicated clock are monotone increasing values; and
    calculating a difference between the counted time and a first initial time to obtain the elapsed playback time; wherein the first initial time is a counted time obtained when a first data frame of the audio or video playback arrived at the renderer.
  4. The method of claim 1, characterized in that performing the synchronization adjustment control on the dedicated clock of the playback terminal comprises:
    when a preset synchronization adjustment time is reached, obtaining, by the playback terminal, a current source time SrcTimecurr and a current counted time STCcurr of the dedicated clock, wherein both the counted time STCcurr and the source time SrcTimecurr are monotone increasing values;
    calculating a difference between the counted time STCcurr and a second initial time STCinit to obtain a first relative time STCdiff; and calculating a difference between the source time SrcTimecurr and a third initial time SrcTimeinit to obtain a second relative time SrcTimediff; wherein the second initial time STCinit is a counted time obtained when a first synchronization adjustment time is reached; and the third initial time SrcTimeinit is a source time obtained when the first synchronization adjustment time is reached;
    calculating a difference between the first relative time STCdiff and the second relative time SrcTimediff to obtain a time error Timeerror between the dedicated clock and the audio or video source clock;
    if the time error Timeerror is larger than a preset maximum error threshold, then reducing an output frequency of the dedicated clock according to a preset first frequency refining step size; wherein the preset maximum error threshold is larger than zero; and
    if the time error Timeerror is smaller than a preset minimum error threshold, then increasing the output frequency of the dedicated clock according to a preset second frequency refining step size; wherein the preset minimum error threshold is smaller than zero.
  5. The method of claim 3, characterized in that obtaining the current counted time of the dedicated clock comprises:
    obtaining a current clock counted value Wctr_curr from the dedicated clock;
    if currently a previous clock counted value Wctr_pre is a preset initial value, then updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr;
    if Wctr_curr<Wctr_pre is satisfied, then adding a current number of rotations by one; wherein an initial value of the number of rotations is 0;
    calculating a clock accumulated count Wctr_accu according to Wctr_accu=Wctr_curr+N×Wctr_max; wherein N is the current number of rotations; and updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr; wherein Wctr_max is a maximum clock counted value of the dedicated clock; and
    dividing the clock accumulated count Wctr_accu by a current clock frequency of the dedicated clock to obtain the current counted time of the dedicated clock.
  6. An audio or video playing apparatus, characterized by comprising: a dedicated clock, a rendering control unit and a clock synchronization unit; wherein
    the rendering control unit is configured to, in a process of audio or video playback, perform rendering synchronization control based on a representation timestamp of a to-be-rendered data frame and an elapsed playback time of a type of data corresponding to the to-be-rendered data frame, wherein the elapsed playback time is obtained based on the dedicated clock of a playback terminal; and
    the clock synchronization unit is configured to, in the process of audio or video playback, perform synchronization adjustment control on the dedicated clock based on a source time, so as to match the dedicated clock with the source time.
  7. The apparatus of claim 6, characterized in that the rendering control unit is configured to perform the rendering synchronization control, including:
    when the to-be-rendered data frame arrives at a renderer, if Framepts≤STCelapsed≤Framepts+Dmax is satisfied, then performing rendering based on the current to-be-rendered data frame; wherein STCelapsed is currently the elapsed playback time, Framepts is the representation timestamp of the current to-be-rendered data frame, and Dmax is a preset maximum allowable delayed representation time; 0≤Dmax≤Frameduration; wherein Frameduration is a representation duration of a single data frame;
    if STCelapsed<Framepts is satisfied, then after waiting for a period of time △t, performing the rendering based on the current to-be-rendered data frame, wherein △t=Framepts-STCelapsed; and
    if STCelapsed>Framepts+ Dmax is satisfied, then discarding the current to-be-rendered data frame.
  8. The apparatus of claim 6, characterized in that the rendering control unit is configured to obtain the elapsed playback time, including:
    when the to-be-rendered data frame arrives at a renderer, obtaining a current counted time of the dedicated clock, wherein counted times of the dedicated clock are monotone increasing values; and
    calculating a difference between the counted time and a first initial time to obtain the elapsed playback time; wherein the first initial time is a counted time obtained when a first data frame of the audio or video playback arrived at the renderer.
  9. The apparatus of claim 6, characterized in that the clock synchronization unit is configured to perform the synchronization adjustment control on the dedicated clock of the playback terminal, including:
    when a preset synchronization adjustment time is reached, obtaining, by the playback terminal, a current source time SrcTimecurr and a current counted time STCcurr of the dedicated clock, wherein both the counted time STCcurr and the source time SrcTimecurr are monotone increasing values;
    calculating a difference between the counted time STCcurr and a second initial time STCinit to obtain a first relative time STCdiff; and calculating a difference between the source time SrcTimecurr and a third initial time SrcTimeinit to obtain a second relative time SrcTimediff; wherein the second initial time STCinit is a counted time obtained when a first synchronization adjustment time is reached; and the third initial time SrcTimeinit is a source time obtained when the first synchronization adjustment time is reached;
    calculating a difference between the first relative time STCdiff and the second relative time SrcTimediff to obtain a time error Timeerror between the dedicated clock and the audio or video source clock;
    if the time error Timeerror is larger than a preset maximum error threshold, then reducing an output frequency of the dedicated clock according to a preset first frequency refining step size; wherein the preset maximum error threshold is larger than zero; and
    if the time error Timeerror is smaller than a preset minimum error threshold, then increasing the output frequency of the dedicated clock according to a preset second frequency refining step size; wherein the preset minimum error threshold is smaller than zero.
  10. The apparatus of claim 8, characterized in that the apparatus further comprises a dedicated timing unit;
    wherein the timing unit is configured to determine the current counted time of the dedicated clock according to a time obtaining instruction from the rendering control unit or the clock synchronization unit, and return the current counted time of the dedicated clock to the rendering control unit or the clock synchronization unit; wherein determining the current counted time of the dedicated clock comprises:
    obtaining the current clock counted value Wctr_curr from the dedicated clock;
    if currently a previous clock counted value Wctr_pre is a preset initial value, then updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr;
    if Wctr_curr<Wctr_pre is satisfied, then adding a current number of rotations by one; wherein an initial value of the number of rotations is 0;
    calculating a clock accumulated count Wctr_accu according to Wctr_accu=Wctr_curr+N×Wctr_max; wherein N is the current number of rotations; and updating the previous clock counted value Wctr_pre to the current clock counted value Wctr_curr; wherein Wctr_max is a maximum clock counted value of the dedicated clock; and
    dividing the clock accumulated count Wctr_accu by a current clock frequency of the dedicated clock to obtain the current counted time of the dedicated clock.
  11. An audio or video playing device, characterized by comprising a processor and a memory;
    wherein the memory stores an application program executable by the processor, and configured to cause the processor to perform the audio or video playing method in claim 1.
  12. A computer-readable storage medium, characterized by storing computer-readable instructions which are configured to perform the audio or video playing method in claim 1.