CN109714634A - Decoding synchronization method, device and equipment for live data stream - Google Patents

Decoding synchronization method, device and equipment for live data stream Download PDF

Info

Publication number
CN109714634A
CN109714634A CN201811637340.XA CN201811637340A
Authority
CN
China
Prior art keywords
audio
frame
video
timestamp
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811637340.XA
Other languages
Chinese (zh)
Other versions
CN109714634B (en)
Inventor
李斌
王玉伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Electronics Co Ltd
Original Assignee
Qingdao Hisense Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Electronics Co Ltd filed Critical Qingdao Hisense Electronics Co Ltd
Priority to CN201811637340.XA priority Critical patent/CN109714634B/en
Publication of CN109714634A publication Critical patent/CN109714634A/en
Application granted granted Critical
Publication of CN109714634B publication Critical patent/CN109714634B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

This specification provides a decoding synchronization method, device and equipment for a live data stream. The method comprises: calculating a maximum timestamp according to specified parameters; comparing the timestamp of a non-first audio frame to be added to an audio buffer queue with the maximum timestamp, storing the non-first audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discarding the non-first audio frame if its timestamp is greater than the maximum timestamp; and obtaining audio frames from the audio buffer queue and decoding them to obtain audio data. By dynamically adjusting the upper limit of the timestamps of audio frames admitted to the audio buffer queue, discarding audio frames whose timestamps exceed that upper limit, thereby shrinking the audio buffer queue, and then synchronizing the video data to the audio data, the present invention achieves precise control of the live video delay.

Description

Decoding synchronization method, device and equipment for live data stream
Technical field
This specification relates to the field of Internet live streaming technology, and in particular to a decoding synchronization method, device and equipment for a live data stream.
Background technique
Live streaming is increasingly popular, but it also faces many challenges. The whole live streaming process can be divided into several key steps: the broadcaster client pushes the locally captured video stream to a server; the server caches and forwards the video stream; and the viewer client pulls the cached video stream from the server and plays it.
After downloading the video stream from the network server, the viewer client must decode the audio and video data in the stream before it can be played, so a certain network delay is inevitable. Network delay refers to the time difference between capture at the broadcaster client and playback at the viewer client. It can be divided into the delay of the viewer client relative to the broadcaster client in playing the video stream, the delay in transmitting the video stream between client and server, and the delay in transmitting the video stream between the CDN (content delivery network) servers that make up the server side. The delay is mainly caused by data buffered at the viewer client. In the prior art, the viewer client usually monitors the current network environment in real time and adjusts the buffer size during playback accordingly to keep the delay low. However, the player is generally divided into multiple buffers, and the application-controllable cache that the player exposes externally is usually only the post-demultiplexing cache, so the buffer size during playback is still difficult to control precisely, that is, the live video delay is difficult to control accurately.
Summary of the invention
To overcome the problems in the related art, this specification provides a decoding synchronization method, device and equipment for a live data stream.
According to a first aspect of the embodiments of this specification, a decoding synchronization method for a live data stream is provided, the method comprising:
calculating a maximum timestamp according to specified parameters, wherein the specified parameters include a decoding delay of audio frames and an output delay of decoded audio data, and the maximum timestamp indicates the maximum value of the timestamps of the audio frames that the audio buffer queue can cache;
comparing the timestamp of a non-first audio frame to be added to the audio buffer queue with the maximum timestamp; storing the non-first audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discarding the non-first audio frame if its timestamp is greater than the maximum timestamp;
obtaining audio frames from the audio buffer queue and decoding them to obtain audio data.
According to a second aspect of the embodiments of this specification, a decoding synchronization device for a live data stream is provided, comprising:
an audio buffer module, comprising an audio buffer queue for buffering audio frames;
a computing module, configured to calculate a maximum timestamp according to specified parameters, wherein the specified parameters include a decoding delay of audio frames and an output delay of decoded audio data, and the maximum timestamp indicates the maximum value of the timestamps of the audio frames that the audio buffer queue can cache;
a judgment module, configured to compare the timestamp of a non-first audio frame to be added to the audio buffer queue with the maximum timestamp, store the non-first audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discard the non-first audio frame if its timestamp is greater than the maximum timestamp;
an audio decoder module, configured to obtain audio frames from the audio buffer queue and decode them to obtain audio data.
According to a third aspect of the embodiments of this specification, decoding synchronization equipment for a live data stream is provided, comprising: a processor and a memory;
the memory is configured to store executable computer instructions;
wherein the processor, when executing the computer instructions, performs the following steps:
calculating a maximum timestamp according to specified parameters, wherein the specified parameters include a decoding delay of audio frames and an output delay of decoded audio data, and the maximum timestamp indicates the maximum value of the timestamps of the audio frames that the audio buffer queue can cache;
comparing the timestamp of a non-first audio frame to be added to the audio buffer queue with the maximum timestamp; storing the non-first audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discarding the non-first audio frame if its timestamp is greater than the maximum timestamp;
obtaining audio frames from the audio buffer queue and decoding them to obtain audio data.
The technical solutions provided by the embodiments of this specification may include the following beneficial effects:
The embodiments of this specification propose a frame-dropping strategy for audio frames in a live data stream, and based on this strategy design a decoding synchronization method for a live data stream. Compared with the prior art, the proposed frame-dropping strategy takes into account the delay of the audio output module and the delay of the audio decoder module. To compensate for the delay of these two modules, the invention dynamically adjusts the maximum timestamp of the audio frames that may be buffered in the corresponding audio buffer queue, that is, dynamically adjusts the upper limit of the timestamps of the audio frames in that queue. If the timestamp of an audio frame waiting to enter the audio buffer queue is greater than this maximum timestamp, admitting the frame would enlarge the live delay, so the frame is discarded instead, thereby achieving precise control of the live video delay.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit this specification.
Detailed description of the invention
The accompanying drawings, which are incorporated into and form part of this specification, illustrate embodiments consistent with this specification and, together with the description, serve to explain the principles of this specification.
Fig. 1 is a connection diagram of the internal modules of a user terminal player in the prior art, shown according to an exemplary embodiment.
Fig. 2 is a flow chart of a decoding synchronization method for a live data stream according to an exemplary embodiment of this specification.
Fig. 3 is a flow chart of the frame-dropping strategy for audio frames in a live data stream according to an exemplary embodiment of this specification.
Fig. 4 is a schematic diagram of the synchronization strategy used by the audio-video synchronization module according to an exemplary embodiment of this specification.
Fig. 5 is a logical block diagram of a decoding synchronization device for a live data stream according to an exemplary embodiment of this specification.
Fig. 6 is a logical block diagram of decoding synchronization equipment for a live data stream according to an exemplary embodiment of this specification.
Specific embodiment
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this specification; rather, they are merely examples of devices and methods consistent with some aspects of this specification as detailed in the appended claims.
The terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit this specification. The singular forms "a", "the" and "said" used in this specification and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this specification, first information may also be referred to as second information, and similarly, second information may be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".
The embodiments of this specification are described in detail below.
As shown in Fig. 1, Fig. 1 is a connection diagram of the internal modules of a user terminal player. As can be seen from the figure, a user terminal player generally comprises: a network protocol parsing and download module 101, a download data buffer queue 102, a demultiplexing module 103, an audio buffer queue 1041 and a video buffer queue 1042, an audio decoder module 1051 and a video decoder module 1052, an audio-video synchronization module 106, and an audio output module 1071 and a video output module 1072.
The network protocol parsing and download module 101 is responsible for parsing the network protocol and downloading the live data stream; it usually stores the downloaded but not yet demultiplexed live data stream in the download data buffer queue 102. For live data streams, however, parsing while downloading is usually supported, and by matching the download speed of the network protocol parsing and download module 101 with the demultiplexing speed of the live data stream, the cache held in the download data buffer queue 102 can be reduced to an almost negligible level. In one embodiment of the invention, the cache size in the download data buffer queue 102 can therefore be ignored.
The demultiplexing module 103 is responsible for decapsulating the downloaded live data stream to obtain separated data such as audio frames, video frames or subtitle streams. Since the demultiplexing speed of the live data stream usually cannot exactly match the audio or video decoding speed, at least one audio buffer queue 1041 or video buffer queue 1042 usually also exists after the demultiplexing module 103 to store audio frames or video frames that have not yet been decoded. In this embodiment, the audio buffer queue 1041 or the video buffer queue 1042 may be an independent module, or may belong to the download data buffer queue 102 or the demultiplexing module 103. In general, the audio buffer queue 1041 or the video buffer queue 1042 provides an upper limit on the number of cached audio or video frames, an upper limit on the cache size, or an upper limit on the timestamps of the cached audio or video frames.
The audio decoder module 1051 and the video decoder module 1052 are respectively responsible for decoding the compressed audio frames or video frames. Due to the requirements of the decoding algorithms, the audio decoder module 1051 or the video decoder module 1052 generally also caches several frames of data; that is, a buffer queue exists inside the audio decoder module or the video decoder module to cache several frames. In this way, the length of the buffer queue inside the audio decoder module 1051 or the video decoder module 1052 represents the decoding delay.
The audio-video synchronization module 106 is used to synchronize the decoded audio data and video data, which are then sent to the audio output module 1071 and the video output module 1072 respectively.
The audio output module 1071 and the video output module 1072 are responsible for outputting the audio data and the video data respectively at the live streaming terminal, i.e. the playback end. For technical reasons, the audio output module and the video output module may also buffer data; likewise, the audio output module 1071 or the video output module 1072 needs a buffer queue to cache that data, and the length of the buffer queue in the audio output module 1071 represents the output delay of the audio data.
For a video output terminal that supports MEMC (Motion Estimation and Motion Compensation, a motion picture-quality compensation technique used in LCD televisions), the MEMC algorithm needs to be computed over multiple frames of video data, so the video data cache cannot be removed. Similarly, the audio output module also has a data cache, depending on the specific chipset vendor.
Existing players usually consider only the data cached in the audio buffer queue 1041 or the video buffer queue 1042 after demultiplexing, but the upper limit on the number of cached frames or on the timestamps of the cached data frames defined for the audio buffer queue 1041 or the video buffer queue 1042 is fixed. As a result, the live delay of the player varies greatly, and the player still cannot accurately control the live video delay.
To solve the problem that the live video delay still cannot be accurately controlled in the prior art, the present invention takes into account the output delay of the audio output module 1071 and the decoding delay of the audio decoder module 1051, and devises a frame-dropping strategy for audio frames in a live data stream. Based on this frame-dropping strategy, a decoding synchronization method for audio frames in a live data stream is proposed. As shown in Fig. 2, the method includes steps S202 to S206:
S202: calculating a maximum timestamp according to specified parameters, wherein the specified parameters include a decoding delay of audio frames and an output delay of decoded audio data, and the maximum timestamp indicates the maximum value of the timestamps of the audio frames that the audio buffer queue can cache;
S204: comparing the timestamp of a non-first audio frame to be added to the audio buffer queue with the maximum timestamp; storing the non-first audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discarding the non-first audio frame if its timestamp is greater than the maximum timestamp;
S206: obtaining audio frames from the audio buffer queue and decoding them to obtain audio data.
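The following is a minimal sketch of steps S202 to S206 under a simplified player model; the types and names (AudioFrame, AudioPipeline) are illustrative assumptions and are not taken from the specification.

```cpp
// Minimal sketch of steps S202-S206 under a simplified player model.
// AudioFrame and AudioPipeline are illustrative names, not from the specification.
#include <cstdint>
#include <deque>
#include <optional>

struct AudioFrame {
    int64_t pts;  // presentation timestamp, in milliseconds
    // compressed payload omitted
};

struct AudioPipeline {
    std::deque<AudioFrame> buffer_queue;  // the audio buffer queue (1041)

    // S204: admit or drop a non-first audio frame, given the maximum timestamp
    // computed in S202 (see the max_pts formula later in this description).
    void Enqueue(const AudioFrame& frame, int64_t max_pts) {
        if (frame.pts <= max_pts) {
            buffer_queue.push_back(frame);  // keep: will not enlarge the live delay
        }
        // else: drop the frame so the live delay stays bounded
    }

    // S206: hand the oldest buffered frame to the audio decoder.
    std::optional<AudioFrame> Dequeue() {
        if (buffer_queue.empty()) return std::nullopt;
        AudioFrame f = buffer_queue.front();
        buffer_queue.pop_front();
        return f;
    }
};
```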
In this embodiment, after the user opens the live player and after a brief delay, the user can watch the video content corresponding to the first video frame at the start of live playback; the audio frame corresponding to this first video frame is the first audio frame. In this embodiment, the system time and the timestamp corresponding to the first audio frame are recorded. The system time here may be UTC time, also called Coordinated Universal Time. Each audio frame or video frame carries a corresponding timestamp, according to which playback in order is achieved.
When playback of the live video content starts, the audio buffer queue 1041 and the video buffer queue 1042 inside the player are established, and the demultiplexed audio frames and video frames are stored in the audio buffer queue 1041 and the video buffer queue 1042 respectively; the audio frames in the audio buffer queue are then taken out and decoded by the audio decoder module 1051. However, the demultiplexing speed of the live data stream is greater than the decoding speed of the audio decoder module, so audio frames that cannot be decoded in time are stored in the audio buffer queue 1041. To prevent audio frames from accumulating continuously in the audio buffer queue 1041 and thus continuously increasing the delay, an upper limit is usually set on the number of audio frames cached in the audio buffer queue 1041: when the audio buffer queue reaches this frame-count limit, audio frames waiting to be added to the audio buffer queue 1041 are discarded to control the delay. Another method is to set a fixed upper limit on the timestamps of the audio frames cached in the audio buffer queue 1041: when the timestamp of an audio frame waiting to be added to the audio buffer queue 1041 is greater than this fixed limit, the frame is likewise discarded. For example, when the timestamp of an audio frame waiting to be added to the audio buffer queue is 1200 and the fixed upper limit set for the audio buffer queue is 1000, the frame is discarded to reduce the accumulation of delay.
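As an illustration only, the two fixed-limit admission checks described above might look as follows; the constants are hypothetical values, and the AudioFrame type is reused from the sketch above.

```cpp
// Illustration of the prior fixed-limit strategies (frame-count cap and fixed
// timestamp cap). The constants are hypothetical; AudioFrame comes from the
// earlier sketch.
bool AdmitWithFixedLimits(const AudioFrame& frame,
                          const std::deque<AudioFrame>& queue) {
    constexpr std::size_t kMaxFrames   = 64;    // fixed frame-count upper limit
    constexpr int64_t     kMaxPtsFixed = 1000;  // fixed timestamp upper limit (ms)

    if (queue.size() >= kMaxFrames) return false;  // queue full: drop the new frame
    if (frame.pts > kMaxPtsFixed)   return false;  // e.g. pts 1200 > 1000: drop
    return true;                                   // otherwise the frame is enqueued
}
```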
In this embodiment, considering that both the audio decoder module 1051 and the audio output module 1071 introduce delay, using the fixed upper limit described above is inappropriate. This embodiment therefore proposes to calculate a maximum timestamp according to the specified parameters, including the decoding delay and the output delay; the maximum timestamp indicates the maximum value of the timestamps of the audio frames that the audio buffer queue can cache. The specified parameters include: the extracted decoding delay of audio frames and output delay of decoded audio data, the recorded system time and timestamp of the first audio frame, and a preset maximum playback delay allowed by the user. The maximum playback delay allowed by the user covers the delay caused by the size of the audio buffer queue, the audio decoding delay, the audio output delay and so on; in other words, it represents the upper threshold of the total playback delay acceptable to the user. The first audio frame corresponds to the video frame at the start of playback. The maximum timestamp set in this way makes the timestamp upper limit of the audio buffer queue 1041 change dynamically, i.e. the size of the audio buffer queue is adjusted dynamically with the decoding delay of the audio decoder module and the output delay of the audio output module.
In this embodiment, the maximum playback delay allowed by the user, the timestamps and the maximum timestamp, the decoding delay, the output delay and the system time use consistent units, which may be milliseconds or microseconds.
In one embodiment, the maximum timestamp can be determined as follows:
obtaining the time difference from the system time corresponding to the first audio frame at the start of playback to the current system time; determining the timestamp of the audio frame corresponding to the currently playing video frame; and adding the preset maximum playback delay allowed by the user to this timestamp and then subtracting the decoding delay and the output delay to obtain the maximum timestamp.
The overall logic of calculating the maximum timestamp through the above steps is as follows: from the time difference between the start of video playback and the present moment (the elapsed playback time) and the timestamp of the first audio frame at the start of video playback, the timestamp of the audio frame currently being played is computed. A live player generally limits the live delay it allows, and to control this live delay a maximum playback delay allowed by the user is preset. If the decoding delay of the audio decoder module 1051 and the output delay of the audio output module 1071 were not considered, the maximum timestamp of the audio buffer queue could be obtained simply by adding the maximum playback delay allowed by the user to the timestamp of the audio frame being played. However, to compensate for the decoding delay of the audio decoder module and the output delay of the audio output module, this embodiment subtracts the sum of these two delays from the result of adding the timestamp of the audio frame being played and the maximum playback delay allowed by the user, thereby obtaining the maximum timestamp.
In this embodiment, after the maximum timestamp is determined, the judgment of non-first audio frames waiting to be added to the audio buffer queue can start, deciding whether each audio frame is added to the audio buffer queue 1041. According to the judgment result, a non-first audio frame whose timestamp does not exceed the maximum timestamp is stored in the audio buffer queue, and a non-first audio frame whose timestamp exceeds the maximum timestamp is discarded. For example, when the timestamp of an audio frame waiting to be added to the audio buffer queue is 1010 and the maximum timestamp calculated at that moment is 1000, the frame is discarded to reduce the accumulation of delay.
To illustrate the frame-dropping situation described above more clearly, one embodiment provides a flow chart of the audio-frame dropping decision, as shown in Fig. 3. The specific steps are as follows:
Step 301: judging whether the audio frame (audio_frame) to be added to the audio buffer queue is the first frame of the audio corresponding to the first video frame at the start of video playback; if it is the first frame, executing step 302; if it is not the first frame, executing step 303;
Step 302: recording the system time (system_time_audio_start) and the timestamp (audio_start_time) corresponding to the first frame, and then executing step 306;
Step 303: obtaining the real-time cache size (audio_decoder_buffer) of the audio decoder module and the real-time delay (audio_output_delay) of the audio output module, and then executing step 304;
Step 304: calculating the maximum timestamp (max_pts) of the audio buffer queue at the current system time, and then executing step 305;
Step 305: comparing the timestamp of the audio frame with the maximum timestamp; if the timestamp of the audio frame is greater than the maximum timestamp, executing step 307; if the timestamp of the audio frame is not greater than the maximum timestamp, executing step 306;
Step 306: adding the audio frame (audio_frame) to the audio buffer queue;
Step 307: discarding the audio frame (audio_frame).
The maximum timestamp is calculated as:
max_pts = (current_system_time - system_time_audio_start) + audio_start_time + threshold_upper_limit - (audio_decoder_buffer + audio_output_delay)
where max_pts is the maximum timestamp; current_system_time is the current system time corresponding to the present playback moment; system_time_audio_start is the system time corresponding to the first audio frame at the start of video playback; audio_start_time is the timestamp of the first audio frame at the start of video playback; threshold_upper_limit is the maximum playback delay allowed by the user; audio_decoder_buffer is the decoding delay of the audio decoder module; and audio_output_delay is the output delay of the audio output module.
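Written as code, the formula above is a direct transcription, with all quantities in one time unit (e.g. milliseconds) and variable names following the specification:

```cpp
// Direct transcription of the max_pts formula above; variable names follow the
// specification and all values share one time unit (e.g. milliseconds).
#include <cstdint>

int64_t ComputeMaxPts(int64_t current_system_time,
                      int64_t system_time_audio_start,
                      int64_t audio_start_time,
                      int64_t threshold_upper_limit,
                      int64_t audio_decoder_buffer,
                      int64_t audio_output_delay) {
    return (current_system_time - system_time_audio_start)  // elapsed playback time
           + audio_start_time                                // pts of the first audio frame
           + threshold_upper_limit                           // user-allowed maximum playback delay
           - (audio_decoder_buffer + audio_output_delay);    // compensate decoder and output delay
}
```

With the values used in the worked example below (elapsed time 900, first-frame timestamp 100, allowed delay 100, combined decoder and output delay 40), this returns 900 + 100 + 100 - 40 = 1060.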
In one embodiment, after a non-first audio frame is stored in the audio buffer queue according to the above judgment result, the audio frames in the audio buffer queue may also be traversed, and any audio frame in the audio buffer queue whose timestamp exceeds the maximum timestamp is discarded. In this case, the frame-dropping judgment is applied not only to audio frames waiting to enter the audio buffer queue; frame-dropping screening is also applied to audio frames that have already entered the audio buffer queue, further reducing the cache size of the audio buffer queue. According to the maximum timestamp, which changes dynamically over time, the cache size of the audio buffer queue is controlled dynamically, further reducing the delay.
To facilitate understanding of the above two frame-dropping situations and the calculation of the maximum timestamp, an example follows. Suppose the preset maximum playback delay allowed by the user is 100 (all timestamps and system times in ms), the timestamp of the first audio frame at the start of video playback is 100, the time difference from the start of playback to the current system time is 900 (at this moment the timestamp of the audio frame corresponding to the video frame being played is 1000), and the sum of the decoding delay of the decoder module and the output delay of the output module is 40; the corresponding maximum timestamp is then 1060. An audio frame waiting to be added to the audio buffer queue arrives with a timestamp of 1058; since this is less than the maximum timestamp, the frame is added to the audio buffer queue. As playback continues, the time difference from the start of playback to the current system time becomes 910, and the sum of the decoding delay of the decoder module and the output delay of the output module becomes 55, so the maximum timestamp becomes 1055. At this moment an audio frame waiting to be added to the audio buffer queue has a timestamp of 1053; since 1053 is less than 1055, the frame with timestamp 1053 is added to the audio buffer queue. The audio frames in the audio buffer queue are then traversed against the new maximum timestamp of 1055, and an audio frame whose timestamp is greater than 1055 is found, namely the audio frame with timestamp 1058; this audio frame is therefore deleted.
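The second drop pass described above, removing frames already in the queue once max_pts has shrunk, might be sketched as follows, continuing the illustrative types from the earlier sketch:

```cpp
// Second drop pass: once max_pts has been recomputed (and possibly shrunk, e.g.
// from 1060 to 1055), frames already in the audio buffer queue whose timestamp
// now exceeds it (the frame with pts 1058 in the example) are removed.
#include <algorithm>

void DropLateBufferedFrames(std::deque<AudioFrame>& queue, int64_t max_pts) {
    queue.erase(std::remove_if(queue.begin(), queue.end(),
                               [max_pts](const AudioFrame& f) { return f.pts > max_pts; }),
                queue.end());
}
```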
In this embodiment, after the above frame-dropping processing, the audio frames in the audio buffer queue are taken out and decoded to obtain audio data.
In one embodiment, the decoding synchronization method for a live data stream further includes: buffering video frames into a video buffer queue; obtaining video frames from the video buffer queue and decoding them to obtain video data; and synchronizing the audio data and the video data based on an audio-video synchronization strategy.
In one embodiment, the frame-dropping processing of the above method is not applied to video frames; they are buffered directly into the video buffer queue. Since frame dropping has been applied only to audio frames, the timestamps of the audio frames corresponding to the audio data may be discontinuous, while the video frames have not undergone any corresponding frame dropping, so the timestamps of the video frames corresponding to the video data remain relatively continuous. In view of this situation, a strategy of synchronizing video to audio is used. The synchronization strategy is specifically:
updating the audio timeline according to the timestamps of the decoded audio frames;
judging whether a time corresponding to the timestamp of the video frame exists on the audio timeline; if it exists, synchronizing the video data corresponding to the video frame to the audio data corresponding to that time;
if no time corresponding to the timestamp of the video frame exists on the audio timeline, synchronizing the video data that has no corresponding time to the audio data corresponding to the nearest later time on the audio timeline.
In this embodiment, the synchronization strategy is a method of synchronizing video to audio. Specifically, the method includes: updating the audio timeline according to the timestamps of the decoded audio frames and the audio data corresponding to those audio frames; arranging the decoded video frames in timestamp order and putting them into a queue; and, in the video rendering process, taking the video data of one video frame out of the queue and checking, according to the timestamp of the video frame, whether corresponding audio data exists at the corresponding time on the audio timeline, so as to judge whether the video frame needs to be rendered.
The audio-video synchronization strategy in this embodiment is illustrated with a specific example, as shown in Fig. 4. The timestamps of the first frame AFirst and the last frame ALast in the audio buffer queue are aF and aL respectively, and the timestamps of the first frame VFirst and the last frame VLast in the video buffer queue are vF and vL respectively; the audio buffer queue also caches audio frames Audio1, Audio2, Audio3 and Audio4, and the video buffer queue correspondingly caches video frames Video1, Video2, Video3 and Video4.
If, after the maximum-timestamp judgment calculated in this embodiment, none of the cached audio frames is discarded, the audio frames are decoded and played after being synchronized one-to-one with the video frames.
The other situation is shown in Fig. 4: after the maximum-timestamp judgment calculated in this embodiment, the three audio frames Audio1, Audio2 and Audio3 in the audio buffer queue are dropped. After the audio frames in the audio buffer queue are decoded, the updated audio timeline contains a jump: once the audio frame corresponding to the previous tail finishes playing, playback continues with the audio data corresponding to audio frame Audio4, and the timestamp on the audio timeline jumps directly from aL to the timestamp a4 corresponding to Audio4. However, after the frame corresponding to the video tail is decoded and played, video frame Video1 is also decoded and must be synchronized. Since no time corresponding to Video1 exists on the audio timeline, Video1 can only be synchronized to audio frame Audio4; and because the timestamp v1 of Video1 is less than the timestamp a4 of Audio4, i.e. the audio output on the audio timeline is ahead of the video playback, the video data corresponding to Video1 needs to be fast-rendered; the same applies to Video2 and Video3. By fast-rendering the video data of these video frames, the multiple video frames are synchronized to the audio timeline.
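Under the simplifying assumption that the audio timeline can be summarized by the timestamp of the audio frame currently being output (called audio_clock below, an illustrative name not taken from the specification), the per-frame decision of this first strategy might be sketched as follows; the Wait branch, for a video frame ahead of the audio timeline, is an added assumption the description does not detail.

```cpp
// First synchronization strategy, assuming the audio timeline is summarized by
// audio_clock, the pts of the audio frame currently being output.
#include <cstdint>

enum class VideoAction { Render, FastRender, Wait };

VideoAction SyncVideoToAudio(int64_t video_pts, int64_t audio_clock) {
    if (video_pts == audio_clock) return VideoAction::Render;      // matching time exists
    if (video_pts < audio_clock)  return VideoAction::FastRender;  // audio is ahead (e.g. v1 < a4): catch up
    return VideoAction::Wait;                                       // video is ahead: hold the frame (assumption)
}
```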
In one embodiment, another audio-video synchronization strategy may also be used, which includes:
updating the audio timeline according to the timestamps of the decoded audio frames;
judging whether a time corresponding to the timestamp of the video frame exists on the audio timeline; if a time corresponding to the timestamp of the video frame exists on the audio timeline, synchronizing the video data corresponding to the video frame to the audio data corresponding to that time;
if no time corresponding to the timestamp of the video frame exists on the audio timeline, discarding the video data corresponding to the video frame. Taking the example above, the three audio frames Audio1, Audio2 and Audio3 in the audio buffer queue have been dropped, so Video1, Video2 and Video3 in the corresponding video buffer queue are also discarded after decoding, thereby ensuring that the video is synchronized to the audio timeline.
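Under the same audio_clock assumption as the previous sketch, the second strategy replaces fast rendering with dropping:

```cpp
// Second synchronization strategy under the same audio_clock assumption: a video
// frame whose timestamp the audio timeline has already jumped past (because the
// matching audio frames were dropped) is discarded rather than fast-rendered.
bool ShouldDropVideoFrame(int64_t video_pts, int64_t audio_clock) {
    return video_pts < audio_clock;  // no corresponding time left on the audio timeline
}
```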
From the audio-video synchronization strategy it can be seen that the audio timeline is the reference timeline for playback; that is, the size of the audio buffer queue 1041 determines the duration of the live delay. Therefore, by controlling the timestamps of the audio frames waiting to enter the audio buffer queue, or of the audio frames already in it, the size of the audio buffer queue can be controlled, and accurate control of the duration of the live delay can be further achieved.
The embodiments corresponding to the decoding synchronization method for a live data stream described in this specification are not restricted to data stream synchronization in scenarios such as network live streaming or live television broadcast played on mobile terminals or computers; they are also applicable to various application scenarios that involve caching, decoding and synchronizing audio and video data.
Corresponding to the foregoing embodiments of the decoding synchronization method for a live data stream, this specification also provides a decoding synchronization device for a live data stream. As shown in Fig. 5, the device 500 includes:
an audio buffer module 501, comprising an audio buffer queue 1041 for buffering audio frames;
a computing module 502, configured to calculate a maximum timestamp according to specified parameters, wherein the specified parameters include a decoding delay of audio frames and an output delay of decoded audio data, and the maximum timestamp indicates the maximum value of the timestamps of the audio frames that the audio buffer queue can cache;
a judgment module 503, configured to compare the timestamp of a non-first audio frame to be added to the audio buffer queue with the maximum timestamp, store the non-first audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discard the non-first audio frame if its timestamp is greater than the maximum timestamp;
an audio decoder module 504, configured to obtain audio frames from the audio buffer queue and decode them to obtain audio data.
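A minimal structural sketch of how these modules might fit together is given below; the class names and members are illustrative assumptions, not definitions from the specification.

```cpp
// Structural sketch of device 500: the judgment module admits frames into the
// audio buffer module using the maximum timestamp from the computing module, and
// the audio decoder module drains the queue. Names are illustrative.
#include <cstdint>
#include <deque>

struct AudioBufferModule {            // 501
    std::deque<AudioFrame> queue;     // audio buffer queue (1041)
};

struct ComputingModule {              // 502
    int64_t max_pts = 0;              // refreshed from the specified parameters
};

struct JudgmentModule {               // 503
    void Admit(const AudioFrame& f, const ComputingModule& calc, AudioBufferModule& buf) {
        if (f.pts <= calc.max_pts) buf.queue.push_back(f);  // keep
        // else: drop
    }
};

struct AudioDecoderModule {           // 504
    // decodes frames taken from AudioBufferModule::queue to produce audio data
};
```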
In this embodiment, the device 500 further includes an obtaining module 505, which is configured to obtain the decoding delay of the audio decoder module when decoding audio frames and the output delay of the audio output module when outputting the decoded audio data.
In this embodiment, the computing module 502 extracts from the obtaining module 505 the decoding delay of the audio decoder module 504 when decoding audio frames and the output delay of the audio output module when outputting the decoded audio data.
In one embodiment, the device 500 further includes a recording unit 509, which stores the specified parameters; the specified parameters include the system time and timestamp corresponding to the first audio frame and the preset maximum playback delay allowed by the user.
The maximum timestamp is calculated and determined based on the calculation method corresponding to the method embodiments.
The judgment module 503 obtains the maximum timestamp calculated by the computing module 502, compares the timestamp of the non-first audio frame with the maximum timestamp, and decides, according to the comparison result, whether to store the audio frame in the audio buffer queue.
In one embodiment, after the non-first audio frame is stored in the audio buffer queue, the judgment module 503 is further configured to traverse the audio frames in the audio buffer queue and discard any audio frame whose timestamp exceeds the maximum timestamp.
In one embodiment, the device 500 further includes: a video buffer module 506, comprising a video buffer queue for buffering video frames; a video decoder module 507, configured to obtain video frames from the video buffer queue and decode them to obtain video data; and an audio-video synchronization module 508, configured to synchronize the audio data and the video data based on an audio-video synchronization strategy.
In one embodiment, the audio-video synchronization module 508 includes:
a timeline updating unit, configured to update the audio timeline according to the timestamps of the decoded audio frames;
a synchronization unit, configured to judge whether a time corresponding to the timestamp of the video frame exists on the audio timeline; if such a time exists, synchronize the video data corresponding to the video frame to the audio data corresponding to that time; and if no time corresponding to the timestamp of the video frame exists on the audio timeline, synchronize the video data that has no corresponding time to the audio data corresponding to the nearest later time on the audio timeline.
In one embodiment, the specific steps by which the computing module 502 calculates the maximum timestamp are: obtaining the time difference from the system time corresponding to the first audio frame at the start of playback to the current system time; determining the timestamp of the audio frame corresponding to the currently playing video frame; and adding the preset maximum playback delay allowed by the user to this timestamp and then subtracting the decoding delay and the output delay to obtain the maximum timestamp.
In one embodiment, the audio-visual synchronization module 508 further includes having:
Time shaft updating unit, for updating audio timeline according to the corresponding timestamp of decoded audio frame;
Synchronization unit, on audio timeline described in multilevel iudge with the presence or absence of it is corresponding with the timestamp of the video frame when Between, it is if there is the time corresponding with the timestamp of the video frame on the audio timeline, the video frame is corresponding Video data synchronization is to the time corresponding audio data;If on the audio timeline there is no with the video frame when Between stab the corresponding time, then the corresponding video data of the video frame is abandoned.
The function of modules and the realization process of effect are specifically detailed in the above method and correspond to step in above-mentioned apparatus Realization process, details are not described herein.
The device embodiments of this specification can be applied to a computing device, such as a server or a terminal device. The device embodiments may be implemented by software, or by hardware or a combination of software and hardware.
In addition, this specification also provides decoding synchronization equipment for a live data stream. As shown in Fig. 6, the decoding synchronization equipment 600 includes: a processor 601 and a memory 602;
the memory 602 is configured to store executable computer instructions;
the processor 601, when executing the computer instructions, performs the following steps:
calculating a maximum timestamp according to specified parameters, wherein the specified parameters include a decoding delay of audio frames and an output delay of decoded audio data, and the maximum timestamp indicates the maximum value of the timestamps of the audio frames that the audio buffer queue can cache;
comparing the timestamp of a non-first audio frame to be added to the audio buffer queue with the maximum timestamp; storing the non-first audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discarding the non-first audio frame if its timestamp is greater than the maximum timestamp;
obtaining audio frames from the audio buffer queue and decoding them to obtain audio data.
Specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Other embodiments of this specification will readily occur to those skilled in the art after considering the specification and practicing the invention applied for here. This specification is intended to cover any variations, uses, or adaptations of this specification that follow its general principles and include common knowledge or customary technical means in the art not applied for in this specification. The description and examples are to be regarded as exemplary only, with the true scope and spirit of this specification being indicated by the following claims.
It should be understood that this specification is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of this specification is limited only by the appended claims.
The above are merely preferred embodiments of this specification and are not intended to limit it. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this specification shall be included within the scope of protection of this specification.

Claims (15)

1. A decoding synchronization method for a live data stream, characterized by comprising:
calculating a maximum timestamp according to specified parameters, wherein the specified parameters include a decoding delay of audio frames and an output delay of decoded audio data, and the maximum timestamp indicates the maximum value of the timestamps of audio frames that can be cached in an audio buffer queue;
comparing the timestamp of a non-first audio frame to be added to the audio buffer queue with the maximum timestamp; storing the non-first audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discarding the non-first audio frame if its timestamp is greater than the maximum timestamp;
obtaining audio frames from the audio buffer queue and decoding them to obtain audio data.
2. The decoding synchronization method for a live data stream according to claim 1, characterized in that the specified parameters further include: the recorded system time and timestamp corresponding to the first audio frame, and a preset maximum playback delay allowed by the user, wherein the first audio frame corresponds to the video frame at the start of playback.
3. The decoding synchronization method for a live data stream according to claim 1, characterized in that, after the non-first audio frame is stored in the audio buffer queue, the method further comprises: traversing the audio frames in the audio buffer queue, and discarding any audio frame whose timestamp exceeds the maximum timestamp.
4. The decoding synchronization method for a live data stream according to claim 1 or 3, characterized in that the method further comprises:
buffering video frames into a video buffer queue, and recording the timestamps of the video frames;
obtaining video frames from the video buffer queue and decoding them to obtain video data;
synchronizing the audio data and the video data based on an audio-video synchronization strategy.
5. The decoding synchronization method for a live data stream according to claim 4, characterized in that the audio-video synchronization strategy comprises:
updating an audio timeline according to the timestamps of the decoded audio frames;
judging whether a time corresponding to the timestamp of the video frame exists on the audio timeline;
if a time corresponding to the timestamp of the video frame exists on the audio timeline, synchronizing the video data corresponding to the video frame to the audio data corresponding to that time;
if no time corresponding to the timestamp of the video frame exists on the audio timeline, synchronizing the video data that has no corresponding time to the audio data corresponding to the nearest later time on the audio timeline.
6. The decoding synchronization method for a live data stream according to claim 4, characterized in that the audio-video synchronization strategy comprises:
updating an audio timeline according to the timestamps of the decoded audio frames;
judging whether a time corresponding to the timestamp of the video frame exists on the audio timeline; if a time corresponding to the timestamp of the video frame exists on the audio timeline, synchronizing the video data corresponding to the video frame to the audio data corresponding to that time;
if no time corresponding to the timestamp of the video frame exists on the audio timeline, discarding the video data corresponding to the video frame.
7. The decoding synchronization method for a live data stream according to claim 2, characterized in that the specific steps of calculating the maximum timestamp are:
obtaining the time difference from the system time corresponding to the first audio frame at the start of playback to the current system time;
determining the timestamp of the audio frame corresponding to the currently playing video frame;
adding the preset maximum playback delay allowed by the user to the timestamp and then subtracting the sum of the decoding delay and the output delay to obtain the maximum timestamp.
8. A decoding synchronization device for a live data stream, characterized by comprising:
an audio buffer module, comprising an audio buffer queue for buffering audio frames;
a computing module, configured to calculate a maximum timestamp according to specified parameters, wherein the specified parameters include a decoding delay of audio frames and an output delay of decoded audio data, and the maximum timestamp indicates the maximum value of the timestamps of the audio frames that the audio buffer queue can cache;
a judgment module, configured to compare the timestamp of a non-first audio frame to be added to the audio buffer queue with the maximum timestamp, store the non-first audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discard the non-first audio frame if its timestamp is greater than the maximum timestamp;
an audio decoder module, configured to obtain audio frames from the audio buffer queue and decode them to obtain audio data.
9. The decoding synchronization device for a live data stream according to claim 8, characterized in that the specified parameters further include: the system time and timestamp corresponding to the first audio frame stored in a recording unit, and a preset maximum playback delay allowed by the user, wherein the first audio frame corresponds to the video frame at the start of playback.
10. The decoding synchronization device for a live data stream according to claim 8, characterized in that, after the non-first audio frame is stored in the audio buffer queue, the judgment module is further configured to traverse the audio frames in the audio buffer queue and discard any audio frame whose timestamp exceeds the maximum timestamp.
11. The decoding synchronization device for a live data stream according to claim 8 or 10, characterized in that the device further comprises: a video buffer module, comprising a video buffer queue for buffering video frames;
a video decoder module, configured to obtain video frames from the video buffer queue and decode them to obtain video data;
an audio-video synchronization module, configured to synchronize the audio data and the video data based on an audio-video synchronization strategy.
12. The decoding synchronization device for a live data stream according to claim 11, characterized in that the audio-video synchronization module comprises:
a timeline updating unit, configured to update an audio timeline according to the timestamps of the decoded audio frames;
a synchronization unit, configured to judge whether a time corresponding to the timestamp of the video frame exists on the audio timeline; if such a time exists, synchronize the video data corresponding to the video frame to the audio data corresponding to that time; and if no time corresponding to the timestamp of the video frame exists on the audio timeline, synchronize the video data that has no corresponding time to the audio data corresponding to the nearest later time on the audio timeline.
13. The decoding synchronization device for a live data stream according to claim 11, characterized in that the audio-video synchronization module further comprises:
a timeline updating unit, configured to update an audio timeline according to the timestamps of the decoded audio frames;
a synchronization unit, configured to judge whether a time corresponding to the timestamp of the video frame exists on the audio timeline; if a time corresponding to the timestamp of the video frame exists on the audio timeline, synchronize the video data corresponding to the video frame to the audio data corresponding to that time; and if no time corresponding to the timestamp of the video frame exists on the audio timeline, discard the video data corresponding to the video frame.
14. The decoding synchronization device for a live data stream according to claim 9, characterized in that the specific steps by which the computing module calculates the maximum timestamp are:
obtaining the time difference from the system time corresponding to the first audio frame at the start of playback to the current system time;
determining the timestamp of the audio frame corresponding to the currently playing video frame;
adding the preset maximum playback delay allowed by the user to the timestamp and then subtracting the decoding delay and the output delay to obtain the maximum timestamp.
15. Decoding synchronization equipment for a live data stream, characterized in that the decoding synchronization equipment comprises: a processor and a memory;
the memory is configured to store executable computer instructions;
the processor is configured to implement the steps of the method according to any one of claims 1 to 7 when executing the computer instructions.
CN201811637340.XA 2018-12-29 2018-12-29 Decoding synchronization method, device and equipment for live data stream Active CN109714634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811637340.XA CN109714634B (en) 2018-12-29 2018-12-29 Decoding synchronization method, device and equipment for live data stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811637340.XA CN109714634B (en) 2018-12-29 2018-12-29 Decoding synchronization method, device and equipment for live data stream

Publications (2)

Publication Number Publication Date
CN109714634A true CN109714634A (en) 2019-05-03
CN109714634B CN109714634B (en) 2021-06-29

Family

ID=66259584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811637340.XA Active CN109714634B (en) 2018-12-29 2018-12-29 Decoding synchronization method, device and equipment for live data stream

Country Status (1)

Country Link
CN (1) CN109714634B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100490403B1 (en) * 2002-05-04 2005-05-17 삼성전자주식회사 Method for controlling buffering of audio stream and apparatus thereof
CN101778269B (en) * 2009-01-14 2012-10-24 扬智电子科技(上海)有限公司 Synchronization method of audio/video frames of set top box
CN101902625A (en) * 2009-05-27 2010-12-01 深圳市九洲电器有限公司 Interactive-type internet protocol television video data processing method and system as well as set top box
CN102572611B (en) * 2010-12-07 2015-05-13 中国电信股份有限公司 Method for watching network live stream synchronously with different users and system thereof
CN104394421B (en) * 2013-09-23 2018-08-17 贵阳朗玛信息技术股份有限公司 The processing method and processing device of video frame
US10116989B1 (en) * 2016-09-12 2018-10-30 Twitch Interactive, Inc. Buffer reduction using frame dropping
CN108696773B (en) * 2017-04-11 2021-03-09 苏州谦问万答吧教育科技有限公司 Real-time video transmission method and device
CN108769786B (en) * 2018-05-25 2020-12-29 网宿科技股份有限公司 Method and device for synthesizing audio and video data streams

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106330761A (en) * 2015-06-30 2017-01-11 中兴通讯股份有限公司 Congestion control method and device based on queue delay
CN106454553A (en) * 2016-11-15 2017-02-22 深圳市视维科技有限公司 A precise time delay live video network transmission control method
CN108462896A (en) * 2018-03-23 2018-08-28 北京潘达互娱科技有限公司 Live data method for stream processing, device and electronic equipment

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111093107A (en) * 2019-12-18 2020-05-01 深圳市麦谷科技有限公司 Method and device for playing real-time live stream
CN111010603A (en) * 2019-12-18 2020-04-14 浙江大华技术股份有限公司 Video caching and forwarding processing method and device
CN111601135A (en) * 2020-05-09 2020-08-28 青岛海信传媒网络技术有限公司 Method for synchronously injecting audio and video elementary streams and display equipment
CN112004030A (en) * 2020-07-08 2020-11-27 北京兰亭数字科技有限公司 Visual VR (virtual reality) director system for meeting place control
CN114095769A (en) * 2020-08-24 2022-02-25 海信视像科技股份有限公司 Live broadcast low-delay processing method of application-level player and display equipment
CN114095769B (en) * 2020-08-24 2024-05-14 海信视像科技股份有限公司 Live broadcast low-delay processing method of application-level player and display device
CN112235597A (en) * 2020-09-17 2021-01-15 深圳市捷视飞通科技股份有限公司 Method and device for synchronous protection of streaming media live broadcast audio and video and computer equipment
CN112235597B (en) * 2020-09-17 2022-07-29 深圳市捷视飞通科技股份有限公司 Method and device for synchronous protection of streaming media live broadcast audio and video and computer equipment
CN113014997A (en) * 2021-03-12 2021-06-22 上海哔哩哔哩科技有限公司 Cache updating method and device
CN113473229B (en) * 2021-06-25 2022-04-12 荣耀终端有限公司 Method for dynamically adjusting frame loss threshold and related equipment
WO2022267733A1 (en) * 2021-06-25 2022-12-29 荣耀终端有限公司 Method for dynamically adjusting frame-dropping threshold value, and related devices
CN113473229A (en) * 2021-06-25 2021-10-01 荣耀终端有限公司 Method for dynamically adjusting frame loss threshold and related equipment
CN115914708A (en) * 2021-08-23 2023-04-04 西安诺瓦星云科技股份有限公司 Media audio and video synchronization method and system and electronic equipment
CN113784118A (en) * 2021-09-14 2021-12-10 广州博冠信息科技有限公司 Video quality evaluation method and device, electronic equipment and storage medium
CN114025233A (en) * 2021-10-27 2022-02-08 网易(杭州)网络有限公司 Data processing method and device, electronic equipment and storage medium
CN114025233B (en) * 2021-10-27 2023-07-14 网易(杭州)网络有限公司 Data processing method and device, electronic equipment and storage medium
CN116074559A (en) * 2021-10-30 2023-05-05 杭州当虹科技股份有限公司 Design method of reference clock when multi-terminal synchronous playing contains pure audio stream
CN114172605A (en) * 2021-11-18 2022-03-11 湖南康通电子股份有限公司 Synchronous playing method, system and storage medium
CN114172605B (en) * 2021-11-18 2024-03-08 湖南康通电子股份有限公司 Synchronous playing method, system and storage medium
CN114257771B (en) * 2021-12-21 2023-12-01 杭州海康威视数字技术股份有限公司 Video playback method and device for multipath audio and video, storage medium and electronic equipment
CN114257771A (en) * 2021-12-21 2022-03-29 杭州海康威视数字技术股份有限公司 Video playback method and device for multi-channel audio and video, storage medium and electronic equipment
CN114339381A (en) * 2021-12-28 2022-04-12 北京中交兴路信息科技有限公司 Audio and video synchronization method and device, electronic equipment and storage medium
CN114339381B (en) * 2021-12-28 2024-06-11 北京中交兴路信息科技有限公司 Audio and video synchronization method and device, electronic equipment and storage medium
CN114866830A (en) * 2022-03-30 2022-08-05 中国经济信息社有限公司 Audio and video synchronization method and device and computer readable storage medium
CN114512139A (en) * 2022-04-18 2022-05-17 杭州星犀科技有限公司 Processing method and system for multi-channel audio mixing, mixing processor and storage medium
CN114979712A (en) * 2022-05-13 2022-08-30 北京字节跳动网络技术有限公司 Video playing starting method, device, equipment and storage medium
CN115174980A (en) * 2022-06-21 2022-10-11 浪潮卓数大数据产业发展有限公司 Audio and video synchronization method, device, equipment and medium based on security queue
CN115065860B (en) * 2022-07-01 2023-03-14 广州美录电子有限公司 Audio data processing method, device, equipment and medium suitable for stage
CN115065860A (en) * 2022-07-01 2022-09-16 广州美录电子有限公司 Audio data processing method, device, equipment and medium suitable for stage
CN115484494A (en) * 2022-09-15 2022-12-16 云控智行科技有限公司 Method, device and equipment for processing digital twin video stream
CN115484494B (en) * 2022-09-15 2024-04-02 云控智行科技有限公司 Digital twin video stream processing method, device and equipment
CN117376609A (en) * 2023-09-21 2024-01-09 北京国际云转播科技有限公司 Video synchronization method and device and video playing equipment

Also Published As

Publication number Publication date
CN109714634B (en) 2021-06-29

Similar Documents

Publication Publication Date Title
CN109714634A (en) A kind of decoding synchronous method, device and the equipment of live data streams
US10575042B2 (en) Media content synchronization
KR102536652B1 (en) Dynamic reduction of alternative content playback to support aligning the end of the alternative content with the end of the substitute content.
CN109792545B (en) Method for transmitting video content from server to client device
CN106470352B (en) Live channel playing method, device and system
US8505058B2 (en) Synchronization and automation in an ITV environment
JP6509826B2 (en) Synchronize multiple over-the-top streaming clients
EP1002424B1 (en) Processing coded video
CN107566918B (en) A kind of low delay under video distribution scene takes the neutrel extraction of root
JP2015515208A (en) Buffer management method for synchronization of correlated media presentations
US10638180B1 (en) Media timeline management
KR102469142B1 (en) Dynamic playback of transition frames while transitioning between media stream playbacks
US20150113576A1 (en) Method and apparatus for ip video signal synchronization
US8195829B2 (en) Streaming media player and method
US11758245B2 (en) Interactive media events
CN110519627B (en) Audio data synchronization method and device
KR20100058625A (en) System and method for an early start of audio-video rendering
KR20170045733A (en) Method for fast channel change and corresponding device
US20170353747A1 (en) Quality of Media Synchronization
WO2021111988A1 (en) Video playback device, video playback system, and video playback method
JPWO2014115389A1 (en) Video display device and video display method
US10694240B2 (en) Method for decoding an audio/video stream and corresponding device
GB2544796B (en) Video content synchronisation
JPH09311689A (en) Information outputting device
CN118590693A (en) Audio-visual data synchronization method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218
Applicant after: Hisense Visual Technology Co., Ltd.
Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218
Applicant before: QINGDAO HISENSE ELECTRONICS Co., Ltd.

GR01 Patent grant