CN106658135A - Audio and video playing method and device - Google Patents
Audio and video playing method and device
- Publication number
- CN106658135A CN106658135A CN201611234859.4A CN201611234859A CN106658135A CN 106658135 A CN106658135 A CN 106658135A CN 201611234859 A CN201611234859 A CN 201611234859A CN 106658135 A CN106658135 A CN 106658135A
- Authority
- CN
- China
- Prior art keywords
- audio
- video
- time
- frame
- buffer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4392—Processing of audio elementary streams involving audio buffer management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4394—Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44004—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Abstract
The embodiment of the invention discloses an audio and video playing method and device, applied to a multimedia player in audio pass-through mode. The method comprises the steps of: at intervals of a preset synchronization duration, or whenever a preset synchronization time point is reached, calculating the current buffer time of an audio buffer; determining the audio play time according to the buffer time and a first target timestamp; determining, according to the audio play time and a second target timestamp, whether the audio frames currently stored in the audio buffer and the video frames currently stored in a video buffer meet a synchronous playing condition; and if so, pushing the video frames currently stored in the video buffer to a video device for playing, and pushing the audio frames currently stored in the audio buffer to an audio device for playing. By applying the method and device disclosed by the embodiment of the invention, audio frames and video frames that meet the synchronous playing condition are played simultaneously, synchronous playing of audio and video is achieved while the multimedia player is in audio pass-through mode, and the playing effect of the multimedia player is improved.
Description
Technical field
The present invention relates to the field of multimedia technology, and more particularly to an audio and video playing method and device.
Background technology
With the development of Internet technology, the channels through which users obtain multimedia information have become more and more convenient, and various multimedia players are developing rapidly. Synchronous playing of audio and video is one of the important aspects that reflect the service quality of a multimedia player.
Audio-video synchronization refers to maintaining the temporal relationship between audio frames and video frames, and can be achieved by reducing the difference between the play times of audio frames and video frames. Because each audio frame carries an audio timestamp and each video frame carries a video timestamp, audio-video synchronization methods mainly use a timestamp-based approach: first, a reference clock is selected (the time on the reference clock is required to increase linearly); when the audio and video data streams are generated, each audio data block and each video data block is stamped with a timestamp (generally comprising a start time and an end time) according to the time on the reference clock; during playback, the timestamp on a data block is read and playback is arranged with reference to the current time on the reference clock. If the start time of the data block is later than the current time on the reference clock, the data block is not played immediately, but only when the reference clock reaches its start time; if the start time of the data block is earlier than the current time on the reference clock, the data block is played as soon as possible, so that the playing progress catches up with the reference clock.
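The timestamp comparison against the reference clock described above can be sketched as follows. This is a minimal illustration; the function name and return values are hypothetical, not part of the patent.

```python
def playback_action(block_start_ms, reference_clock_ms):
    """Decide how to handle a data block from its start timestamp and the
    current time on the reference clock (both in milliseconds)."""
    if block_start_ms > reference_clock_ms:
        # Block is early: wait until the reference clock reaches its start time.
        return ("wait", block_start_ms - reference_clock_ms)
    elif block_start_ms < reference_clock_ms:
        # Block is late: play it as soon as possible to catch up.
        return ("play_now", reference_clock_ms - block_start_ms)
    else:
        return ("play_on_time", 0)
```

For instance, a block stamped 100 ms seen when the reference clock reads 80 ms is held for 20 ms before playing.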
A multimedia player must decode the received audio frames and video frames before they can be played; it plays audio frames through an audio device and video frames through a video device. When processing audio frames, a multimedia player mainly works in one of two ways. In the first, the player decodes the audio frames back into raw audio data, that is, PCM (Pulse Code Modulation) audio data, and then pushes the PCM audio data to the audio device for playing; the decoding is completed by the player's built-in decoder. In the second, the player does not decode the audio frames back into raw audio data, but outputs the audio frames directly to the audio device, which decodes and plays them itself; this mode is also referred to as the audio pass-through (direct output) mode.
For a multimedia player in audio pass-through mode, achieving synchronous playing of audio and video is a technical difficulty. At present, some multimedia players do not perform audio-video synchronization at all; for example, almost no multimedia player supports synchronized processing of Dolby EAC3 audio. It can thus be seen that existing multimedia players in audio pass-through mode lack a playing method that achieves audio-video synchronization, which results in a poor playing effect.
Summary of the invention
The purpose of the embodiments of the present invention is to provide an audio and video playing method and device, so as to achieve synchronous playing of audio frames and video frames while a multimedia player is in audio pass-through mode, and to improve the playing effect of the multimedia player.
To achieve the above purpose, an embodiment of the invention discloses an audio and video playing method, applied to a multimedia player in audio pass-through mode, the method comprising:

at intervals of a preset synchronization duration, or whenever a preset synchronization time point is reached, calculating the buffer time of the current audio buffer, wherein the audio frames stored in the current audio buffer are audio frames encapsulated according to a preset encapsulation standard;

determining the audio play time according to the buffer time and a first target timestamp, wherein the first target timestamp is the timestamp of the audio frame with the smallest timestamp among the audio frames currently stored in the audio buffer;

determining, according to the audio play time and a second target timestamp, whether the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet a synchronous playing condition, wherein the second target timestamp is the timestamp of the video frame with the smallest timestamp among the video frames currently stored in the video buffer;

if the condition is met, pushing the video frames currently stored in the video buffer to a video device for playing, and pushing the audio frames currently stored in the audio buffer to an audio device for playing.
Optionally, the preset encapsulation standard is the IEC 61937 standard, and the audio frames are audio frames in EAC3 format.
Optionally, calculating the buffer time of the current audio buffer comprises:

calculating the buffer time of the current audio buffer according to the formula T = M / (S × N);

where T is the buffer time of the current audio buffer, M is the total number of bytes of all audio frames stored in the audio buffer, S is the sample rate of the audio frames, and N is the number of bytes of a single audio sample.
Optionally, determining the audio play time according to the buffer time and the first target timestamp comprises: determining the sum of the buffer time and the time corresponding to the first target timestamp as the audio play time.
Optionally, determining, according to the audio play time and the second target timestamp, whether the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition comprises:

judging whether the audio play time is greater than the time corresponding to the second target timestamp, and if so, determining that the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition.
To achieve the above purpose, an embodiment of the invention also discloses an audio and video playing device, applied to a multimedia player in audio pass-through mode, the device comprising:

a computing module, configured to calculate, at intervals of a preset synchronization duration or whenever a preset synchronization time point is reached, the buffer time of the current audio buffer, wherein the audio frames stored in the current audio buffer are audio frames encapsulated according to a preset encapsulation standard;

a first determining module, configured to determine the audio play time according to the buffer time and a first target timestamp, wherein the first target timestamp is the timestamp of the audio frame with the smallest timestamp among the audio frames currently stored in the audio buffer;

a second determining module, configured to determine, according to the audio play time and a second target timestamp, whether the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet a synchronous playing condition, wherein the second target timestamp is the timestamp of the video frame with the smallest timestamp among the video frames currently stored in the video buffer;

a playing module, configured to, when the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition, push the video frames currently stored in the video buffer to a video device for playing, and push the audio frames currently stored in the audio buffer to an audio device for playing.
Optionally, the preset encapsulation standard is the IEC 61937 standard, and the audio frames are audio frames in EAC3 format.
Optionally, the computing module is specifically configured to:

calculate, at intervals of the preset synchronization duration or whenever a preset synchronization time point is reached, the buffer time of the current audio buffer according to the formula T = M / (S × N);

where T is the buffer time of the current audio buffer, M is the total number of bytes of all audio frames stored in the audio buffer, S is the sample rate of the audio frames, and N is the number of bytes of a single audio sample.
Optionally, the first determining module is specifically configured to: according to the buffer time and the first target timestamp, determine the sum of the buffer time and the time corresponding to the first target timestamp as the audio play time.
Optionally, the second determining module is specifically configured to: according to the audio play time and the second target timestamp, judge whether the audio play time is greater than the time corresponding to the second target timestamp, and if so, determine that the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition.
As can be seen from the above technical solutions, in the audio and video playing method and device provided by the embodiments of the present invention, at intervals of a preset synchronization duration or whenever a preset synchronization time point is reached, the buffer time of the current audio buffer is calculated, wherein the audio frames stored in the current audio buffer are audio frames encapsulated according to a preset encapsulation standard; the audio play time is determined according to the buffer time and a first target timestamp, wherein the first target timestamp is the timestamp of the audio frame with the smallest timestamp among the audio frames currently stored in the audio buffer; whether the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet a synchronous playing condition is determined according to the audio play time and a second target timestamp, wherein the second target timestamp is the timestamp of the video frame with the smallest timestamp among the video frames currently stored in the video buffer; and if the condition is met, the video frames currently stored in the video buffer are pushed to a video device for playing, and the audio frames currently stored in the audio buffer are pushed to an audio device for playing.

By applying the embodiments of the present invention, audio frames and video frames that meet the synchronous playing condition are played simultaneously, synchronous playing of audio and video is achieved while the multimedia player is in audio pass-through mode, and the playing effect of the multimedia player is improved.

Of course, implementing any product or method of the present invention does not necessarily require achieving all of the above advantages at the same time.
Description of the drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic flowchart of an audio and video playing method provided by an embodiment of the present invention;

Fig. 2 is a schematic structural diagram of an audio and video playing device provided by an embodiment of the present invention.
Specific embodiments

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

The embodiments of the present invention disclose an audio and video playing method and device, applied to a multimedia player in audio pass-through mode, which are described in detail below.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of an audio and video playing method provided by an embodiment of the present invention, which comprises the following steps:

S101, at intervals of a preset synchronization duration, or whenever a preset synchronization time point is reached, calculating the buffer time of the current audio buffer.

It should be noted that the audio frames stored in the current audio buffer are audio frames encapsulated according to a preset encapsulation standard.
For example, assuming the preset synchronization duration is 600 ms, the buffer time of the current audio buffer is calculated every 600 ms. Alternatively, assuming the audio is 2 hours long and the preset synchronization time points are the 10th, 30th, 60th, 90th and 105th minutes of the audio, the buffer time of the current audio buffer is calculated whenever the 10th, 30th, 60th, 90th or 105th minute is reached.
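The triggering rule in the example above can be sketched as follows. The function and helper are illustrative only, using the durations and time points from the example; they are not part of the patent.

```python
def preset_sync_points_ms():
    # Preset synchronization time points from the example above: the 10th,
    # 30th, 60th, 90th and 105th minute of a 2-hour audio stream.
    return [m * 60_000 for m in (10, 30, 60, 90, 105)]

def should_sync(now_ms, last_sync_ms, interval_ms=600, points_ms=()):
    # Trigger when the preset synchronization duration has elapsed since the
    # last synchronization, or when a preset time point has just been crossed.
    if now_ms - last_sync_ms >= interval_ms:
        return True
    return any(last_sync_ms < p <= now_ms for p in points_ms)
```

With the default 600 ms interval, a check at 600 ms after the last synchronization fires; with only time points configured, a check that crosses the 10-minute mark (600,000 ms) fires.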
Specifically, the preset encapsulation standard is the IEC 61937 standard, and the audio frames are audio frames in EAC3 format. The IEC 61937 standard, designed by the International Electrotechnical Commission (IEC), is an international standard that can be used to encapsulate audio frames; therefore, audio frames encapsulated according to IEC 61937 are universal, and the audio devices used to play audio frames all support de-encapsulating, decoding and playing audio frames encapsulated according to the IEC 61937 standard. The audio format refers to the coding technique of the audio file, and has various implementations, for example AC3 (Audio Coding 3), AAC (Advanced Audio Coding) and EAC3 (Enhanced AC-3); audio frames in different formats have different sample rates and codec modes. Encapsulating an audio frame according to the preset encapsulation standard adds a header before the audio frame; by reading the header information, the audio device can obtain the audio format and the start and end byte positions of the audio frame without decoding it, which improves the decoding speed and accuracy.

For example, when an EAC3-format audio frame is encapsulated according to the IEC 61937 standard, a packet header is added before the audio frame. The header is 8 bytes long and comprises four fields, Pa, Pb, Pc and Pd, each occupying two bytes, where Pa and Pb are sync bytes, Pc is the burst information used to record the data type, and Pd is the length information of the payload. When the Pa value 0xF872 and the Pb value 0x4E1F are detected, the start position of the audio frame is obtained as the position of the Pa bytes; when the Pc value 0x0015 (i.e. 21) is detected, the audio format of the frame can be determined to be EAC3, and its length can be determined to be 6144*4 bytes, so the end position of the audio frame is obtained by adding 6144*4 to the position of the 8th header byte.
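The header detection described above can be sketched as follows. This is a hedged illustration, not a conforming IEC 61937 parser: big-endian field order is an assumption of this sketch, and in the real standard the units of the Pd length field vary by data type, so only the EAC3 case from the text is handled concretely.

```python
import struct

PA, PB = 0xF872, 0x4E1F          # IEC 61937 sync words Pa and Pb
DATA_TYPE_EAC3 = 0x0015          # Pc value 21 indicates an EAC3 burst
EAC3_BURST_BYTES = 6144 * 4      # EAC3 frame length given in the text

def parse_burst_preamble(buf, offset=0):
    """Parse the 8-byte burst preamble (Pa, Pb, Pc, Pd, two bytes each) at
    `offset`; return (start, end, data_type), or None when the sync words
    are absent or the buffer is too short."""
    if offset + 8 > len(buf):
        return None
    pa, pb, pc, pd = struct.unpack_from(">4H", buf, offset)
    if pa != PA or pb != PB:
        return None
    if pc == DATA_TYPE_EAC3:
        # End position: 6144*4 bytes past the 8-byte header, as in the text.
        return (offset, offset + 8 + EAC3_BURST_BYTES, pc)
    # For other data types, fall back to the payload length carried in Pd.
    return (offset, offset + 8 + pd, pc)
```

A buffer beginning with 0xF872, 0x4E1F, 0x0015 is thus recognized as the start of an EAC3 frame ending 8 + 6144*4 bytes later.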
In practical applications, the buffer time of the current audio buffer can be calculated according to the formula T = M / (S × N);

where T is the buffer time of the current audio buffer, M is the total number of bytes of all audio frames stored in the audio buffer, S is the sample rate of the audio frames, and N is the number of bytes of a single audio sample.
It should be noted that the buffer time of the current audio buffer can be calculated from the number A of currently buffered audio frames and the buffer time B of each audio frame, as T = A × B, where A = M / (L × N), M is the total number of bytes of all audio frames stored in the audio buffer, L is the number of samples in one audio frame after encapsulation according to the preset encapsulation standard, and N is the number of bytes of a single audio sample; and B = L / S, where S is the sample rate of the audio frames. Therefore, T = A × B = M / (S × N).
Compared with existing multimedia players in audio pass-through mode, which calculate the buffer time of un-encapsulated audio frames simply from the number of buffered frames M / P, where M is the total number of bytes of all audio frames stored in the audio buffer and P is the number of bytes of one un-encapsulated audio frame, the embodiment of the present invention takes into account the influence of the sample rate and of the number of samples per encapsulated audio frame on the buffer time of the current audio buffer, so that the calculated buffer time of the current audio buffer is more accurate.
It should be noted that the original collected sound signal is a continuously varying analog signal; it therefore needs to be sampled, quantized and coded to obtain a digital signal, which is convenient for the multimedia player to transmit and process the audio data. Since the frequency range of the sound signals audible to the human ear is 20 Hz to 20 kHz, the maximum bandwidth of audio is 20 kHz. According to the Nyquist sampling theorem, sampling at twice the highest frequency of a sine wave is sufficient to reconstruct the waveform completely and faithfully. Therefore, for a sound signal that is to be digitized and then restored to an analog signal, the sample rate must be greater than 40 kHz. Generally, the sample rate of EAC3-format audio frames is 48 kHz; a 48 kHz sample rate means that one second of sound waveform is described with 48000 data points.
Assuming that the total number of bytes of all audio frames stored in the audio buffer is 100 KB (here 100,000 bytes), the sample rate of the audio frames is 48 kHz, and the number of bytes of a single audio sample is 4 B, the buffer time of the current audio buffer is 100,000 / (48,000 × 4) ≈ 521 ms.
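The buffer-time formula T = M / (S × N) can be sketched as a small helper. Treating 100 KB as 100,000 bytes reproduces the 521 ms figure from the example above; the function name is illustrative, not from the patent.

```python
def audio_buffer_time_ms(total_bytes, sample_rate_hz, bytes_per_sample):
    # T = M / (S * N): M bytes buffered, S samples per second,
    # N bytes per audio sample; result converted to milliseconds.
    return total_bytes * 1000.0 / (sample_rate_hz * bytes_per_sample)
```

For instance, `audio_buffer_time_ms(100_000, 48_000, 4)` gives about 520.8 ms, which rounds to the 521 ms of the worked example.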
S102, determining the audio play time according to the buffer time and the first target timestamp.

It should be noted that the first target timestamp is the timestamp of the audio frame with the smallest timestamp among the audio frames currently stored in the audio buffer. Audio frames are stored in the audio buffer in ascending order of timestamp and are also played in ascending order of timestamp; therefore, the audio frame corresponding to the first target timestamp is the audio frame in the audio buffer that is played first, and the remaining audio frames are played in turn in ascending order of timestamp.

Specifically, according to the buffer time and the first target timestamp, the sum of the buffer time and the time corresponding to the first target timestamp is determined as the audio play time.

Assuming that the buffer time of the current audio buffer is 521 ms and the time corresponding to the first target timestamp is 23 ms, then 521 + 23 = 544 ms is determined as the audio play time.
S103, determining, according to the audio play time and the second target timestamp, whether the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet a synchronous playing condition; if the condition is met, performing S104.

It should be noted that the second target timestamp is the timestamp of the video frame with the smallest timestamp among the video frames currently stored in the video buffer; that is, the video frame corresponding to the second target timestamp is the video frame in the video buffer that is played first.

Specifically, determining, according to the audio play time and the second target timestamp, whether the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition may be:

judging whether the audio play time is greater than the time corresponding to the second target timestamp, and if so, determining that the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition.
It should be noted that, in the case where the audio play time is greater than the time corresponding to the second target timestamp, it may further be considered whether the difference between the audio play time and the time corresponding to the second target timestamp is less than a preset threshold; if so, it is determined that the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition.
It can be understood that the time corresponding to the second target timestamp is the play time of the video frame corresponding to the second target timestamp. The audio play time is chosen as the reference time; by establishing the correspondence between the audio frame play times and the reference time, and the correspondence between the video frame timestamps and the reference time, the relative timing relationship between audio frames and video frames is determined, and thus the play times of audio frames and video frames are determined, so as to achieve synchronous playing of audio and video. When the audio play time is reached, the audio frame with the smallest timestamp stored in the current audio buffer is about to be played; when the audio play time is greater than the time corresponding to the second target timestamp, the video frames lag behind the audio frames, and the video frame corresponding to the second target timestamp needs to be played immediately to achieve synchronous playing. Establishing the correspondence between audio frame timestamps and the reference time, and between video frame timestamps and the reference time, belongs to the prior art and is not described here.
Assuming that the audio play time is 544 ms, the time corresponding to the second target timestamp is 520 ms, and the preset threshold is 100 ms, then since the audio play time of 544 ms is greater than the 520 ms corresponding to the second target timestamp, it can be determined that the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition; or, further considering that the difference between the audio play time and the time corresponding to the second target timestamp, 544 ms - 520 ms = 24 ms, is less than 100 ms, it can likewise be determined that the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition.
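Steps S102 and S103 together can be sketched as a single check. The function below is a hypothetical sketch (names and the combined interface are illustrative): the audio play time is the buffer time plus the time of the first target timestamp, and the condition holds when it exceeds the time of the second target timestamp while the gap stays under the preset threshold.

```python
def meets_sync_condition(buffer_time_ms, first_audio_ts_ms,
                         first_video_ts_ms, threshold_ms=100):
    # S102: audio play time = buffer time + time of the first target timestamp.
    audio_play_time = buffer_time_ms + first_audio_ts_ms
    # S103: the video frame must not be ahead of the audio play time...
    if audio_play_time <= first_video_ts_ms:
        return False
    # ...and the lag must stay below the preset threshold.
    return (audio_play_time - first_video_ts_ms) < threshold_ms
```

With the numbers of the example (buffer time 521 ms, first target timestamp 23 ms, second target timestamp 520 ms), the audio play time is 544 ms, the gap is 24 ms, and the condition is met.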
S104, pushing the video frames currently stored in the video buffer to the video device for playing, and pushing the audio frames currently stored in the audio buffer to the audio device for playing.

It should be noted that the audio device and the video device may be the same device, for example a television set that plays both audio frames and video frames; or the audio device and the video device may be two independent devices, for example a power amplifier as the audio device and a display as the video device. Since the embodiment of the present invention is applied to a multimedia player in audio pass-through mode, the audio frames currently stored in the audio buffer are un-decoded audio frames, and they need to be pushed to the audio device, which de-encapsulates and decodes them before they can be played.
For example, assuming the audio device is a power amplifier and the video device is a display, the audio frames can be transmitted to the power amplifier through a digital audio/video interface, for example HDMI (High Definition Multimedia Interface) or SPDIF (Sony/Philips Digital Interface Format), so that the power amplifier de-encapsulates, decodes and plays the audio frames; and the video frames can be transmitted to the display device through another digital audio/video interface, so that the display device plays the video frames.

As another example, assuming the audio device and the video device are both a television set, the audio frames and video frames can be transmitted to the television set through the above digital audio/video interfaces, so that the television set plays the video frames and de-encapsulates, decodes and plays the audio frames.
It can be seen that, by applying the embodiment of the present invention, audio frames and video frames that meet the synchronous playing condition are played simultaneously, synchronous playing of audio and video is achieved while the multimedia player is in audio pass-through mode, and the playing effect of the multimedia player is improved.
Corresponding with above-mentioned embodiment of the method, the embodiment of the present invention also provides a kind of audio and video display device.
Referring to Fig. 2, a kind of structural representation of audio and video display device that Fig. 2 is provided by the embodiment of the present invention, including:
a computing module 201, configured to calculate the buffer time of the current audio buffer every preset synchronization duration or whenever a preset synchronization time point is reached, wherein the audio frames stored in the current audio buffer are audio frames packed according to a preset packing standard;
a first determining module 202, configured to determine an audio playing time according to the buffer time and a first target timestamp, wherein the first target timestamp is the timestamp of the audio frame with the smallest timestamp among the audio frames currently stored in the audio buffer;
a second determining module 203, configured to determine, according to the audio playing time and a second target timestamp, whether the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet a synchronous playing condition, wherein the second target timestamp is the timestamp of the video frame with the smallest timestamp among the video frames currently stored in the video buffer;
a playing module 204, configured to, when the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition, push the video frames currently stored in the video buffer to the video device for playing, and push the audio frames currently stored in the audio buffer to the audio device for playing.
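The cooperation of modules 201 to 204 amounts to one periodic check. The following sketch is illustrative only: the frame representation as (timestamp, payload) pairs and the push callables are stand-ins invented here, not APIs from the patent.

```python
def sync_tick(audio_frames, video_frames, buffer_time,
              push_audio, push_video):
    """One synchronization check, run every preset sync duration.

    audio_frames / video_frames -- lists of (timestamp, payload) pairs
    buffer_time                 -- module 201's result, in seconds
    push_audio / push_video     -- callables standing in for the audio
                                   and video device interfaces

    Returns True when both streams were pushed for playback.
    """
    if not audio_frames or not video_frames:
        return False
    first_target = min(ts for ts, _ in audio_frames)    # module 202 input
    play_time = buffer_time + first_target              # module 202
    second_target = min(ts for ts, _ in video_frames)   # module 203 input
    if play_time > second_target:                       # module 203
        push_video(video_frames)                        # module 204: to display
        push_audio(audio_frames)                        # module 204: to amplifier
        return True
    return False


pushed = []
ok = sync_tick([(10.0, b"a")], [(10.1, b"v")], 0.32,
               push_audio=pushed.append, push_video=pushed.append)
```

With 0.32 s of buffered audio whose earliest frame carries timestamp 10.0, the audio playing time (about 10.32) exceeds the earliest video timestamp 10.1, so both buffers are pushed.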
In the embodiment of the present invention, the preset packing standard is the IEC 61937 standard, and the audio frames are audio frames in the EAC3 format.
The computing module 201 is specifically configured to: every preset synchronization duration or whenever a preset synchronization time point is reached, calculate the buffer time of the current audio buffer according to the formula, wherein T is the buffer time of the current audio buffer, M is the total number of bytes of all audio frames stored in the audio buffer, S is the sample rate of the audio frames, and N is the number of bytes of a single audio frame.
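The formula itself is rendered as an image in the source and does not survive extraction, but given the definitions of T, M, S and N, the buffer time can be estimated from the number of buffered frames (M/N) and the per-frame duration. The sketch below assumes E-AC-3 frames carrying a fixed number of PCM samples each; the value 1536 samples per frame is a common E-AC-3 configuration and an assumption here, as is the function name.

```python
def buffer_time_seconds(total_bytes, sample_rate_hz, frame_bytes,
                        samples_per_frame=1536):
    """Estimate how many seconds of audio sit in the buffer.

    total_bytes       -- M: total bytes of all buffered audio frames
    sample_rate_hz    -- S: sample rate of the audio frames
    frame_bytes       -- N: bytes of a single packed frame
    samples_per_frame -- assumed PCM samples per E-AC-3 frame; 1536 is
                         common but not stated in the patent text
    """
    frame_count = total_bytes / frame_bytes             # M / N frames
    frame_duration = samples_per_frame / sample_rate_hz  # seconds per frame
    return frame_count * frame_duration


# Ten 6144-byte frames of 1536 samples each at 48 kHz: 10 * 32 ms buffered.
t = buffer_time_seconds(total_bytes=10 * 6144, sample_rate_hz=48000,
                        frame_bytes=6144)
```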
The first determining module 202 is specifically configured to: according to the buffer time and the first target timestamp, determine the sum of the buffer time and the time corresponding to the first target timestamp as the audio playing time.
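The first determining module is a single addition. A minimal sketch under illustrative names:

```python
def audio_play_time(buffer_time: float, first_target_ts: float) -> float:
    # Audio playing time = buffered duration plus the timestamp of the
    # earliest (minimum-timestamp) audio frame still in the audio buffer.
    return buffer_time + first_target_ts
```

Intuitively, the earliest buffered frame will actually reach the loudspeaker only after everything ahead of it in the buffer has drained, which is what the sum captures.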
The second determining module 203 is specifically configured to: according to the audio playing time and the second target timestamp, judge whether the audio playing time is greater than the time corresponding to the second target timestamp; if so, determine that the audio frames stored in the current audio buffer and the video frames stored in the current video buffer meet the synchronous playing condition.
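The synchronous playing condition is a single comparison; a sketch with illustrative names:

```python
def meets_sync_condition(play_time: float, second_target_ts: float) -> bool:
    # Both streams are released for playback only once the audio playing
    # time has passed the timestamp of the earliest buffered video frame.
    return play_time > second_target_ts
```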
It can be seen that, with the embodiment of the present invention, audio frames and video frames that meet the synchronous playing condition are played simultaneously, so that a multimedia player in audio pass-through mode plays audio and video synchronously, improving the playing effect of the multimedia player.
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The embodiments in this specification are described in a correlated manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant details, refer to the description of the method embodiment.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiment can be completed by a program instructing related hardware, and the program can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. An audio and video playing method, characterized in that it is applied to a multimedia player in audio pass-through mode, the method comprising:
calculating the buffer time of a current audio buffer every preset synchronization duration or whenever a preset synchronization time point is reached, wherein the audio frames stored in the current audio buffer are audio frames packed according to a preset packing standard;
determining an audio playing time according to the buffer time and a first target timestamp, wherein the first target timestamp is the timestamp of the audio frame with the smallest timestamp among the audio frames currently stored in the audio buffer;
determining, according to the audio playing time and a second target timestamp, whether the audio frames currently stored in the audio buffer and the video frames currently stored in a video buffer meet a synchronous playing condition, wherein the second target timestamp is the timestamp of the video frame with the smallest timestamp among the video frames currently stored in the video buffer;
and if the condition is met, pushing the video frames currently stored in the video buffer to a video device for playing, and pushing the audio frames currently stored in the audio buffer to an audio device for playing.
2. The method according to claim 1, characterized in that the preset packing standard is the IEC 61937 standard, and the audio frames are audio frames in the EAC3 format.
3. The method according to claim 1, characterized in that calculating the buffer time of the current audio buffer comprises:
calculating the buffer time of the current audio buffer according to the formula, wherein T is the buffer time of the current audio buffer, M is the total number of bytes of all audio frames stored in the audio buffer, S is the sample rate of the audio frames, and N is the number of bytes of a single audio frame.
4. The method according to claim 1, characterized in that determining the audio playing time according to the buffer time and the first target timestamp comprises:
determining the sum of the buffer time and the time corresponding to the first target timestamp as the audio playing time.
5. The method according to any one of claims 1 to 4, characterized in that determining, according to the audio playing time and the second target timestamp, whether the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition comprises:
judging whether the audio playing time is greater than the time corresponding to the second target timestamp, and if so, determining that the audio frames stored in the current audio buffer and the video frames stored in the current video buffer meet the synchronous playing condition.
6. An audio and video playing apparatus, characterized in that it is applied to a multimedia player in audio pass-through mode, the apparatus comprising:
a computing module, configured to calculate the buffer time of a current audio buffer every preset synchronization duration or whenever a preset synchronization time point is reached, wherein the audio frames stored in the current audio buffer are audio frames packed according to a preset packing standard;
a first determining module, configured to determine an audio playing time according to the buffer time and a first target timestamp, wherein the first target timestamp is the timestamp of the audio frame with the smallest timestamp among the audio frames currently stored in the audio buffer;
a second determining module, configured to determine, according to the audio playing time and a second target timestamp, whether the audio frames currently stored in the audio buffer and the video frames currently stored in a video buffer meet a synchronous playing condition, wherein the second target timestamp is the timestamp of the video frame with the smallest timestamp among the video frames currently stored in the video buffer;
and a playing module, configured to, when the audio frames currently stored in the audio buffer and the video frames currently stored in the video buffer meet the synchronous playing condition, push the video frames currently stored in the video buffer to a video device for playing, and push the audio frames currently stored in the audio buffer to an audio device for playing.
7. The apparatus according to claim 6, characterized in that the preset packing standard is the IEC 61937 standard, and the audio frames are audio frames in the EAC3 format.
8. The apparatus according to claim 6, characterized in that the computing module is specifically configured to:
every preset synchronization duration or whenever a preset synchronization time point is reached, calculate the buffer time of the current audio buffer according to the formula, wherein T is the buffer time of the current audio buffer, M is the total number of bytes of all audio frames stored in the audio buffer, S is the sample rate of the audio frames, and N is the number of bytes of a single audio frame.
9. The apparatus according to claim 6, characterized in that the first determining module is specifically configured to:
according to the buffer time and the first target timestamp, determine the sum of the buffer time and the time corresponding to the first target timestamp as the audio playing time.
10. The apparatus according to any one of claims 6 to 9, characterized in that the second determining module is specifically configured to:
according to the audio playing time and the second target timestamp, judge whether the audio playing time is greater than the time corresponding to the second target timestamp, and if so, determine that the audio frames stored in the current audio buffer and the video frames stored in the current video buffer meet the synchronous playing condition.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611234859.4A CN106658135B (en) | 2016-12-28 | 2016-12-28 | A kind of audio and video playing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106658135A true CN106658135A (en) | 2017-05-10 |
CN106658135B CN106658135B (en) | 2019-08-09 |
Family
ID=58832175
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611234859.4A Active CN106658135B (en) | 2016-12-28 | 2016-12-28 | A kind of audio and video playing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106658135B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107509100A (en) * | 2017-09-15 | 2017-12-22 | 深圳国微技术有限公司 | Audio and video synchronization method, system, computer installation and computer-readable recording medium |
CN109168059A (en) * | 2018-10-17 | 2019-01-08 | 上海赛连信息科技有限公司 | A kind of labial synchronization method playing audio & video respectively on different devices |
CN109729391A (en) * | 2018-12-18 | 2019-05-07 | 北京华夏电通科技有限公司 | A kind of sending method and system of multipath media stream |
CN112235597A (en) * | 2020-09-17 | 2021-01-15 | 深圳市捷视飞通科技股份有限公司 | Method and device for synchronous protection of streaming media live broadcast audio and video and computer equipment |
CN113300797A (en) * | 2021-03-29 | 2021-08-24 | 广州市保伦电子有限公司 | Multi-device self-adaptive audio synchronous playing method and processing terminal |
CN115834943A (en) * | 2023-01-10 | 2023-03-21 | 荣耀终端有限公司 | Audio and video synchronization method and device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070081562A1 (en) * | 2005-10-11 | 2007-04-12 | Hui Ma | Method and device for stream synchronization of real-time multimedia transport over packet network |
CN101237586A (en) * | 2008-02-22 | 2008-08-06 | 上海华平信息技术股份有限公司 | Synchronous playing method for audio and video buffer |
CN101282482A (en) * | 2008-05-04 | 2008-10-08 | 中兴通讯股份有限公司 | Apparatus, system and method for synchronously playing video data and audio data |
CN101466044A (en) * | 2007-12-19 | 2009-06-24 | 康佳集团股份有限公司 | Method and system for synchronously playing stream medium audio and video |
CN102868939A (en) * | 2012-09-10 | 2013-01-09 | 杭州电子科技大学 | Method for synchronizing audio/video data in real-time video monitoring system |
CN103167320A (en) * | 2011-12-15 | 2013-06-19 | 中国电信股份有限公司 | Audio and video synchronization method and audio and video synchronization system and mobile phone live broadcast client-side |
Also Published As
Publication number | Publication date |
---|---|
CN106658135B (en) | 2019-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106658135A (en) | Audio and video playing method and device | |
CN108616800B (en) | Audio playing method and device, storage medium and electronic device | |
CN102577360B (en) | Synchronized playback of media players | |
US10546599B1 (en) | Systems and methods for identifying a mute/sound sample-set attribute | |
CN108111997A (en) | Bluetooth equipment audio synchronization method and system | |
US10180981B2 (en) | Synchronous audio playback method, apparatus and system | |
KR101459319B1 (en) | Method and apparatus for controlling audio volume | |
CN109979472A (en) | Dynamic range control for various playback environments | |
CN104795083B (en) | A kind of information processing method and electronic equipment | |
JP2010538571A (en) | Audio signal decoding method and apparatus | |
US9756437B2 (en) | System and method for transmitting environmental acoustical information in digital audio signals | |
CN104333802A (en) | Video playing method and video player | |
CN101448186B (en) | System and method for automatic regulating sound effect of a loudspeaker | |
CN106937137A (en) | A kind of synchronous method of multi-channel digital audio coding audio-visual | |
CN110741435A (en) | Mixed audio signal synchronization based on correlation and attack analysis | |
CN104038774B (en) | Generate the method and device of ring signal file | |
CN101751967B (en) | Multimedia file producing and playing method, multimedia file producing device and player | |
CN103873919A (en) | Information processing method and electronic equipment | |
Cho et al. | Robust data hiding for MCLT based acoustic data transmission | |
JP4471028B2 (en) | Information processing apparatus, information processing method, and program | |
JP2008299032A (en) | Linguistic training aid, and character data regenerator | |
CN111190568A (en) | Volume adjusting method and device | |
CN106782598A (en) | Television image and peripheral hardware synchronous sound control method and device | |
JPWO2002058053A1 (en) | Digital audio data encoding and decoding methods | |
CN205283753U (en) | Intelligent Mobile Terminal and multichannel media files synchronized play system |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||