CN102056026B - Audio/video synchronization detection method and system, and voice detection method and system - Google Patents
- Publication number
- CN102056026B CN2009102374145A CN200910237414A
- Authority
- CN
- China
- Prior art keywords
- audio
- video
- time
- short
- audioref
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses an audio/video synchronization detection method and system, and a voice detection method and system. The audio/video synchronization detection method comprises the following steps: determining, in an audio/video file played at a target terminal, the starting play time of an audio segment matching audio reference data and the starting play time of a video frame matching video reference data; determining, from these two starting play times, the audio/video play time difference when the file is played at the target terminal; acquiring the audio/video play time difference when the file is played at a source terminal; and determining, from the play time differences at the source terminal and the target terminal, the audio/video synchronization condition when the file is played at the target terminal. The invention improves the accuracy of audio/video synchronization detection.
Description
Technical field
The present invention relates to audio and video detection techniques in the communications field, and in particular to an audio/video synchronization detection method and system, and a voice detection method and system.
Background art
In mobile video communication services, audio and video carry no time information during encoding, so obtaining audio/video synchronization information becomes difficult.
One approach is to add time information to the audio data packets and video data packets after audio/video encoding. After the encoded audio/video file has traversed the network to the receiving terminal, the receiving terminal parses the received file, extracts the time information carried in the audio and video packets, and judges the audio/video synchronization condition from the parsed time information.
However, the above audio/video synchronization detection method has the following problems:
(1) Although audio and video each carry time information after packetization, the time information of the two packet streams has no direct correspondence; moreover, the frame lengths and packet sizes of audio and video differ, so the relative delay between audio and video cannot be determined accurately;
(2) A synchronization detection result based on the time information carried in the audio and video packet headers reflects only the network transmission delay. During actual playback, the audio/video player at the receiving terminal maintains a buffer, and the decoded audio and video streams are synchronized through this buffer by the player; a detection result based on packet-header time information therefore cannot reflect the effect of the player's synchronization adjustment on audio/video synchronization, i.e. the result obtained in this way is inaccurate.
Summary of the invention
An embodiment of the invention provides an audio/video synchronization detection method and system, to solve the problem of low audio/video synchronization detection accuracy in the prior art.
The technical solutions provided by the embodiments of the invention include:
An audio/video synchronization detection method comprises the steps of:
determining, in an audio/video file played at a target terminal, the starting play time of the audio segment matching audio reference data and the starting play time of the video frame matching video reference data;
determining, from the starting play time of the audio segment matching the audio reference data and the starting play time of the video frame matching the video reference data, the audio/video play time difference when the file is played at the target terminal;
acquiring the audio/video play time difference when the file is played at a source terminal, and determining, from the play time differences at the source terminal and the target terminal, the audio/video synchronization condition when the file is played at the target terminal;
wherein the audio reference data are voice data, and determining the starting play time of the audio segment matching the audio reference data comprises: detecting the voice segments contained in the played audio/video file and their start and end play times, and determining the voice segment matching the audio reference data by performing voice recognition between the detected voice segments and the audio reference data; wherein detecting the voice segments and their start and end play times comprises:
searching the audio signal of the played file by short-time average magnitude of the voice signal; when an audio signal whose short-time average magnitude exceeds a first amplitude threshold is found, recording the moment as a first current time; and when, after the first current time, the short-time average magnitude first drops below the first amplitude threshold, recording the moment as a second current time;
searching from the first current time toward earlier times and from the second current time toward later times; when an audio signal whose short-time average magnitude drops to a second amplitude threshold is found, continuing the search in the same direction by short-time average zero-crossing rate, the second amplitude threshold being smaller than the first amplitude threshold;
when, searching toward earlier times, the short-time average zero-crossing rate drops below a zero-crossing rate threshold, recording the moment as a third current time and taking it as the starting point of the voice segment; and when, searching toward later times, the short-time average zero-crossing rate drops below the zero-crossing rate threshold, recording the moment as a fourth current time and taking it as the end point of the voice segment.
An audio/video synchronization detection system comprises:
an audio identification module, configured to determine, in the audio/video file played at the target terminal, the starting play time of the audio segment matching the audio reference data;
a video identification module, configured to determine, in the audio/video file played at the target terminal, the starting play time of the video frame matching the video reference data;
a time-difference determination module, configured to determine, from the starting play time of the matching audio segment determined by the audio identification module and the starting play time of the matching video frame determined by the video identification module, the audio/video play time difference when the file is played at the target terminal;
a synchronization detection module, configured to acquire the audio/video play time difference when the file is played at the source terminal, and to determine, from the acquired play time difference and the play time difference determined by the time-difference determination module, the audio/video synchronization condition when the file is played at the target terminal;
wherein the audio reference data are voice data, and the audio identification module determines the starting play time of the matching audio segment by detecting the voice segments contained in the played file and their start and end play times, and determining the voice segment matching the audio reference data by voice recognition between the detected voice segments and the audio reference data; wherein the audio identification module detects the voice segments and their start and end play times as follows:
searching the audio signal of the played file by short-time average magnitude of the voice signal; when an audio signal whose short-time average magnitude exceeds a first amplitude threshold is found, recording the moment as a first current time; and when, after that moment, the short-time average magnitude first drops below the first amplitude threshold, recording the moment as a second current time;
searching from the first current time toward earlier times and from the second current time toward later times; when an audio signal whose short-time average magnitude drops to a second amplitude threshold is found, continuing the search in the same direction by short-time average zero-crossing rate, the second amplitude threshold being smaller than the first amplitude threshold;
when, searching toward earlier times, the short-time average zero-crossing rate drops below a zero-crossing rate threshold, recording the moment as a third current time and taking it as the starting point of the voice segment; and when, searching toward later times, the short-time average zero-crossing rate drops below the zero-crossing rate threshold, recording the moment as a fourth current time and taking it as the end point of the voice segment.
In the above embodiments, for the audio/video file played at the target terminal, the starting play time of the audio segment matching the audio reference data and the starting play time of the video frame matching the video reference data are determined, giving the audio/video play time difference at the target terminal; this difference is then compared with the play time difference of the same file at the source terminal to determine the synchronization condition at the target terminal. Compared with the prior art, the synchronization detection of these embodiments does not rely on time information in the audio and video data packets but on the audio/video file as actually played at the target terminal, and it takes into account the synchronization adjustment performed during audio/video decoding at the target terminal, so the detection result is more accurate. The method is particularly suitable for detecting the audio/video synchronization condition after network transmission.
An embodiment of the invention further provides a voice detection method and system, to solve the problem of low voice detection accuracy in the prior art.
The technical solutions provided by the embodiments of the invention include:
A voice detection method comprises the steps of:
searching the audio under test by short-time average magnitude of the voice signal; when an audio signal whose short-time average magnitude exceeds a first amplitude threshold is found, recording the moment as a first current time; and when, after that moment, the short-time average magnitude first drops below the first amplitude threshold, recording the moment as a second current time;
searching from the first current time toward earlier times and from the second current time toward later times; when an audio signal whose short-time average magnitude drops to a second amplitude threshold is found, continuing the search in the same direction by short-time average zero-crossing rate, the second amplitude threshold being smaller than the first amplitude threshold;
when, searching toward earlier times, the short-time average zero-crossing rate drops below a zero-crossing rate threshold, recording the moment as a third current time and taking it as the starting point of the voice segment; and when, searching toward later times, the short-time average zero-crossing rate drops below the zero-crossing rate threshold, recording the moment as a fourth current time and taking it as the end point of the voice segment.
A voice detection system comprises:
a first search module, configured to search the audio under test by short-time average magnitude of the voice signal, to record the moment when an audio signal whose short-time average magnitude exceeds a first amplitude threshold is found as a first current time, and to record the moment when, after that, the short-time average magnitude first drops below the first amplitude threshold as a second current time;
a second search module, configured to continue the search in the same direction by short-time average zero-crossing rate when the first search module, searching from the first current time toward earlier times and from the second current time toward later times, finds an audio signal whose short-time average magnitude drops to a second amplitude threshold, the second amplitude threshold being smaller than the first amplitude threshold;
a voice segment determination module, configured to record the moment when the second search module, searching toward earlier times, finds the short-time average zero-crossing rate dropping below a zero-crossing rate threshold as a third current time and take it as the starting point of the voice segment, and to record the moment when, searching toward later times, the short-time average zero-crossing rate drops below the zero-crossing rate threshold as a fourth current time and take it as the end point of the voice segment.
In the above embodiments, the voice detection exploits the facts that short-time average energy is effective for identifying voice segments when background noise is low, and that short-time average zero-crossing rate is effective when background noise is high: both the short-time average magnitude and the short-time average zero-crossing rate of the voice signal are considered. On the basis of a short-time-average-magnitude detection method, the short-time average zero-crossing rate is further examined, and the amplitude and zero-crossing rate features together are used to detect the endpoints of the voice signal, making the detected voice segment endpoints more accurate.
Description of drawings
Fig. 1 is a flow diagram of audio/video synchronization detection in an embodiment of the invention;
Fig. 2 is a flow diagram of audio/video synchronization detection for an IP network video telephone in an embodiment of the invention;
Fig. 3 is a dynamic path search diagram of the speech recognition process in an embodiment of the invention;
Fig. 4 is a diagram of the audio/video synchronization rating model in an embodiment of the invention;
Fig. 5 is a structural diagram of the audio/video synchronization detection system in an embodiment of the invention;
Fig. 6 is a structural diagram of the voice detection system in an embodiment of the invention.
Embodiment
To address the above problems in the prior art, an embodiment of the invention provides an audio/video synchronization detection method and system that detect synchronization by pattern recognition: at the sending terminal and the receiving terminal, the played audio/video file is matched against the reference data of that file; the starting play times of the audio segment matching the audio reference data and of the video frame matching the video reference data are recorded; the audio/video play time differences at the sending terminal and the receiving terminal are obtained; and the two differences are compared to calculate the delay deviation, giving the audio/video synchronization condition of the file as played at the receiving terminal.
In the embodiments of the invention, before synchronization detection, audio reference data and video reference data are prepared; they are used during detection to locate the audio reference points and video reference points in the audio/video file, from which the synchronization parameters are determined. The audio reference data may be audio waveform data, the video reference data may be video image data, and both may be pre-stored in a feature database.
Fig. 1 is a flow diagram of audio/video synchronization detection in an embodiment of the invention. The flow can be applied to evaluate the effect of network transmission on synchronization, or to evaluate the effect of different playing terminals. In the former case, the source terminal in the flow is the sending terminal of the file and the target terminal is the receiving terminal the file reaches after network transmission; in the latter, the source terminal may be a playing terminal with good synchronization quality, and the target terminal is the playing terminal whose synchronization quality is to be evaluated. The flow comprises the following steps:
In steps 101 and 102 of the flow, the recorded time may be the current system time of the target terminal, or the time relative to the playing start of the file. Steps 101 and 102 have no strict ordering: they may be exchanged or executed in parallel.
Usually the audio reference data and video reference data correspond one to one, and to make the detection more accurate there are generally several such pairs. With several pairs, the play time differences determined in step 103 of the flow of Fig. 1 also correspond to the pairs one to one: for each item of audio reference data, the starting play time of the matching audio segment is determined; for the video reference data corresponding to that audio reference data, the starting play time of the matching video frame is determined; and the difference of the two times is the audio/video play time difference for that pair. Likewise, step 104 obtains, for the file played at the source terminal, the audio/video play time difference between the audio segment matching the audio reference data and the video frame matching the video reference data.
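Read as pseudocode, the per-pair bookkeeping in steps 103 and 104 reduces to elementwise subtraction. A minimal sketch, with illustrative function names not taken from the patent:

```python
def av_time_differences(audio_starts, video_starts):
    """Per reference pair: starting play time of the matching audio
    segment minus that of the matching video frame (seconds)."""
    return [a - v for a, v in zip(audio_starts, video_starts)]

def relative_delays(dest_diffs, src_diffs):
    """Audio/video delay at the target terminal relative to the source
    terminal, computed per reference pair."""
    return [d - s for d, s in zip(dest_diffs, src_diffs)]
```

When the test file is authored so that every source-terminal difference is 0, the relative delays equal the target-terminal differences directly.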
The audio/video time difference of the test file at the sending terminal can be obtained in advance in the above manner; then, each time the file is used for synchronization detection, the pre-measured sending-terminal time difference is compared directly with the receiving-terminal time difference to determine the synchronization condition of the file after transmission.
Generally, for accurate detection, the audio reference data and video reference data used for synchronization detection should have distinctive features that are easy to identify and to pattern-match, and the test file should contain an audio segment matching the audio reference data and a video frame matching the video reference data. Preferably, in the test file, the starting play time of the audio segment matching the audio reference data and the starting play time of the video frame matching the corresponding video reference data are identical at the sample level, i.e. the audio/video time difference is 0. In that case, in step 104 of the flow of Fig. 1, since the play time difference of the file at the source terminal is 0, the synchronization condition at the target terminal can be determined directly from the play time difference found in step 103.
Taking audio/video synchronization detection for an IP network video telephone as an example, the audio/video file used for detection contains, in audio, the pronunciations of the digits 1, 2, 3, 4 and 5, and, in video, pictures of five different hand gestures shown against a solid background; during playback, each time a digit is pronounced, the corresponding gesture is shown in the picture. The audio reference data are the audio waveform data of each digit pronunciation, stored in an audio feature database; the video reference data are the video image data of each of the five gestures against the solid background, stored in a video feature database. When the file is played at the sending terminal, the synchronization time difference between each digit pronunciation and the corresponding gesture picture is known. During network transmission, the audio and video of the file are transmitted separately, forming a WAV audio file and an AVI video file at the receiving terminal. Detecting the synchronization condition of this file at the receiving terminal may, as shown in Fig. 2, comprise the following steps:
obtain the WAV audio file from the audio/video data received at the receiving terminal (step 201); determine the endpoints of each voice segment from the audio signal so as to find the voice segments (step 202); compare each voice segment with the voice data of each digit pronunciation in the audio feature database by audio pattern recognition, identifying the voice segments of the pronunciations of digits 1, 2, 3, 4 and 5 respectively (step 203); and record the start and end play times of these voice segments, so that the receiving terminal records the times of at least five audio segments (more are recorded if a digit pronunciation in the WAV file repeats) (step 204);
obtain the AVI video file from the received audio/video data (step 205); extract every image frame of the AVI video file (step 206); compare each video frame image with the image data of the gestures in the video feature database by video pattern recognition, identifying the video frames of the gestures, usually taking only the first frame identified for each gesture (step 207); and record the starting play times of these video frames, so that the receiving terminal records the times of at least five video frames (more are recorded if a gesture picture in the AVI file repeats) (step 208);
subtract the recorded starting play time of the gesture frame corresponding to digit 1 from the recorded starting play time of the pronunciation of digit 1, giving the audio/video play time difference for digit 1 (all recorded times are referenced to the system time of the receiving terminal), and obtain the play time differences for the other digits in the same way (step 209);
compare the play time differences obtained in step 209 with the known play time differences of the file at the sending terminal, determining the audio/video delay of the file at the receiving terminal relative to the sending terminal (step 210);
determine the corresponding audio/video synchronization quality grade or MOS score from the result of step 210 (step 211).
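Step 211's mapping from measured delay to a quality grade is given by the rating model of Fig. 4, which is not reproduced here; the sketch below only illustrates the shape such a mapping could take. All band edges are assumptions (loosely inspired by common lip-sync detectability figures), not values from the patent:

```python
def sync_grade(av_delay_ms):
    """Map a measured audio/video delay (ms; the sign says whether audio
    is early or late) to a 5-point MOS-like grade.
    The band edges below are illustrative assumptions only."""
    mag = abs(av_delay_ms)
    if mag <= 45:
        return 5   # mismatch generally imperceptible
    if mag <= 90:
        return 4
    if mag <= 150:
        return 3
    if mag <= 250:
        return 2
    return 1       # clearly annoying
```

A real deployment would calibrate these bands against subjective test data, as the Fig. 4 model implies.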
Regarding the choice of audio reference data in the embodiments: since a person's subjective perception is relatively sensitive to mismatch between the picture content and the starting point (from silence to sound) and end point (from sound to silence) of audio, the audio references are preferably chosen at voice segments (such as the pronunciations of digits 1 to 5). Therefore, to determine the audio segment matching the audio reference data, the endpoints of each voice segment in the audio waveform of the file are first detected, and audio pattern recognition is then performed between the detected voice segments and the audio reference data.
To detect the voice segments in an audio file, the traditional waveform detection method based on short-time energy or short-time average magnitude could be used. That method is essentially a single-threshold detector; to obtain an endpoint detection method that is more adaptive than the traditional one and extracts more accurate audio time information, the embodiments of the invention improve the traditional voice detection method and use the improved method for voice detection. The improved method exploits the facts that short-time average energy identifies voice segments effectively when background noise is low and that short-time average zero-crossing rate identifies them effectively when background noise is high: both the short-time average magnitude and the short-time average zero-crossing rate of the voice signal are considered, and on the basis of the short-time-average-magnitude method, the short-time average zero-crossing rate is further examined, using the amplitude and zero-crossing rate features together to detect the endpoints of the voice signal.
The basis for these judgements is that the various short-time parameters of speech of different natures have different probability density functions, and that adjacent frames of speech should have consistent characteristics, i.e. they do not mutate abruptly between voiced, unvoiced and silence. Usually, the short-time average magnitude of voiced speech is the largest and that of silence the smallest, while the short-time average zero-crossing rate of unvoiced speech is the largest, that of silence intermediate, and that of voiced speech the smallest.
In the voice detection method adopted by the embodiments, two amplitude threshold parameters MH and ML (MH > ML) and a short-time zero-crossing rate threshold Z0 are first determined empirically. MH should be set rather high, so that when the short-time average magnitude M of a frame of the voice signal exceeds MH, the frame can be determined with certainty not to be silence, and is quite possibly voiced. When M falls from above MH down to ML, the judgement continues with the short-time average zero-crossing rate: when the short-time average zero-crossing rate of the voice signal falls below the threshold Z0, an endpoint (beginning or end) of the voice segment has been found.
The amplitude thresholds MH and ML can be determined by statistical analysis of the short-time average magnitude and short-time average zero-crossing rate of a large number of speech samples, combined with the short-time average magnitude of the actual samples. The procedure for determining the amplitude threshold MH from speech samples is:
window the data of each speech sample into frames; based on human physiological characteristics and extensive statistics, the window length is generally set to 20 ms and the step to half a window, so that the total number of frames = total number of samples / step;
compute the short-time average magnitude in each frame according to the short-time average magnitude formula (the formula appears as an image in the original; in standard form, the short-time average magnitude of a frame is the sum of |x(m)| over the samples x(m) of the frame);
compute the short-time average zero-crossing rate in each frame according to the short-time average zero-crossing rate formula (likewise given as an image; in standard form, half the sum of |sgn(x(m)) - sgn(x(m-1))| over the frame);
All speech frames in each speech samples are traveled through statistical analysis, with the short-time average magnitude that draws speech samples and the distribution situation of short-time average zero-crossing rate;
Distribution situation according to short-time average magnitude and the short-time average zero-crossing rate of speech samples, short-time average magnitude according to quiet period, set out the threshold value MH of a thresholding, with fixed larger of this threshold value, to guarantee that short-time average magnitude in each speech samples is voice segments greater than the part of MH, then to get the zero-crossing rate threshold value Z0 of period three short-time average zero-crossing rate doubly as voice segments that mourn in silence.
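As a concrete illustration, the framing and per-frame feature computation described above can be sketched as follows (a minimal Python/NumPy sketch; beyond the 20 ms window and half-window step stated in the text, the function names and details are assumptions):

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def short_time_features(x, sr, win_ms=20):
    """Per-frame short-time average magnitude M and zero-crossing rate Z.
    Window length is 20 ms, step length half a window, as in the text."""
    frame_len = int(sr * win_ms / 1000)
    hop = frame_len // 2
    frames = frame_signal(x, frame_len, hop)
    # M = sum of absolute sample values within the frame
    M = np.abs(frames).sum(axis=1)
    # Z = half the summed magnitude of sign changes between adjacent samples
    Z = 0.5 * np.abs(np.diff(np.sign(frames), axis=1)).sum(axis=1)
    return M, Z
```

With M and Z computed over a large sample set, MH and ML would be read off the magnitude distribution, and Z0 taken as three times the average zero-crossing rate of the silent periods.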
Given the amplitude thresholds MH and ML and the short-time zero-crossing-rate threshold Z0 so determined, the speech detection process of the embodiment of the invention is:
Determine two time points A1 and A2 in the audio signal to be detected according to MH: when the short-time average magnitude M of the speech signal exceeds MH, that moment is recorded as A1, and the first moment after A1 at which M drops back to MH is recorded as A2. The interval between A1 and A2 can essentially be determined to be a speech segment;
Continue searching the speech signal before A1 and after A2. When searching from A1 toward earlier times, the moment at which M falls from above down to ML is recorded as B1; likewise, when searching from A2 toward later times, the moment at which M falls down to ML is recorded as B2. The interval between B1 and B2 can still be determined to be a speech segment;
Continue searching from B1 toward earlier times and from B2 toward later times. Searching from B1, as long as the short-time zero-crossing rate Z of the speech signal remains greater than Z0, the signal is still considered to belong to the speech segment; when Z drops below Z0, that moment is recorded as C1 and taken as the starting point of the speech segment. Likewise, searching from B2, as long as Z remains greater than Z0 the signal still belongs to the speech segment; when Z drops below Z0, that moment is recorded as C2 and taken as the end point of this speech segment;
Proceeding in this way, all the speech segments in the audio signal of the audio file, with their starting and end points, are detected.
The reason for adopting this algorithm is that the signal before B1 and after B2 may be unvoiced consonant segments. Their energy is comparably weak, so short-time average magnitude alone cannot distinguish them from silence; their short-time average zero-crossing rate, however, is markedly higher than that of silence, so this parameter is sufficient to judge the dividing point between the two, i.e. to locate the true starting and end points of the speech accurately.
This algorithm is not only suited to the speech-segment detection process of the embodiment of the invention; it is also applicable to other scenarios that need to detect speech segments in an audio signal.
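The three-stage search (A1/A2 by MH, B1/B2 by ML, C1/C2 by Z0) can be sketched on per-frame features as follows (an illustrative sketch; the function `detect_endpoints`, its return convention, and the restriction to the first segment are assumptions, not from the patent):

```python
import numpy as np

def detect_endpoints(M, Z, MH, ML, Z0):
    """Three-stage endpoint search on per-frame features M (magnitude)
    and Z (zero-crossing rate). Returns (start, end) frame indices of
    the first detected speech segment, or None if M never exceeds MH."""
    above = np.where(M > MH)[0]
    if len(above) == 0:
        return None
    # Stage 1: A1 = first frame above MH; A2 = first later frame back below MH.
    a1 = int(above[0])
    later = np.where(M[a1:] < MH)[0]
    a2 = a1 + int(later[0]) if len(later) else len(M) - 1
    # Stage 2: extend outward while the magnitude stays above the lower threshold ML.
    b1, b2 = a1, a2
    while b1 > 0 and M[b1 - 1] > ML:
        b1 -= 1
    while b2 < len(M) - 1 and M[b2 + 1] > ML:
        b2 += 1
    # Stage 3: extend further while the zero-crossing rate stays above Z0,
    # so low-energy unvoiced consonants are kept inside the segment.
    c1, c2 = b1, b2
    while c1 > 0 and Z[c1 - 1] > Z0:
        c1 -= 1
    while c2 < len(M) - 1 and Z[c2 + 1] > Z0:
        c2 += 1
    return c1, c2
```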
After the timing information of the speech segments is obtained, pattern recognition must also be performed on the detected segments, to determine the speech segment that matches the audio reference data. The embodiment of the invention uses linear prediction cepstral coefficients (LPCC) for audio pattern recognition.
Obtaining the LPCC feature parameters is divided into four main steps: pre-processing, autocorrelation calculation, solving the linear prediction coefficient (LPC) normal equations with Durbin's algorithm, and LPCC recursion. In the pre-processing, pre-emphasis boosts the high frequencies by applying a first-order FIR filter to the speech signal, to compensate for the attenuation of the glottal excitation and the high-frequency roll-off caused by mouth and nose radiation; for windowing, this algorithm preferably chooses the Hamming window as the window function.
After the LPCC feature parameters have been extracted for each frame, the speech signal is turned into a set of LPCC feature vectors. Speech recognition then consists of pattern-matching this set of features against the speech feature vectors of the reference audio data, to find the pattern at the shortest distance.
Speech recognition by pattern matching is usually divided into two phases: a training phase and a recognition phase. Reference templates are formed in the training phase; in the recognition phase, the speech feature vectors to be recognized, after transmission attenuation, are compared for similarity against the template vectors. In the embodiment of the invention, the template formed by the training phase is the feature vector of the audio reference data.
However, considering the influence of factors such as attenuation and packet loss of the audio file during transmission, the length of the speech sequence after transmission may differ from that of the original speech sequence. To address this problem, the embodiment of the invention adopts the DTW recognition algorithm, based on dynamic time warping, for pattern recognition.
In the DTW method provided by the embodiment of the invention, the distance matrix between the input pattern (the audio feature vectors of each speech segment to be recognized) and the reference pattern (the feature vectors of the audio reference data) is calculated first; an optimal path, the one whose accumulated distance is minimum, is then found in the distance matrix, and this path is precisely the non-linear time alignment between the two patterns. The principle of the algorithm is as follows:
Suppose the input pattern to be recognized and the reference pattern are denoted T and R respectively. To compare their similarity, the distortion D[T, R] between them can be calculated; the smaller the distortion, the higher the similarity. This distortion is accumulated from the distortions between the corresponding frames of T and R. Let N and M be the numbers of frames in T and R, let n and m be arbitrary frame numbers in T and R, and let D[T(n), R(m)] denote the distortion between those two feature vectors. Then:
When N = M (T and R have the same number of frames), frames T(1) and R(1), T(2) and R(2), ..., T(M) and R(M) are matched directly; the distortions D[T(1), R(1)], D[T(2), R(2)], ..., D[T(M), R(M)] are calculated and summed to give the total distortion;
When N ≠ M (the frame counts of T and R differ), dynamic programming is used for the path search. Specifically, the frame numbers of T (n = 1..N) are marked on the horizontal axis of a two-dimensional rectangular coordinate system, and the frame numbers of R (m = 1..M) on the vertical axis of this coordinate system, as shown in Figure 3. Each intersection (n, m) of the grid formed by the two axes represents the pairing of a frame of T with a frame of R. The path search then amounts to finding a path through some of these intersections, and the intersections the path passes through are the frames of T and R for which the distortion is calculated.
The path cannot be chosen arbitrarily. The speed of speech may vary, but the order of its parts cannot, so the selected path must start at the lower-left corner and end at the upper-right corner. Furthermore, to prevent an unconstrained search, paths that tilt excessively toward the n axis or the m axis can be excluded, because the compression and expansion of real speech are always limited; this is done by bounding the maximum and minimum values of the average slope at each point the path passes through. Usually the maximum slope is set to 2 and the minimum slope to 1/2.
The path cost function defined in this embodiment is d[(ni, mi)], meaning the accumulated frame distortion from the starting point (n0, m0) to the current point (ni, mi), computed as follows:
d[(ni, mi)] = D[T(ni), R(mi)] + min{ d[(ni−1, mi)], d[(ni−1, mi−1)], d[(ni−1, mi−2)] }
From this recursion, the required accumulated distortion D[T, R] can be obtained. The path cost function defined above is only one example; other path cost algorithms are not excluded.
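Under the recursion above, a minimal DTW sketch might look like this (illustrative Python; the Euclidean frame distortion and the function name are assumptions, not fixed by the patent):

```python
import numpy as np

def dtw_distance(T, R):
    """Accumulated frame distortion between feature sequences T (N x dim)
    and R (M x dim) along the minimum-cost warping path, following
    d(n, m) = D(n, m) + min(d(n-1, m), d(n-1, m-1), d(n-1, m-2))."""
    T, R = np.asarray(T, float), np.asarray(R, float)
    N, M = len(T), len(R)
    # Frame-pair distortion matrix (Euclidean distance between feature vectors).
    D = np.linalg.norm(T[:, None, :] - R[None, :, :], axis=2)
    d = np.full((N, M), np.inf)
    d[0, 0] = D[0, 0]          # the path must start at the lower-left corner
    for n in range(1, N):
        for m in range(M):
            prev = min(d[n - 1, m],
                       d[n - 1, m - 1] if m >= 1 else np.inf,
                       d[n - 1, m - 2] if m >= 2 else np.inf)
            if np.isfinite(prev):
                d[n, m] = D[n, m] + prev
    return d[N - 1, M - 1]     # the path must end at the upper-right corner
```

The recognized speech segment would then be the one whose feature sequence has the smallest `dtw_distance` to the reference feature sequence.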
The video pattern recognition method adopted by the embodiment of the invention is an image recognition method: each played video frame is captured, each captured frame image is compared with the video frame images in the feature library, and the video frame that matches a video frame image in the feature library is thereby found. This image recognition process is divided into two main stages: video capture and image recognition.
Video capture can be implemented with the AVIFile library of the Windows operating system, specifically:
First, the AVIFile library is initialized; the AVI file to be checked for synchronization is then opened and its file interface address obtained. If the file opens successfully (i.e. the video format meets the requirements), the required AVI file information is obtained through the file interface address; this information can include: the maximum data rate of the file (bytes per second), the number of streams, the file height (pixels) and width (pixels), the sample rate (samples per second), the file length (frames), the file type, and so on. The interface address of the AVI stream can be obtained from the file interface address, and the AVI file stream information is obtained through the stream interface address. Because the audio and video streams are processed separately, the stream information obtained here is only the video stream; it can include: the stream type, the frame rate (fps), the start frame, the end frame, the image quality value, the description of the stream type, and so on;
Then, the obtained video stream information is processed, and the corresponding decoding function is called to obtain the address of the decompressed data and the memory address of each frame of data (used for saving it as a BMP file); at this point the required image data has been obtained;
Finally, the header of this image data is rewritten and the data saved as the required BMP file. The BMP file is named after the frame number in the AVI video stream, and the frame time can be obtained by multiplying the current frame number by the frame interval, where the frame-interval information can be found in the structure dedicated to storing AVI file information. For example, if the file playback rate is 15 fps, the frame interval is 1/15 s, i.e. about 66667 µs, so the playback time of each frame relative to the start frame is easily obtained.
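The frame-time computation described here is a single multiplication; as a sketch (the helper name is hypothetical):

```python
def frame_time_offset(frame_index, fps):
    """Playback time of a frame relative to the start frame: index x (1 / fps).
    At 15 fps the frame interval is 1/15 s, i.e. about 66667 microseconds."""
    return frame_index / fps
```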
After the BMP pictures have been captured from the AVI file, the saved BMP files are known to be 24-bit RGB bitmaps, and the remaining work is to perform image recognition on the BMP pictures. The image recognition process can be: convert the 24-bit RGB color bitmap into a binary image to bring out the features of the target object, use pixel statistics and a contour tracing algorithm to obtain the area and perimeter of the target object in the image under examination, and compare them with the images in the feature library. This can be divided into the following steps:
Step 1: convert the target image (the captured image) to grayscale, obtaining the corresponding gray-value distribution;
Step 2: perform an iterative computation on the gray-value distribution to calculate a threshold;
Step 3: binarize the image according to the threshold (converting it into a black-and-white picture in which white is the background and black is the target object);
Step 4: perform pixel statistics on the binarized image and calculate the area of the target object (its number of pixels);
Step 5: carry out the next image processing step, tracing out the contour of the target object;
Step 6: perform pixel statistics and calculate the perimeter of the target object's contour;
Step 7: compare the obtained area and perimeter with the information of the corresponding images stored in the feature library, and judge whether this image is the required target image; if it is, record the playback time.
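Steps 1 to 4 and step 6 above can be sketched as follows (illustrative Python/NumPy; the isodata-style threshold iteration for step 2 and the 4-neighbour boundary rule for the perimeter are assumptions, since the patent does not fix the exact algorithms):

```python
import numpy as np

def to_gray(rgb):
    """Step 1: 24-bit RGB -> gray, using the usual luminance weights."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def iterative_threshold(gray, eps=0.5):
    """Step 2: iterate the threshold to the midpoint of the two class means."""
    t = gray.mean()
    while True:
        lo, hi = gray[gray <= t], gray[gray > t]
        t_new = 0.5 * (lo.mean() + hi.mean()) if len(lo) and len(hi) else t
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def area_and_perimeter(mask):
    """Steps 4 and 6: mask is a boolean image, True = target-object pixels.
    Area = pixel count; perimeter = count of object pixels that have at
    least one of their four neighbours in the background."""
    area = int(mask.sum())
    p = np.pad(mask, 1, constant_values=False)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = int((mask & ~interior).sum())
    return area, perimeter
```

The resulting (area, perimeter) pair is what step 7 would compare against the stored feature-library values.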
In the embodiment of the invention, when the audio-video synchronization situation is evaluated, the degree by which the audio leads or lags the video can be compared and mapped to a corresponding audio-video synchronization grade and a corresponding MOS score.
The audio-video synchronization MOS score in the embodiment of the invention refers to the scoring algorithm in the ITU-R BT.1359 standard and follows its piecewise calculation method: according to people's subjective perception of audio-video synchronization, thresholds for four audio-video synchronization quality grades are set. The audio-video synchronization scoring model can be as shown in Figure 4, in which the horizontal axis is the time by which the audio lags the video, the vertical axis represents the score, and the points A, B, C, A′, B′, C′ represent the three formulated grade thresholds, dividing the evaluation score into four grades. Each audio-video synchronization quality grade corresponds to a MOS score; the maximum score is 4.0, the minimum score is 1.0, and the floating range is 0.3. The audio-video synchronization grades with their thresholds and corresponding MOS scores can be as shown in Table 1:
Table 1
In order to evaluate the audio-video synchronization quality more accurately and objectively, in the embodiment of the invention multiple monitoring points are set up to detect the synchronization situation and carry out the synchronization quality evaluation. When the evaluation is performed, the synchronization MOS scores of these monitoring points are added together to obtain the overall synchronization MOS score. The overall synchronization MOS score can serve as an important indicator and, after weighted calculation with the audio MOS and video MOS scores, yield the MOS score of the overall quality of the video service.
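One hypothetical way to combine the scores (the weights, and averaging the monitoring-point scores back onto the 1.0 to 4.0 scale before weighting, are illustrative assumptions, not values given by the text):

```python
def overall_service_mos(audio_mos, video_mos, sync_mos_points,
                        w_audio=0.4, w_video=0.4, w_sync=0.2):
    """Weighted combination of audio MOS, video MOS and the overall
    synchronization MOS derived from the per-monitoring-point scores.
    The weights here are illustrative assumptions only."""
    sync_mos = sum(sync_mos_points) / len(sync_mos_points)
    return w_audio * audio_mos + w_video * video_mos + w_sync * sync_mos
```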
Based on the same technical concept as the audio-video synchronization detection of the embodiment of the invention, the embodiment of the invention also provides an audio-video synchronization detection system. As shown in Figure 5, this system comprises: an audio identification module 501, a video identification module 502, a time-difference determination module 503 and a synchronization detection module 504, wherein:
The time-difference determination module 503 is used to determine the audio-video playback time difference of the audio-video file when played at the destination, according to the initial playback time of the audio segment matching the audio reference data as determined by the audio identification module 501, and the initial playback time of the video frame matching the video reference data as determined by the video identification module 502;
The specific implementation of each function in the above functional modules is similar to the corresponding process in the audio-video synchronization detection flow described above, and is not repeated here.
Based on the same technical concept as the speech detection of the embodiment of the invention, the embodiment of the invention also provides a speech detection system. As shown in Figure 6, this system comprises: a first search module 601, a second search module 602 and a speech-segment determination module 603, wherein:
The first search module 601 receives the input audio signal to be examined and searches the audio according to the short-time average magnitude of the speech signal. Upon finding audio whose short-time average magnitude exceeds the amplitude threshold MH, it searches the audio signal forward from the current moment; and upon finding the first audio after that moment whose short-time average magnitude drops below the amplitude threshold MH, it searches the audio signal backward from the current moment;
The second search module 602 is used to continue searching the audio signal along the original search direction according to the short-time average zero-crossing rate, when the first search module 601, searching forward and backward, finds audio whose short-time average magnitude drops to the amplitude threshold ML;
The speech-segment determination module 603 is used to take the current moment as the starting point of the speech segment when the second search module 602, searching forward, finds audio whose short-time average zero-crossing rate drops below the zero-crossing-rate threshold Z0, and to take the current moment as the end point of the speech segment when, searching backward, it finds audio whose short-time average zero-crossing rate drops below the zero-crossing-rate threshold Z0.
The system can also comprise a threshold setting module 604, used to determine the amplitude threshold MH, the amplitude threshold ML and the zero-crossing-rate threshold Z0 according to the distributions of short-time average magnitude and short-time average zero-crossing rate of the audio signals in the speech sample data, wherein audio whose short-time average magnitude is above the amplitude threshold MH is a speech signal, and among the signals whose short-time average magnitude is below the amplitude threshold ML, audio whose short-time average zero-crossing rate is lower than the zero-crossing-rate threshold Z0 is not a speech signal.
The specific implementation of each function in the above functional modules is similar to the corresponding process in the speech detection flow described above, and is not repeated here.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (18)
1. An audio-video synchronization detection method, characterized by comprising the steps of:
determining, in an audio-video file played at a destination, the initial playback time of the audio segment matching audio reference data, and the initial playback time of the video frame matching video reference data;
determining the audio-video playback time difference of said audio-video file when played at the destination, according to the initial playback time of said audio segment matching the audio reference data and the initial playback time of said video frame matching the video reference data;
obtaining the audio-video playback time difference of said audio-video file when played at the source, and determining the audio-video synchronization situation of said audio-video file when played at said destination, according to the audio-video playback time differences of said audio-video file when played at the source and at the destination;
wherein said audio reference data are speech data, and the process of determining the initial playback time of the audio segment matching the audio reference data comprises: detecting the speech segments contained in the played audio-video file and their start and end playback times; and determining the speech segment matching said audio reference data by performing speech recognition processing on the detected speech segments and said audio reference data; wherein the process of determining the speech segments contained in the played audio-video file and their start and end playback times comprises:
searching the audio signal in the played audio-video file according to the short-time average magnitude of the speech signal, and, upon finding audio whose short-time average magnitude exceeds a first amplitude threshold, recording that moment as a first current moment; and, upon finding the first audio after this first current moment whose short-time average magnitude drops below the first amplitude threshold, recording that moment as a second current moment;
upon finding, searching forward from the first current moment and backward from the second current moment, audio whose short-time average magnitude drops to a second amplitude threshold, continuing to search the audio signal along the original search direction according to the short-time average zero-crossing rate, said second amplitude threshold being less than said first amplitude threshold;
upon finding, searching forward, audio whose short-time average zero-crossing rate drops below a zero-crossing-rate threshold, recording that moment as a third current moment and taking the third current moment as the starting point of the speech segment; and, upon finding, searching backward, audio whose short-time average zero-crossing rate drops below the zero-crossing-rate threshold, recording that moment as a fourth current moment and taking the fourth current moment as the end point of the speech segment.
2. the method for claim 1 is characterized in that, it is poor to obtain the audio frequency and video reproduction time of described audio-video document when source is play, and comprising:
Determine in the audio-video document that source plays, with the initial reproduction time of the audio section of described audioref Data Matching, and with the initial reproduction time of the frame of video of described video reference Data Matching;
According to the initial reproduction time of the audio section of described and audioref Data Matching, and the initial reproduction time of the frame of video of described and video reference Data Matching, it is poor to determine the audio frequency and video reproduction time of described audio-video document when source is play.
3. the method for claim 1, it is characterized in that, described the first amplitude threshold, the second amplitude threshold and zero-crossing rate threshold value distribute according to the short-time average magnitude to speech samples data sound intermediate frequency signal and short-time average zero-crossing rate distributes to determine, wherein, the audio signal of short-time average magnitude more than the first amplitude threshold is voice signal, in the voice signal of short-time average magnitude below the second amplitude threshold, the audio signal that short-time average zero-crossing rate is lower than the zero-crossing rate threshold value is not voice signal.
4. the method for claim 1 is characterized in that, determines and the process of the voice segments of described audioref Data Matching, comprising:
According to the characteristic vector of each voice segments audio signal, and the characteristic vector of described audioref data, by the definite similarity to each other of the space length that calculates each voice segments and described audioref data;
According to the similarity of determining, get wherein the most similar to described audioref data voice segments, as with the voice segments of described audioref Data Matching.
5. method as claimed in claim 4 is characterized in that, when the audio frame number of the audio frame number of voice segments and audioref data was unequal, the process of the distance of computing voice section and described audioref data was specially:
Each audio frame frame number of described voice segments is mapped on the transverse axis in the two-dimensional direct angle coordinate system, each audio frame frame number of audioref data is mapped on the ordinate of this coordinate system, on the direction of the upper right corner, determine a paths along the lower left corner of described coordinate system; According to the coordinate points of described path process, determine the frame number of the audioref data corresponding with each frame number in the described voice segments;
Corresponding relation according to the frame number of determining, utilize the characteristic vector of audio signal, calculating has audio frame signal in the described voice segments of corresponding relation and the distortion factor of the audio frame signal in the audioref data, according to the distortion factor that calculates, determine the space length between described voice segments and the described audioref data.
6. method as claimed in claim 5, it is characterized in that, the described path of determining on along the lower left corner of described coordinate system to upper right corner direction, the slope at the joint place of the frame number that identifies at each ordinate and abscissa, be no more than the first slope threshold value, be not less than the second slope threshold value, described the first slope threshold value is greater than the second slope threshold value.
7. The method of claim 1 or 2, characterized in that the process of determining the initial playback time of the video frame matching the video reference data comprises:
extracting the video frames contained in the played audio-video file;
determining the video frame matching said video reference data and its initial playback time by performing image recognition processing on the extracted video frames and said video reference data.
8. The method of claim 1, characterized in that determining the audio-video synchronization situation of said audio-video file comprises:
determining the audio-video delay variation produced by said audio-video file when played at the destination relative to when played at the source;
determining the corresponding audio-video synchronization quality grade or score according to the determined audio-video delay variation.
9. An audio-video synchronization detection system, characterized by comprising:
an audio identification module, used to determine, in an audio-video file played at a destination, the initial playback time of the audio segment matching audio reference data;
a video identification module, used to determine, in the audio-video file played at the destination, the initial playback time of the video frame matching video reference data;
a time-difference determination module, used to determine the audio-video playback time difference of said audio-video file when played at the destination, according to the initial playback time of the audio segment matching the audio reference data determined by said audio identification module, and the initial playback time of the video frame matching the video reference data determined by said video identification module;
a synchronization detection module, used to obtain the audio-video playback time difference of said audio-video file when played at the source, and to determine the audio-video synchronization situation of said audio-video file when played at said destination, according to the obtained audio-video playback time difference and the audio-video playback time difference determined by said time-difference determination module;
wherein said audio reference data are speech data; the process by which said audio identification module determines the initial playback time of the audio segment matching the audio reference data comprises: detecting the speech segments contained in the played audio-video file and their start and end playback times; and determining the speech segment matching said audio reference data by performing speech recognition processing on the detected speech segments and said audio reference data; wherein the process by which said audio identification module determines the speech segments contained in the played audio-video file and their start and end playback times comprises:
searching the audio signal in the played audio-video file according to the short-time average magnitude of the speech signal, and, upon finding audio whose short-time average magnitude exceeds a first amplitude threshold, recording that moment as a first current moment; and, upon finding the first audio after this moment whose short-time average magnitude drops below the first amplitude threshold, recording that moment as a second current moment;
upon finding, searching forward from the first current moment and backward from the second current moment, audio whose short-time average magnitude drops to a second amplitude threshold, continuing to search the audio signal along the original search direction according to the short-time average zero-crossing rate, said second amplitude threshold being less than said first amplitude threshold;
upon finding, searching forward, audio whose short-time average zero-crossing rate drops below a zero-crossing-rate threshold, recording that moment as a third current moment and taking the third current moment as the starting point of the speech segment; and, upon finding, searching backward, audio whose short-time average zero-crossing rate drops below the zero-crossing-rate threshold, recording that moment as a fourth current moment and taking the fourth current moment as the end point of the speech segment.
10. system as claimed in claim 9, it is characterized in that, described synchronous detection module is obtained the audio frequency and video reproduction time of described audio-video document when source is play when poor, determine in the audio-video document that source plays, with the initial reproduction time of the audio section of described audioref Data Matching, and with the initial reproduction time of the frame of video of described video reference Data Matching; Then, in the audio-video document of playing according to source, the initial reproduction time of the audio section of described and audioref Data Matching, and the initial reproduction time of the frame of video of described and video reference Data Matching, it is poor to determine the audio frequency and video reproduction time of described audio-video document when source is play.
11. system as claimed in claim 9 is characterized in that, described audio identification module is determined and the process of the voice segments of described audioref Data Matching, being comprised:
According to the characteristic vector of each voice segments audio signal, and the characteristic vector of described audioref data, by the definite similarity to each other of the space length that calculates each voice segments and described audioref data;
According to the similarity of determining, get wherein the most similar to described audioref data voice segments, as with the voice segments of described audioref Data Matching.
12. The system of claim 11, wherein, when the number of audio frames of a speech segment is not equal to the number of audio frames of the audio reference data, the process by which the audio identification module computes the distance between the speech segment and the audio reference data is specifically:
mapping the frame numbers of the audio frames of the speech segment onto the horizontal axis of a two-dimensional rectangular coordinate system, mapping the frame numbers of the audio frames of the audio reference data onto the vertical axis of that coordinate system, and determining a path running from the lower-left corner of the coordinate system toward the upper-right corner; determining, from the coordinate points through which the path passes, the frame number of the audio reference data corresponding to each frame number of the speech segment; and
according to the determined frame-number correspondence and using the feature vectors of the audio signals, computing the distortion between each audio frame signal of the speech segment and the corresponding audio frame signal of the audio reference data, and determining the spatial distance between the speech segment and the audio reference data from the computed distortions.
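The lower-left-to-upper-right path of claim 12 is the classic dynamic-time-warping alignment. A minimal sketch, with squared Euclidean distance assumed as the per-frame distortion measure (the claim does not fix the distortion function):

```python
# Sketch of the alignment in claim 12: a dynamic-time-warping path from the
# lower-left to the upper-right corner of the frame-number grid, aligning a
# speech segment with reference audio of a different frame count.

def dtw_distance(segment, reference):
    """segment, reference: lists of per-frame feature vectors.
    Returns the accumulated distortion along the optimal warping path."""
    inf = float("inf")
    n, m = len(segment), len(reference)
    # D[i][j] = best accumulated distortion aligning the first i segment
    # frames with the first j reference frames.
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = sum((a - b) ** 2
                    for a, b in zip(segment[i - 1], reference[j - 1]))
            # The path may move right, up, or diagonally: monotonic from
            # the lower-left toward the upper-right, as in the claim.
            D[i][j] = d + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

seg = [[0.0], [1.0], [2.0], [2.0]]   # 4 frames
ref = [[0.0], [1.0], [2.0]]          # 3 frames
print(dtw_distance(seg, ref))  # 0.0 (the extra frame aligns at no cost)
```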
13. The system of claim 10, wherein the process by which the video identification module determines the starting playing time of the video frame matching the video reference data comprises:
extracting the video frames contained in the audio/video file being played; and
performing image recognition processing on the extracted video frames against the video reference data, to determine the video frame matching the video reference data and its starting playing time.
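A minimal sketch of the frame matching in claim 13, assuming decoded frames are available as flat grayscale pixel lists and that mean absolute pixel difference stands in for the image recognition step (the claim does not prescribe a particular recognition technique):

```python
# Sketch of claim 13: find the extracted video frame that best matches the
# video reference data, and return its starting playing time.

def match_frame(frames, reference):
    """frames: list of (start_time, pixels) pairs; reference: pixels.
    Returns the starting playing time of the frame with the smallest mean
    absolute pixel difference to the reference frame."""
    def mad(pixels):
        return sum(abs(a - b) for a, b in zip(pixels, reference)) / len(reference)
    return min(frames, key=lambda fr: mad(fr[1]))[0]

frames = [(0.04, [10, 10, 200, 200]),
          (0.08, [120, 130, 125, 118]),
          (0.12, [255, 0, 255, 0])]
print(match_frame(frames, reference=[118, 131, 120, 119]))  # 0.08
```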
14. The system of claim 9, wherein the process by which the synchronization detection module determines the audio/video synchronization status of the audio/video file comprises:
determining the audio/video delay variation of the audio/video file when played at the destination end relative to when played at the source end; and
determining the corresponding audio/video synchronization quality grade or score according to the determined audio/video delay variation.
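A minimal sketch of the grading step in claim 14. The thresholds below are illustrative assumptions, loosely based on common lip-sync tolerances, not values taken from the patent:

```python
# Sketch of claim 14: map the audio/video delay variation to a
# synchronization quality grade. Thresholds are illustrative assumptions.

def sync_grade(delay_variation_s: float) -> str:
    """delay_variation_s > 0: video lags audio; < 0: audio lags video."""
    v = delay_variation_s
    if -0.045 <= v <= 0.125:
        return "excellent"   # generally imperceptible
    if -0.125 <= v <= 0.200:
        return "acceptable"  # noticeable to attentive viewers
    return "poor"            # clearly out of sync

print(sync_grade(0.02))   # excellent
print(sync_grade(-0.10))  # acceptable
print(sync_grade(0.50))   # poor
```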
15. A speech detection method, characterized in that it comprises the steps of:
searching the audio signal in the audio under test according to the short-time average magnitude of the speech signal: when an audio signal whose short-time average magnitude exceeds a first amplitude threshold is found, recording that moment as a first current moment; and when, after that moment, the first audio signal whose short-time average magnitude drops below the first amplitude threshold is found, recording that moment as a second current moment;
when audio signals whose short-time average magnitude drops to a second amplitude threshold have been found by searching toward the front from the first current moment and toward the back from the second current moment, continuing to search the audio signal along the original search directions according to the short-time average zero-crossing rate, the second amplitude threshold being less than the first amplitude threshold; and
when, searching toward the front, an audio signal whose short-time average zero-crossing rate drops below a zero-crossing-rate threshold is found, recording that moment as a third current moment and taking the third current moment as the starting point of the speech segment; and when, searching toward the back, an audio signal whose short-time average zero-crossing rate drops below the zero-crossing-rate threshold is found, recording that moment as a fourth current moment and taking the fourth current moment as the end point of the speech segment.
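The four-moment, two-threshold search of claim 15 can be sketched on per-frame (short-time magnitude, short-time zero-crossing-rate) pairs. All thresholds and data here are illustrative assumptions:

```python
# Sketch of claim 15: two-threshold speech endpoint detection.
# t1/t2: first/second amplitude thresholds; zcr_t: zero-crossing-rate
# threshold. `frames` holds (short_time_magnitude, short_time_zcr) pairs.

def detect_speech_segment(frames, t1, t2, zcr_t):
    """Returns (start_index, end_index) of the detected speech segment, or
    None if the magnitude never exceeds the first amplitude threshold t1."""
    mags = [m for m, _ in frames]
    zcrs = [z for _, z in frames]

    # First/second current moments: magnitude rises above, then falls
    # back below, the first amplitude threshold t1.
    try:
        first = next(i for i, m in enumerate(mags) if m > t1)
    except StopIteration:
        return None
    second = next((i for i in range(first + 1, len(mags)) if mags[i] < t1),
                  len(mags) - 1)

    # Search toward the front / toward the back while the magnitude stays
    # at or above the lower amplitude threshold t2.
    start, end = first, second
    while start > 0 and mags[start - 1] >= t2:
        start -= 1
    while end < len(mags) - 1 and mags[end + 1] >= t2:
        end += 1

    # Continue along the same directions while the zero-crossing rate stays
    # above its threshold; the stopping points are the third and fourth
    # current moments, i.e. the speech segment endpoints.
    while start > 0 and zcrs[start - 1] >= zcr_t:
        start -= 1
    while end < len(mags) - 1 and zcrs[end + 1] >= zcr_t:
        end += 1
    return start, end

frames = [(0.01, 0.1), (0.02, 0.6), (0.10, 0.5), (0.60, 0.3),
          (0.70, 0.3), (0.15, 0.4), (0.03, 0.7), (0.01, 0.1)]
print(detect_speech_segment(frames, t1=0.5, t2=0.08, zcr_t=0.45))  # (1, 6)
```

The low-amplitude, high-zero-crossing-rate frames picked up in the last step correspond to unvoiced consonants at the edges of an utterance, which amplitude thresholds alone would miss.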
16. The method of claim 15, wherein the first amplitude threshold, the second amplitude threshold, and the zero-crossing-rate threshold are determined according to the distribution of the short-time average magnitude and the distribution of the short-time average zero-crossing rate of the audio signals in speech sample data, wherein an audio signal whose short-time average magnitude is above the first amplitude threshold is a speech signal, and, among speech signals whose short-time average magnitude is below the second amplitude threshold, an audio signal whose short-time average zero-crossing rate is lower than the zero-crossing-rate threshold is not a speech signal.
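A minimal sketch of the threshold derivation in claim 16, assuming labeled speech and non-speech sample frames are available and using illustrative percentile choices (the patent does not prescribe how the distributions are turned into thresholds):

```python
# Sketch of claim 16: derive the three thresholds from the short-time
# magnitude and zero-crossing-rate distributions of labeled sample data.
# The percentile choices are illustrative assumptions.

def percentile(values, p):
    """Simple floor-rank percentile of a list of numbers (p in [0, 100])."""
    s = sorted(values)
    return s[min(len(s) - 1, int(p / 100.0 * (len(s) - 1)))]

def derive_thresholds(speech_mags, noise_mags, noise_zcrs):
    t1 = percentile(speech_mags, 10)   # magnitude almost always exceeded by speech
    t2 = percentile(noise_mags, 90)    # magnitude rarely exceeded by background noise
    zcr_t = percentile(noise_zcrs, 90) # ZCR separating unvoiced speech from silence
    return t1, t2, zcr_t

speech_mags = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
noise_mags = [0.01, 0.02, 0.02, 0.03, 0.05, 0.06]
noise_zcrs = [0.1, 0.1, 0.2, 0.2, 0.3, 0.4]
print(derive_thresholds(speech_mags, noise_mags, noise_zcrs))
```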
17. A speech detection system, characterized in that it comprises:
a first search module, configured to search the audio signal in the audio under test according to the short-time average magnitude of the speech signal, to record, when an audio signal whose short-time average magnitude exceeds a first amplitude threshold is found, that moment as a first current moment, and to record, when the first audio signal whose short-time average magnitude subsequently drops below the first amplitude threshold is found, that moment as a second current moment;
a second search module, configured, when audio signals whose short-time average magnitude drops to a second amplitude threshold have been found by searching toward the front from the first current moment and toward the back from the second current moment, to continue searching the audio signal along the original search directions according to the short-time average zero-crossing rate, the second amplitude threshold being less than the first amplitude threshold; and
a speech segment determination module, configured to record, when the second search module finds toward the front an audio signal whose short-time average zero-crossing rate drops below a zero-crossing-rate threshold, that moment as a third current moment and take the third current moment as the starting point of the speech segment, and to record, when such an audio signal is found toward the back, that moment as a fourth current moment and take the fourth current moment as the end point of the speech segment.
18. The system of claim 17, characterized in that it further comprises:
a threshold setting module, configured to determine the first amplitude threshold, the second amplitude threshold, and the zero-crossing-rate threshold according to the distribution of the short-time average magnitude and the distribution of the short-time average zero-crossing rate of the audio signals in speech sample data, wherein an audio signal whose short-time average magnitude is above the first amplitude threshold is a speech signal, and, among speech signals whose short-time average magnitude is below the second amplitude threshold, an audio signal whose short-time average zero-crossing rate is lower than the zero-crossing-rate threshold is not a speech signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009102374145A CN102056026B (en) | 2009-11-06 | 2009-11-06 | Audio/video synchronization detection method and system, and voice detection method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102056026A CN102056026A (en) | 2011-05-11 |
CN102056026B true CN102056026B (en) | 2013-04-03 |
Family
ID=43959877
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009102374145A Active CN102056026B (en) | 2009-11-06 | 2009-11-06 | Audio/video synchronization detection method and system, and voice detection method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102056026B (en) |
Families Citing this family (89)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
CN103051921B (en) * | 2013-01-05 | 2014-12-24 | 北京中科大洋科技发展股份有限公司 | Method for precisely detecting video and audio synchronous errors of video and audio processing system |
JP2016508007A (en) | 2013-02-07 | 2016-03-10 | アップル インコーポレイテッド | Voice trigger for digital assistant |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
CN103974143B (en) * | 2014-05-20 | 2017-11-07 | 北京速能数码网络技术有限公司 | A kind of method and apparatus for generating media data |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US10186282B2 (en) * | 2014-06-19 | 2019-01-22 | Apple Inc. | Robust end-pointing of speech signals using speaker recognition |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
CN104538041B (en) * | 2014-12-11 | 2018-07-03 | 深圳市智美达科技有限公司 | abnormal sound detection method and system |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
CN104796578B (en) * | 2015-04-29 | 2018-03-13 | 成都陌云科技有限公司 | A kind of multi-screen synchronous method based on broadcast sounds feature |
US10460227B2 (en) | 2015-05-15 | 2019-10-29 | Apple Inc. | Virtual assistant in a communication session |
US10200824B2 (en) | 2015-05-27 | 2019-02-05 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device |
US10121471B2 (en) * | 2015-06-29 | 2018-11-06 | Amazon Technologies, Inc. | Language model speech endpointing |
US20160378747A1 (en) | 2015-06-29 | 2016-12-29 | Apple Inc. | Virtual assistant for media playback |
CN104993901B (en) * | 2015-07-09 | 2017-08-29 | 广东威创视讯科技股份有限公司 | Distributed system method of data synchronization and device |
CN106470339B (en) * | 2015-08-17 | 2018-09-14 | 南宁富桂精密工业有限公司 | Terminal device and audio video synchronization detection method |
US10331312B2 (en) | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10740384B2 (en) | 2015-09-08 | 2020-08-11 | Apple Inc. | Intelligent automated assistant for media search and playback |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
CN105898498A (en) * | 2015-12-15 | 2016-08-24 | 乐视网信息技术(北京)股份有限公司 | Video synchronization method and system |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
CN105608935A (en) * | 2015-12-29 | 2016-05-25 | 北京奇艺世纪科技有限公司 | Detection method and device of audio and video synchronization |
CN105609118B (en) * | 2015-12-30 | 2020-02-07 | 生迪智慧科技有限公司 | Voice detection method and device |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
CN106157952B (en) * | 2016-08-30 | 2019-09-17 | 北京小米移动软件有限公司 | Sound identification method and device |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
CN108632557B (en) * | 2017-03-20 | 2021-06-08 | 中兴通讯股份有限公司 | Audio and video synchronization method and terminal |
US10600432B1 (en) * | 2017-03-28 | 2020-03-24 | Amazon Technologies, Inc. | Methods for voice enhancement |
CN108882019B (en) * | 2017-05-09 | 2021-12-10 | 腾讯科技(深圳)有限公司 | Video playing test method, electronic equipment and system |
DK201770383A1 (en) | 2017-05-09 | 2018-12-14 | Apple Inc. | User interface for correcting recognition errors |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
DK201770429A1 (en) | 2017-05-12 | 2018-12-14 | Apple Inc. | Low-latency intelligent automated assistant |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
US20180336892A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Detecting a trigger of a digital assistant |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
CN109039994B (en) * | 2017-06-08 | 2020-12-08 | 中国移动通信集团甘肃有限公司 | Method and equipment for calculating asynchronous time difference between audio and video |
CN107920245B (en) * | 2017-11-22 | 2019-08-30 | 北京奇艺世纪科技有限公司 | A kind of method and apparatus of detection video playing starting time |
CN109859744B (en) * | 2017-11-29 | 2021-01-19 | 宁波方太厨具有限公司 | Voice endpoint detection method applied to range hood |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
CN108769559B (en) * | 2018-05-25 | 2020-12-01 | 数据堂(北京)科技股份有限公司 | Multimedia file synchronization method and device |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
DK179822B1 (en) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
CN109472487A (en) * | 2018-11-02 | 2019-03-15 | 深圳壹账通智能科技有限公司 | Video quality detecting method, device, computer equipment and storage medium |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
DK180129B1 (en) | 2019-05-31 | 2020-06-02 | Apple Inc. | User activity shortcut suggestions |
DK201970511A1 (en) | 2019-05-31 | 2021-02-15 | Apple Inc | Voice identification in digital assistant systems |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
CN110267083B (en) * | 2019-06-18 | 2021-12-10 | 广州虎牙科技有限公司 | Audio and video synchronization detection method, device, equipment and storage medium |
CN112447185B (en) * | 2019-08-30 | 2024-02-09 | 广州虎牙科技有限公司 | Audio synchronization error testing method and device, server and readable storage medium |
CN110585702B (en) * | 2019-09-17 | 2023-09-19 | 腾讯科技(深圳)有限公司 | Sound and picture synchronous data processing method, device, equipment and medium |
CN110503982B (en) * | 2019-09-17 | 2024-03-22 | 腾讯科技(深圳)有限公司 | Voice quality detection method and related device |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
CN112653916B (en) * | 2019-10-10 | 2023-08-29 | 腾讯科技(深圳)有限公司 | Method and equipment for synchronously optimizing audio and video |
CN111093108B (en) * | 2019-12-18 | 2021-12-03 | 广州酷狗计算机科技有限公司 | Sound and picture synchronization judgment method and device, terminal and computer readable storage medium |
CN113555132A (en) * | 2020-04-24 | 2021-10-26 | 华为技术有限公司 | Multi-source data processing method, electronic device and computer-readable storage medium |
US11183193B1 (en) | 2020-05-11 | 2021-11-23 | Apple Inc. | Digital assistant hardware abstraction |
CN112039612B (en) * | 2020-09-01 | 2023-02-17 | 广州市百果园信息技术有限公司 | Time delay measuring method, device, equipment, system and storage medium |
CN112351273B (en) * | 2020-11-04 | 2022-03-01 | 新华三大数据技术有限公司 | Video playing quality detection method and device |
CN113744368A (en) * | 2021-08-12 | 2021-12-03 | 北京百度网讯科技有限公司 | Animation synthesis method and device, electronic equipment and storage medium |
CN114999453B (en) * | 2022-05-25 | 2023-05-30 | 中南大学湘雅二医院 | Preoperative visit system based on voice recognition and corresponding voice recognition method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6744922B1 (en) * | 1999-01-29 | 2004-06-01 | Sony Corporation | Signal processing method and video/voice processing device |
US6928233B1 (en) * | 1999-01-29 | 2005-08-09 | Sony Corporation | Signal processing method and video signal processor for detecting and analyzing a pattern reflecting the semantics of the content of a signal |
CN101159834A (en) * | 2007-10-25 | 2008-04-09 | 中国科学院计算技术研究所 | Method and system for detecting repeatable video and audio program fragment |
CN101494049A (en) * | 2009-03-11 | 2009-07-29 | 北京邮电大学 | Method for extracting audio characteristic parameter of audio monitoring system |
2009-11-06: application CN2009102374145A filed (CN); patent CN102056026B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN102056026A (en) | 2011-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102056026B (en) | Audio/video synchronization detection method and system, and voice detection method and system | |
US11631404B2 (en) | Robust audio identification with interference cancellation | |
CN108900725B (en) | Voiceprint recognition method and device, terminal equipment and storage medium | |
AU2011276467B2 (en) | Systems and methods for detecting call provenance from call audio | |
CN108877823B (en) | Speech enhancement method and device | |
US20060053009A1 (en) | Distributed speech recognition system and method | |
CN101199208A (en) | Method, system, and program product for measuring audio video synchronization | |
CN100356446C (en) | Noise reduction and audio-visual speech activity detection | |
CN106372653A (en) | Stack type automatic coder-based advertisement identification method | |
CN110223678A (en) | Audio recognition method and system | |
CN115798518B (en) | Model training method, device, equipment and medium | |
KR101022519B1 (en) | System and method for voice activity detection using vowel characteristic, and method for measuring sound spectral similarity used thereto | |
CN111009261B (en) | Arrival reminding method, device, terminal and storage medium | |
US10522160B2 (en) | Methods and apparatus to identify a source of speech captured at a wearable electronic device | |
CN107274892A (en) | Method for distinguishing speek person and device | |
CN109829691B (en) | C/S card punching method and device based on position and deep learning multiple biological features | |
CN113239903B (en) | Cross-modal lip reading antagonism dual-contrast self-supervision learning method | |
Eveno et al. | A speaker independent" liveness" test for audio-visual biometrics. | |
CN110556114B (en) | Speaker identification method and device based on attention mechanism | |
Eyben et al. | Audiovisual vocal outburst classification in noisy acoustic conditions | |
JP2001520764A (en) | Speech analysis system | |
CN109065024B (en) | Abnormal voice data detection method and device | |
CN110265062A (en) | Collection method and device after intelligence based on mood detection is borrowed | |
CN113160796B (en) | Language identification method, device and equipment for broadcast audio and storage medium | |
CN113077784B (en) | Intelligent voice equipment for role recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |