CN110544270A - method and device for predicting human face tracking track in real time by combining voice recognition - Google Patents

Method and device for predicting human face tracking track in real time by combining voice recognition

Info

Publication number: CN110544270A
Authority: CN (China)
Prior art keywords: period, speaking, image, determining, image frame
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201910817876.8A
Other languages: Chinese (zh)
Inventors: 汪俊 (Wang Jun), 李索恒 (Li Suoheng), 张志齐 (Zhang Zhiqi)
Current and original assignee: Shanghai Yitu Information Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Shanghai Yitu Information Technology Co Ltd
Priority: CN201910817876.8A
Publication: CN110544270A (legal status: pending)
Classifications

    • G06T 7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general)
    • G10L 25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 to G10L 21/00, specially adapted for particular use (G10: Musical instruments; acoustics; G10L: Speech analysis or synthesis; speech recognition; speech or voice processing; speech or audio coding or decoding)
    • G06T 2207/10016: Image acquisition modality: video; image sequence
    • G06T 2207/30201: Subject of image: human being; person; face
    • G06T 2207/30232: Subject of image: surveillance
    • G06T 2207/30241: Subject of image: trajectory

Abstract

The invention relates to the technical field of communications, and in particular to a method and a device for predicting a face tracking trajectory in real time by combining voice recognition. The method comprises the following steps: carrying out voice recognition on the audio signals acquired in a second time period, and determining a fourth speaking object and an audio frame corresponding to the fourth speaking object; when the fourth speaking object is determined to be an associated object, predicting the predicted position of the associated object in the video signal acquired in the second time period according to the position of the associated object in the image frames of a first time period; for any image frame of the video signal acquired in the second time period, matching the image corresponding to the predicted position in that image frame with the face image of the associated object, and determining the image frames containing the associated object; and determining the corresponding relation between the audio frames corresponding to the fourth speaking object and the image frames, acquired in the second time period, containing the associated object.

Description

Method and device for predicting human face tracking track in real time by combining voice recognition
Technical Field
The invention relates to the technical field of communications, and in particular to a method and a device for predicting a face tracking trajectory in real time by combining voice recognition.
Background
In today's society, monitoring devices are deployed in streets, communities, buildings and other public places to meet security management requirements. When an alarm occurs, police officers use the surveillance cameras to search for suspects.
However, as the scale of monitoring networks grows, video data increases massively. When an alarm condition occurs, it is increasingly difficult to obtain useful clues or information from the large number of images based on an image of a suspect, so efficiency is low and labor cost is high.
In addition, in a conference scene, and especially a teleconference scene, only the video can be displayed in the conference; the voice recognition result of a speaker cannot be shown on the display interface, so conference efficiency is not high. Moreover, additional manpower is required to record the information in the conference, which consumes a large amount of labor cost, and service efficiency cannot be improved.
Disclosure of Invention
The embodiments of the invention provide a method and a device for predicting a face tracking trajectory in real time by combining voice recognition, which are used to combine voice recognition and image recognition in a monitoring scene, improve monitoring efficiency and the monitoring and conference effects, and meet the business requirements of a conference through the combined display of voice recognition and image recognition in a conference scene.
An embodiment of the invention provides a method for predicting a face tracking trajectory in real time by combining voice recognition, comprising the following steps:
carrying out voice recognition on the audio signals acquired in the second time period, and determining a fourth speaking object and an audio frame corresponding to the fourth speaking object;
when the fourth speaking object is determined to be an associated object, predicting the predicted position of the associated object in the video signal acquired in the second time period according to the position of the associated object in the image frames of the first time period; the associated object is an object indicated in a corresponding relationship established according to the audio signal acquired in the first period and the video signal acquired in the first period; the correspondence is used to indicate that the audio frames and image frames having the correspondence are directed to the same object; the second time period is a period after the first time period;
for any image frame of the video signal acquired in the second time period, matching the image corresponding to the predicted position in that image frame with the face image of the associated object, and determining the image frames containing the associated object; the face image of the associated object is obtained through the video signal acquired in the first time period;
determining the corresponding relation between the audio frames corresponding to the fourth speaking object and the image frames, acquired in the second time period, containing the associated object.
In this embodiment of the invention, voice recognition is performed on the audio signal acquired in the second time period, and the predicted position of the associated object in the video signal acquired in the second time period is predicted according to the position of the associated object in the image frames of the first time period. For any image frame of the video signal acquired in the second time period, the image corresponding to the predicted position in that image frame is matched with the face image of the associated object, and the image frames containing the associated object are determined; the corresponding relation between the audio frames corresponding to the fourth speaking object and the image frames, acquired in the second time period, containing the associated object is then determined. Thus, once the corresponding relation has been obtained in a monitoring scene or a conference scene, the position of the associated object in the image frames of the second time period can be determined according to the predicted position, which avoids performing face recognition again in the second time period, reduces the resource consumption caused by image recognition, and improves the voice tracking efficiency, so that the method is suitable for more monitoring environments and conference scenes.
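By way of illustration only, the four steps above can be sketched as follows; this is a minimal Python sketch, and the helper logic, thresholds, and embedding-based stand-ins are assumptions for illustration, not part of the patent text:

```python
import numpy as np

def identify_speaker(audio_frames, voiceprints):
    """Step 1 stand-in: pick the enrolled speaker whose voiceprint scores highest."""
    scores = {sid: float(np.mean([vp @ f for f in audio_frames]))
              for sid, vp in voiceprints.items()}
    return max(scores, key=scores.get)

def predict_position(track_p1):
    """Step 2 stand-in: linear extrapolation of the last two box centers."""
    (x0, y0), (x1, y1) = track_p1[-2], track_p1[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def face_match(candidate, stored, thr=0.8):
    """Step 3 stand-in: cosine similarity between face embeddings."""
    sim = candidate @ stored / (np.linalg.norm(candidate) * np.linalg.norm(stored))
    return float(sim) > thr

def build_correspondence(audio_frames, frame_embeddings, assoc):
    """Step 4: bind the speaker's audio frames to the image frames containing it."""
    speaker = identify_speaker(audio_frames, assoc['voiceprints'])
    if speaker != assoc['object_id']:
        return None                      # not the associated object
    pos = predict_position(assoc['track_p1'])
    image_idx = [i for i, emb in enumerate(frame_embeddings)
                 if face_match(emb, assoc['face_embedding'])]
    return {'object': speaker, 'predicted_position': pos,
            'audio_frames': list(range(len(audio_frames))),
            'image_frames': image_idx}

rng = np.random.default_rng(0)
emb = rng.normal(size=16)
assoc = {'object_id': 'A', 'track_p1': [(100, 60), (104, 62)],
         'face_embedding': emb,
         'voiceprints': {'A': emb, 'B': rng.normal(size=16)}}
audio = [emb + rng.normal(scale=0.1, size=16) for _ in range(5)]
frames = [emb, rng.normal(size=16), emb]
print(build_correspondence(audio, frames, assoc))
```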
In one possible implementation, the method further includes:
If it is determined that the audio signal acquired in the first time period and the audio signal acquired in the second time period are continuous audio signals for the fourth speaking object;
Determining an image frame containing the fourth speaking object from the video signal acquired in the second time period;
Establishing the correspondence between the audio signal acquired during the second period and the image frame including the fourth speaking object acquired during the second period.
In this technical scheme, by determining that the audio signals are continuous, the audio signal acquired in the second time period is directly associated, in the second time period, with the image frames of the associated object determined in the first time period, which reduces the image processing time, improves the online voice tracking efficiency, and improves the online voice tracking effect.
In one possible implementation manner, the associated object being an object indicated in a corresponding relationship established according to the audio signal acquired in the first period and the video signal acquired in the first period includes:
carrying out voice recognition on the audio signals acquired in the first period, and determining a first speaking object and an audio frame corresponding to the first speaking object;
carrying out face recognition on the video signal acquired in the first period, and determining a second speaking object and an image frame corresponding to the second speaking object; the second speaking object is determined according to the lip movement characteristics of the same face in the image frames of the video signal;
and determining the corresponding relation between the audio frames corresponding to the first speaking object and the image frames corresponding to the second speaking object, and determining an object in the corresponding relation as the associated object.
In this embodiment of the invention, voice recognition is performed on the audio signal acquired in the first period, and face recognition is performed on the video signal acquired in the first period; the second speaking object and the image frames corresponding to the second speaking object are determined; and the corresponding relation between the audio frames corresponding to the first speaking object and the image frames corresponding to the second speaking object is determined. In this way, more monitoring information about the object to be recognized can be obtained in a monitoring scene, and the voice recognition result corresponding to the speaking object does not need to be searched for in an offline process, so that more monitoring data can be provided for security, the monitoring efficiency can be improved, and the method can be applied to more monitoring environments and conference scenes.
In one possible implementation manner, performing face recognition on the video signal acquired in the first period and determining a second speaking object and the image frames corresponding to the second speaking object includes:
carrying out face recognition on N frames of images in the video signal acquired in the first period to determine a second object;
performing lip movement detection on the lip region of the second object in the M frames of images containing the second object, and determining the lip movement features of each of the M frames of images;
determining the lip movement confidence according to the lip movement features of each of the M frames of images; if the lip movement confidence of K frames of images among the M frames of images is greater than a first preset threshold, determining the second object as the second speaking object, and determining the K frames of images as the image frames corresponding to the second speaking object; N is greater than or equal to M, and M is greater than or equal to K.
In this technical scheme, lip movement detection is performed on the lip region in the images to determine the image frames of the second speaking object, and the voice recognition result in the audio signal is then associated online; the voice recognition result corresponding to the second speaking object does not need to be searched for in an offline process, which improves the monitoring and conference effects.
In one possible implementation manner, performing face recognition on the video signal acquired in the first period and determining a second speaking object and the image frames corresponding to the second speaking object includes:
carrying out face recognition on N frames of images in the video signal acquired in the first period to determine a second object;
performing lip language detection on the lip region of the second object in the M frames of images containing the second object, and determining the lip language features of each of the M frames of images; if the confidence that lip language exists in L frames of images among the M frames of images is greater than a second preset threshold, determining the second object as the second speaking object, and determining the L frames of images as the image frames corresponding to the second speaking object; N is greater than or equal to M, and M is greater than or equal to L.
In this technical scheme, lip language detection is performed on the lip region in the images to determine the image frames of the second speaking object, so that the voice recognition result in the audio signal is associated online, and the voice recognition result corresponding to the second speaking object does not need to be searched for in an offline process.
An embodiment of the invention provides a device for predicting a face tracking trajectory in real time by combining voice recognition, comprising:
an audio processing module, configured to carry out voice recognition on the audio signals acquired in the second time period, and determine a fourth speaking object and an audio frame corresponding to the fourth speaking object;
a recognition processing module, configured to: when the fourth speaking object is determined to be an associated object, predict the predicted position of the associated object in the video signal acquired in the second time period according to the position of the associated object in the image frames of the first time period, where the associated object is an object indicated in a corresponding relationship established according to the audio signal acquired in the first period and the video signal acquired in the first period, the correspondence indicates that audio frames and image frames having the correspondence are directed to the same object, and the second time period is a period after the first time period; for any image frame of the video signal acquired in the second time period, match the image corresponding to the predicted position in that image frame with the face image of the associated object, and determine the image frames containing the associated object, where the face image of the associated object is obtained through the video signal acquired in the first time period; and determine the corresponding relation between the audio frames corresponding to the fourth speaking object and the image frames, acquired in the second time period, containing the associated object.
In one possible implementation manner, the recognition processing module is further configured to:
if it is determined that the audio signal acquired in the first time period and the audio signal acquired in the second time period are continuous audio signals for the fourth speaking object, determine the image frames containing the fourth speaking object from the video signal acquired in the second time period, and establish the correspondence between the audio signal acquired in the second time period and the image frames, acquired in the second time period, containing the fourth speaking object.
In one possible implementation, the audio processing module is configured to: carrying out voice recognition on audio signals acquired in a first period, and determining a first speaking object and an audio frame corresponding to the first speaking object;
The device further comprises: the image processing module is used for carrying out face recognition on the video signals acquired in the first period and determining a second speaking object and an image frame corresponding to the second speaking object; the second speaking object is determined according to the lip movement characteristics of the same face in the image frame of the video signal;
The identification processing module is further configured to: and determining the corresponding relation between the audio frame corresponding to the first speaking object and the image frame corresponding to the second speaking object, and determining an object in the corresponding relation as the associated object.
An embodiment of the present invention provides a storage medium storing a program for voice recognition, wherein the program, when executed by a processor, performs the method according to any one of the above embodiments.
An embodiment of the present invention provides a computer device comprising one or more processors and one or more computer-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to perform the method according to any one of the above embodiments.
Drawings
FIG. 1 is a system architecture diagram according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for real-time associating a speaker with a speech recognition result thereof according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a real-time face trajectory tracking method in combination with speech recognition according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating a method for predicting a face tracking trajectory in real time in combination with speech recognition according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of a method for tracking a body trajectory in real time in combination with speech recognition according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an apparatus for real-time association between a speaker and a speech recognition result thereof according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a real-time face trajectory tracking apparatus in combination with speech recognition according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an apparatus for predicting a face tracking trajectory in real time in combination with speech recognition according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a body trajectory real-time tracking device incorporating speech recognition according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an apparatus for real-time associating a speaker with a speech recognition result thereof according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of a system architecture to which an embodiment of the present invention is applicable, which includes a monitoring device 101 and a server 102. The monitoring device 101 may acquire a video stream in real time and send the acquired video stream to the server 102. The server 102 includes a voice recognition apparatus: it acquires image frames from the video stream and then determines the object to be recognized in the image frames and the corresponding voice recognition result. The monitoring device 101 is connected to the server 102 via a wireless network and is an electronic device with image capturing and sound collection functions, such as a camera, a video recorder, or a microphone. The server 102 is a single server, a server cluster composed of several servers, or a cloud computing center.
Based on the system architecture shown in fig. 1, fig. 2 exemplarily shows a flowchart of a method for associating a speaker and the corresponding voice recognition result in real time according to an embodiment of the present invention. The flow of the method may be executed by a voice recognition apparatus, which may be the server 102 shown in fig. 1. As shown in fig. 2, the method specifically includes the following steps:
Step 201: carrying out voice recognition on audio signals acquired in a first period, and determining a first speaking object and an audio frame corresponding to the first speaking object;
The first period may be, for example, 1 second; its specific length may be determined according to the characteristics of the audio signal, or according to the needs of voice recognition, for example the accuracy of online recognition, and is not limited herein.
Specifically, the audio signal may be at least one voice signal among the sound signals acquired from at least one microphone; alternatively, any two or more voice signals may be selected from the sound signals acquired from at least one microphone and combined to obtain more voice information. In practical applications, the sound signal is transmitted in signal frames, and the voice recognition apparatus needs to continuously detect the sound signal frames.
In a specific voice recognition process, the speech in the audio signal can be recognized through a speech model to determine the voice recognition result. Taking a speech model as an example, when the speech model is established, the voice recognition apparatus may perform the following operations:
First, the voice recognition apparatus extracts the acoustic features of the sound signal on N set frequency bands, respectively, as the acoustic features of the sound signal.
There are various ways to represent the acoustic features of the sound signal in a frequency band, such as energy values and amplitude values.
Then, the voice recognition apparatus uses the acoustic features on the N frequency bands as feature vectors, applies Gaussian Mixture Models (GMMs) to establish the corresponding speech model, and calculates the likelihood ratio of each acoustic feature based on the speech model.
Specifically, when calculating the likelihood ratio, based on the feature vectors, a GMM may be used to obtain the characteristic parameters of the speech-like signal in each frequency band (e.g., the mean and variance of the speech-like signal), and a GMM may be used to obtain the characteristic parameters of the interference-like signal in each frequency band (e.g., the mean and variance of the interference-like signal); the likelihood ratio of each acoustic feature is then calculated using the obtained parameters. When the likelihood ratio of any acoustic feature reaches a set threshold, the existence probability of the desired sound source may be set to a specified value indicating that the desired sound source exists, so as to determine that a voice signal is present.
Of course, the GMM is only an example; in practical applications, other methods may also be used to establish the speech model, for example a Support Vector Machine (SVM) algorithm, a Deep Neural Network (DNN) algorithm, a Convolutional Neural Network (CNN) algorithm, or a Recurrent Neural Network (RNN) algorithm.
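As an illustration of the GMM-based decision described above, the following minimal sketch fits one Gaussian mixture to speech-like band features and one to interference-like band features, then thresholds their per-frame log-likelihood ratio. scikit-learn's GaussianMixture is used here as an assumed implementation; the band count, synthetic training data, and threshold value are illustrative, not taken from the patent:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

N_BANDS = 8
rng = np.random.default_rng(0)

# Stand-in training data: per-frame acoustic features (e.g. energy on N bands).
speech_feats = rng.normal(5.0, 1.0, size=(500, N_BANDS))   # speech-like signal
interf_feats = rng.normal(1.0, 1.0, size=(500, N_BANDS))   # interference-like signal

gmm_speech = GaussianMixture(n_components=4, random_state=0).fit(speech_feats)
gmm_interf = GaussianMixture(n_components=4, random_state=0).fit(interf_feats)

def likelihood_ratio(frames):
    """Per-frame log-likelihood ratio: log p(x | speech) - log p(x | interference)."""
    return gmm_speech.score_samples(frames) - gmm_interf.score_samples(frames)

THRESHOLD = 0.0                          # assumed decision threshold
test_frames = rng.normal(5.0, 1.0, size=(10, N_BANDS))
llr = likelihood_ratio(test_frames)
print("speech present:", bool(np.any(llr > THRESHOLD)))
```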
Step 202: carrying out face recognition on the video signal acquired in the first period, and determining a second speaking object and an image frame corresponding to the second speaking object; the second speaking object is determined according to the lip movement characteristics of the same face in the image frame of the video signal;
Specifically, the video signal acquired in the first period may be N image frames captured by the monitoring device in the first period. The monitoring device acquires a video stream in real time; the video stream is composed of multiple image frames, and the image frames in the video stream can be marked at set intervals in chronological order.
There are various ways to mark the image frames. One possible implementation is to mark the images in the video signal that require face object detection as detection frame images. For example, if a video signal includes 10 image frames, the first frame and the fifth frame may be marked as image frames for face recognition, or all the frames may be used as image frames for face recognition. The image frames may be marked according to whether a human face is present, whether a voice signal is present, or other factors, which are not limited herein.
Further, when a video image frame is determined not to be an image frame for face recognition, the predicted image information corresponding to each face object in that image frame may be determined instead. Specifically, the predicted image information corresponding to each face object in the image frame can be predicted according to the image information corresponding to each face object in already recognized images; the recognized images may be images adjacent to the image frame for which the image information corresponding to the face objects has been determined or predicted.
Alternatively, when an image frame is determined to be an image frame for face recognition, face detection may be performed on the image frame, so as to determine the detected image information corresponding to each face object in the image frame.
In this way, the N frames of images in the video to be processed acquired by the monitoring device are divided into detection frame images and non-detection frame images. When an image frame is acquired, it is judged whether it is an image frame for face recognition; if so, the face objects in the image frame are detected, and otherwise the face objects in the image frame are predicted from the face objects in other image frames. Therefore, not every frame needs to be detected and recognized, which reduces the computation required to determine the face objects in the image frames of the video signal and improves efficiency.
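The detection-frame/non-detection-frame split described above might be sketched as follows; the every-fifth-frame marking rule, the stub detector, and the linear prediction are illustrative assumptions:

```python
import numpy as np

DETECT_EVERY = 5     # assumed marking rule: every 5th frame is a detection frame

def detect_face(frame):
    """Stub detector: returns the face box center; a real system would run
    an actual face detector here."""
    return np.asarray(frame['true_center'], dtype=float)

def track(frames):
    positions, last_two = [], []
    for i, frame in enumerate(frames):
        if i % DETECT_EVERY == 0 or len(last_two) < 2:
            pos = detect_face(frame)                  # detection frame: detect
        else:
            pos = 2 * last_two[-1] - last_two[-2]     # non-detection frame: predict
        positions.append(pos)
        last_two = (last_two + [pos])[-2:]
    return positions

# A face moving at constant velocity; predictions match detections here.
frames = [{'true_center': (10 + 2 * t, 20 + t)} for t in range(12)]
for i, p in enumerate(track(frames)):
    print(i, p)
```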
Further, object detection may first be performed on the image frame to determine the detection image region corresponding to each recognized object, and then the image information within the detection image region corresponding to each recognized object, that is, the image information corresponding to each recognized object, may be determined. For example, the body information and face information of the object, and items associated with the object, may be determined. The image region may be a regularly shaped or an irregularly shaped region.
In one possible implementation manner, performing face recognition on the video signal acquired in the first period and determining a second speaking object and the image frames corresponding to the second speaking object includes:
carrying out face recognition on N frames of images in the video signal acquired in the first period to determine a second object;
performing lip movement detection on the lip region of the second object in the M frames of images containing the second object, and determining the lip movement features of each of the M frames of images;
determining the lip movement confidence according to the lip movement features of each of the M frames of images; if the lip movement confidence of K frames of images among the M frames of images is greater than a first preset threshold, determining the second object as the second speaking object, and determining the K frames of images as the image frames corresponding to the second speaking object; N is greater than or equal to M, and M is greater than or equal to K.
Specifically, the lip movement features of the second object in the image frames may be determined by a lip movement feature extraction model, and the lip movement confidence is then determined from the lip movement features of each frame. The confidence may be a value in [0, 1].
In another implementation manner, whether lip movement exists can be determined by a classifier based on the lip movement features of each frame. For example, an output of 0 indicates no lip movement and the image frame is excluded, while an output of 1 indicates lip movement and the image frame is taken as an image frame of the second speaking object.
After the lip movement features are determined, the face feature image of the second speaking object can be determined from the lip movement features determined for the second speaking object, and then all face feature images corresponding to the second speaking object and all image frames corresponding to those face feature images are determined. This makes it convenient to establish the correspondence of the image frames subsequently and avoids repeated face recognition, improving recognition efficiency.
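A minimal sketch of the N >= M >= K lip-movement rule, assuming a toy inter-frame-difference confidence measure and an illustrative value for the "first preset threshold", might look like this:

```python
import numpy as np

FIRST_THRESHOLD = 0.25    # assumed value of the "first preset threshold"

def lip_motion_confidence(lip_frames):
    """Stub confidence in [0, 1]: mean absolute inter-frame difference of the
    lip region. A real system would use a learned lip-movement feature model."""
    return float(np.clip(np.mean(np.abs(np.diff(lip_frames, axis=0))), 0.0, 1.0))

def speaking_frames(m_lip_regions):
    """From the M frames containing the object, return the K frames whose
    lip-movement confidence exceeds the first preset threshold."""
    k_frames = [i for i, lips in enumerate(m_lip_regions)
                if lip_motion_confidence(lips) > FIRST_THRESHOLD]
    is_speaking = len(k_frames) > 0   # object is the speaking object if K > 0
    return is_speaking, k_frames

rng = np.random.default_rng(1)
# M = 6 frames; each entry is a pair of consecutive lip-region crops.
# Odd-numbered frames change a lot (speaking); even frames barely change.
m_lip_regions = [rng.random((2, 16, 16)) * (1.0 if i % 2 else 0.1)
                 for i in range(6)]
print(speaking_frames(m_lip_regions))   # -> (True, [1, 3, 5])
```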
In this technical scheme, lip movement detection is performed on the lip region in the images to determine the image frames of the second speaking object, and the voice recognition result in the audio signal is then associated online; the voice recognition result corresponding to the second speaking object does not need to be searched for in an offline process, which improves the monitoring and conference effects.
In one possible implementation manner, performing face recognition on the video signal acquired in the first period and determining a second speaking object and the image frames corresponding to the second speaking object includes:
carrying out face recognition on N frames of images in the video signal acquired in the first period to determine a second object;
performing lip language detection on the lip region of the second object in the M frames of images containing the second object, and determining the lip language features of each of the M frames of images; if the confidence that lip language exists in L frames of images among the M frames of images is greater than a second preset threshold, determining the second object as the second speaking object, and determining the L frames of images as the image frames corresponding to the second speaking object; N is greater than or equal to M, and M is greater than or equal to L.
In this technical scheme, lip language detection is performed on the lip region in the images to determine the image frames of the second speaking object, and the voice recognition result in the audio signal is then associated online; the voice recognition result corresponding to the second speaking object does not need to be searched for in an offline process, which improves the monitoring and conference effects.
Step 203: determining the corresponding relation between the audio frames corresponding to the first speaking object and the image frames corresponding to the second speaking object; the corresponding relation is used to indicate that audio frames and image frames having the corresponding relation are directed to the same object.
In one possible implementation, the image frames determined for the second speaking object are associated with the audio frames corresponding to the voice recognition result of the first speaking object.
For example, if it is determined that the numbers of image frames and audio frames in the first period are the same, the frame number of an image frame determined for the second speaking object may be associated with the frame number of the audio frame determined for the first speaking object. For instance, if frame 5 is determined to be an image frame of the second speaking object, it is associated with audio frame 5 in the voice recognition result as the same speaking object, and the other audio frames corresponding to the first speaking object in the remaining voice recognition result are likewise associated with the image frames of the second object; if the remaining voice recognition result includes frames 6 to 10 corresponding to the first speaking object, they are associated with image frames 6 to 10 of the second object.
In another possible implementation manner, if the numbers of image frames and audio frames in the first period are different, the association may be performed proportionally according to the frame counts. For example, if the first period contains 20 image frames and 30 audio frames, the image frames may be associated with the audio frames in proportion: if frame 2 is determined to be an image frame of the second speaking object, it is associated with audio frame 3 in the voice recognition result as the same speaking object.
Of course, the image frames and audio frames may also be associated by time point, with the time points of the image frames and audio frames in one-to-one correspondence; if it is determined that an image frame of the second speaking object and an audio frame of the first speaking object can be associated at a certain time point, the correspondence between the second speaking object and the first speaking object is established starting from that time point.
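The frame-association rules above (equal counts map index to index; unequal counts map proportionally, e.g. image frame 2 of 20 to audio frame 3 of 30) can be sketched as follows; the function names are illustrative:

```python
def image_to_audio_frame(image_idx, n_image, n_audio):
    """Proportional mapping of an image frame number to an audio frame number
    (1-based, as in the examples in the text)."""
    if n_image == n_audio:
        return image_idx
    return round(image_idx * n_audio / n_image)

assert image_to_audio_frame(5, 30, 30) == 5     # equal counts: frame 5 -> 5
assert image_to_audio_frame(2, 20, 30) == 3     # 20 vs 30 frames: frame 2 -> 3

def associate(speaking_image_frames, n_image, n_audio):
    """Map every image frame in which the speaking object was found to its
    corresponding audio frame."""
    return {i: image_to_audio_frame(i, n_image, n_audio)
            for i in speaking_image_frames}

print(associate([2, 4, 6], 20, 30))   # {2: 3, 4: 6, 6: 9}
```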
In this embodiment of the invention, voice recognition is performed on the audio signal acquired in the first period, and face recognition is performed on the video signal acquired in the first period; the second speaking object and the image frames corresponding to the second speaking object are determined; and the corresponding relation between the audio frames corresponding to the first speaking object and the image frames corresponding to the second speaking object is determined. In this way, more monitoring information about the object to be recognized can be obtained in a monitoring scene or a conference scene, and the voice recognition result corresponding to the second speaking object does not need to be searched for in an offline process, so that more monitoring data can be provided for security, the monitoring or conference efficiency can be improved, and the method is suitable for more monitoring or conference environments.
In one possible implementation, after the corresponding relation between the audio frames corresponding to the first speaking object and the image frames corresponding to the second speaking object is determined, the method further includes:
displaying the voice recognition result of the audio frames corresponding to the first speaking object on the image frames corresponding to those audio frames in an object-indicating manner, where the object-indicating manner establishes an associated display relationship between the voice recognition result of the audio frames and the second speaking object.
In this technical scheme, after the corresponding relation between the audio frames corresponding to the first speaking object and the image frames corresponding to the second speaking object is determined, the voice recognition result of the audio frames corresponding to the first speaking object is displayed directly on the images using the corresponding relation, which achieves real-time display and improves the visualization of monitoring and conferences.
In one possible implementation, the method further includes: determining key points in the face of the second speaking object in the image frames corresponding to the audio frames, and displaying the key points on the image frames.
In this technical scheme, by determining key points in the face of the second speaking object in the image frames that have a corresponding relation with the audio frames, the lip movement of the speaking object can be visualized, which improves the visual effect of monitoring and conferences.
In order to further improve the recognition efficiency and the monitoring and conference effects, as shown in fig. 3, an embodiment of the present invention further provides a real-time face trajectory tracking method in combination with speech recognition, including:
Step 301: carrying out voice recognition on the audio signals acquired in the second time period, and determining a third speaking object and an audio frame corresponding to the third speaking object.
In one possible implementation manner, the second time period is the current time period, and the first time period is earlier than the second time period. For example, the first time period may be 1 second and the second time period 2 seconds; the lengths of the two periods may be the same or different, and the second time period comes after the first time period. The first time period may or may not be continuous with the second time period; this may be determined according to the characteristics of the audio signal or the needs of voice recognition, and is not limited herein.
In this technical scheme, by determining that the audio signals are continuous, the audio signal acquired in the second time period is directly associated with the image frames of the associated object determined in the first time period, which reduces the image processing time, improves the online voice tracking efficiency and effect, and improves the monitoring and conference effects.
In another possible implementation manner, voice recognition may be performed on the audio signal acquired in the second time period, and it may be determined that the audio signal acquired in the first time period and the audio signal acquired in the second time period are continuous audio signals for the third speaking object; the first time period is earlier than the second time period.
In this technical scheme, by determining that the audio signals are continuous, the audio signal acquired in the second time period is directly associated, in the second time period, with the image frames of the associated object determined in the first time period, which reduces the image processing time, improves the online voice tracking efficiency and effect, and improves the monitoring and conference effects.
Step 302: when the third speaking object is determined to be an associated object, matching the image frame of the video signal acquired in the second time period with the face image of the associated object, and determining the image frame containing the associated object from the video signal acquired in the second time period; the associated object is an object indicated in a corresponding relationship established according to the audio signal acquired in the first period and the video signal acquired in the first period; the correspondence is used for indicating that the audio frame and the image frame with the correspondence are directed to the same object; the face image of the associated object is obtained through the video signal acquired in the first time period;
A possible implementation manner is that the associated object is an object indicated in a corresponding relationship established according to the audio signal acquired in the first period and the video signal acquired in the first period, and a specific implementation manner may refer to an embodiment in the foregoing speech recognition method, and details are not repeated here.
Step 303: determining the corresponding relation between the audio frames corresponding to the third speaking object and the image frames, acquired in the second time period, containing the associated object.
For example, taking the case where the numbers of image frames and audio frames are the same, assume that the associated object is the first speaking object and that the image frames of the first speaking object determined in the second time period are frames 21 and 23. If the audio frames corresponding to the third speaking object are determined to be frames 20 to 25, the frames that can be associated, namely image frames 21 and 23, are associated with audio frames 21 and 23 corresponding to the third speaking object.
In this embodiment of the invention, voice recognition is performed on the audio signal acquired in the second time period, the video signal acquired in the second time period is associated according to the voice recognition result of that audio signal matching the associated object of the first time period, and the corresponding relation between the audio frames corresponding to the third speaking object and the image frames, acquired in the second time period, containing the associated object is determined. Thus, once the associated object has been obtained in a monitoring scene or a conference scene, the speaking object can be associated directly, which reduces the resource consumption caused by image recognition, improves the voice tracking efficiency, and improves the monitoring or conference efficiency so as to suit more monitoring or conference environments.
Further, to improve the association efficiency and improve the monitoring and conference effects, as shown in fig. 4, an embodiment of the present invention provides a method for predicting a face tracking trajectory in real time in combination with speech recognition, including:
Step 401: carrying out voice recognition on the audio signals acquired in the second time period, and determining a fourth speaking object and an audio frame corresponding to the fourth speaking object;
The second time period may be the current time period, and the first time period is earlier than the second time period; for example, the first time period may be 1 second and the second time period 2 seconds, and the lengths of the two periods may be the same or different. The first time period may or may not be continuous with the second time period; this may be determined according to the characteristics of the audio signal or the needs of voice recognition, and is not limited herein.
In a possible implementation manner, if it is determined that the audio signal acquired in the first time period and the audio signal acquired in the second time period are continuous audio signals for the fourth speaking object; the second time period is a period after the first time period;
determining an image frame containing the fourth speaking object from the video signal acquired in the second time period;
establishing the correspondence between the audio signal acquired during the second period and the image frame including the fourth speaking object acquired during the second period.
In this technical scheme, by determining that the audio signals are continuous, the audio signals acquired in the second time period are directly associated, in the second time period, with the image frames of the associated object determined in the first time period, which reduces the image processing time, improves the online voice tracking efficiency and effect, and improves the monitoring and conference effects.
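A minimal sketch of this continuous-audio shortcut, with an assumed gap-and-voiceprint continuity test (the patent text does not specify how continuity is judged), might look like this:

```python
import numpy as np

def is_continuous(seg_p1, seg_p2, max_gap=0.3, sim_thr=0.85):
    """Stub continuity test: the two segments nearly touch in time and their
    voiceprint embeddings are similar."""
    gap = seg_p2['start'] - seg_p1['end']
    v1, v2 = seg_p1['voiceprint'], seg_p2['voiceprint']
    sim = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    return 0.0 <= gap <= max_gap and sim >= sim_thr

def bind_continuous_audio(seg_p1, seg_p2, tracked_frames_p2):
    """If the second-period audio continues the same speaker, associate it
    directly with that speaker's tracked image frames (no re-recognition)."""
    if is_continuous(seg_p1, seg_p2):
        return {'object': seg_p1['object'], 'image_frames': tracked_frames_p2}
    return None     # fall back to full speech and face recognition

v = np.ones(4)
p1 = {'object': 'speaker_A', 'start': 0.0, 'end': 1.0, 'voiceprint': v}
p2 = {'start': 1.1, 'end': 2.0, 'voiceprint': v}
print(bind_continuous_audio(p1, p2, tracked_frames_p2=[21, 22, 23]))
```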
Step 402: when the fourth speaking object is determined to be an associated object, predicting the predicted position of the associated object in the video signal acquired in the second time period according to the position of the associated object in the image frame of the first time period; the associated object is an object indicated in a corresponding relationship established according to the audio signal acquired in the first period and the video signal acquired in the first period; the correspondence is used for indicating that the audio frame and the image frame with the correspondence are directed to the same object; the second period of time is a period of time after the first period of time;
For example, if the first predicted position of the associated object in the image frames of the second time period is predicted according to the first position information of the associated object in the image frames of the first time period, then when an image frame of the second time period is obtained, the face recognition area at that position can be obtained according to the first predicted position, and face recognition over the whole frame is not needed. Of course, if it is determined that the face recognition area obtained at the first predicted position contains no face image of the associated object, face recognition can be performed again on the image frame, which ensures the accuracy of image recognition.
Another possible scenario is that, in at least one image frame adjacent to the current image frame, the face recognition area at the predicted position does contain the face image of the associated object, but in the current image frame the face image of the associated object cannot be recognized by the image recognition device, due to image shake or other reasons; in that case the face image of the associated object is treated as absent from that image frame.
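Steps 402 and 403, together with the fallback to full recognition just described, might be sketched as follows; the constant-velocity motion model, fixed crop size, toy normalized-correlation matcher, and threshold are all assumptions:

```python
import numpy as np

def predict_box(track, size=32):
    """Constant-velocity extrapolation of the last two box centers."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    cx, cy = 2 * x1 - x0, 2 * y1 - y0
    return int(cx - size // 2), int(cy - size // 2), size

def match_at(frame, box, stored_face, thr=0.9):
    """Toy matcher: normalized correlation between the crop at the predicted
    position and the stored face image from the first period."""
    x, y, s = box
    crop = frame[y:y + s, x:x + s]
    if crop.shape != stored_face.shape:
        return False
    a = crop - crop.mean()
    b = stored_face - stored_face.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return denom > 0 and float((a * b).sum() / denom) > thr

def find_object(frame, track, stored_face, full_detect):
    box = predict_box(track)
    if match_at(frame, box, stored_face):
        return box                 # matched at the predicted position
    return full_detect(frame)      # fallback: re-recognize over the whole frame

face = np.random.default_rng(2).random((32, 32))
frame = np.zeros((120, 120))
frame[40:72, 50:82] = face                    # face actually at (50, 40)
track = [(56, 46), (61, 51)]                  # centers heading toward (66, 56)
print(find_object(frame, track, face, full_detect=lambda f: None))  # (50, 40, 32)
```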
Step 403: for any image frame of the video signal acquired in the second time period, matching the image corresponding to the predicted position in that image frame with the face image of the associated object, and determining the image frames containing the associated object; the face image of the associated object is obtained through the video signal acquired in the first time period;
Step 404: determining the corresponding relation between the audio frames corresponding to the fourth speaking object and the image frames, acquired in the second time period, containing the associated object.
For example, taking the case where the numbers of image frames and audio frames are the same, assume that the associated object is the first speaking object, that the image frames of the first speaking object determined in the second time period are frames 21 and 23, and that the predicted frames are frames 20 and 22. If the audio frames corresponding to the fourth speaking object are determined to be frames 20 to 25, the frames that can be associated, namely image frames 20 to 23, are associated with audio frames 20 to 23 corresponding to the fourth speaking object.
In this embodiment of the invention, voice recognition is performed on the audio signal acquired in the second time period, and the predicted position of the associated object in the video signal acquired in the second time period is predicted according to the position of the associated object in the image frames of the first time period. For any image frame of the video signal acquired in the second time period, the image corresponding to the predicted position in that image frame is matched with the face image of the associated object, and the image frames containing the associated object are determined; the corresponding relation between the audio frames corresponding to the fourth speaking object and the image frames, acquired in the second time period, containing the associated object is then determined. Thus, once the corresponding relation has been obtained in a monitoring scene or a conference scene, the position of the associated object in the image frames of the second time period can be determined according to the predicted position, which avoids face recognition in the second time period, reduces the resource consumption caused by image recognition, and further improves the voice tracking efficiency and the monitoring or conference efficiency, so that the method is suitable for more monitoring or conference environments.
In order to further improve the association effect, as shown in fig. 5, an embodiment of the present invention provides a body trajectory real-time tracking method in combination with speech recognition, including:
Step 501: carrying out voice recognition on the audio signals collected in the second time period, and determining a fifth speaking object and an audio frame corresponding to the fifth speaking object;
Step 502: when the fifth speaking object is determined to be an associated object, matching the image frames of the video signals acquired in the second time period with the body images of the associated object, and determining the image frames containing the associated object from the video signals acquired in the second time period;
wherein the associated object is an object indicated in a correspondence relationship established according to the audio signal acquired at the first period and the video signal acquired at the first period;
One possible implementation further includes: when the fifth speaking object is determined to be an associated object, determining that no image frame matched with the face image of the associated object exists in the image frames of the video signals acquired in the second time period.
In this technical scheme, irrelevant image frames can be removed in advance before the associated object is matched, which can greatly reduce the number of images to be matched, reduce the time required for image association, and improve the association efficiency.
As to how the body image of the associated object is determined, the body image of the second speaking object may be determined from the image frames corresponding to the second speaking object, and the body image of the second speaking object is then associated with the face image of the second speaking object.
In this technical scheme, the first body image is associated according to the face image of the first speaking object, which improves the association effect, enables the association of body images in the second time period, and improves the monitoring and conference effects.
Specifically, the correspondence is used to indicate that the audio frame and the image frame having the correspondence are for the same object; the body image of the associated subject is determined by the video signal acquired over the first period of time; the specific implementation may refer to an embodiment in which the first speaking object is associated with the second speaking object in the speech recognition method, and details are not described herein again.
In one possible implementation manner, matching the image frames of the video signal acquired in the second time period with the body image of the associated object may include:
determining a first body image of the first speaking object according to the face image of the first speaking object in the image frames corresponding to the associated object in the first time period; matching the image frames in the video signal acquired in the second time period with the first body image, and determining the image frames of the associated object in the second time period from the video signal acquired in the second time period.
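A minimal sketch of this body-image matching path, assuming embedding vectors and cosine-similarity thresholds as stand-ins for the face and body matching, might look like this:

```python
import numpy as np

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def body_track(frames, assoc, face_thr=0.8, body_thr=0.7):
    """frames: per-frame dicts with optional 'face' / 'body' embeddings (None
    when not recognizable). assoc: the associated object's face and body
    embeddings obtained in the first period."""
    matched = []
    for i, f in enumerate(frames):
        if f['face'] is not None and cos_sim(f['face'], assoc['face']) >= face_thr:
            continue           # face matched: handled by the face-matching branch
        if f['body'] is not None and cos_sim(f['body'], assoc['body']) >= body_thr:
            matched.append(i)  # face not recognizable, but the body image matches
    return matched

rng = np.random.default_rng(3)
face_emb, body_emb = rng.normal(size=8), rng.normal(size=8)
frames = [{'face': face_emb, 'body': body_emb},   # face visible
          {'face': None, 'body': body_emb},       # e.g. turned away: body only
          {'face': None, 'body': -body_emb}]      # a different person
assoc = {'face': face_emb, 'body': body_emb}
print(body_track(frames, assoc))                  # -> [1]
```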
In this technical scheme, the image frames of the second time period in which the face image cannot be recognized but the body image can be recognized are associated according to the face image and the first body image of the first speaking object, which improves the monitoring and conference effects.
Step 503: determining the corresponding relation between the audio frames corresponding to the fifth speaking object and the image frames, acquired in the second time period, containing the associated object.
In this embodiment of the invention, voice recognition is performed on the audio signal acquired in the second time period, the video signal acquired in the second time period is associated according to the voice recognition result of that audio signal matching the associated object of the first time period, and the corresponding relation between the audio frames corresponding to the fifth speaking object and the image frames, acquired in the second time period, containing the associated object is determined. Thus, once the face image and the body image of the associated object have been obtained in a monitoring scene or a conference scene, the speaking object can be associated directly, which reduces the resource consumption caused by image recognition, improves the recall rate for the associated object, improves the tracking efficiency for the speaking object, and improves the monitoring or conference efficiency so as to suit more monitoring or conference environments.
Based on the embodiment, referring to fig. 6, an embodiment of the present invention provides an apparatus for associating a speaker and a voice recognition result thereof in real time, including:
The audio processing module 601 is configured to perform speech recognition on an audio signal acquired at a first time period, and determine a first speaking object and an audio frame corresponding to the first speaking object;
The image processing module 602 is configured to perform face recognition on the video signal acquired in the first period, and determine a second speaking object and an image frame corresponding to the second speaking object; the second speaking object is determined according to the lip movement characteristics of the same face in the image frame of the video signal;
The recognition processing module 603 is configured to determine a correspondence between an audio frame corresponding to the first speaking object and an image frame corresponding to the second speaking object; the correspondence is used to indicate that the audio frame and the image frame having the correspondence are for the same object.
in a possible implementation manner, the audio processing module 601 is specifically configured to:
carrying out face recognition on N frames of images in the video signal acquired in the first period to determine a second object; performing lip movement detection on the lip region of the second object in the M frames of images that include the second object, and determining the lip movement feature of each of the M frames of images; and determining a lip movement confidence according to the lip movement feature of each of the M frames of images. If the lip movement confidence of K frames of images among the M frames of images is greater than a first preset threshold, the second object is determined to be the second speaking object, and the K frames of images are determined to be the image frames corresponding to the second speaking object; N is greater than or equal to M, and M is greater than or equal to K.
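Read as two successive filters, the N/M/K logic above can be sketched as follows; detect_face and lip_movement_confidence stand in for detection models the patent does not name, and the minimum count of confident frames is an added assumption.

```python
def find_second_speaking_object(frames, detect_face, lip_movement_confidence,
                                first_threshold, min_confident_frames=1):
    # N frames in; M frames containing the second object's face; K frames
    # whose lip-movement confidence exceeds the first preset threshold.
    with_face = [(i, f) for i, f in enumerate(frames) if detect_face(f)]  # M <= N
    confident = [i for i, f in with_face
                 if lip_movement_confidence(f) > first_threshold]          # K <= M
    if len(confident) >= min_confident_frames:
        return confident  # image frames corresponding to the second speaking object
    return []             # the second object is not judged to be speaking
```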
In one possible implementation, the apparatus further includes:
a display module, configured to display the voice recognition result of the audio frame corresponding to the first speaking object on the image frame corresponding to that audio frame in an object indication manner, where the object indication manner establishes an associated display relationship between the voice recognition result of the audio frame and the second speaking object.
In one possible implementation manner, the image processing module 602 is further configured to determine key points in the face of the second speaking object in the image frame corresponding to the audio frame, and to display the key points on the image frame through the display module.
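As a rough illustration of the display module, the OpenCV sketch below overlays a recognition result and facial key points on an image frame. The layout choices are assumptions, and cv2.putText only renders ASCII text, so a production system would need its own text renderer for other scripts.

```python
import cv2

def render_recognition_result(frame, text, face_box, keypoints):
    # Draw the second speaking object's face box, the voice recognition
    # result above it, and the detected facial key points.
    x1, y1, x2, y2 = face_box
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(frame, text, (x1, max(15, y1 - 10)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    for px, py in keypoints:
        cv2.circle(frame, (int(px), int(py)), 2, (0, 0, 255), -1)
    return frame
```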
An embodiment of the present invention provides a storage medium storing a program of a speech recognition method, where the program, when executed by a processor, performs the method according to any one of the above embodiments.
An embodiment of the present invention provides a computer device, including one or more processors; and one or more computer-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to perform the method of any one of the above embodiments.
Based on the same inventive concept, and as shown in fig. 7, an embodiment of the present invention provides a device for tracking a human face trajectory in real time in combination with speech recognition, comprising:
The audio processing module 701 is configured to perform speech recognition on the audio signal acquired in the second period, and determine a third speaking object and an audio frame corresponding to the third speaking object;
A recognition processing module 702, configured to, when it is determined that the third speaking object is an associated object, match the image frames of the video signal acquired in the second period against the face image of the associated object, and determine the image frames including the associated object from that video signal. The associated object is an object indicated in a correspondence established according to the audio signal and the video signal acquired in the first period; the correspondence is used to indicate that the audio frame and the image frame having the correspondence are for the same object; the face image of the associated object is obtained from the video signal acquired in the first period. The module then determines the correspondence between the audio frame corresponding to the third speaking object and the image frames containing the associated object acquired in the second period.
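A minimal sketch of this second-period face matching follows; detect_faces and embed_face are placeholders for unspecified face detection and embedding models, and the similarity threshold is an assumed tuning value.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def frames_with_associated_object(frames, assoc_face_emb,
                                  detect_faces, embed_face, thr=0.6):
    # Keep second-period frames in which some detected face matches the
    # associated object's first-period face image.
    matched = []
    for idx, frame in enumerate(frames):
        for (x1, y1, x2, y2) in detect_faces(frame):
            if cosine(embed_face(frame[y1:y2, x1:x2]), assoc_face_emb) > thr:
                matched.append(idx)
                break
    return matched
```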
In a possible implementation manner, the audio processing module 701 is specifically configured to perform voice recognition on the audio signal acquired in the first period and determine a first speaking object and the audio frame corresponding to the first speaking object. The device further comprises:
The image processing module is used for carrying out face recognition on the video signals acquired in the first period and determining a second speaking object and an image frame corresponding to the second speaking object; the second speaking object is determined according to the lip movement characteristics of the same face in the image frame of the video signal;
The recognition processing module 702 is configured to determine the correspondence between the audio frame corresponding to the first speaking object and the image frame corresponding to the second speaking object, and to determine an object in the correspondence as the associated object.
In one possible implementation manner, the image processing module is specifically configured to: carry out face recognition on N frames of images in the video signal acquired in the first period to determine a second object; perform lip movement detection on the lip region of the second object in the M frames of images that include the second object, and determine the lip movement feature of each of the M frames of images; and determine a lip movement confidence according to the lip movement feature of each of the M frames of images. If the lip movement confidence of K frames of images among the M frames of images is greater than a first preset threshold, the second object is determined to be the second speaking object, and the K frames of images are determined to be the image frames corresponding to the second speaking object; N is greater than or equal to M, and M is greater than or equal to K.
In one possible implementation manner, the second period is the current period, and the first period is earlier than the second period. Alternatively, the audio processing module 701 is configured to perform voice recognition on the audio signal acquired in the second period and determine that the audio signals acquired in the first period and in the second period are continuous audio signals for the third speaking object, the first period being earlier than the second period.
As shown in fig. 8, an embodiment of the present invention provides a device for predicting a face tracking trajectory in real time in combination with speech recognition, including:
The audio processing module 801 is configured to perform speech recognition on the audio signal acquired in the second period, and determine a fourth speaking object and an audio frame corresponding to the fourth speaking object;
a recognition processing module 802, configured to, when it is determined that the fourth speaking object is an associated object, predict the position of the associated object in the video signal acquired in the second period according to the position of the associated object in the image frames of the first period. The associated object is an object indicated in a correspondence established according to the audio signal and the video signal acquired in the first period; the correspondence is used to indicate that the audio frame and the image frame having the correspondence are for the same object; the second period is a period after the first period. For any image frame of the video signal acquired in the second period, the module matches the image at the predicted position in that frame against the face image of the associated object and determines the image frames containing the associated object; the face image of the associated object is obtained from the video signal acquired in the first period. The module then determines the correspondence between the audio frame corresponding to the fourth speaking object and the image frames containing the associated object acquired in the second period.
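The text does not fix the predictor; the sketch below assumes constant-velocity extrapolation from the last two first-period positions and then matches only a window around the predicted point, which is one plausible reading (a Kalman filter would be a natural refinement). The window size and threshold are assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def predict_position(track, dt):
    # track: (timestamp, (x, y)) face-box centres from the first period;
    # extrapolate with constant velocity dt seconds past the last sample.
    (t0, p0), (t1, p1) = track[-2], track[-1]
    v = (np.asarray(p1, float) - np.asarray(p0, float)) / max(t1 - t0, 1e-6)
    return np.asarray(p1, float) + v * dt

def match_near_prediction(frame, pred_xy, assoc_face_emb, embed_face,
                          window=64, thr=0.6):
    # Compare only the window around the predicted position with the stored
    # first-period face image, instead of scanning the whole frame.
    x, y = int(pred_xy[0]), int(pred_xy[1])
    crop = frame[max(0, y - window):y + window, max(0, x - window):x + window]
    return crop.size > 0 and cosine(embed_face(crop), assoc_face_emb) > thr
```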
In a possible implementation manner, the recognition processing module 802 is further configured to:
if it is determined that the audio signal acquired in the first period and the audio signal acquired in the second period are continuous audio signals for the fourth speaking object, where the second period is a period after the first period, determine the image frames containing the fourth speaking object from the video signal acquired in the second period, and establish the correspondence between the audio signal acquired in the second period and those image frames.
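How "continuous audio signals for the fourth speaking object" is decided is left open; one hedged reading, sketched below, combines a short inter-period gap with speaker-embedding similarity. Both thresholds and the embedding source are assumptions.

```python
import numpy as np

def is_continuous_for_speaker(emb_first, emb_second, gap_ms,
                              max_gap_ms=300, sim_threshold=0.75):
    # Treat the two periods as one continuous utterance when the silence
    # between them is short and the speaker embeddings agree.
    sim = float(np.dot(emb_first, emb_second) /
                (np.linalg.norm(emb_first) * np.linalg.norm(emb_second) + 1e-8))
    return gap_ms <= max_gap_ms and sim >= sim_threshold
```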
In one possible implementation manner, the audio processing module 801 is configured to perform voice recognition on the audio signal acquired in the first period and determine a first speaking object and the audio frame corresponding to the first speaking object;
the device further comprises: the image processing module is used for carrying out face recognition on the video signals acquired in the first period and determining a second speaking object and an image frame corresponding to the second speaking object; the second speaking object is determined according to the lip movement characteristics of the same face in the image frame of the video signal;
The recognition processing module 802 is further configured to determine the correspondence between the audio frame corresponding to the first speaking object and the image frame corresponding to the second speaking object, and to determine an object in the correspondence as the associated object.
As shown in fig. 9, an embodiment of the present invention provides a device for tracking a body trajectory in real time in combination with voice recognition, including:
the audio processing module 901, configured to perform speech recognition on the audio signal acquired in the second period, and determine a fifth speaking object and an audio frame corresponding to the fifth speaking object;
A recognition processing module 902, configured to, when it is determined that the fifth speaking object is an associated object, match the image frames of the video signal acquired in the second period against the body image of the associated object, and determine the image frames containing the associated object from that video signal. The associated object is an object indicated in a correspondence established according to the audio signal and the video signal acquired in the first period; the correspondence is used to indicate that the audio frame and the image frame having the correspondence are for the same object; the body image of the associated object is determined from the video signal acquired in the first period. The module then determines the correspondence between the audio frame corresponding to the fifth speaking object and the image frames containing the associated object acquired in the second period.
In a possible implementation manner, the audio processing module 901 is specifically configured to perform voice recognition on the audio signal acquired in the first period and determine a first speaking object and the audio frame corresponding to the first speaking object. The device further comprises:
The image processing module is used for carrying out face recognition on the video signals acquired in the first period and determining a second speaking object and an image frame corresponding to the second speaking object; the second speaking object is determined according to the lip movement characteristics of the same face in the image frame of the video signal;
The recognition processing module is configured to determine the correspondence between the audio frame corresponding to the first speaking object and the image frame corresponding to the second speaking object, and to determine an object in the correspondence as the associated object.
In a possible implementation manner, the recognition processing module 902 is specifically configured to:
determine a first body image of the first speaking object according to a face image of the first speaking object in an image frame corresponding to the associated object in the first period; and match image frames in the video signal acquired in the second period against the first body image, thereby determining the image frames of the associated object in the second period from that video signal.
Based on the above embodiments, and referring to fig. 10, a schematic structural diagram of a computer device in an embodiment of the present invention is shown.
An embodiment of the present invention provides a computer device, which may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable connection and communication among these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory); optionally, the memory 1005 may also be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 10 does not limit the computer device, which may include more or fewer components than those shown, combine some components, or arrange the components differently.
The memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a voice recognition or voice tracking program. The operating system manages and controls the hardware and software resources of the system for voice recognition, voice tracking, voice trajectory tracking, or voice object tracking, and supports the running of the voice recognition or voice tracking program as well as other software or programs.
The user interface 1003 is mainly used for connecting to servers and exchanging data with each server; the network interface 1004 is mainly used for connecting to a background server and exchanging data with it. The processor 1001 may be configured to invoke the speech recognition program stored in the memory 1005 and perform the following operations: performing speech recognition on the audio signal acquired in the second period, and determining a fourth speaking object and the audio frame corresponding to the fourth speaking object; when it is determined that the fourth speaking object is an associated object, predicting the position of the associated object in the video signal acquired in the second period according to the position of the associated object in the image frames of the first period, where the associated object is an object indicated in a correspondence established according to the audio signal and the video signal acquired in the first period, the correspondence is used to indicate that the audio frame and the image frame having the correspondence are for the same object, and the second period is a period after the first period; for any image frame of the video signal acquired in the second period, matching the image at the predicted position in that frame against the face image of the associated object, which is obtained from the video signal acquired in the first period, and determining the image frames containing the associated object; and determining the correspondence between the audio frame corresponding to the fourth speaking object and the image frames containing the associated object acquired in the second period.
Further, the processor 1001 may be configured to call the voice tracking program stored in the memory 1005 and further perform: if it is determined that the audio signal acquired in the first period and the audio signal acquired in the second period are continuous audio signals for the fourth speaking object, determining the image frames containing the fourth speaking object from the video signal acquired in the second period, and establishing the correspondence between the audio signal acquired in the second period and those image frames.
Further, the processor 1001 may be configured to call the voice trajectory tracking program stored in the memory 1005 and further perform: performing voice recognition on the audio signal acquired in the first period, and determining a first speaking object and the audio frame corresponding to the first speaking object; performing face recognition on the video signal acquired in the first period, and determining a second speaking object and the image frame corresponding to the second speaking object, where the second speaking object is determined according to the lip movement features of the same face in the image frames of the video signal; and determining the correspondence between the audio frame corresponding to the first speaking object and the image frame corresponding to the second speaking object, and determining an object in the correspondence as the associated object.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.

Claims (10)

1. A method for predicting a human face tracking trajectory in real time in combination with voice recognition, characterized by comprising the following steps:
carrying out voice recognition on the audio signal acquired in a second period, and determining a fourth speaking object and an audio frame corresponding to the fourth speaking object;
when it is determined that the fourth speaking object is an associated object, predicting the position of the associated object in the video signal acquired in the second period according to the position of the associated object in the image frames of a first period; wherein the associated object is an object indicated in a correspondence established according to the audio signal acquired in the first period and the video signal acquired in the first period; the correspondence is used to indicate that the audio frame and the image frame having the correspondence are for the same object; and the second period is a period after the first period;
for any image frame of the video signal acquired in the second period, matching the image at the predicted position in the image frame against a face image of the associated object, and determining the image frame containing the associated object; wherein the face image of the associated object is obtained from the video signal acquired in the first period;
determining the correspondence between the audio frame corresponding to the fourth speaking object and the image frame containing the associated object acquired in the second period.
2. The method of claim 1, wherein the method further comprises:
if it is determined that the audio signal acquired in the first period and the audio signal acquired in the second period are continuous audio signals for the fourth speaking object:
determining an image frame containing the fourth speaking object from the video signal acquired in the second time period;
establishing the correspondence between the audio signal acquired during the second period and the image frame including the fourth speaking object acquired during the second period.
3. The method of claim 1, wherein the associated object is an object indicated in a correspondence established according to the audio signal acquired in the first period and the video signal acquired in the first period, comprising:
carrying out voice recognition on the audio signal acquired in the first period, and determining a first speaking object and an audio frame corresponding to the first speaking object;
carrying out face recognition on the video signal acquired in the first period, and determining a second speaking object and an image frame corresponding to the second speaking object; wherein the second speaking object is determined according to the lip movement features of the same face in the image frames of the video signal;
and determining the correspondence between the audio frame corresponding to the first speaking object and the image frame corresponding to the second speaking object, and determining an object in the correspondence as the associated object.
4. The method as claimed in claim 3, wherein said carrying out face recognition on the video signal acquired in the first period and determining a second speaking object and an image frame corresponding to the second speaking object comprises:
carrying out face recognition on N frames of images in the video signal acquired in the first period to determine a second object;
performing lip movement detection on the lip region of the second object in the M frames of images including the second object, and determining the lip movement feature of each of the M frames of images;
determining a lip movement confidence according to the lip movement feature of each of the M frames of images, and, if the lip movement confidence of K frames of images among the M frames of images is greater than a first preset threshold, determining the second object to be the second speaking object and the K frames of images to be the image frames corresponding to the second speaking object; wherein N is greater than or equal to M, and M is greater than or equal to K.
5. The method as claimed in claim 3, wherein said performing face recognition on the video signal acquired in the first period and determining a second speaking object and an image frame corresponding to the second speaking object comprises:
carrying out face recognition on N frames of images in the video signal acquired in the first period to determine a second object;
performing lip language detection on the lip region of the second object in the M frames of images including the second object, and determining the lip language feature of each of the M frames of images; if the confidence that lip language exists in L frames of images among the M frames of images is greater than a second preset threshold, determining the second object to be the second speaking object and the L frames of images to be the image frames corresponding to the second speaking object; wherein N is greater than or equal to M, and M is greater than or equal to L.
6. A device for predicting a human face tracking trajectory in real time in combination with speech recognition, characterized by comprising:
the audio processing module is used for carrying out voice recognition on the audio signals acquired in the second time period and determining a fourth speaking object and an audio frame corresponding to the fourth speaking object;
the recognition processing module is used for: when it is determined that the fourth speaking object is an associated object, predicting the position of the associated object in the video signal acquired in the second period according to the position of the associated object in the image frames of the first period, wherein the associated object is an object indicated in a correspondence established according to the audio signal acquired in the first period and the video signal acquired in the first period, the correspondence is used to indicate that the audio frame and the image frame having the correspondence are for the same object, and the second period is a period after the first period; for any image frame of the video signal acquired in the second period, matching the image at the predicted position in the image frame against a face image of the associated object, and determining the image frame containing the associated object, wherein the face image of the associated object is obtained from the video signal acquired in the first period; and determining the correspondence between the audio frame corresponding to the fourth speaking object and the image frame containing the associated object acquired in the second period.
7. The apparatus of claim 6, wherein the recognition processing module is further configured to:
if it is determined that the audio signal acquired in the first period and the audio signal acquired in the second period are continuous audio signals for the fourth speaking object, determine the image frame containing the fourth speaking object from the video signal acquired in the second period, and establish the correspondence between the audio signal acquired in the second period and the image frame including the fourth speaking object acquired in the second period.
8. The apparatus of claim 6, wherein the audio processing module is configured to perform voice recognition on the audio signal acquired in the first period and determine a first speaking object and an audio frame corresponding to the first speaking object;
the device further comprises: the image processing module is used for carrying out face recognition on the video signals acquired in the first period and determining a second speaking object and an image frame corresponding to the second speaking object; the second speaking object is determined according to the lip movement characteristics of the same face in the image frame of the video signal;
The recognition processing module is further configured to determine the correspondence between the audio frame corresponding to the first speaking object and the image frame corresponding to the second speaking object, and to determine an object in the correspondence as the associated object.
9. A storage medium, characterized in that it stores a program of a voice trajectory tracking method, wherein the program, when executed by a processor, performs the method according to any one of claims 1 to 5.
10. A computer device, comprising one or more processors; and one or more computer-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to perform the method according to any one of claims 1 to 5.
CN201910817876.8A 2019-08-30 2019-08-30 method and device for predicting human face tracking track in real time by combining voice recognition Pending CN110544270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910817876.8A CN110544270A (en) 2019-08-30 2019-08-30 method and device for predicting human face tracking track in real time by combining voice recognition

Publications (1)

Publication Number Publication Date
CN110544270A true CN110544270A (en) 2019-12-06

Family

ID=68711160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910817876.8A Pending CN110544270A (en) 2019-08-30 2019-08-30 method and device for predicting human face tracking track in real time by combining voice recognition

Country Status (1)

Country Link
CN (1) CN110544270A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101752A (en) * 2007-07-19 2008-01-09 华中科技大学 Monosyllabic language lip-reading recognition system based on vision character
CN104834900A (en) * 2015-04-15 2015-08-12 常州飞寻视讯信息科技有限公司 Method and system for vivo detection in combination with acoustic image signal
CN104951730A (en) * 2014-03-26 2015-09-30 联想(北京)有限公司 Lip movement detection method, lip movement detection device and electronic equipment
CN105512348A (en) * 2016-01-28 2016-04-20 北京旷视科技有限公司 Method and device for processing videos and related audios and retrieving method and device
CN108924617A (en) * 2018-07-11 2018-11-30 北京大米科技有限公司 The method of synchronizing video data and audio data, storage medium and electronic equipment
CN109903054A (en) * 2019-03-07 2019-06-18 百度在线网络技术(北京)有限公司 A kind of operation acknowledgement method, apparatus, electronic equipment and storage medium
CN110196914A (en) * 2019-07-29 2019-09-03 上海肇观电子科技有限公司 A kind of method and apparatus by face information input database
CN110475093A (en) * 2019-08-16 2019-11-19 北京云中融信网络科技有限公司 A kind of activity scheduling method, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JOON SON CHUNG ET AL.: "Out of Time: Automated Lip Sync in the Wild", 《ACCV 2016: COMPUTER VISION – ACCV 2016 WORKSHOPS 》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111818385A (en) * 2020-07-22 2020-10-23 Oppo广东移动通信有限公司 Video processing method, video processing device and terminal equipment
CN111818385B (en) * 2020-07-22 2022-08-09 Oppo广东移动通信有限公司 Video processing method, video processing device and terminal equipment

Similar Documents

Publication Publication Date Title
CN110545396A (en) Voice recognition method and device based on positioning and denoising
CN110544491A (en) Method and device for real-time association of speaker and voice recognition result thereof
US11398235B2 (en) Methods, apparatuses, systems, devices, and computer-readable storage media for processing speech signals based on horizontal and pitch angles and distance of a sound source relative to a microphone array
CN110544479A (en) Denoising voice recognition method and device
CN107481327B (en) About the processing method of augmented reality scene, device, terminal device and system
EP3591633B1 (en) Surveillance system and surveillance method using multi-dimensional sensor data
CN109325429B (en) Method, device, storage medium and terminal for associating feature data
CN109672853A (en) Method for early warning, device, equipment and computer storage medium based on video monitoring
CN110503957A (en) A kind of audio recognition method and device based on image denoising
US20150049247A1 (en) Method and apparatus for using face detection information to improve speaker segmentation
CN109286848B (en) Terminal video information interaction method and device and storage medium
CN110674728B (en) Method, device, server and storage medium for playing mobile phone based on video image identification
CN114758271A (en) Video processing method, device, computer equipment and storage medium
CN110544270A (en) method and device for predicting human face tracking track in real time by combining voice recognition
JP2007114885A (en) Classification method and device by similarity of image
CN111784750A (en) Method, device and equipment for tracking moving object in video image and storage medium
CN114819110B (en) Method and device for identifying speaker in video in real time
CN111310595A (en) Method and apparatus for generating information
CN109800678A (en) The attribute determining method and device of object in a kind of video
CN109800685A (en) The determination method and device of object in a kind of video
CN114220175B (en) Motion pattern recognition method and device, equipment, medium and product thereof
CN111445499B (en) Method and device for identifying target information
CN112668364B (en) Behavior prediction method and device based on video
CN112115740B (en) Method and apparatus for processing image
CN110517295A (en) A kind of the real-time face trace tracking method and device of combination speech recognition

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20191206)