WO2005114576A1 - Motion content determination device - Google Patents
Motion content determination device
- Publication number: WO2005114576A1 (application PCT/JP2005/009376)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature amount
- operation content
- utterance
- hmm
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/24—Speech recognition using non-acoustical features
- G10L15/25—Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
- G06F18/295—Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/84—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks
- G06V10/85—Markov-related models; Markov random fields
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
Definitions
- The present invention relates to a device for determining the content of a subject's motion, and in particular to a motion content determination device, an utterance content determination device, a car navigation system, an alarm system, a motion content determination program, and a motion content determination method suitable for determining the content of a subject's motion from a captured image including the subject's face.
- The voice recognition device of Patent Document 1 captures the speaker with a camera, processes the captured image with an image processing ECU, and determines the presence or absence of utterance from the speaker's appearance; for example, the presence or absence of utterance is determined from appearance features such as the face direction, the movement of the lips, and the gaze direction.
- A pattern matching method is used when processing the captured image to detect the face direction, the movement of the lips, and the gaze direction. Recognition accuracy is then improved by performing voice recognition only when the speaker is determined to be speaking.
- The template matching method, one form of pattern matching, prepares in advance a representative or average image pattern as a template for the face or other part to be detected, and searches the entire image for the image region closest to the template, thereby realizing detection of the face and other parts.
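- As a rough, non-authoritative illustration of template matching, the following Python sketch searches a grayscale image for the window most similar to a template using normalized cross-correlation; the image sizes, the similarity measure, and the exhaustive scan are assumptions for illustration, not the exact procedure of the cited prior art.

```python
import numpy as np

def match_template_ncc(image, template):
    """Return the (top, left) position where `template` best matches `image`,
    scored by normalized cross-correlation (a common template-matching score)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = float((p * t).mean())
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Toy usage: find a bright 3x3 blob planted in a random image.
rng = np.random.default_rng(0)
img = rng.random((48, 64))
img[20:23, 30:33] += 2.0
templ = img[20:23, 30:33].copy()
print(match_template_ncc(img, templ))  # -> ((20, 30), ~1.0)
```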
- The image recognition device of Patent Document 2 includes an image acquiring unit that acquires a distance image stream of a target object, an oral region extracting unit that extracts the oral region from the acquired distance image stream, and an image recognition unit that recognizes at least one of the shape and the movement of the lips based on the distance image stream of the oral region extracted by the oral region extracting unit. For extraction of the oral region, a template matching method or the like is used, as in the voice recognition device of Patent Document 1. The image recognition unit prepares in advance templates of the oral shapes corresponding to the pronunciation of "A" and "I", matches these templates against the extracted oral region image, and thereby recognizes the utterance content.
- As techniques for photographing a face image of a target person and processing the photographed image to detect whether or not a driver is awake, there are the driving state detection device described in Patent Document 3, the dozing state detection device described in Patent Document 4, and the dozing driving prevention device described in Patent Document 5.
- The driving state detection device described in Patent Document 3 detects the driver's eye region by performing a correlation operation on the captured image using a target template, and determines the driver's state from the detected eye region image.
- The dozing state detection device described in Patent Document 4 detects pixel density along vertical pixel columns of the face image, defines one extraction point for each local increase of density within a column, connects extraction points adjacent in the column direction, and detects the eye position from the groups of curves extending in the horizontal direction of the face. After the eye position is detected, the open/closed state of the eye is determined within a predetermined region containing the eye, and the dozing state is detected from changes in the open/closed state.
- The dozing driving prevention device described in Patent Document 5 sequentially captures video including the eye of the car driver with a video camera, calculates, between the latest frame and the frame stored in frame memory, the areas of the regions whose brightness has increased and decreased, obtains the correlation coefficient between the time-series pattern of the difference of these areas and a standard blink waveform, extracts the instant of a blink when the correlation coefficient exceeds a reference value, and determines the driver's arousal state based on the extracted blinks.
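- A minimal sketch of that prior-art idea, assuming a hypothetical per-frame area-difference signal and an arbitrary standard blink waveform (both invented here for illustration): the correlation coefficient between a sliding window of the signal and the standard waveform is thresholded to extract blink instants.

```python
import numpy as np

# Hypothetical standard blink waveform: eyelid-area difference over ~10 frames.
standard_blink = np.array([0., 0.2, 0.6, 1.0, 0.8, 0.4, 0.1, 0., 0., 0.])

def detect_blink_instants(area_diff, threshold=0.9):
    """Slide the standard waveform over the per-frame area-difference signal and
    report frames where the correlation coefficient exceeds the reference value."""
    n = len(standard_blink)
    hits = []
    for t in range(len(area_diff) - n + 1):
        window = area_diff[t:t + n]
        if np.std(window) < 1e-8:
            continue  # flat window: correlation undefined
        r = np.corrcoef(window, standard_blink)[0, 1]
        if r > threshold:
            hits.append(t)
    return hits

signal = np.concatenate([np.zeros(15), standard_blink * 0.9 + 0.05, np.zeros(15)])
print(detect_blink_instants(signal))  # frame indices where a blink-like waveform starts
```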
- Patent document 1 JP-A-11-352987
- Patent Document 2 JP-A-11-219421
- Patent Document 3 JP-A-8-175218
- Patent Document 4 JP-A-10-275212
- Patent Document 5 JP-A-2000-40148
- In the voice recognition device described above, a template matching method is used to detect the lip portion from an image captured by a fixed camera, so the detection accuracy may be significantly reduced depending on the content of the prepared template.
- Moreover, since the search for the lips is performed over the image of the entire face, the number of search points increases and the processing becomes heavy.
- In the image recognition device of Patent Document 2, a speech section is detected by thresholding the size of the opened oral region, so it was difficult to determine ambiguous motion contents from the captured image, for example to distinguish a yawn from speech.
- In the above awake-state detection techniques, the awake state is determined from, for example, the frequency of blinks within a certain period of time or the integrated value of blink open/close durations within a certain period of time.
- They cannot, however, determine the arousal state by taking into account information such as the amplitude, duration, and speed of each individual blink, which is considered effective for arousal determination from a physiological point of view.
- The present invention has been made in view of such unresolved problems of the conventional technology, and it is an object of the present invention to provide a motion content determination device, an utterance content determination device, a car navigation system, an alarm system, a motion content determination program, and a motion content determination method suitable for determining the motion content of a subject from a captured image including the subject's face.
- In order to achieve the above object, a motion content determination device according to the invention determines the motion content of the subject based on a captured image including a predetermined part constituting the subject's face, and comprises:
- image capturing means for capturing an image including the predetermined part;
- feature amount extraction means for extracting a feature amount in the image of the predetermined part based on the image captured by the image capturing means;
- an HMM (Hidden Markov Model) that takes as input a feature amount extracted from the image of the predetermined part and outputs a likelihood for a predetermined motion content related to the movement of the predetermined part; and
- motion content determination means for calculating the likelihood for the extracted feature amount using the HMM and for determining, based on the calculation result, the motion content related to the movement of the predetermined part of the subject.
- Here, an HMM is a probabilistic model of time-series signals: a non-stationary time series is modeled by transitions among a plurality of stationary signal sources.
- For example, the duration of speech changes with speaking speed, and the utterance content produces a characteristic shape in the frequency domain (called a spectral envelope), but that shape fluctuates depending on the speaker, the environment, the content, and so on.
- An HMM is a statistical model that can absorb such fluctuations.
- An HMM may be defined for any unit (for example, a word or a phoneme when performing speech recognition). Since there are generally many such units (many words, many phonemes), each HMM consists of a plurality of states, as shown in FIG. 31, and for each state a state transition probability (a) and an output probability (b: a probability distribution such as a normal distribution or a mixture of normal distributions) are learned statistically. The transition probabilities absorb fluctuations in the temporal expansion and contraction of speech, and the output probabilities absorb fluctuations in the spectrum.
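- To make the likelihood calculation concrete, here is a minimal sketch (not the patent's implementation) of the forward algorithm for a left-to-right HMM with diagonal-Gaussian output probabilities; the number of states, the transition matrix, and the feature dimension are arbitrary illustrative values.

```python
import numpy as np

def forward_log_likelihood(obs, log_A, log_pi, means, variances):
    """Forward-algorithm log-likelihood of an observation sequence under an HMM.
    obs: (T, D) feature sequence; log_A: (N, N) log transition matrix;
    log_pi: (N,) log initial distribution; means/variances: (N, D) per-state Gaussians."""
    def log_gauss(x, mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

    T = len(obs)
    alpha = log_pi + log_gauss(obs[0], means, variances)             # (N,)
    for t in range(1, T):
        trans = alpha[:, None] + log_A                               # sum over previous states
        alpha = np.logaddexp.reduce(trans, axis=0) + log_gauss(obs[t], means, variances)
    return np.logaddexp.reduce(alpha)                                # log P(obs | model)

# Toy example with 3 states and 2-dimensional features.
rng = np.random.default_rng(1)
N, D = 3, 2
A = np.array([[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]])   # left-to-right
log_A = np.log(A + 1e-12)
log_pi = np.log(np.array([1.0, 1e-12, 1e-12]))
means, variances = rng.random((N, D)), np.ones((N, D))
print(forward_log_likelihood(rng.random((5, D)), log_A, log_pi, means, variances))
```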
- The feature amount is, for example, a frequency spectrum component obtained by Fourier-transforming the image of the predetermined part, a logarithmic component of that frequency spectrum, or an MFCC (mel-frequency cepstrum) component.
- the image of the predetermined part is the image of the predetermined part cut out from the captured image itself.
- the predetermined parts constituting the face are eyes, nose, mouth, eyebrows and the like.
- A motion content determination device according to another aspect of the invention determines the motion content of the subject based on a captured image including a predetermined part constituting the subject's face, and comprises:
- image capturing means for capturing an image including the predetermined part;
- face part detection means for detecting the predetermined part constituting the subject's face from the captured image using an SVM (Support Vector Machine), based on the image captured by the image capturing means;
- feature amount extraction means for extracting a feature amount in the image of the predetermined part based on the detection result of the face part detection means;
- an HMM (Hidden Markov Model) that takes the feature amount as input and outputs a likelihood for a predetermined motion content related to the movement of the predetermined part; and
- motion content determination means for calculating the likelihood for the feature amount extracted by the feature amount extraction means using the HMM and for determining, based on the calculation result, the motion content related to the movement of the predetermined part of the subject.
- With this configuration, the motion content determination means can calculate the likelihood for the feature amount extracted by the feature amount extraction means using the HMM, which takes a feature amount extracted from the image of the predetermined part as input and outputs a likelihood for a predetermined motion content related to the movement of that part, and can determine the motion content related to the movement of the predetermined part of the subject based on the calculation result.
- Since the predetermined part can be detected with high accuracy from various captured images by the SVM, and a well-known HMM incorporating the concept of time is used for determining the motion content, the motion content related to the movement of the predetermined part can be determined with higher accuracy.
- Here, an SVM is a learning model for constructing a two-class discriminator with excellent pattern recognition performance.
- An SVM exhibits high discrimination performance even for unlearned data because the discrimination hyperplane is set according to the criterion of maximizing the margin; specifically, the minimum distance between the discrimination hyperplane and the training samples is used as an evaluation function, and the hyperplane is set so as to maximize it.
- Furthermore, an SVM can construct a nonlinear discriminant function using the kernel trick.
- The kernel trick is an extension to nonlinear classifiers: feature vectors are mapped into a higher-dimensional space by a nonlinear mapping, and linear discrimination in that space realizes nonlinear discrimination in the original space. The function that realizes this mapping implicitly is called a kernel function, and the identification method using it is called the kernel trick.
- For details of SVMs, refer to "Takio Kurita, Introduction to Support Vector Machines" at http://www.neurosci.aist.go.jp/~kurita/lecture/svm/svm.html.
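- As a rough illustration of an SVM with the kernel trick (a sketch under assumed toy data, not the patent's trained detectors), the following uses scikit-learn's SVC with an RBF kernel to separate hypothetical "lip" and "non-lip" 100-dimensional grayscale patches, mirroring the 10 x 10 pixel window described later:

```python
import numpy as np
from sklearn.svm import SVC

# Two-class toy data: hypothetical 100-dimensional "lip" vs "non-lip" patches.
rng = np.random.default_rng(2)
lip = rng.normal(0.7, 0.1, size=(200, 100))
non_lip = rng.normal(0.3, 0.1, size=(200, 100))
X = np.vstack([lip, non_lip])
y = np.array([1] * 200 + [0] * 200)

# The RBF kernel is the kernel trick: linear separation in an implicit
# high-dimensional space realizes nonlinear separation in the original space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

test_patch = rng.normal(0.7, 0.1, size=(1, 100))
print(clf.predict(test_patch))            # predicted class
print(clf.decision_function(test_patch))  # signed distance to the hyperplane (similarity)
```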
- With this configuration, many patterns of face images and facial part images, whose content (shape, luminance distribution, and so on) changes according to the face orientation, are learned in advance; the boundary between these images and other images is separated accurately by a (curved) discrimination surface, and the face and its parts are detected based on this boundary. Accurate detection performance can therefore be exhibited even for images of the face and its parts whose content changes with the face orientation.
- In the motion content determination device, the face part detection means changes the size of the image area detected as the predetermined part according to each of a plurality of face directions of the subject in the captured image.
- With this configuration, the face part detection means can change, for each face direction, the size of the image area detected as the predetermined part.
- Since the predetermined part is captured in various shapes and sizes depending on the face direction, the necessary feature amounts can still be extracted sufficiently even when the size of the detected image area is changed according to the face direction. By changing the size of the image area of the predetermined part according to the face direction, feature amount extraction does not have to be performed on unnecessary image regions, so the extraction process can be sped up.
- In the motion content determination device according to any one of the above aspects, the image capturing means captures an image portion including the entire face of the subject, and the device further comprises:
- positional relationship information acquisition means for acquiring positional relationship information between the image portion including the entire face and the image of the predetermined part; and
- face direction determination means for determining the direction of the subject's face based on the positional relationship information,
- wherein the HMMs include HMMs generated for each of a plurality of face directions and corresponding to those directions,
- and the motion content determination means selects, from the plurality of HMMs, the HMM corresponding to the face direction determined by the face direction determination means, calculates the likelihood of the selected HMM for the feature amount extracted by the feature amount extraction means using that HMM, and determines, based on the calculation result, the motion content related to the movement of the predetermined part of the subject.
- With this configuration, the positional relationship information acquisition means can acquire positional relationship information between the image portion including the entire face and the image of the predetermined part, the face direction determination means can determine the direction of the subject's face based on that information, and the motion content determination means can select, from the plurality of HMMs, the HMM corresponding to the determined face direction, calculate its likelihood for the extracted feature amount, and determine the motion content related to the movement of the predetermined part of the subject based on the calculation result.
- Since the face direction of the subject is determined and the HMM corresponding to the determined direction is selected from HMMs prepared for a plurality of face directions, when the motion content is determined using an image including the entire face of the subject captured by, for example, a single fixed camera mounted on the inner mirror of a car, the motion content related to the movement of the predetermined part, whose shape in the image changes with the face direction, can be determined more accurately from feature amounts corresponding to the various face directions.
- In the motion content determination device, the motion content determination means inputs to the HMM, as one set, the feature amounts corresponding to a predetermined number of consecutive frames of the captured image, and shifts the input by a predetermined number of frames relative to the first frame of the immediately preceding set, so that the frames of the immediately preceding set and of the next set partially overlap (a sketch of this windowing follows).
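- A minimal sketch of that overlapping windowing, assuming a set length of five frames (the value used in the embodiment below) and an arbitrary shift of two frames:

```python
def overlapping_sets(features, set_len=5, shift=2):
    """Group per-frame feature vectors into sets of `set_len` consecutive frames,
    advancing the first frame of each set by `shift` frames so that successive
    sets partially overlap (set_len and shift are illustrative values)."""
    sets = []
    start = 0
    while start + set_len <= len(features):
        sets.append(features[start:start + set_len])
        start += shift
    return sets

frames = [f"frame{i}" for i in range(10)]
for s in overlapping_sets(frames):
    print(s)  # sets covering frames 0-4, 2-6, 4-8
```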
- In the motion content determination device, the image of the predetermined part includes an image of the lip portion of the subject,
- the feature amount extraction means extracts a feature amount in the image of the lip portion based on the image of the lip portion,
- the HMM includes a lip state determination HMM that receives as input a feature amount extracted from the image of the lip portion and outputs a likelihood for a predetermined motion content related to the movement of the lip portion,
- and the motion content determination means calculates the likelihood for the feature amount using the feature amount of the lip portion and the lip state determination HMM, and determines, based on the calculation result, the motion content related to the movement of the subject's lips.
- Here, the image of the lip portion includes not only the image of the lip portion cut out from the captured image, but also an area image including the image of the lip portion and its vicinity.
- The invention according to claim 7 is the motion content determination device according to claim 6, wherein the HMM outputs a likelihood for at least one of the speaking state and the non-speaking state of the subject,
- the motion content determination means discriminates, for each frame of the captured image, whether or not the subject is in a speaking state using the lip state determination HMM,
- the device comprises utterance start point determination means for determining, based on the discrimination result, the utterance start point indicating the output of the lip state determination HMM corresponding to the point in time at which the subject starts speaking,
- and the motion content determination means determines the utterance section from the start to the end of the subject's utterance based on the determination result of the utterance start point determination means.
- With this configuration, the utterance start point determination means can determine, based on the discrimination result, the utterance start point indicating the output of the HMM corresponding to the point in time at which the subject starts speaking, and the motion content determination means can determine the subject's utterance section based on that determination result.
- The utterance section can thus be determined with high accuracy, and by performing voice recognition on the subject's speech data within the determined utterance section,
- the recognition accuracy of the subject's utterance content in noisy places can be improved.
- The invention according to claim 8 is the motion content determination device according to claim 7, wherein, when the discrimination result indicates a state of utterance for n (n is an integer, n ≥ 2) consecutive frames starting from the first frame, the utterance start point determination means sets that first frame as a candidate for the utterance start point, and when the discrimination result further continues to indicate utterance for m (m is an integer, m ≥ 3) consecutive frames, the first frame is determined as the utterance start point.
- With this configuration, the utterance start point can be determined accurately even when the discrimination result momentarily fluctuates between utterance and non-utterance.
- The invention according to claim 9 is the motion content determination device according to claim 8, wherein, when the discrimination result indicates a non-utterance state at the (n + k)-th frame (k is an integer, k < m) and then indicates non-utterance for p (p is an integer, p ≤ 10) consecutive frames from the (n + k)-th frame, the utterance start point determination means removes the first frame from the candidates for the utterance start point; on the other hand, when the discrimination result again indicates utterance within r frames (r is an integer, r ≤ p) of the (n + k)-th frame, the first frame is determined as the utterance start point. A simplified sketch of this start-point logic follows.
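- The following is a simplified, non-authoritative sketch of the start-point rule of claims 8 and 9 under assumed parameter values (n = 2, m = 3, p = 10, r = 5 are illustrative); `decisions` stands for the per-frame utterance / non-utterance output of the lip state determination HMM:

```python
def find_utterance_start(decisions, n=2, m=3, p=10, r=5):
    """Simplified start-point rule: a frame becomes a candidate when `n` consecutive
    frames are judged as utterance, is confirmed when utterance continues for `m`
    frames, is confirmed early if utterance resumes within `r` frames of an
    interruption, and is discarded after `p` consecutive non-utterance frames."""
    candidate = None
    speak_run = 0
    silence_run = 0
    for t, speaking in enumerate(decisions):
        if speaking:
            speak_run += 1
            if candidate is None and speak_run >= n:
                candidate = t - speak_run + 1          # first frame of the run
            if candidate is not None and 0 < silence_run <= r:
                return candidate                        # utterance resumed quickly
            if candidate is not None and speak_run >= m:
                return candidate                        # confirmed start point
            silence_run = 0
        else:
            if candidate is not None:
                silence_run += 1
                if silence_run >= p:
                    candidate, silence_run = None, 0    # discard the candidate
            speak_run = 0
    return None

print(find_utterance_start([0, 1, 1, 0, 1, 1, 1, 1]))  # -> 1
```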
- The invention according to claim 10 is the motion content determination device according to any one of claims 6 to 9, wherein the HMM outputs a likelihood for at least one of the speaking state and the non-speaking state of the subject,
- the motion content determination means discriminates, for each frame of the captured image, whether or not the subject is in a speaking state using the HMM,
- the device comprises utterance end point determination means for determining, based on the discrimination result, the utterance end point indicating the output of the HMM corresponding to the point in time at which the subject finishes speaking,
- and the motion content determination means determines the utterance section from the start to the end of the subject's utterance based on the determination result of the utterance end point determination means.
- With this configuration, the utterance end point determination means can determine, based on the discrimination result, the utterance end point indicating the output of the HMM corresponding to the point in time at which the subject finishes speaking, and the motion content determination means can determine the utterance section from the start to the end of the subject's utterance based on that determination result.
- The utterance section can thus be determined with high accuracy, and by performing voice recognition on the subject's speech data within the determined utterance section,
- the recognition accuracy of the subject's utterance content in noisy places can be improved.
- The invention according to claim 11 is the motion content determination device according to claim 10, wherein, when the discrimination result indicates a non-utterance state for w (w is an integer, w ≥ 20) consecutive frames, the utterance end point determination means determines the first of those w frames as the utterance end point.
- With this configuration, even when the discrimination result temporarily takes on values that cannot occur in reality (an abnormal state), for example an alternation of utterance and non-utterance, the utterance end point can be determined more accurately.
- The invention according to claim 12 is the motion content determination device according to claim 11, wherein, when the discrimination result indicates utterance for a single frame or for two consecutive frames within x frames (x is an integer, 6 ≤ x < w), the utterance end point determination means continues counting the frames indicating non-utterance from the (x + 1)-th frame onward up to the w-th frame, whereas the count is cleared when the state indicating utterance continues for three consecutive frames.
- With this configuration, even when the discrimination result temporarily takes on values that cannot occur in reality (an abnormal state), for example an alternation of utterance and non-utterance, the utterance end point can be determined more accurately. A simplified sketch of this end-point logic follows.
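- A simplified, non-authoritative sketch of the end-point rule of claims 11 and 12 under assumed values (w = 20; one- or two-frame utterance blips tolerated, three consecutive utterance frames clear the count); the per-frame decisions again stand for the HMM output:

```python
def find_utterance_end(decisions, start, w=20):
    """Simplified end-point rule: counting from the utterance start, the first frame
    of a run of `w` frames judged as non-utterance becomes the end point; blips of
    one or two utterance frames do not reset the count, but three consecutive
    utterance frames clear it."""
    silence_start = None
    silence_count = 0
    speak_run = 0
    for t in range(start, len(decisions)):
        if decisions[t]:
            speak_run += 1
            if speak_run >= 3:                     # real resumption of utterance
                silence_start, silence_count = None, 0
        else:
            speak_run = 0
            if silence_start is None:
                silence_start = t
            silence_count = t - silence_start + 1  # short blips are counted through
            if silence_count >= w:
                return silence_start               # end point = first non-utterance frame
    return None

decisions = [1] * 10 + [0] * 5 + [1] + [0] * 30
print(find_utterance_end(decisions, start=0, w=20))  # -> 10
```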
- The invention according to claim 13 is the motion content determination device according to any one of claims 1 to 12, wherein the image of the predetermined part includes an image of an eye portion of the subject,
- the feature amount extraction means extracts a feature amount in the image of the eye portion based on the detection result for the eye portion,
- the HMM includes an eye state determination HMM that receives as input a feature amount extracted from the image of the eye portion and outputs a likelihood for a motion content related to the movement of the eye portion,
- and the motion content determination means calculates the likelihood for the feature amount using the feature amount of the eye portion extracted by the feature amount extraction means and the eye state determination HMM, and determines, based on the calculation result, the motion content related to the movement of the subject's eye.
- The invention according to claim 14 is the motion content determination device according to claim 13, wherein the eye state determination HMM outputs, for the input of feature amounts extracted from a plurality of frames of detected images of the eye portion, a likelihood for the type of blink of the subject, and the motion content determination means calculates the likelihood for the feature amounts using the eye state determination HMM and the feature amounts of the plurality of frames of detected eye images extracted by the feature amount extraction means, and determines the type of blink of the subject based on the calculation result.
- The invention according to claim 15 is the motion content determination device according to claim 13, wherein the eye state determination HMM outputs, for the input of feature amounts extracted from a plurality of frames of detected images of the eye portion, a likelihood for the type of blink speed and blink amplitude of the subject,
- and the motion content determination means calculates the likelihood for the feature amounts using the feature amounts of the plurality of frames of detected eye images extracted by the feature amount extraction means and the eye state determination HMM, and determines the blink speed and amplitude type of the subject based on the calculation result.
- In the motion content determination device according to claim 16, the eye state determination HMM outputs, for the input of feature amounts of a plurality of frames of detected images of the eye portion, a likelihood for the type of blink of the subject,
- the motion content determination means calculates the likelihood for the feature amounts using the feature amounts of the plurality of frames of detected eye images extracted by the feature amount extraction means and the eye state determination HMM,
- and the arousal state of the subject is determined based on the calculation result.
- With this configuration, the arousal state of the subject, for example an alert state or a drowsy state, can be determined accurately from the classified blink types, such as the blink speed and the degree of eyelid closure.
- The invention according to claim 17 is the motion content determination device according to claim 13, wherein the eye state determination HMM outputs, for the input of feature amounts of a plurality of frames of detected images of the eye portion, a likelihood for blinks of specific types,
- the motion content determination means calculates, using the feature amounts of the plurality of frames of detected eye images extracted by the feature amount extraction means and the eye state determination HMM, the likelihood for the blinks of the specific types,
- and the arousal state of the subject is determined based on the calculation result.
- The invention according to claim 18 is the motion content determination device according to claim 17, wherein the motion content determination means determines the arousal state of the subject based on changes in the frequency of occurrence of each of the specific types of blinks within a predetermined time.
- With this configuration, the arousal state can be determined with high accuracy based on the frequency of occurrence of specific types of blinks, or on changes in that frequency, within a predetermined time, which are considered effective for arousal determination from a physiological point of view. A toy sketch of such a frequency-based decision follows.
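- The following toy sketch is entirely illustrative: the blink-type labels reuse the waveform names O, A, and B that appear in FIGS. 23 to 26, and the window length, threshold, and event format are invented assumptions. It flags drowsiness when the share of slow or long-closure blink types within a recent time window becomes large:

```python
from collections import Counter

def arousal_from_blinks(blink_events, window_s=60.0, drowsy_ratio=0.3):
    """Toy arousal estimate from classified blinks.
    blink_events: list of (timestamp_seconds, blink_type) pairs, where blink_type
    is the class output by the eye-state HMM (e.g. 'O', 'A', 'B')."""
    if not blink_events:
        return "unknown"
    t_end = blink_events[-1][0]
    recent = [kind for t, kind in blink_events if t_end - t <= window_s]
    counts = Counter(recent)
    slow = counts.get("A", 0) + counts.get("B", 0)   # hypothetical drowsy-type blinks
    ratio = slow / max(len(recent), 1)
    return "drowsy" if ratio >= drowsy_ratio else "awake"

events = [(1.0, "O"), (8.0, "O"), (20.0, "A"), (31.0, "A"), (44.0, "B"), (55.0, "A")]
print(arousal_from_blinks(events))  # -> "drowsy"
```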
- An utterance content determination device according to claim 19 comprises:
- image capturing means for capturing an image including a predetermined part constituting the face of the subject;
- face part detection means for detecting an image of the lip portion of the subject from the captured image; feature amount extraction means for extracting a feature amount in the image of the lip portion based on the image of the lip portion detected by the face part detection means;
- an utterance content determination HMM (Hidden Markov Model) that receives as input a feature amount extracted from the image of the lip portion and outputs a likelihood for the utterance content related to the movement of the lip portion; and
- utterance content determination means for calculating the likelihood for the feature amount using the feature amount extracted by the feature amount extraction means and the utterance content determination HMM, and for determining the utterance content of the subject based on the calculation result.
- With this configuration, the feature amount extraction means can extract the feature amount based on the image of the lip portion, and the utterance content determination means can calculate the likelihood for the feature amount using the extracted feature amount and the utterance content determination HMM and determine the utterance content of the subject based on the calculation result.
- Since an HMM is used, the state of the utterance motion can be determined with the concept of time, and the utterance content can be determined with high accuracy even without voice information.
- A car navigation system according to claim 20 comprises the motion content determination device according to any one of claims 6 to 12, voice recognition means for performing voice recognition processing based on the determination result of the motion content related to the movement of the lips by the motion content determination device, and operation processing means for performing predetermined operation processing based on the recognition result of the voice recognition means.
- With this configuration, the voice recognition means can perform voice recognition processing based on the determination result of the motion content related to the movement of the lips by the motion content determination device, and predetermined operation processing can be performed based on the recognition result of the voice recognition means.
- When this system is installed in a car and the subject is the driver, the utterance section can be determined accurately even in an environment containing noise such as the conversations of passengers other than the driver, music from the car stereo, road noise, wind noise, and engine sound, so that voice recognition can be performed accurately and predetermined operations such as route search and route guidance to a destination can be performed based on the recognition result.
- Here, the car navigation system is a known device that uses inertial navigation devices and GPS (Global Positioning System) to show the current position to occupants such as the driver on a display screen while the vehicle is being driven, and to guide the travel route to a destination.
- An alarm system according to claim 21 comprises the motion content determination device according to any one of claims 16 to 18 and notification means for displaying the determination result of the arousal state or for issuing an alarm notification.
- With this configuration, the result of the determination of the subject's arousal state made by the motion content determination device according to any one of claims 16 to 18 can be notified to the subject or to persons concerned by the notification means.
- When this system is installed in a car and the subject is the driver, a warning can be given by an alarm sound or the like when it is determined that the driver is drowsy, making it possible to prevent drowsy driving and the like.
- A motion content determination program according to claim 22 is a program for determining the motion content of the subject based on a captured image including a predetermined part constituting the subject's face, the program causing a computer to execute processing realized as:
- image capturing means for capturing an image including the predetermined part;
- face part detection means for detecting the predetermined part constituting the subject's face from the captured image using an SVM (Support Vector Machine), based on the image captured by the image capturing means;
- feature amount extraction means for extracting a feature amount in the image of the predetermined part based on the detection result of the face part detection means; and
- motion content determination means for calculating the likelihood for the feature amount extracted by the feature amount extraction means using an HMM (Hidden Markov Model) that takes the feature amount as input and outputs a likelihood for the motion content related to the movement of the predetermined part, and for determining the motion content of the subject based on the calculation result.
- In the motion content determination program according to claim 23, the image capturing means captures an image including the entire face of the subject,
- and the program further causes the computer to execute processing realized as positional relationship information acquisition means for acquiring positional relationship information between the image portion including the entire face and the image of the predetermined part, and as face direction determination means for determining the face direction of the subject based on the positional relationship information,
- wherein the motion content determination means selects, from HMMs generated for each of a plurality of face directions and corresponding to those directions, the HMM corresponding to the face direction determined by the face direction determination means, calculates the likelihood of the selected HMM for the feature amount extracted in the feature amount extraction step using the selected HMM, and determines, based on the calculation result, the motion content related to the movement of the predetermined part of the subject.
- The invention according to claim 24 is the motion content determination program according to claim 22 or claim 23, wherein the image of the predetermined part includes an image of the lip portion of the subject, and the feature amount extraction means extracts a feature amount in the image of the lip portion based on the image of the lip portion,
- and the motion content determination means calculates the likelihood for the feature amount using the feature amount of the lip portion and a lip state determination HMM that receives as input a feature amount extracted from the image of the lip portion and outputs a likelihood for a predetermined motion content related to the movement of the lip portion, and determines, based on the calculation result, the motion content related to the movement of the subject's lips.
- In the motion content determination program, the image of the predetermined part includes an image of an eye portion of the subject,
- the feature amount extraction means extracts a feature amount in the image of the eye portion based on the detection result for the eye portion,
- and the motion content determination means calculates the likelihood for the feature amount using the feature amount of the eye portion extracted by the feature amount extraction means and an eye state determination HMM that receives as input a feature amount extracted from the image of the eye portion and outputs a likelihood for the motion content related to the movement of the eye portion, and determines, based on the calculation result, the motion content related to the movement of the subject's eye.
- A motion content determination method according to the invention is a method for determining the motion content of the subject based on a captured image including a predetermined part constituting the subject's face, and includes an image capturing step of capturing an image including the predetermined part, a feature amount extraction step of extracting a feature amount in the image of the predetermined part, and a motion content determination step of calculating the likelihood for the extracted feature amount using an HMM (Hidden Markov Model) that takes the feature amount as input and outputs a likelihood for a predetermined motion content related to the movement of the predetermined part, and of determining, based on the calculation result, the motion content related to the movement of the predetermined part of the subject.
- In the motion content determination method, an image including the entire face of the subject is captured, positional relationship information between the image portion including the entire face and the image of the predetermined part is acquired, the face direction of the subject is determined based on the positional relationship information, the HMM corresponding to the determined face direction is selected from HMMs generated for each of a plurality of face directions and corresponding to those directions, the likelihood of the selected HMM for the feature amount extracted in the feature amount extraction step is calculated using the selected HMM,
- and the motion content related to the movement of the predetermined part of the subject is determined based on the calculation result.
- In the motion content determination method, the image of the predetermined part includes an image of the lip portion of the subject, a feature amount in the image of the lip portion is extracted based on the image of the lip portion, the likelihood for the feature amount is calculated using the feature amount of the lip portion and a lip state determination HMM that receives as input a feature amount extracted from the image of the lip portion and outputs a likelihood for a predetermined motion content related to the movement of the lip portion,
- and the motion content related to the movement of the subject's lips is determined based on the calculation result.
- In the motion content determination method, the image of the predetermined part includes an image of an eye portion of the subject, a feature amount in the image of the eye portion is extracted based on the detection result for the eye portion, the likelihood for the feature amount is calculated using the feature amount of the eye portion extracted in the feature amount extraction step and an eye state determination HMM that receives as input a feature amount extracted from the image of the eye portion and outputs a likelihood for the motion content related to the movement of the eye portion,
- and the motion content related to the movement of the subject's eye is determined based on the calculation result.
- FIG. 1 is a block diagram showing a configuration of an utterance section detection device according to the present invention.
- FIG. 2(a) is a diagram showing the concept of the process of searching for the whole face region in the detection image, FIG. 2(b) is a diagram showing the concept of the process of searching for the lip region within the detected whole face region, and FIG. 2(c) is a diagram showing the concept of the lip region search process in the tracking mode.
- FIG. 3(a) is a diagram showing an example of a captured image, FIG. 3(b) is a diagram showing the search area and search window in the detection mode in the detection image, and FIG. 3(c) is a diagram showing the search area and search window in the tracking mode in the detection image.
- FIG. 4 is a diagram showing a temporal concept in inputting a feature amount to an HMM.
- FIG. 5 is a diagram showing a flow of an utterance start point determination process based on the output of the HMM.
- FIG. 6 is a diagram showing a flow of a process of determining an utterance end point based on an output of an HMM.
- FIG. 7 is a diagram showing an example of utterance/non-utterance determination results for various face directions.
- FIG. 8 is a flowchart showing an operation process of the utterance section detection device 1.
- FIG. 9 is a flowchart showing a process of generating image data for detection in the image processing unit 12.
- FIG. 10 is a flowchart showing a lip area detection process in a lip area detection unit 13.
- FIG. 11 is a flowchart showing a feature amount extraction process in a feature amount extraction unit 14.
- FIG. 12 is a flowchart showing an utterance section detection process in the utterance section detection unit 15.
- FIG. 13 is a flowchart showing the utterance start point determination process in the utterance section detection unit 15.
- FIG. 14 is a flowchart showing the utterance end point determination process in the utterance section detection unit 15.
- FIG. 15 (a) to (c) are diagrams showing examples of the lip region detected according to the face orientation.
- FIG. 16 is a flowchart showing a lip region detection process in a lip region detection unit 13 according to a modification of the first embodiment.
- FIG. 17 is a flowchart showing a feature amount extraction process in a feature amount extraction unit 14 according to a modification of the first embodiment.
- FIG. 18 is a flowchart showing an utterance section detection process in an utterance section detection unit 15 according to a modification of the first embodiment.
- FIG. 19 is a diagram showing the utterance section identification probabilities when an HMM that does not take the face orientation into account is used and when HMMs corresponding to the face orientations are used.
- FIG. 20 is a block diagram illustrating a configuration of an awake state determination device according to the present invention.
- FIG. 21(a) is a diagram showing an example of a captured image, FIG. 21(b) is a diagram showing the search area and search window in the detection mode in the detection image, and FIG. 21(c) is a diagram showing the search area and search window in the tracking mode in the detection image.
- FIG. 22 is a diagram showing an electromyogram waveform for awakening state determination with respect to one blink.
- FIG. 23 is a diagram showing blink waveform patterns.
- FIG. 24 is a diagram showing a coincidence between the output of the awake state determination HMM and the electromyogram waveform with respect to the waveform O in FIG. 23.
- FIG. 25 is a diagram showing the coincidence between the output of the awake state determination HMM and the electromyogram waveform for waveform A in FIG. 23.
- FIG. 26 is a diagram showing the coincidence between the output of the awake state determination HMM for waveform B in FIG. 23 and the electromyogram waveform.
- FIG. 27 is a diagram showing an example of a blink interval and a waveform pattern of a cluster.
- FIG. 28 is a flowchart showing a detection process of a left eye region in an eye region detection unit 33.
- FIG. 29 is a flowchart showing a feature amount extraction process in a feature amount extraction unit 34.
- FIG. 30 is a flowchart showing awake state determination processing in awake state determination section 35.
- FIG. 31 is a diagram showing an example of an HMM and a spectrum envelope corresponding to each state of the HMM.
- A first embodiment of the present invention will now be described with reference to the drawings.
- In this embodiment, the face part detection device, motion content determination device, face part detection device control program, motion content determination device control program, face part detection device control method, and motion content determination device control method according to the present invention are applied to an utterance section detection device that detects the utterance section, that is, the section from the start to the end of an utterance by a driver driving a car.
- FIG. 1 is a block diagram showing a configuration of an utterance section detection device according to the present invention.
- As shown in FIG. 1, the utterance section detection device 1 includes an image photographing unit 10, a data storage unit 11, an image processing unit 12, a lip region detection unit 13, a feature amount extraction unit 14, and an utterance section detection unit 15.
- In the present embodiment, the utterance section detection device 1 is installed in the vehicle cabin and is communicably connected to a car navigation system (hereinafter referred to as the CNS), not shown, which has a voice recognition function and is installed in the vehicle cabin. The output of the utterance section detection device 1 is input to the CNS, which performs voice recognition based on the input information and performs predetermined operations based on the recognition result.
- the image photographing unit 10 has a configuration including a CCD (charge coupled device) camera, and outputs an image photographed in a frame unit as digital data. Then, the output image data is transmitted to the data storage unit 11.
- the CCD camera is mounted on an inner mirror in the vehicle cabin so that an image including the entire face of the person (driver) sitting in the driver's seat can be captured.
- The position of the CCD camera is not limited to the inner mirror; any other location, such as the steering column, the center panel, or the front pillar, is acceptable as long as an image including the entire face of the subject can be captured.
- the data storage unit 11 stores data necessary for detecting an utterance section, such as an entire face detection SVM, a lip area detection SVM, an utterance section detection HMM, and an image captured by the image capturing unit 10. .
- In the present embodiment, audio data is also stored in synchronization with the frames of the captured image; for this purpose, a microphone for acquiring the voice spoken by the person sitting in the driver's seat is installed in the car.
- As preprocessing for the lip region detection performed by the lip region detection unit 13, the image processing unit 12 reduces the color information of each frame of the captured image by grayscale conversion and reduces the image size by sub-sampling.
- Hereinafter, the captured image whose color information and size have been reduced in this way is referred to as the detection image.
- The lip region detection unit 13 detects the lip region of the subject from the detection image acquired from the image processing unit 12 using SVMs.
- Specifically, the lip region is detected in two stages using two types of SVM: a whole face detection SVM that detects the image region of the entire face of the subject from the detection image, and a lip region detection SVM that detects the lip region from the whole face image detected by the whole face detection SVM.
- Once the lip region has been detected, a search range for the lip region in the detection image of the next frame is set based on the position information of the lip region detected in the previous frame (for example, coordinate information with the upper left pixel of the image taken as coordinates (1, 1)), and the lip region detection SVM is applied only to this search range.
- This speeds up the lip region detection process.
- Hereinafter, the mode in which the lip region detection process is performed using the two types of SVM described above is referred to as the detection mode, and the mode in which the search range of the lip region is set based on the position information of the lip region detected in the previous frame and the lip region detection SVM is applied to this range is referred to as the tracking mode.
- In either mode, the information of the detection result is transmitted to the feature amount extraction unit 14.
- Upon acquiring the detection result information from the lip region detection unit 13, the feature amount extraction unit 14 reads the corresponding original captured image from the data storage unit 11 based on that information, cuts out the image of the lip region from the read image, and extracts the feature amounts to be input to the utterance section detection HMM described later.
- In the present embodiment, the clipped lip region image is Fourier-transformed and its frequency spectrum components are extracted as feature amounts, and the number of dimensions is further reduced by principal component analysis or independent component analysis.
- The extracted feature amounts are transmitted to the utterance section detection unit 15 as sets of five consecutive frames.
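- A minimal sketch of this kind of feature extraction, assuming small grayscale lip images, the log amplitude spectrum as the raw feature, and principal component analysis for dimension reduction (the image size, number of components, and fitting PCA on the same sequence are illustrative simplifications; in practice the PCA basis would be learned from training data):

```python
import numpy as np
from sklearn.decomposition import PCA

def lip_spectrum_features(lip_images, n_components=10):
    """2-D Fourier transform of each grayscale lip-region image, log amplitude
    spectrum flattened as the raw feature, then PCA to reduce the dimensionality."""
    raw = []
    for img in lip_images:
        spectrum = np.abs(np.fft.fft2(img))
        raw.append(np.log(spectrum + 1e-8).ravel())
    raw = np.array(raw)
    pca = PCA(n_components=n_components)
    return pca.fit_transform(raw)               # shape: (n_frames, n_components)

rng = np.random.default_rng(3)
frames = rng.random((30, 10, 10))               # 30 frames of 10x10 lip images
features = lip_spectrum_features(frames)
print(features.shape)                           # -> (30, 10)
```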
- The utterance section detection unit 15 inputs the feature amounts of the lip region images acquired from the feature amount extraction unit 14 to the utterance section detection HMM, and detects, based on the output of the HMM for this input, the utterance section from the start to the end of the subject's utterance. The information of the detected utterance section is transmitted to the car navigation system (not shown).
- The utterance section detection device 1 further includes a processor (not shown), RAM (Random Access Memory), and a storage medium storing a dedicated program, and controls each of the above units by executing the dedicated program with the processor.
- The storage medium includes semiconductor storage media such as RAM and ROM, magnetic storage media such as FD and HD, and optically read storage media such as CD, CDV, LD, and DVD; it includes any computer-readable storage medium regardless of whether the reading method is electronic, magnetic, optical, or otherwise.
- FIG. 2 (a) is a diagram showing a concept of a search process of the entire face area in the detection image
- FIG. 2 (b) is a concept of a process of searching for the lip area from the detected whole face area.
- FIG. 3C is a diagram showing a concept of a lip region search process in the tracking mode.
- FIG. 3A is a diagram showing an example of a captured image
- FIG. 3B is a diagram showing a search area and a search window in a detection mode in a detection image
- FIG. 3 (c) is a diagram showing a search area and a search window in the tracking mode in the detection image.
- FIG. 4 is a diagram showing a temporal concept in inputting a feature amount to the HMM
- FIG. 5 is a diagram showing a flow of a speech start point determination process based on the output of the HMM
- FIG. 6 is a diagram showing a flow of an utterance end point determination process based on the output of the HMM.
- FIG. 7 is a diagram showing examples of the determination result of utterance/non-utterance for various face directions.
- in the utterance section detection process, first, the image capturing unit 10 uses the CCD camera attached to the inner mirror to capture, as shown in FIG. 3 (a),
- an image including the entire face of the subject sitting in the driver's seat, and the captured image data are stored in frame units (here, 1/30 second) in the order of capture
- in the data storage unit 11.
- the captured image is a color image.
- the data storage unit 11 notifies the image processing unit 12 of the storage.
- the image processing unit 12, upon receiving the notification from the data storage unit 11, reads out the captured image data from the data storage unit, and performs color information reduction processing by grayscale conversion and size reduction by sub-sampling on the read image data.
- For example, if the captured image is a full-color image having a size of 640 × 480 pixels, it is converted by grayscale conversion into data having only gradations between white and black,
- and is sub-sampled to 1/10 in each of the vertical and horizontal directions, giving an image with a size of 64 × 48 pixels. As a result, the number of pixels is reduced to 1/100.
- the detection image generated in this manner is transmitted to the lip region detection unit 13.
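- as a rough illustration of this preprocessing step, the following sketch (hypothetical helper name; NumPy only) converts one full-color frame to grayscale and sub-samples it to 1/10 in each direction, giving the 64 × 48 pixel detection image described above.

```python
import numpy as np

def make_detection_image(frame_rgb: np.ndarray) -> np.ndarray:
    """Convert a 480x640 RGB frame into a 48x64 grayscale detection image.

    Hypothetical helper following the description above: grayscale
    conversion followed by 1/10 sub-sampling in each direction.
    """
    # Luminance-weighted grayscale conversion (one common choice).
    gray = (0.299 * frame_rgb[..., 0]
            + 0.587 * frame_rgb[..., 1]
            + 0.114 * frame_rgb[..., 2])
    # Keep every 10th pixel vertically and horizontally.
    return gray[::10, ::10].astype(np.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    print(make_detection_image(frame).shape)  # (48, 64)
```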
- when the lip region detection unit 13 acquires the detection image from the image processing unit 12, it shifts to the detection mode. Then, as shown in FIG. 2 (a), the entire detection image 20 of 64 × 48 pixels is scanned by the search window 22 of 20 × 30 pixels to search for the whole face image area. The grayscale values of the scanned 20 × 30 pixels, that is, a total of 600 pixels, are input to the whole face detection SVM as a 600-dimensional value. In the whole face detection SVM, learning is performed in advance so that the whole face class and the non-whole-face class in the 600-dimensional space can be identified, and based on the distance between the identification hyperplane and the input value (such as the Euclidean distance),
- the similarity between the two is determined, and the 20 × 30 pixel area image having the highest similarity is detected as the image area 200 of the entire face.
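- the exhaustive window scan described above can be sketched as follows; the SVM itself is assumed to be a pre-trained binary classifier exposing a decision_function (for example scikit-learn's SVC fitted on 600-dimensional grayscale patches), and the window size follows the 20 × 30 pixel search window 22.

```python
import numpy as np

def scan_for_face(det_img: np.ndarray, face_svm, win_h: int = 30, win_w: int = 20):
    """Slide a win_h x win_w window over the detection image and return the
    top-left corner of the window that the whole face detection SVM scores
    highest.

    `face_svm` is assumed to be a pre-trained classifier exposing
    decision_function() (e.g. sklearn.svm.SVC fitted on 600-dimensional
    grayscale patches); a larger score means "more face-like".
    """
    best_score, best_pos = -np.inf, None
    h, w = det_img.shape
    for y in range(h - win_h + 1):
        for x in range(w - win_w + 1):
            patch = det_img[y:y + win_h, x:x + win_w].reshape(1, -1)
            score = float(face_svm.decision_function(patch)[0])
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```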
- when the image area 200 of the entire face is detected, a search area 23 of 20 × 15 pixels including the lower half of the image area 200 of the entire face is set as shown in FIG. 2 (b), and
- the lip region is scanned by using a search window 24 of 10 × 10 pixels within the set search region 23. That is, in an actual image, the result is as shown in FIG. 3 (b).
- the scanned grayscale values of the 10 × 10 pixels, that is, a total of 100 pixels, are input to the lip region detection SVM as a 100-dimensional value.
- in the lip region detection SVM, learning is performed in advance so that the lip region class and the non-lip-region class in the 100-dimensional space can be distinguished, and based on the distance between the identification hyperplane and the input value (such as the Euclidean distance), the similarity between the two is determined, and the 10 × 10 pixel region image having the highest similarity is detected as the lip region image. Further, when the lip region image is detected, its position information (coordinate information) is obtained, and the mode shifts to the tracking mode for the detection image of the next frame.
- when shifting to the tracking mode, the lip region detection unit 13 sets, for the detection image of the next frame, a search area 25 of 15 × 15 pixels centered on the position coordinates of the lip region image detected in the previous frame, as shown in FIG. 3 (c).
- by omitting the detection processing of the image area of the entire face and limiting the search to the search area 25 of 15 × 15 pixels, which is narrower than the search area 23 of 20 × 15 pixels, the processing is speeded up.
- the scanned grayscale values of the 10 × 10 pixels, a total of 100 pixels, are input to the lip region detection SVM in the same manner as in the above detection mode, and the lip region detection processing is performed.
- when the lip region is detected, the center coordinates of the lip region are transmitted to the feature amount extraction unit 14. In the tracking mode, this mode is maintained as long as the detection of the lip region succeeds, and the mode shifts back to the face detection mode if the detection of the lip region fails.
- when the feature amount extraction unit 14 acquires the center coordinates of the lip region in the detection image of each frame, it cuts out, from the corresponding captured image stored in the data storage unit 11, a 64 × 64 pixel grayscale lip image centered on the acquired center coordinates. A window function such as a Hamming window is applied to the cut-out lip image of each frame to reduce the influence of parts other than the lips, such as the nose or chin, included at the edges of the image. After that, two-dimensional Fourier transform processing is performed, and the amplitude spectrum of the lip image is obtained as a feature value. In the present embodiment, the obtained feature amount is further subjected to dimension reduction by principal component analysis in order to reduce the amount of computation and remove information unnecessary for identification.
- the eigenvectors used for the principal component analysis are obtained in advance offline using various lip images of an unspecified number of people, and, for example, the principal component analysis is performed using up to the tenth component of the eigenvectors,
- so that the multi-dimensional feature amount is reduced to 10 dimensions.
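- the feature extraction chain just described (window function, two-dimensional Fourier transform, projection onto principal components) might look roughly as follows; the 64 × 64 pixel crop size and the 10 retained components follow the description above, and the eigenvector matrix is assumed to have been computed offline.

```python
import numpy as np

def lip_features(lip_img: np.ndarray, eigvecs: np.ndarray) -> np.ndarray:
    """Extract a 10-dimensional feature vector from a 64x64 grayscale lip
    image, following the steps described above.

    `eigvecs` is assumed to be a (64*64, 10) matrix of eigenvectors obtained
    offline by principal component analysis of many lip images.
    """
    # 2-D Hamming window to attenuate pixels near the border (nose, chin).
    win = np.outer(np.hamming(lip_img.shape[0]), np.hamming(lip_img.shape[1]))
    # Amplitude spectrum of the two-dimensional Fourier transform.
    amp = np.abs(np.fft.fft2(lip_img * win))
    # Project onto the first 10 principal components.
    return amp.reshape(-1) @ eigvecs
```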
- Such feature value extraction is performed for each frame, and the extracted feature values are transmitted to the utterance section detection unit 15 as a set of five frames in the order in which they were captured.
- when the utterance section detection unit 15 obtains sets of five frames of feature amounts from the feature amount extraction unit 14, the feature amounts are input to the HMM for utterance section detection while shifting by one frame, as shown in FIG. 4: the set of feature amounts 400b following the first set 400a begins one frame after the frame corresponding to the first frame of the set 400a. As a result, the second to fifth frames of the set of feature amounts 400a and the first to fourth frames of the set of feature amounts 400b overlap when they are input to the HMM for utterance section detection.
- likewise, the set 400c following the set 400b begins one frame after the frame corresponding to the first frame of the set 400b,
- and the feature amounts of the set 400c are input to the HMM for utterance section detection. In this way, by inputting the feature amounts to the HMM for utterance section detection while shifting by one frame each time, it is possible to obtain the output of the HMM with a time resolution of one frame.
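- the one-frame shift between successive five-frame sets can be expressed as a simple sliding window; the helper below is an illustrative sketch, not part of the patent.

```python
def five_frame_sets(frame_features):
    """Yield overlapping sets of five consecutive per-frame feature vectors,
    shifted by one frame each time, as described above."""
    for start in range(len(frame_features) - 4):
        yield frame_features[start:start + 5]
```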
- the HMM for utterance section detection identifies utterance/non-utterance for each input set of five frames of feature amounts, and an HMM obtained by learning in advance using various lip images of an unspecified number of people is used.
- a set of 5-frame feature amounts is input to each of the HMM for utterance and the HMM for non-utterance, and the model with the higher occurrence probability is output as the identification result;
- that is, the identification result for those 5 frames is either utterance or non-utterance.
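- a minimal sketch of this two-model comparison, assuming the hmmlearn package for the HMMs (the patent does not prescribe a particular HMM implementation) and assuming both models were trained offline on the 10-dimensional lip features, is shown below.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed HMM implementation

def classify_five_frames(feat_set, speech_hmm: GaussianHMM,
                         nonspeech_hmm: GaussianHMM) -> int:
    """Return 1 (utterance) or 0 (non-utterance) for one set of five
    10-dimensional feature vectors by comparing the log-likelihoods of the
    two pre-trained models, as described above."""
    x = np.asarray(feat_set)  # shape (5, 10)
    return 1 if speech_hmm.score(x) > nonspeech_hmm.score(x) else 0
```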
- the utterance section detection unit 15 further performs a process of determining the utterance start point and the utterance end point of the imaging subject based on the output of the HMM.
- the utterance start point and the utterance end point are determined based on the utterance/non-utterance outputs obtained by applying the utterance HMM and the non-utterance HMM to each set of five-frame feature amounts. First, the utterance start point determination process will be described.
- the utterance start point is determined according to the flow shown in FIG. 5.
- “S” in FIG. 5 indicates a state in which the utterance candidate points have not been determined
- “C” indicates a state in which the utterance candidate points have been determined
- “D” indicates a state in which whether to demote the utterance candidate point is being determined.
- “0” indicates that the HMM output is non-speech
- “1” indicates that the HMM output is utterance.
- when the output of the HMM is utterance for two consecutive frames (section A in FIG. 5),
- the first of those frames is set as a candidate for the utterance start point, and from the third frame the state transitions to “C”.
- if the utterance state then continues in the state “C”, the first frame set as the utterance candidate point (S1 in FIG. 5) is determined to be the utterance start point.
- if the output of the HMM becomes non-utterance within three frames from the state “C”,
- the state transitions to “D” from the frame following the frame in which the non-utterance occurred.
- if the output of the HMM then remains non-utterance for 10 consecutive frames in the state “D” (section C in FIG. 5), the first frame set as the utterance candidate point is demoted and excluded from the utterance start point candidates.
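- a simplified reading of this start point decision flow, written as a small state machine over the per-frame HMM outputs, is sketched below; the thresholds (two consecutive utterance frames for a candidate, ten consecutive non-utterance frames for demotion) follow the description above, and the confirmation condition of three consecutive utterance frames is taken from the flow of FIG. 13.

```python
def find_utterance_start(hmm_outputs):
    """Locate the utterance start point from per-frame HMM decisions
    (1 = utterance, 0 = non-utterance); returns a frame index or None.

    Simplified reading of the flow described above: two consecutive
    utterance frames set a candidate S1 and enter state "C"; continued
    utterance confirms S1; a non-utterance frame moves to state "D", where
    renewed utterance confirms S1 and ten consecutive non-utterance frames
    demote the candidate.
    """
    state, s1, run_u, run_n = "S", None, 0, 0
    for i, out in enumerate(hmm_outputs):
        if state == "S":
            run_u = run_u + 1 if out == 1 else 0
            if run_u == 2:                     # section A: candidate found
                s1, state, run_u = i - 1, "C", 0
        elif state == "C":
            if out == 0:                       # non-utterance within "C"
                state, run_n = "D", 0
            else:
                run_u += 1
                if run_u == 3:                 # utterance continues
                    return s1
        else:                                  # state "D"
            if out == 1:                       # utterance resumes
                return s1
            run_n += 1
            if run_n == 10:                    # section C: demote candidate
                state, s1, run_u = "S", None, 0
    return None
```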
- the utterance end point is determined by a similarly simple flow, as shown in FIG. 6.
- “S” in FIG. 6 indicates a state in which a run of six consecutive frames in which the output of the HMM is non-utterance (section D) is searched for, and “C” indicates a state in which the utterance end point is searched for.
- “0” indicates that the output of the HMM is not uttering
- “1” indicates that the output of the HMM is uttering.
- as shown in FIG. 6, when the output of the HMM remains non-utterance for six consecutive frames (section D in FIG. 6),
- the state transitions to “C”, in which the utterance end point is searched for.
- in the state “C”, the count of non-utterance frames is continued while ignoring cases where the output of the HMM becomes utterance for a single frame or for two consecutive frames; on the other hand, when the output of the HMM in the state “C” becomes “1” three consecutive times, the count is cleared and the state returns to “S”. Finally, when the non-utterance state has been counted 20 times in total, the first frame of the six consecutive non-utterance frames (“S1” in FIG. 6) is determined as the utterance end point.
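- the end point decision flow can likewise be sketched as a small state machine; the counts (six consecutive non-utterance frames, restart after three consecutive utterance frames, twenty accumulated non-utterance frames) follow the description above, and the rest is an illustrative simplification.

```python
def find_utterance_end(hmm_outputs):
    """Locate the utterance end point from per-frame HMM decisions
    (1 = utterance, 0 = non-utterance); returns a frame index or None.

    Simplified reading of the flow described above: six consecutive
    non-utterance frames mark a tentative end point S1 and enter state "C";
    isolated utterance outputs are ignored, three consecutive utterance
    frames restart the search, and twenty accumulated non-utterance frames
    confirm S1 as the end point.
    """
    i, n = 0, len(hmm_outputs)
    while i < n:
        run = 0
        while i < n and run < 6:               # state "S": find 6 in a row
            run = run + 1 if hmm_outputs[i] == 0 else 0
            i += 1
        if run < 6:
            return None
        s1 = i - 6                             # first frame of the run
        total_nonspeech, run_speech, restart = 6, 0, False
        while i < n and not restart:           # state "C"
            if hmm_outputs[i] == 0:
                total_nonspeech += 1
                run_speech = 0
                if total_nonspeech >= 20:      # enough accumulated silence
                    return s1
            else:
                run_speech += 1
                restart = run_speech == 3      # utterance resumed
            i += 1
    return None
```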
- the information is input to the CNS as utterance section information.
- since the above-described utterance start point determination processing and utterance end point determination processing are performed on the lip region images cut out from the captured images by the SVMs,
- it is possible to correctly detect the utterance section even for lip images with various face directions, as shown in FIGS. 7 (a) to 7 (d).
- here, the lip images of FIGS. 7 (a) to 7 (c) are determined to be in the utterance state by the HMM for utterance section detection, and the lip image of FIG. 7 (d) is determined to be in the non-utterance state.
- in the CNS, the voice data corresponding to the captured images from the frame of the utterance start point to the frame of the utterance end point
- is read from the data storage unit 11, and the read voice data is subjected to speech recognition, so that
- predetermined processing such as route search and information display is performed based on the speech recognition result.
- FIG. 8 is a flowchart showing an operation process of the utterance section detection device 1.
- step S100 the image capturing unit 10 captures an image of the person to be captured
- step S102 the data storage unit 11 stores the image data captured by the image capturing unit 10, and the process proceeds to step S104.
- step S104 the image processing unit 12 reads the captured image data stored by the data storage unit 11, and proceeds to step S106.
- step S106 the image processing unit 12 generates detection image data from the read image data, outputs the generated detection image data to the lip region detection unit 13, and the process proceeds to step S108.
- step S108 the lip region detection unit 13 detects the lip region from the detection image, transmits the position information of the detected lip region to the feature amount extraction unit 14, and proceeds to step S110.
- step S110 the feature amount extraction unit 14 cuts out the image of the lip region from the captured image based on the detected position information of the lip region, extracts the feature amount from the cut-out image, transmits the extracted feature
- amount to the utterance section detection unit 15, and the process proceeds to step S112.
- step S112 the utterance section detection unit 15 inputs the feature amount obtained from the feature amount extraction unit 14 to the HMM for utterance section detection, determines the utterance/non-utterance state, detects the utterance section based on the determination result, and the routine goes to step S114.
- step S114 the utterance section detection unit 15 transmits information on the detected utterance section to the CNS, and ends the processing.
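- how steps S112 and S114 chain together can be illustrated with the hypothetical glue below; the three callables are placeholders for the classification and decision steps sketched earlier and are not the patented implementation.

```python
def run_utterance_detection(frame_features, classify_set, find_start, find_end):
    """Hypothetical glue mirroring steps S112-S114.

    `classify_set` maps one 5-frame feature set to 1 (utterance) or
    0 (non-utterance); `find_start` / `find_end` implement the decision
    flows of FIG. 5 and FIG. 6.  All three are placeholders for the pieces
    sketched earlier, not the patented implementation.
    """
    sets = [frame_features[i:i + 5] for i in range(len(frame_features) - 4)]
    outputs = [classify_set(s) for s in sets]
    start = find_start(outputs)
    if start is None:
        return None
    end = find_end(outputs[start:])
    return (start, start + end if end is not None else None)
```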
- FIG. 9 is a flowchart showing a process of generating image data for detection in the image processing unit 12.
- the process proceeds to step S200, and it is determined whether captured image data has been acquired from the data storage unit 11. If it is determined that the captured image data has been acquired (Yes), the process proceeds to step S202. If not, (No) wait until it is obtained.
- step S202 the sub-sampling process is performed on the acquired captured image, and the process proceeds to step S204.
- step S204 the sub-sampled photographed image data is gray-scaled to generate photographed image data for detection, and the process proceeds to step S206.
- step S206 the generated detection image data is transmitted to the lip region detection unit 13, and the process ends.
- FIG. 10 is a flowchart showing the lip area detection process in the lip area detection unit 13.
- step S300 it is determined whether or not the detection image has been acquired from the image processing unit 12. If it is determined that it has been acquired (Yes), the process proceeds to step S302; otherwise (No), the process waits until it is acquired.
- step S302 the process shifts to the detection mode, and the identification process is performed using the whole face detection SVM
- on the areas scanned by the 20 × 30 pixel search window in the detection image, and the process proceeds to step S304.
- step S304 by the identification processing in step S302, it is determined whether or not the image area of the entire face has been detected. If it is determined that the image area has been detected (Yes), the process proceeds to step S306. No) moves to step S330.
- step S306 a search area for the lip region of 20 × 15 pixels including the lower half of the detected whole face area image is set for the detection image, and the process proceeds to step S308.
- step S308 an identification process is performed on the region scanned by the search window of 10 × 10 pixels in the search region set in step S306 using the lip region detection SVM, and the process proceeds to step S310.
- step S310 it is determined whether or not the detection of the lip region has succeeded based on the identification processing in step S308. If it is determined that the detection has succeeded (Yes), the process proceeds to step S312; otherwise (No), the process proceeds to step S330.
- step S312 position information of the lip region detected in step S310 is obtained, and the process proceeds to step S314.
- step S314 the setting is switched from the detection mode to the tracking mode, and the process proceeds to step S316.
- step S316 the image data of the next frame of the detection image in which the lip region has been detected in step S310 is obtained, and the flow shifts to step S318.
- step S318 a search area for the lip region of 15 × 15 pixels is set based on the position information of the lip region in the detection image of the immediately preceding frame, and the flow advances to step S320.
- step S320 an identification process is performed using a lip region detection SVM on a region scanned by the 10x10 pixel search window in the 15x15 pixel search region set in step S318, The process moves to step S322.
- step S322 it is determined whether or not the detection of the lip region is successful based on the identification processing in step S320. If it is determined that the detection is successful (Yes), the process proceeds to step S324; otherwise (No), the process proceeds to step S330.
- step S324 the position information of the lip region detected in step S322 is obtained, and the process proceeds to step S326.
- step S326 it is determined whether there is a detection image of the next frame. If it is determined that there is (Yes), the process proceeds to step S316; otherwise (No), the process proceeds to step S328.
- step S328 the acquired position information is transmitted to the feature amount extraction unit 14, and the process proceeds to step S300.
- step S330 it is determined whether or not there is a detection image of the next frame. If it is determined that there is (Yes), the process proceeds to step S332; otherwise (No),
- the process proceeds to step S300.
- step S332 the image data for detection of the next frame is acquired, and the process proceeds to step S302.
- FIG. 11 is a flowchart showing a feature amount extraction process in the feature amount extraction unit 14.
- step S400 it is determined whether or not the position information of the lip region has been acquired from the lip region detection unit 13. If it is determined that the information has been acquired (Yes), the process proceeds to step S402; otherwise (No), the process waits until it is acquired.
- step S402 the image of the lip region is cut out from the captured image based on the position information acquired as described above, and the process proceeds to step S404.
- step S404 a process for reducing the influence of the image of the nose, the chin, and the like is performed using a window function, and the flow advances to step S406.
- step S406 a two-dimensional Fourier transform process is performed on the image processed by the window function to obtain an amplitude spectrum of the lip region image, and the flow shifts to step S408.
- step S 408 principal component analysis is performed on the amplitude spectrum obtained in step S 406, the number of dimensions of the amplitude spectrum is reduced to generate a feature, and the process proceeds to step S 410.
- step S410 the generated feature amount is transmitted as a set of five frames to the utterance section determination unit 15, and the process proceeds to step S400.
- FIG. 12 is a flowchart showing a process of detecting an utterance section in the utterance section detection unit 15.
- step S500 it is determined whether the feature amount has been acquired from the feature amount extraction unit 14. If it is determined that the feature amount has been acquired (Yes), the process proceeds to step S502; otherwise (No), the process waits until it is acquired.
- step S502 the acquired set of 5 frames of feature amounts is input to each of the utterance HMM and the non-utterance HMM for utterance section detection, utterance/non-utterance is determined, and the process proceeds to step S504.
- step S504 based on the determination result in step S502, utterance start point determination processing is performed, and the flow advances to step S506.
- step S506 it is determined whether or not the utterance start point has been detected by the determination process in step S504. If it is determined that the utterance start point has been detected (Yes), the process proceeds to step S508; otherwise (No), the process proceeds to step S500.
- step S508 the utterance end point is determined based on the determination result of step S502, and the process proceeds to step S510.
- step S510 it is determined whether or not the utterance end point has been detected by the determination processing in step S508. If it is determined that the utterance end point has been detected (Yes), the process proceeds to step S512. No) shifts to step S500.
- step S512 the utterance section information is transmitted to the CNS based on the detected utterance start point and utterance end point, and the process ends.
- FIG. 13 is a flowchart showing the utterance start point determination process in the utterance section determination unit 15.
- step S600 it is determined whether or not the utterance/non-utterance determination result from the HMM for utterance section detection has been acquired. If it is determined that it has been acquired (Yes),
- the process proceeds to step S602; otherwise (No), the process waits until it is acquired.
- step S602 based on the above determination result, it is determined whether or not the utterance state has continued twice from the corresponding frame. If it is determined that the utterance state has continued (Yes), Proceeding to step S604, if not (No), the determination processing is continued for the subsequent frame.
- step S604 the frame is set as the first frame (S1), this frame is set as a candidate of the utterance start point, and the process proceeds to step S606.
- step S606 the third and subsequent frames from S1 are changed to state "C", and the flow shifts to step S608.
- step S608 it is determined whether or not the non-speech state has occurred in the frame in the state “C”. If it is determined that the non-speech state has occurred (Yes), the process proceeds to step S610. If not (No), the process moves to step S620.
- step S610 the state transitions to the state “D” after the next frame after the non-uttered frame, and then proceeds to step S612.
- step S612 it is determined whether or not an utterance state has occurred in the frame in the state “D”. If it is determined that the utterance state has occurred (Yes), the process proceeds to step S614. In this case (No), the process moves to step S616.
- step S614 the first frame (S1) is determined to be the utterance start point, and the process ends.
- step S616 it is determined whether or not the non-utterance state has continued 10 times consecutively in the frames in the state “D”. If it is determined that it has continued (Yes), the process proceeds to step S618; otherwise (No), the process proceeds to step S612.
- step S618 the first frame (S1) is demoted from the utterance candidate point, and the process proceeds to step S602.
- step S608 if the non-utterance state does not occur in the state “C” and the process proceeds to step S620, the number of occurrences of the utterance state is counted, and it is determined whether or not the utterance state has occurred continuously for three frames. If it is determined that it has occurred (Yes), the process proceeds to step S622; otherwise (No), the process proceeds to step S608.
- step S622 the first frame (S1) is determined to be the utterance start point, and the process ends.
- FIG. 14 is a flowchart showing the utterance end point determination process in the utterance section determination unit 15.
- step S700 it is determined whether or not the utterance/non-utterance determination result from the HMM for utterance section detection has been acquired. If it is determined that it has been acquired (Yes), the process proceeds to step S702; otherwise (No), the process waits until it is acquired.
- step S702 the number of non-speech occurrences is counted in the order of frames, and the process proceeds to step S704.
- step S704 it is determined whether the non-utterance state has continued six times consecutively. If it is determined that it has continued (Yes), the process proceeds to step S706; otherwise (No), the process proceeds to step S702.
- step S706 the process transitions to the state “C” after the six consecutive frames, and then proceeds to step S708.
- step S708 the number of utterance occurrences in the frames in the state “C” is counted, and it is determined whether the utterance state has continued three consecutive times. If it is determined that it has continued (Yes), the process proceeds to step S710; otherwise (No), the process proceeds to step S712.
- step S710 the count of the number of non-speech occurrences is cleared, and the process proceeds to step S702.
- step S712 the count of the number of non-utterances is continued, and the process proceeds to step S714.
- step S714 it is determined whether the number of non-utterances has reached 20 in total. If it is determined that the number of non-utterances has reached 20 (Yes), the process proceeds to step S716; otherwise (No) ) Moves to step S708.
- step S716 the first frame (S1) of the six consecutive frames in step S704 is determined as the utterance end point, and the process ends.
- as described above, the utterance section detection device 1 can capture an image including the face of the subject sitting in the driver's seat with the image capturing unit 10 and store the captured image data in the data storage unit 11,
- the image processing unit 12 can generate a detection image by converting the captured image data to grayscale and reducing its size by sub-sampling,
- the lip region detection unit 13 can detect the lip region from the detection image using the whole face detection SVM and the lip region detection SVM,
- the feature amount extraction unit 14 can cut out the lip region image from the original captured image based on the position information of the detected lip region and extract the feature amount from the cut-out lip region image,
- and the utterance section detection unit 15 can detect the utterance section using the HMM for utterance section detection.
- alternatively, a configuration may be adopted in which an eye image is detected by a dedicated SVM, its feature amount is extracted, and the operation content related to eye movement is determined using a dedicated HMM. With such a configuration, it is possible to determine an action such as dozing, and to perform driving support such as giving a voice warning.
- further, by inputting the feature amount extracted by the feature amount extraction unit 14 to an HMM for utterance content determination, it is possible to directly identify the utterance content instead of the utterance section.
- in this case, an HMM for discriminating pronunciation contents such as “A” and “I” is created by learning in advance using various lip images of an unspecified number of people. With such a configuration, the utterance content can be determined from the movement of the lips alone, so that no voice information is required, and the amount of data necessary for voice recognition can be reduced.
- further, by using the detected position information of the image area of the entire face and of the lip region, the positional relationship between them can be determined, and it is also possible to adopt a configuration in which the direction of the face of the subject is determined using this positional relationship.
- in this case, by using the determination result of the face direction of the speaker to control the direction of the sound collection unit (a microphone or the like) of a sound collection device installed in the vehicle, or to select and operate the sound collection unit facing the speaker among a plurality of sound collection units installed in the vehicle, it is possible to more reliably and accurately acquire the voice data of the speaker.
- the process of acquiring the image data of the subject by the image capturing unit 10 and the data storage unit 11 corresponds to the image capturing means according to any one of claims 1, 2, 19, 22, and 26.
- the process of detecting the lip region from the captured image by the image processing unit 12 and the lip region detection unit 13 corresponds to the face part detection means according to any one of claims 2, 19, 22, and 26.
- the feature amount extracting unit 14 corresponds to the feature amount extracting unit described in any one of claims 1, 2, 6, 19, 22, and 24.
- the utterance section detection unit 15 corresponds to the operation content determination means according to any one of claims 1, 2, 5, 6, 7, 10, 22, 23, 24, and 25.
- the utterance start point determination processing in the utterance section detection unit 15 corresponds to the utterance start point determination means according to any one of claims 7, 8 and 9.
- the utterance end point determination processing in the utterance section detection unit 15 corresponds to the utterance end point determination means according to any one of claims 10, 11, and 12.
- FIGS. 15 to 19 are diagrams showing a modification of the first embodiment of the face part detection device, operation content determination device, face part detection device control program, operation content determination device control program, face part detection device control method, and operation content determination device control method according to the present invention.
- the differences from the first embodiment are that HMMs for utterance section detection corresponding to the face direction of the subject are prepared for each of a plurality of preset face directions,
- that the direction of the subject's face is determined and the area size of the lip region to be detected is changed according to the face direction of the determination result, and that the utterance section detection unit 15 selects the HMM for utterance section detection corresponding to the determined face direction and detects the utterance section with the selected HMM.
- the portions different from the first embodiment will be described, and the description of the portions overlapping with the first embodiment will be omitted.
- the data storage unit 11 stores, as the above-mentioned speech section detection HMM, one generated in correspondence with a plurality of preset face directions.
- in the present modification, the lip region detection unit 13 further has a function of determining the direction of the subject's face based on the detected area of the entire face of the subject detected by the whole face detection SVM and on the position information of the lip region. It also changes the detection size of the lip region based on the determined face direction. That is, since the shape of the photographed lip portion differs depending on the face direction of the subject, the size of the lip area necessary to include the lip portion also changes accordingly; by making the size variable in accordance with the face direction rather than using a fixed size, the subsequent processing can be performed efficiently and the performance can be improved.
- the information of the detection result and the determination result of the face direction are transmitted to the feature extraction unit 14.
- upon acquiring the information of the detection result and the determination result of the face direction from the lip region detection unit 13, the feature amount extraction unit 14 reads the corresponding original captured image from the data storage unit 11 based on the information, cuts out from the read image the image of the lip region having the size corresponding to the face direction, and extracts from the cut-out lip region image the feature amount to be input to the HMM for utterance section detection described later. That is, the difference from the first embodiment is that the cut-out size is changed according to the face direction.
- the utterance section detection unit 15 selects and reads out from the data storage unit 11 the HMM for utterance section detection corresponding to the face direction, based on the face direction information of the determination result from the lip region detection unit 13. Then, the feature amount of the lip region image acquired from the feature amount extraction unit 14 is input to the selected HMM for utterance section detection, and the utterance section from the start to the end of the subject's utterance is detected based on the output of the HMM for this input.
- FIGS. 15A to 15C are diagrams illustrating an example of a lip region detected according to the face direction.
- in the present modification, the CCD camera is installed so as to be parallel to the mirror surface direction of the inner mirror, so that when the subject faces the inner mirror, the subject's face is photographed from the front.
- the data storage unit 11 stores HMMs for utterance section detection corresponding to six face directions of the subject sitting in the driver's seat: the direction toward the right window (hereinafter abbreviated as the right window direction), the direction toward the right door mirror (hereinafter abbreviated as the right mirror direction), the front direction, the direction toward the inner mirror (hereinafter abbreviated as the inner mirror direction),
- the direction toward the left door mirror (hereinafter abbreviated as the left mirror direction), and the direction toward the left window (hereinafter abbreviated as the left window direction).
- these HMMs are generated by learning, for each face direction, the feature amounts of lip images extracted from the captured images of an unspecified number of subjects as learning data; each HMM takes the feature amount of the lip image as input and outputs the likelihood for the utterance state and the likelihood for the non-utterance state of the subject.
- when the lip region detection unit 13 obtains the detection image from the image processing unit 12, it shifts to the detection mode similarly to the first embodiment and, using the whole face detection SVM,
- detects a 20 × 30 pixel region image as the image region 200 of the entire face.
- subsequently, a 10 × 10 pixel lip region image is detected using the lip region detection SVM, as in the first embodiment.
- when the lip region image is detected, its position information (coordinate information) is acquired, and the orientation of the subject's face in the captured image (one of the above six directions) is determined based on the image region 200 of the entire face and the acquired position information.
- specifically, the position coordinates of the lip region relative to the image region 200 of the entire face differ depending on the face direction, and the face orientation is determined from this difference in position coordinates. Further, when the face direction is determined, the 10 × 10 pixel lip region is changed to a size of 10 × 8 pixels, 10 × 5 pixels, etc., according to the face direction of the determination result.
- FIGS. 15A to 15C are diagrams showing detection results of the lip region when the face direction of the person to be imaged is the front direction, the inner mirror direction, and the right window direction.
- when the subject faces the inner mirror direction, the size of the lip region is 10 × 10 pixels and the number of pixels of the lip portion is the largest; when the subject faces the front direction (or the left mirror direction), the number of pixels of the lip portion is the second largest,
- so the size of 10 × 10 pixels is changed to 10 × 8 pixels; when the subject faces the right window direction, the number of pixels of the lip portion is the smallest,
- so the size of 10 × 10 pixels is changed to a still smaller size such as 10 × 5 pixels.
- the mode shifts to the tracking mode for the detection image of the next frame.
- when the lip region detection unit 13 shifts to the tracking mode, similarly to the first embodiment, it sets a search area for the detection image of the next frame based on the position of the lip region detected in the previous frame,
- and scans for the lip region with the 10 × 10 pixel search window 24.
- the scanned grayscale values of the 10 × 10 pixels, a total of 100 pixels, are input to the lip region detection SVM in the same manner as in the above detection mode, and the lip region detection process is performed.
- when the lip region is detected and its coordinate information is acquired, the face direction is determined in the same manner as described above based on the already detected image region 200 of the entire face and the coordinate information, and the size of the lip region is changed based on the determined face direction. Further, in this modification, the information of the face direction and the center coordinates of the lip region are transmitted to the feature amount extraction unit 14.
- when the feature amount extraction unit 14 acquires the information of the face direction and the center coordinates of the lip region in the detection image of each frame from the lip region detection unit 13, it cuts out, from the corresponding captured image stored in the data storage unit 11, a grayscale lip image having the number of pixels corresponding to the face direction (for example, in a range from 64 × 48 pixels to 64 × 64 pixels), centered on the acquired center coordinates. That is, similarly to the lip region, the inner mirror direction gives the maximum size (64 × 64 pixels) and the right window direction gives the minimum size (64 × 48 pixels). Thereafter, the same processing as in the first embodiment is performed, and the amplitude spectrum of the lip image is obtained as a feature value.
- when the utterance section detection unit 15 obtains the face direction determination result and a set of five frames of feature amounts from the feature amount extraction unit 14, it first selects and reads out from the data storage unit 11 the HMM for utterance section detection corresponding to the face direction, based on the face direction determination result. In other words, the HMM corresponding to the face direction of the determination result is selected from the HMMs corresponding to the six types of face directions described above. Thereafter, the utterance section is detected by the same processing as in the first embodiment using the selected HMM.
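- the per-direction model selection can be sketched as a simple lookup keyed by the determined face direction; the direction names and the hmmlearn-based HMMs below are illustrative assumptions, not the patented implementation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed HMM implementation

# Illustrative direction keys; the six directions follow the description above.
FACE_DIRECTIONS = ("right_window", "right_mirror", "front",
                   "inner_mirror", "left_mirror", "left_window")

def classify_with_direction(direction: str, feat_set,
                            speech_hmms: dict, nonspeech_hmms: dict) -> int:
    """Pick the utterance / non-utterance HMM pair matching the determined
    face direction and compare their log-likelihoods for one 5-frame feature
    set.  A sketch of the selection step only; the per-direction models are
    assumed to have been trained offline."""
    x = np.asarray(feat_set)
    speech_hmm: GaussianHMM = speech_hmms[direction]
    nonspeech_hmm: GaussianHMM = nonspeech_hmms[direction]
    return 1 if speech_hmm.score(x) > nonspeech_hmm.score(x) else 0
```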
- FIG. 16 is a flowchart showing a lip area detection process in the lip area detection unit 13 according to a modification of the first embodiment.
- step S800 it is determined whether or not the detection image has been acquired from the image processing unit 12. If it is determined that it has been acquired (Yes), the process proceeds to step S802; otherwise (No), the process waits until it is acquired.
- step S802 the process shifts to the detection mode, and the identification process is performed using the whole face detection SVM
- on the areas scanned by the 20 × 30 pixel search window in the detection image, and the process proceeds to step S804.
- step S804 it is determined whether or not the image area of the entire face has been detected by the identification processing in step S802. If it is determined that the image area has been detected (Yes), the process proceeds to step S806. No) proceeds to step S838.
- step S806 a search area for the lip region of 20 × 15 pixels including the lower half of the detected whole face area image is set for the detection image, and the process proceeds to step S808.
- step S808 an identification process is performed on the area scanned by the 10 ⁇ 10 pixel search window in the search area set in step S806 using the lip area detection SVM, and the process proceeds to step S810.
- step S810 it is determined whether or not the detection of the lip region was successful based on the identification processing in step S808. If it is determined that the detection was successful (Yes), the process proceeds to step S812; otherwise (No), the process proceeds to step S838.
- step S812 the positional information of the lip region detected in step S810 is obtained, and the process proceeds to step S814.
- step S814 based on the area image of the entire face detected in step S804 and the position information acquired in step S812, the direction of the face of the person to be imaged in the image for detection is determined, and the flow shifts to step S816.
- step S816 the area size of the lip area is determined based on the face direction determined in step S814, and the flow advances to step S818.
- here, the area size is set to the maximum size of 10 × 10 pixels for the face direction in which the subject's face is frontal with respect to the CCD camera (the inner mirror direction); for the other face directions, it is changed to an area smaller than 10 × 10 pixels, set in advance according to the face direction.
- step S818 the setting is switched from the detection mode to the tracking mode, and the flow shifts to step S820.
- step S820 the image data of the next frame after the detection image in which the lip region has been detected in step S810 is obtained, and the flow advances to step S822.
- step S822 a search area for the lip region of 15 × 15 pixels is set based on the position information of the lip region in the detection image of the previous frame, and the flow shifts to step S824.
- step S824 the region scanned by the 10 × 10 pixel search window in the 15 × 15 pixel search region set in step S822 is subjected to identification processing using the lip region detection SVM, and the process proceeds to step S826.
- step S826 it is determined whether or not the detection of the lip region is successful based on the identification processing in step S824. If it is determined that the detection is successful (Yes), the process proceeds to step S828; otherwise (No), the process proceeds to step S838.
- step S828 the position information of the lip region detected in step S826 is obtained, and the process proceeds to step S830.
- step S830 based on the area image of the entire face detected in step S804 and the position information acquired in step S828,
- the direction of the face of the subject in the detection image is determined, and the flow advances to step S832.
- step S832 the area size of the lip area is determined based on the face direction determined in step S830, and the flow shifts to step S834.
- step S834 it is determined whether or not there is a detection image of the next frame. If it is determined that there is (Yes), the process proceeds to step S820; otherwise (No), the process proceeds to step S836.
- step S836 the acquired position information and the information on the face direction of the determination result are transmitted to the feature amount extraction unit 14, and the process proceeds to step S800.
- step S838 it is determined whether or not there is a detection image of the next frame. If it is determined that there is (Yes), the process proceeds to step S840; otherwise (No), the process proceeds to step S800.
- step S840 the image data for detection of the next frame is acquired, and the process proceeds to step S802.
- FIG. 17 is a flowchart showing the feature amount extraction process in the feature amount extraction unit 14.
- step S900 it is determined whether or not the face direction information and the position information have been acquired from the lip region detection unit 13. If it is determined that they have been acquired (Yes), the process proceeds to step S902; otherwise (No), the process waits until they are acquired.
- step S902 the image of the lip region of the size corresponding to the face direction is cut out from the captured image stored in the data storage unit 11, and the process proceeds to step S904.
- here, the size according to the face direction is the maximum size for the face direction in which the subject's face is frontal with respect to the CCD camera (the inner mirror direction); for the other face directions, an area of a size smaller than the maximum size, set in advance according to the face direction, is cut out.
- step S904 a process for reducing the influence of the image of the nose, the chin, and the like is performed by the window function, and the flow advances to step S906.
- step S906 the image processed by the window function is subjected to two-dimensional Fourier transform processing to obtain an amplitude spectrum of the lip region image, and the flow shifts to step S908.
- step S908 principal component analysis is performed on the amplitude spectrum obtained in step S906, a feature amount is generated by reducing the number of dimensions of the amplitude spectrum, and the process proceeds to step S910.
- step S910 the generated feature amount is transmitted as a set of five frames to the utterance section determination unit 15, and the process proceeds to step S900.
- FIG. 18 is a flowchart illustrating a process of detecting an utterance section in the utterance section detection unit 15 according to a modification of the first embodiment.
- step S1000 it is determined whether or not the face direction information and the feature amounts have been acquired from the feature amount extraction unit 14. If it is determined that they have been acquired (Yes), the process proceeds to step S1002; otherwise (No), the process waits until they are acquired.
- step S1002 based on the face direction information, the HMM corresponding to the face direction indicated by the information is selected and read out from the HMMs for utterance section detection corresponding to the plural face directions stored in the data storage unit 11,
- and the process proceeds to step S1004.
- step S1004 the set of feature amounts of the five frames obtained above is input to each of the utterance HMM and the non-utterance HMM, which are the HMMs for utterance section detection selected in step S1002; utterance/non-utterance is determined for each set of five frames, and the flow shifts to step S1006.
- step S1006 the utterance start point is determined based on the determination result in step S1004, and the process proceeds to step S1008.
- step S1008 it is determined whether or not the utterance start point has been detected by the determination processing in step S1006. If it is determined that the utterance start point has been detected (Yes), the process proceeds to step S1010. ) Shifts to step S1000.
- step S1010 the utterance end point is determined based on the determination result of step S1004, and the process proceeds to step S1012.
- step S1012 it is determined whether or not the utterance end point has been detected by the determination processing in step S1010. If it is determined that the utterance end point has been detected (Yes), the process proceeds to step S1014, otherwise (No) ) Shifts to step S1000.
- step S1014 utterance section information is transmitted to the CNS based on the detected utterance start point and utterance end point, and the process ends.
- FIG. 19 is a diagram showing the identification probabilities of the utterance sections in the case where the HMM not considering the face direction is used and the case where the HMM in which the face direction is considered is used.
- that is, FIG. 19 compares the identification probability of the utterance section obtained when, as in the first embodiment, the utterance sections for the above six types of face directions of the subject are detected using one type of HMM corresponding to all face directions without considering the face direction,
- with the identification probability obtained when, as in the present modification, HMMs corresponding to each of the above six types of face directions are generated in consideration of the face direction and the utterance sections for the six types of face directions are detected using these six types of HMMs.
- comparing the identification probability of the method of the first embodiment with that of the method of the present modification, it can be seen that, in the right mirror direction and the right window direction, where the angle of the subject's face direction with respect to the imaging direction of the CCD camera is particularly large, the method of the present modification, which takes the face direction into consideration, improves the identification probability by 4% compared with the method of the first embodiment.
- the reason for this is that the shape of the image of the lip portion photographed by the CCD camera differs depending on the magnitude of this angle: the larger the angle, the larger the degree of deformation of the lip image, and the smaller the angle, the smaller the degree of deformation.
- since the extracted feature amount therefore depends on the angle, detecting the utterance section with an HMM corresponding to each direction (angle range), rather than with one type of HMM,
- improves the detection accuracy of the utterance section. This can also be understood from the fact that, as shown in FIG. 19, the identification probability is higher in every direction when an HMM is created for each face direction than when the utterance sections for all directions are detected using one type of HMM.
- as described above, the utterance section detection device 1 in the present modification can capture an image including the face of the subject sitting in the driver's seat with the image capturing unit 10 and store the captured image data in the data storage unit 11,
- the image processing unit 12 can generate a detection image by converting the captured image data to grayscale and reducing its size by sub-sampling,
- the lip region detection unit 13 can detect the lip region from the detection image using the whole face detection SVM and the lip region detection SVM while determining the face direction of the subject,
- the feature amount extraction unit 14 can cut out a lip region image of a size corresponding to the face direction from the original captured image and extract the feature amount from the cut-out lip region image, and the utterance section detection unit 15 can detect the utterance section using the HMM for utterance section detection corresponding to the face direction of the determination result.
- the process of acquiring the image data of the subject by the image capturing unit 10 and the data storage unit 11 corresponds to the image capturing means according to any one of claims 1, 2, 4, 19, 22, and 26.
- the process of detecting the lip region from the captured image by the image processing unit 12 and the lip region detection unit 13 corresponds to the face part detection means according to any one of claims 2, 3, 19, 22, and 26.
- the process of acquiring position information by the lip region detection unit 13 corresponds to the positional relationship information acquiring unit described in claim 4 or 23.
- the feature amount extraction unit 14 corresponds to a feature amount extraction unit according to any one of claims 1, 2, 4, 6, 19, 22, and 24.
- the utterance section detection unit 15 corresponds to the operation content determination means according to any one of claims 1, 2, 4, 5, 6, 7, 10, 22, 23, 24, and 25.
- the utterance start point determination processing in the utterance section detection unit 15 corresponds to the utterance start point determination means according to any one of claims 7, 8 and 9.
- the utterance end point determination processing in the utterance section detection unit 15 corresponds to the utterance end point determination means according to any one of claims 10, 11 and 12.
- next, a second embodiment, in which the present invention is applied to an awake state determination device, will be described with reference to the drawings.
- in this embodiment, the face part detection device, operation content determination device, face part detection device control program, operation content determination device control program, face part detection
- device control method, and operation content determination device control method according to the present invention are applied to an awake state determination device that determines the awake state of a driver who drives an automobile.
- FIG. 1 is a block diagram showing a configuration of an awake state determination device according to the present invention.
- the awake state determination device 2 includes an image capturing unit 30, a data storage unit 31, an image processing unit 32, an eye region detection unit 33, a feature amount extraction unit 34, and an awake state determination unit 35.
- in the present embodiment, the awake state determination device 2 is installed in a vehicle cabin and is connected so as to be able to interlock with an alarm system (not shown) installed in the vehicle cabin. The output of the awake state determination device 2 is input to the alarm system, and when the alarm system determines, based on the input information, that the driver is dozing or drowsy, it performs operations such as screen display, warning sound, and warning voice message.
- the image photographing section 30 has a configuration including a CCD (charge coupled device) camera, and outputs an image photographed in a frame unit as digital data. Then, the output image data is transmitted to the data storage unit 31.
- the CCD camera is mounted on an inner mirror in the vehicle cabin so that an image including the entire face of the person (driver) sitting in the driver's seat can be captured.
- the position of the CCD camera is not limited to the inner mirror; any other location, such as a position on the steering column, the center panel, or the front pillar, may be used as long as an image including the entire face of the subject can be captured.
- the data storage unit 31 stores data necessary for determining the awake state, such as the SVM for detecting the entire face, the SVM for detecting the eye area, the HMM for determining the awake state, and the image captured by the image capturing unit 30.
- the image processing unit 32 performs image size reduction or the like as preprocessing of the process of detecting an eye region from a captured image, which is performed by the eye region detection unit 33.
- the photographed image having the reduced image size is referred to as a detection image.
- the eye region detection unit 33 detects the eye region of the person to be imaged from the detection image acquired from the image processing unit 32 using the SVM.
- specifically, two types of SVMs are used to detect the left eye region in two stages: the whole face detection SVM, which detects the entire area 200 of the subject's face from the detection image, and the left eye region detection SVM, which detects the left eye region including the subject's left eye (and not including the right eye) from the whole-face image detected by the whole face detection SVM.
- once the left eye region has been detected, for the detection image of the next frame, a search range of the left eye region is set based on the position information of the left eye region detected in the previous frame (for example, coordinates with the upper-left pixel of the image taken as (1, 1)), and the left eye region detection SVM is applied to this search range.
- that is, once the left eye region is detected, the detection processing of the entire face image region by the whole face detection SVM is omitted for the detection images from the next frame onward, until the left eye region is no longer detected. At this time, the detection processing of the left eye region is speeded up by setting a search range narrower than the search range used when the left eye region is first detected.
- hereinafter, the mode in which the above two types of SVMs are used to perform the detection processing of the left eye region is referred to as the detection mode, and the mode in which the search range of the left eye region is set based on the position information of the left eye region detected in the previous frame
- and the left eye region detection processing is performed by applying the left eye region detection SVM to this search range is referred to as the tracking mode.
- the information of the detection result is transmitted to the feature amount extraction unit 34.
- upon acquiring the detection result information from the eye region detection unit 33, the feature amount extraction unit 34 reads the corresponding original captured image from the data storage unit 31 based on this information, cuts out the image of the left eye area from the read image, and extracts the feature quantity to be input to the HMM for awakening state determination described later. In the present embodiment, the number of dimensions of the extracted feature amount is reduced by principal component analysis or independent component analysis. Further, in the present embodiment, the cut-out left eye region image is subjected to Fourier transform, and its frequency spectrum component is extracted as a feature amount. The extracted feature amounts are transmitted to the awake state determination unit 35 as a set of a predetermined number of consecutive frames (for example, 10 frames).
- the arousal state determination unit 35 inputs the feature amounts of the left eye region image acquired from the feature amount extraction unit 34 to the HMM for arousal state determination, and determines the arousal state of the subject based on the output of the HMM with respect to this input. Information on the determination result is transmitted to an alarm system (not shown).
- the arousal state determination device 2 includes a processor (not shown), a RAM (Random Access Memory), and a storage medium storing a dedicated program, and controls each section described above by executing the dedicated program on the processor.
- the storage medium includes semiconductor storage media such as RAM and ROM, magnetic storage media such as FD and HD, optically read storage media such as CD, CDV, LD, and DVD, and magneto-optical storage media such as MO.
- FIG. 21A is a diagram illustrating an example of a captured image
- FIG. 21B is a diagram illustrating a search area and a search window in a detection mode in a detection image
- FIG. 21C is a diagram showing a search area and a search window in a tracking mode in a detection image.
- FIG. 22 is a diagram showing a configuration of an electromyogram waveform for awake state determination with respect to one blink.
- FIG. 23 is a diagram showing a blink waveform pattern.
- FIG. 24 is a diagram showing the matching relationship between the output of the awake state determination HMM for waveform O in FIG. 23 and an electromyogram waveform.
- FIG. 25 is a diagram showing the matching relationship between the output of the awake state determination HMM for waveform A in FIG. 23 and an electromyogram waveform.
- FIG. 26 is a diagram showing the matching relationship between the output of the awake state determination HMM for waveform B in FIG. 23 and an electromyogram waveform.
- FIG. 27 is a diagram showing an example of a blink interval and a waveform pattern of a cluster.
- first, the image photographing unit 30 uses the CCD camera attached to the inner mirror to capture, as shown in FIG. 21A, an image including the entire face of the person to be photographed (the driver) in the vehicle, and the captured image data is stored in the data storage unit 31 in units of frames (here, 1/30 second) in the order in which the images were captured.
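- as a purely illustrative sketch of such a frame capture loop (the patent does not specify an implementation), frames could be pulled from a camera at roughly 30 frames per second and kept in capture order, for example with OpenCV; the device index and the in-memory list are assumptions:

```python
import cv2  # OpenCV, used here only as an illustrative camera interface

def capture_frames(num_frames, device_index=0):
    """Grab num_frames frames in capture order from a camera running at ~30 fps."""
    cap = cv2.VideoCapture(device_index)
    frames = []
    try:
        while len(frames) < num_frames:
            ok, frame = cap.read()        # one frame per call, in photographing order
            if not ok:
                break
            frames.append(frame)
    finally:
        cap.release()
    return frames
```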
- the captured image is a color image.
- the data storage unit 31 notifies the image processing unit 32 of the storage.
- upon receiving the notification from the data storage unit 31, the image processing unit 32 reads out the image data from the data storage unit 31, and the read image data is subjected to sub-sampling to reduce the image size. For example, if the captured image is a full-color image with a size of 640 x 480 (vertical x horizontal) pixels, it is sub-sampled to 1/8 in the vertical and horizontal directions and converted to an image with a size of 80 x 60 (vertical x horizontal) pixels. The sub-sampling is performed, for example, by dividing the 640 x 480 pixel captured image into 8 x 8 pixel rectangular areas and replacing each rectangular area with a single pixel whose value is the average of the luminance values of the pixels in that area. This reduces the number of pixels to 1/64. The detection image generated in this way is transmitted to the eye region detection unit 33.
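- the block-averaging sub-sampling described above could be pictured as in the following sketch (a hypothetical NumPy implementation, not the patented code): each 8 x 8 block of the luminance image is replaced by its mean, so a 640 x 480 frame becomes an 80 x 60 detection image.

```python
import numpy as np

def subsample_by_block_average(gray, factor=8):
    """Reduce image size by replacing each factor x factor block with the mean
    of its pixel luminance values (1/64 of the pixels remain when factor=8)."""
    h, w = gray.shape
    h_crop, w_crop = h - h % factor, w - w % factor   # drop any edge remainder
    blocks = gray[:h_crop, :w_crop].reshape(
        h_crop // factor, factor, w_crop // factor, factor)
    return blocks.mean(axis=(1, 3))

# e.g. a 640 x 480 captured frame becomes an 80 x 60 detection image
detection_image = subsample_by_block_average(np.zeros((640, 480)), factor=8)
```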
- upon acquiring the detection image, the eye region detection unit 33 shifts to the detection mode and, in the same manner as in the first embodiment described above, scans the entire 80 x 60 pixel detection image for the image area of the entire face using a search window of 20 x 20 pixels. The pixel values of the total of 400 pixels covered by the 20 x 20 window at each scanning position are input to the whole face detection SVM as a 400-dimensional value.
- the whole face detection SVM has been trained in advance so that the whole face class and the non-whole face class in the 400-dimensional space can be distinguished; the similarity is determined from the distance (for example, the Euclidean distance) between the input value and the identification hyperplane, and the 20 x 20 pixel area image with the highest similarity is detected as the image area of the entire face.
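- a minimal sketch of this scanning step is given below, assuming a pre-trained binary SVM (scikit-learn's SVC is used purely as a stand-in for the whole face detection SVM); the signed distance returned by decision_function plays the role of the distance to the identification hyperplane, and the 20 x 20 window with the largest value is taken as the whole face region.

```python
import numpy as np
from sklearn.svm import SVC  # stand-in for the whole face detection SVM

def detect_best_window(detection_image, svm: SVC, win=(20, 20), step=1):
    """Scan the detection image with a win-sized window and return the top-left
    coordinates of the window the SVM rates most face-like."""
    h, w = detection_image.shape
    best_score, best_pos = -np.inf, None
    for y in range(0, h - win[0] + 1, step):
        for x in range(0, w - win[1] + 1, step):
            patch = detection_image[y:y + win[0], x:x + win[1]]
            score = svm.decision_function(patch.reshape(1, -1))[0]  # distance to hyperplane
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```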
- when the image area 200 of the entire face has been detected, next, in the same manner as in the first embodiment, a search area 26 of 10 x 20 (vertical x horizontal) pixels including the upper half (the area containing the left eye) of the entire face image area 200 is set, and the left eye area is scanned within this search area using a search window 27 of 4 x 8 (vertical x horizontal) pixels. In an actual image, this is as shown in FIG. 21B. The pixel values of the total of 32 pixels covered by the scanned 4 x 8 window are input to the left eye region detection SVM as a 32-dimensional value.
- the left eye region detection SVM has been trained in advance so that the left eye region class and the non-left-eye region class in the 32-dimensional space can be distinguished; the similarity is determined from the distance (for example, the Euclidean distance) between the input value and the identification hyperplane, and the 4 x 8 pixel region image with the highest similarity is detected as the left eye region image. When the left eye region image is detected, its position information (coordinate information) is acquired, and the process shifts to the tracking mode for the detection image of the next frame.
- upon shifting to the tracking mode, for the detection image of the next frame, the eye region detection unit 33 sets, in the same manner as in the first embodiment, a search area 28 of 15 x 15 pixels obtained by extending the area by 5 pixels in the vertical and horizontal directions around the position coordinates of the left eye region detected in the previous frame, and scans the set search area for the left eye region with the 4 x 8 pixel search window. In an actual image, this is as shown in FIG. 21C.
- the pixel values of a total of 32 pixels of the scanned 4 ⁇ 8 pixels are input to the left eye region detection SVM in the same manner as in the above detection mode, and the left eye region detection process is performed.
- the center coordinates of the left eye region are transmitted to the feature amount extracting unit 34.
- the tracking mode is maintained while detection of the left eye area continues to succeed, and the process returns to the detection mode (whole face detection) when detection of the left eye area fails.
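- the switching between the detection mode and the tracking mode can be pictured as the small state machine sketched below; detect_left_eye_full and detect_left_eye_in_range are hypothetical stand-ins for the two-stage SVM search and the narrowed search around the previous position.

```python
def run_eye_tracking(frames, detect_left_eye_full, detect_left_eye_in_range):
    """Yield the detected left eye position per frame, switching between the
    detection mode (full two-stage SVM search) and the tracking mode
    (search only around the previous frame's position)."""
    mode, prev_pos = "detection", None
    for frame in frames:
        if mode == "tracking":
            pos = detect_left_eye_in_range(frame, center=prev_pos)  # narrow search range
            if pos is None:                        # tracking failed -> back to detection mode
                mode, prev_pos = "detection", None
                yield None
                continue
        else:
            pos = detect_left_eye_full(frame)      # whole face SVM, then left eye SVM
            if pos is None:
                yield None
                continue
            mode = "tracking"                      # first success switches to tracking mode
        prev_pos = pos
        yield pos
```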
- upon acquiring the center coordinates of the left eye area in the detection image of each frame from the eye region detection unit 33, the feature amount extraction section 34 cuts out, from the corresponding captured image stored in the data storage section 31, a left eye area image of 4 x 8 pixels centered on the acquired center coordinates. Then, Fourier transform processing is performed on the cut-out left eye area image of each frame by FFT or the like, and the difference between the real part coefficients after the transform and the real part coefficients of the Fourier-transformed left eye area image of the immediately preceding frame is obtained as a feature value.
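- a sketch of this frame-to-frame feature, under the assumption that the cut-out left eye images are small grayscale arrays, is the difference of the real parts of the 2-D FFTs of the current and the immediately preceding frame, flattened into one vector:

```python
import numpy as np

def fft_real_difference_feature(curr_eye_img, prev_eye_img):
    """Feature for one frame: difference of the real FFT coefficients of the
    current and the immediately preceding left eye area image."""
    curr_real = np.fft.fft2(curr_eye_img).real
    prev_real = np.fft.fft2(prev_eye_img).real
    return (curr_real - prev_real).ravel()
```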
- other possible feature amounts include: the frequency spectrum component obtained by Fourier-transforming the left eye area image; the logarithmic component of that frequency spectrum; the inter-frame difference component of the frequency spectrum between the preceding and following frames; the inter-frame difference component of the left eye area image itself; the mel-cepstrum (MFCC) component of the left eye area image; the intra-frame moment component and the inter-frame moment component of the left eye area image; the intra-frame moment component and the inter-frame moment component of the frequency spectrum obtained by Fourier-transforming the left eye area image; and combinations of these. These should be used selectively according to the system configuration.
- the obtained feature amount is further subjected to dimension reduction by principal component analysis in order to reduce the amount of computation and to remove information unnecessary for identification, as in the first embodiment.
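- the dimension reduction could look like the following sketch (scikit-learn's PCA is used only as an illustrative stand-in; the matrix sizes and the number of retained components are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

# one row per frame of raw FFT-difference features (placeholder data, assumed sizes)
training_features = np.random.randn(500, 32)   # 500 frames, 32 raw dimensions

pca = PCA(n_components=10)                     # assumed number of retained dimensions
pca.fit(training_features)

# at run time each raw feature vector is projected before being sent to the HMM stage
raw_vector = np.random.randn(32)
reduced = pca.transform(raw_vector.reshape(1, -1))
```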
- the extraction of such feature values is performed for each frame, and the extracted feature values are transmitted to the awake state determination unit 35 as a set of predetermined frames (for example, 10 frames) in the order in which the images were captured.
- here, one set of the predetermined number of frames (for example, 10 frames) contains the feature amounts for the images of one blink.
- upon acquiring a set of feature values for the predetermined number of frames (for example, 10 frames) from the feature amount extraction unit 34, the awake state determination unit 35 inputs these feature values to the HMM for awake state determination.
- a blink, when expressed as an electromyogram waveform as shown in FIG. 22, is characterized by elements such as the amplitude of the waveform, the blinking speed, and the time from the position where the amplitude is 50% to the closing of the eyelid (the falling time in FIG. 22).
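- as an illustration only (the patent uses such parameters to characterize waveforms, not this exact computation), parameters of one sampled blink waveform, such as the amplitude, the width at the 50% amplitude level, and the fall time, might be estimated as follows:

```python
import numpy as np

def blink_waveform_parameters(signal, dt):
    """Rough parameters of one blink waveform sampled at interval dt: amplitude,
    duration at the 50%-amplitude level, and fall time from the peak down to the
    50% level (all illustrative assumptions)."""
    baseline = signal[0]
    amplitude = signal.max() - baseline
    half_level = baseline + 0.5 * amplitude
    above = np.where(signal >= half_level)[0]
    duration = (above[-1] - above[0]) * dt    # width at 50% amplitude
    peak = int(np.argmax(signal))
    fall_time = (above[-1] - peak) * dt       # peak -> 50% level on the way down
    return amplitude, duration, fall_time
```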
- as shown in FIG. 23, in addition to a waveform O, which is the standard blinking waveform when a person is awake, various other blinking waveforms from waveform A to waveform L have been observed.
- among these, waveforms A and B are typical waveforms for judging a drowsy state (hereinafter referred to as a drowsiness state), and they differ from the standard waveform in the amplitude and blinking speed of each blink. Therefore, by identifying these waveforms A and B and analyzing their appearance patterns and frequencies, it is possible to determine with high accuracy whether or not the subject is awake.
- therefore, in the present embodiment, awake state determination HMMs are prepared which take the feature amounts extracted by the feature amount extraction unit 34 as input and output the likelihood for a total of four waveform classes: the standard blinking waveform O, blinking waveform A, blinking waveform B, and the other blinking waveforms (waveforms C to L).
- specifically, moving images of blinks are captured in advance, feature amounts are extracted from the left eye region images detected from these images, and HMM learning is performed using the extracted feature amounts as learning data, thereby generating four types of HMMs corresponding to the above four waveform classes on a one-to-one basis.
- the arousal state determination unit 35 inputs a set of feature amounts for the predetermined number of frames (for example, 10 frames) acquired from the feature amount extraction unit 34 to each of the four types of arousal state determination HMMs generated as described above, checks which HMM outputs the highest likelihood, and determines the blinking waveform corresponding to the HMM with the highest output likelihood as the waveform of the subject's single blink for the input feature amounts.
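- one way to realize this likelihood comparison is sketched below with the hmmlearn library (an assumption; the patent names no library): one Gaussian HMM is trained per blink waveform class, and an unknown 10-frame feature sequence is assigned to the class whose HMM returns the highest log-likelihood.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

CLASSES = ["O", "A", "B", "other"]          # standard, drowsy A, drowsy B, remaining waveforms

def train_blink_hmms(sequences_per_class, n_states=3):
    """sequences_per_class: dict class -> list of (10, n_dims) feature arrays."""
    models = {}
    for cls, seqs in sequences_per_class.items():
        X = np.vstack(seqs)                  # concatenate all sequences of this class
        lengths = [len(s) for s in seqs]     # so the HMM knows the sequence boundaries
        models[cls] = GaussianHMM(n_components=n_states,
                                  covariance_type="diag").fit(X, lengths)
    return models

def classify_blink(models, sequence):
    """Return the class whose HMM gives the highest log-likelihood for the sequence."""
    return max(models, key=lambda cls: models[cls].score(sequence))
```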
- FIGS. 24 to 26 show, for a subject who actually had electrodes attached at the electromyogram measurement positions around the left and right eyes, the electromyogram waveform measured for one blink, together with the waveform corresponding to the HMM that output the highest likelihood when the feature amounts of that blink, extracted from the left eye region images detected from the captured images of the subject using the technique of the present invention, were input to the above four types of awake state determination HMMs.
- FIGS. 24 to 26 are all screenshots of verification application software: below the blinking moving image (of the left eye only) displayed at the top of the screen, the measured electromyogram waveform is displayed, and information on the result of identifying the blink as one of waveform O, waveform A, waveform B, or the other waveforms, obtained by applying the present invention to this blinking moving image, is displayed on the right side of the screen.
- FIG. 24 shows a screen on which the electromyogram waveform of a blink of the subject classified as a standard blink and the waveform identified by the arousal state determination HMM are displayed.
- as the identification result for the extracted feature amounts, the awakening state determination HMM outputs waveform O (the normal blinking waveform), as shown on the right side of the screen in FIG. 24, so it can be seen that the type of blinking waveform is correctly identified.
- FIGS. 25 and 26 show the electromyogram waveforms obtained when the subject made blinks classified as waveforms A and B, which are typical blinking waveforms in the determination of drowsiness, together with the waveforms identified by the arousal state determination HMMs from the feature amounts extracted from the blink images. As shown in FIGS. 25 and 26, the arousal state determination HMMs output waveform A and waveform B, respectively, indicating that the type of blink waveform of the subject is accurately identified.
- when the blinking waveform has been determined in this way, the awake state determination unit 35 analyzes the appearance pattern and appearance frequency of each waveform together with the previously determined blinking waveforms, and based on the analysis result, determines the subject's arousal state (for example, an awake state, a drowsy state, or a dozing state).
- specifically, by performing histogram processing on the blinking waveform discrimination results for each predetermined time unit, a change in the frequency of occurrence of the four blinking patterns is captured, and the arousal state of the subject is estimated.
- for example, when the frequency of occurrence of the waveforms A to L increases, it is determined that the arousal state has decreased (the drowsiness has increased). Also, it is known in physiology that, as shown in FIG. 27, a phenomenon referred to as a blink cluster (a burst of blinks) occurs when sleepiness increases. For this reason, in the present embodiment, the appearance intervals of the four types of blink waveforms identified above are also obtained, and when blinks occur continuously at increased frequency, this state is likewise judged to indicate a lowered arousal state (increased drowsiness). Information on the result determined (estimated) in this manner is output to the alarm system (not shown).
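- the histogram processing and the blink cluster check could be combined as in the following sketch; the window length, the ratio threshold, and the cluster definition are illustrative assumptions, not values from the patent.

```python
from collections import Counter

def estimate_arousal(blinks, window_s=60.0, non_standard_ratio=0.4,
                     cluster_gap_s=1.0, cluster_len=3):
    """blinks: time-ordered list of (time_sec, waveform_label), labels in {"O","A","B","other"}.
    Returns "low" when non-standard waveforms dominate the recent window or blinks
    arrive in bursts (clusters); all thresholds are illustrative assumptions."""
    if not blinks:
        return "unknown"
    window = [b for b in blinks if b[0] >= blinks[-1][0] - window_s]
    counts = Counter(label for _, label in window)
    non_standard = sum(n for lbl, n in counts.items() if lbl != "O")
    ratio_high = non_standard / max(len(window), 1) >= non_standard_ratio

    # blink cluster: cluster_len or more consecutive blinks closer together than cluster_gap_s
    times = [t for t, _ in window]
    run, burst = 1, False
    for prev, curr in zip(times, times[1:]):
        run = run + 1 if curr - prev <= cluster_gap_s else 1
        if run >= cluster_len:
            burst = True
    return "low" if (ratio_high or burst) else "normal"
```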
- FIG. 28 is a flowchart showing the detection processing of the left eye area in the eye area detection unit 33.
- first, in step S1100, it is determined whether or not the detection image has been acquired from the image processing unit 32. If it is determined that it has been acquired (Yes), the process proceeds to step S1102; otherwise (No), the process waits until it is acquired.
- in step S1102, the process shifts to the detection mode, the identification process is performed on the detection image using the whole face detection SVM, and the flow shifts to step S1104.
- in step S1104, it is determined whether or not the image area of the entire face has been detected by the identification processing in step S1102. If it is determined that it has been detected (Yes), the process proceeds to step S1106; otherwise (No), the process moves to step S1130.
- in step S1106, a 10 × 20 pixel search area for the eye region, including the upper half of the detected whole face region image, is set in the detection image, and the process proceeds to step S1108.
- in step S1108, the identification process is performed, using the left eye region detection SVM, on the regions scanned by the 4 × 8 pixel search window within the search region set in step S1106, and the process proceeds to step S1110.
- step S1110 based on the identification in step S1108, it is determined whether or not the detection of the left eye region is successful. If it is determined that the detection is successful (Yes), the process proceeds to step S1112.
- step S1112 position information of the left eye region detected in step S1110 is obtained, and the process proceeds to step S1114.
- in step S1114, the setting is switched from the detection mode to the tracking mode.
- in step S1116, the detection image data of the frame following the one in which the left eye region was detected in step S1110 is acquired, and the flow shifts to step S1118.
- step S1118 a search area for a 15 ⁇ 15 pixel left eye area is set based on the position information of the left eye area in the detection image of the previous frame, and the flow shifts to step S1120.
- in step S1120, the identification process is performed, using the left eye region detection SVM, on the regions scanned by the 4 × 8 pixel search window within the 15 × 15 pixel search area set in step S1118, and the flow advances to step S1122.
- in step S1122, it is determined whether or not the detection of the left eye region has succeeded, based on the identification in step S1120. If it is determined that the detection has succeeded (Yes), the process proceeds to step S1124; otherwise (No), the process proceeds to step S1130.
- step S1124 position information of the left eye region detected in step S1122 is obtained, and the process proceeds to step S1126.
- step S1126 it is determined whether there is an image for detection of the next frame. If it is determined that the image is present (Yes), the process proceeds to step S1116. If not (No), The process moves to step S1128.
- step S1128 the acquired position information is transmitted to the feature amount extraction unit 34, and the process proceeds to step S1100.
- step S1130 it is determined whether or not there is an image for detection of the next frame. If it is determined that there is an image for detection (Yes), the process proceeds to step S1132.
- step S1132 the image data for detection of the next frame is acquired, and the process proceeds to step S1102.
- FIG. 29 is a flowchart showing a feature amount extraction process in the feature amount extraction unit 34.
- step S1200 it is determined whether or not the position information has been acquired from the eye region detection unit 33, and if it is determined that the position information has been acquired (Yes), the process proceeds to step S1202. If not (No), wait until acquisition.
- in step S1202, the image of the left eye region is cut out from the captured image stored in the data storage unit 31 based on the acquired position information, and the process proceeds to step S1204.
- step S1204 a process for reducing the influence of images other than the left eye such as the right eye and eyebrows is performed by a window function, and the flow advances to step S1206.
- in step S1206, the image processed by the window function is subjected to discrete Fourier transform processing to obtain the amplitude spectrum of the left eye region image, and the flow advances to step S1208.
- in step S1208, the difference between the real part coefficients of the amplitude spectrum obtained in step S1206 and those of the amplitude spectrum of the immediately preceding frame is calculated, and the flow shifts to step S1210.
- in step S1210, principal component analysis is performed on the difference of the real part coefficients calculated in step S1208 to reduce the number of dimensions and generate a feature amount, and the process proceeds to step S1212.
- in step S1212, the generated feature amounts are transmitted to the awake state determination unit 35 as a set of the predetermined number of frames (for example, 10 frames), and the process returns to step S1200.
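- steps S1204 to S1212 could be strung together per frame as in the sketch below; the Hanning window is an assumption (the patent only says a window function is applied), and pca stands for a PCA model already fitted as described above.

```python
import numpy as np

def feature_frames(eye_images, pca, set_size=10):
    """Walk steps S1204-S1212 for a stream of cut-out left eye images: window
    function, FFT, real-coefficient difference with the previous frame, PCA
    reduction, then yield sets of `set_size` frames."""
    prev_real, batch = None, []
    for img in eye_images:
        win = np.outer(np.hanning(img.shape[0]), np.hanning(img.shape[1]))
        real = np.fft.fft2(img * win).real
        if prev_real is not None:
            diff = (real - prev_real).ravel()
            batch.append(pca.transform(diff.reshape(1, -1))[0])
            if len(batch) == set_size:
                yield np.vstack(batch)      # one set for the awake state determination HMMs
                batch = []
        prev_real = real
```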
- FIG. 30 is a flowchart showing the awake state determination process in the awake state determination unit 35.
- in step S1300, it is determined whether or not the feature amounts have been acquired from the feature amount extraction unit 34. If it is determined that they have been acquired (Yes), the process proceeds to step S1302; otherwise (No), the process waits until they are acquired.
- in step S1302, the acquired set of feature amounts for the predetermined number of frames (for example, 10 frames) is input to the four awakening state determination HMMs that respectively identify the four types of blinking waveforms, the type of blink waveform for that set is determined based on the likelihoods output by these four HMMs, and the flow shifts to step S1304.
- step S1304 the determination result in step S1302 is stored in the data storage unit 31 in the order of determination, and the flow advances to step S1306.
- in step S1306, it is determined whether or not the determination results for the predetermined period have been stored in the data storage unit 31. If it is determined that they have been stored (Yes), the process proceeds to step S1308; otherwise (No), the process moves to step S1300.
- in step S1308, the awake state is determined based on the determination results for the predetermined period, and the process proceeds to step S1310.
- here, the determination of the arousal state is performed by carrying out histogram processing on each waveform pattern based on the blinking waveform determination results for the predetermined period and obtaining the change in the occurrence frequency of each blinking waveform pattern. For example, if the occurrence frequency of waveform patterns other than the normal blink waveform O, which are important for determining a drowsy state, is high, it is determined that the subject is suffering from drowsiness. In addition, in order to increase the determination accuracy, the appearance intervals of the blinks are further examined, and it is also determined that the subject is suffering from drowsiness when the frequency with which blinks appear in succession increases.
- step S1310 the result determined in step S1308 is transmitted to the alarm system, and the process ends.
- the awake state determination device 2 can capture an image including the face of the subject to be photographed sitting in the driver's seat by the image photographing unit 30.
- the data storage unit 31 can store the HMMs for wakefulness determination corresponding to multiple types of blink waveforms of the subject, the captured image data, and the like.
- the image processing unit 32 can generate a detection image by reducing the size of the captured image data by sub-sampling, and the eye area detection unit 33 can detect the left eye area from the detection image using the whole face detection SVM and the left eye area detection SVM.
- the feature amount extraction unit 34 can cut out the left eye region image from the original photographed image based on the detected position information of the left eye region, and can extract the feature amount from the cut-out left eye region image.
- the awake state determination unit 35 can determine the type of the blinking waveform using the awakening state determination HMMs, and can determine the arousal state of the subject by performing analysis processing based on the blinking waveform determination results for a predetermined period.
- in the above embodiment, the left eye region of the subject to be photographed is detected and the awake state is determined; however, depending on the photographing environment and the type of system to which the invention is applied, the determination may instead be performed by detecting the right eye region or the region of both eyes.
- in the above embodiment, the process of acquiring the image data of the subject by the image photographing unit 30 and the data storage unit 31 corresponds to the image photographing means according to any one of claims 1, 2, 19, 22, and 26.
- in the above embodiment, the detection processing of the left eye region from the captured image by the image processing unit 32 and the eye region detection unit 33 corresponds to the predetermined part detection means according to any one of claims 2, 19, 22, and 26. [0193] In the above embodiment, the processing of acquiring position information by the eye region detection unit 33 corresponds to the positional relationship information acquiring means described in claim 4 or 23.
- in the above embodiment, the feature amount extraction unit 34 corresponds to the feature amount extraction means according to any one of claims 1, 2, 13, 14, 15, 16, 17, 19, 22, and 25.
- in the above embodiment, the arousal state determination unit 35 corresponds to the operation content determination means according to any one of claims 1, 2, 5, 13, 14, 15, 16, 17, 18, 22, 23 and 25.
- in the first embodiment described above, the utterance section and the utterance content are detected from the lip region image detected from the captured image; however, the present invention is not limited to this, and it is also possible to determine other operation contents related to the movement of the lips, such as a state of chewing gum or a state of yawning.
- it is also possible to adopt a configuration in which the function of the utterance section detection device 1 in the first embodiment or its modification is combined with the function of the awake state determination device 2 in the second embodiment, so that operation contents such as yawning, which cannot be determined from blinking alone, are also determined and the awake state can be determined with higher accuracy. Accordingly, it is possible to more appropriately perform safe driving support, such as giving a sound warning to the vehicle driver in accordance with the determination result.
- in the above embodiments, the lip region image is detected from the captured image to determine the operation content related to the movement of the lips (the utterance section), and the eye region image is detected from the captured image to determine the operation content related to the movement of the eyes (dozing and the like); however, this is not a limitation, and images of other parts of the face, such as the nose and the eyebrows, may be detected so that the operation contents related to the movements of these parts are determined.
- in the second embodiment described above, the face direction of the subject is not taken into account as it is in the modification of the first embodiment; however, the present invention is not limited to this. A configuration may be adopted in which awakening state determination HMMs corresponding to each face direction are prepared, the face direction is determined, the HMM corresponding to the determined face direction is selected, and the type of blink waveform of the subject is determined using the selected HMM. This makes it possible to determine the type of the blinking waveform with higher accuracy.
- as described above, according to the operation content determination device of the present invention, the operation content related to the movement of a predetermined part is determined by using a known HMM, which has a temporal concept; therefore, the operation content can be determined with higher accuracy.
- further, since the predetermined part is detected using the SVM, the predetermined part can be detected with high accuracy from various captured images.
- further, since a well-known HMM is used for the determination of the motion, the operation content related to the movement of the predetermined part can be determined with a temporal concept, and thus the operation content can be determined more accurately.
- further, since the size of the image area of the predetermined part to be detected is changed according to the face direction, it is not necessary to perform feature amount extraction processing on images of unnecessary portions, and thus the speed of the extraction processing can be improved.
- further, since feature amounts corresponding to images of the predetermined part whose shape changes with the various face directions are used, the operation content related to the movement of the predetermined part can be determined more accurately.
- according to the operation content determination device, in addition to the effect of any one of claims 1 to 5, it is possible to determine operation contents of the target person such as utterance, yawning, and gum chewing.
- according to the operation content determination device, the utterance start point of the target person is determined separately based on the HMM's determination result of whether or not the target person is in the utterance state, so that the utterance section can be determined accurately.
- according to the operation content determination devices of claim 8 and claim 9, in addition to the effect of claim 7, even when the output of the HMM becomes something that is not practically possible (an abnormal state), for example a repetition of utterance/non-utterance, the utterance start point can be determined more accurately.
- according to the operation content determination device, in addition to the effect of any one of claims 6 to 9, the utterance end point of the target person is determined separately based on the HMM's determination result of whether or not the target person is in the utterance state, so that the utterance section can be determined with high accuracy.
- according to the operation content determination device of claim 13, in addition to the effect of any one of claims 1 to 12, it is possible to determine operation content such as dozing.
- according to the operation content determination device of claim 14, in addition to the effect of claim 13, it is possible to accurately determine the type of blink of the target person, such as the speed of the blink and the degree to which the eyelids are closed during the blink.
- further, when the eye condition at the time of blinking is expressed as a muscle electromyogram waveform, it is also possible to accurately determine the speed from the start to the end of the blink (the change time of the myoelectric potential) and the type of amplitude indicating a reduction in the amount of eyelid closure during the blink.
- further, since the types of blink of the target person are classified by the speed of the blink and the degree of eyelid closure at the time of the blink, it is possible to accurately determine the arousal state of the subject, such as a state in which the subject is drowsy or a state in which the subject has fallen asleep.
- according to the operation content determination device of claim 17, in addition to the effect of claim 13, it is sufficient to generate HMMs only for specific types of blink, and the determination process is performed using these specific HMMs; therefore, it is possible to reduce the memory capacity required for the HMMs, to perform the determination processing at high speed, and so on.
- according to the operation content determination device of claim 18, in addition to the effect of claim 17, it is possible to determine the arousal state with high accuracy based not only on the occurrence frequency of specific types of blink but also on the change, within a predetermined time period, in the occurrence frequency of those specific types of blink.
- according to the operation content determination device, by using the HMM, the state of the utterance motion can be determined with a temporal concept, and thus the utterance content can be determined with high accuracy.
- the contents of the utterance of the target person can be recognized more accurately in an environment with noise such as music flowing from a car stereo, road noise, wind noise, engine sound, and the like.
- further, a predetermined operation, such as route search or route guidance to the destination, can be performed based on the recognition result.
- according to the alarm system of claim 21, for example, when the subject is the driver of a car, a state in which the driver is drowsy is determined, and by giving a warning with a warning sound or the like, it is possible to prevent drowsy driving and the like.
- the same effect as that of the operation content determining device of the thirteenth aspect can be obtained.
- an effect equivalent to that of the operation content judging device according to claim 6 can be obtained.
- an effect equivalent to that of the operation content judging device according to claim 13 is obtained.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computational Linguistics (AREA)
- Probability & Statistics with Applications (AREA)
- Evolutionary Biology (AREA)
- Acoustics & Sound (AREA)
- Computing Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Navigation (AREA)
Abstract
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05741463.3A EP1748387B1 (en) | 2004-05-21 | 2005-05-23 | Devices for classifying the arousal state of the eyes of a driver, corresponding method and computer readable storage medium |
JP2006513753A JP4286860B2 (ja) | 2004-05-21 | 2005-05-23 | 動作内容判定装置 |
US11/596,258 US7894637B2 (en) | 2004-05-21 | 2005-05-23 | Device, program, and method for classifying behavior content of an object person |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-151579 | 2004-05-21 | ||
JP2004151579 | 2004-05-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005114576A1 true WO2005114576A1 (ja) | 2005-12-01 |
Family
ID=35428570
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/009376 WO2005114576A1 (ja) | 2004-05-21 | 2005-05-23 | 動作内容判定装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US7894637B2 (ja) |
EP (1) | EP1748387B1 (ja) |
JP (1) | JP4286860B2 (ja) |
WO (1) | WO2005114576A1 (ja) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007236488A (ja) * | 2006-03-06 | 2007-09-20 | Toyota Motor Corp | 覚醒度推定装置及びシステム並びに方法 |
JP2008171107A (ja) * | 2007-01-10 | 2008-07-24 | Matsushita Electric Ind Co Ltd | 顔状況判定処理装置および撮像装置 |
WO2008088070A1 (ja) | 2007-01-19 | 2008-07-24 | Asahi Kasei Kabushiki Kaisha | 覚醒状態判定モデル生成装置、覚醒状態判定装置及び警告装置 |
JP2008171108A (ja) * | 2007-01-10 | 2008-07-24 | Matsushita Electric Ind Co Ltd | 顔状況判定処理装置および撮像装置 |
JP2009279099A (ja) * | 2008-05-20 | 2009-12-03 | Asahi Kasei Corp | 瞬目種別識別装置、瞬目種別識別方法、及び瞬目種別識別プログラム |
JP2010074399A (ja) * | 2008-09-17 | 2010-04-02 | Sony Corp | 構図判定装置、構図判定方法、画像処理装置、画像処理方法、プログラム |
CN102837702A (zh) * | 2011-06-24 | 2012-12-26 | 株式会社普利司通 | 路面状态判断方法及路面状态判断装置 |
JP2014092931A (ja) * | 2012-11-02 | 2014-05-19 | Sony Corp | 画像表示装置並びに情報入力装置 |
CN104269172A (zh) * | 2014-07-31 | 2015-01-07 | 广东美的制冷设备有限公司 | 基于视频定位的语音控制方法和系统 |
CN107123423A (zh) * | 2017-06-07 | 2017-09-01 | 微鲸科技有限公司 | 语音拾取装置及多媒体设备 |
US10264210B2 (en) | 2015-08-03 | 2019-04-16 | Ricoh Company, Ltd. | Video processing apparatus, method, and system |
WO2019171452A1 (ja) * | 2018-03-06 | 2019-09-12 | 三菱電機株式会社 | 運転支援装置、運転支援方法及び運転支援装置を備えた運転支援システム |
JP2020091848A (ja) * | 2018-12-04 | 2020-06-11 | 三星電子株式会社Samsung Electronics Co.,Ltd. | 映像処理方法及び装置 |
WO2021114224A1 (zh) * | 2019-12-13 | 2021-06-17 | 华为技术有限公司 | 语音检测方法、预测模型的训练方法、装置、设备及介质 |
JP2021120820A (ja) * | 2020-01-30 | 2021-08-19 | 富士通株式会社 | 計算プログラム、計算方法及び計算装置 |
US20220415003A1 (en) * | 2021-06-27 | 2022-12-29 | Realtek Semiconductor Corp. | Video processing method and associated system on chip |
WO2023032283A1 (ja) * | 2021-09-02 | 2023-03-09 | 株式会社トランストロン | 通報装置、通報方法及び通報プログラム |
Families Citing this family (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ATE395346T1 (de) * | 2003-09-16 | 2008-05-15 | Astrazeneca Ab | Chinazolinderivate als tyrosinkinaseinhibitoren |
JP2007318438A (ja) * | 2006-05-25 | 2007-12-06 | Yamaha Corp | 音声状況データ生成装置、音声状況可視化装置、音声状況データ編集装置、音声データ再生装置、および音声通信システム |
JP4137969B2 (ja) * | 2006-12-04 | 2008-08-20 | アイシン精機株式会社 | 眼部検出装置、眼部検出方法及びプログラム |
JP4895847B2 (ja) * | 2007-02-08 | 2012-03-14 | アイシン精機株式会社 | 瞼検出装置及びプログラム |
KR100795160B1 (ko) * | 2007-03-22 | 2008-01-16 | 주식회사 아트닉스 | 얼굴영역검출장치 및 검출방법 |
JP4891144B2 (ja) * | 2007-05-08 | 2012-03-07 | キヤノン株式会社 | 画像検索装置及び画像検索方法 |
JP4375448B2 (ja) * | 2007-06-26 | 2009-12-02 | ソニー株式会社 | 画像処理装置、撮像装置、画像処理方法およびプログラム |
JP4458173B2 (ja) * | 2008-03-19 | 2010-04-28 | カシオ計算機株式会社 | 画像記録方法、画像記録装置、およびプログラム |
US20100005169A1 (en) * | 2008-07-03 | 2010-01-07 | Von Hilgers Philipp | Method and Device for Tracking Interactions of a User with an Electronic Document |
US9020816B2 (en) * | 2008-08-14 | 2015-04-28 | 21Ct, Inc. | Hidden markov model for speech processing with training method |
US20100074557A1 (en) * | 2008-09-25 | 2010-03-25 | Sanyo Electric Co., Ltd. | Image Processing Device And Electronic Appliance |
KR101179497B1 (ko) * | 2008-12-22 | 2012-09-07 | 한국전자통신연구원 | 얼굴 검출 방법 및 장치 |
JP2010165052A (ja) * | 2009-01-13 | 2010-07-29 | Canon Inc | 画像処理装置及び画像処理方法 |
JP5270415B2 (ja) * | 2009-03-19 | 2013-08-21 | トヨタ自動車株式会社 | 眠気判定装置及びプログラム |
JP5257514B2 (ja) * | 2009-05-12 | 2013-08-07 | トヨタ自動車株式会社 | 視認領域推定装置および運転支援装置 |
CN102460469A (zh) * | 2009-06-12 | 2012-05-16 | 皇家飞利浦电子股份有限公司 | 用于生物识别的系统和方法 |
CN102404510B (zh) | 2009-06-16 | 2015-07-01 | 英特尔公司 | 手持装置中的摄像机应用 |
US8745250B2 (en) * | 2009-06-30 | 2014-06-03 | Intel Corporation | Multimodal proximity detection |
JP2011053915A (ja) * | 2009-09-02 | 2011-03-17 | Sony Corp | 画像処理装置、画像処理方法、プログラム及び電子機器 |
JP5476955B2 (ja) * | 2009-12-04 | 2014-04-23 | ソニー株式会社 | 画像処理装置および画像処理方法、並びにプログラム |
JP5249273B2 (ja) * | 2010-03-25 | 2013-07-31 | パナソニック株式会社 | 生体情報計測システム |
JP2012003326A (ja) * | 2010-06-14 | 2012-01-05 | Sony Corp | 情報処理装置、情報処理方法、およびプログラム |
JP2012068948A (ja) * | 2010-09-24 | 2012-04-05 | Renesas Electronics Corp | 顔属性推定装置およびその方法 |
WO2012053311A1 (ja) * | 2010-10-22 | 2012-04-26 | Necソフト株式会社 | 属性判定方法、属性判定装置、プログラム、記録媒体および属性判定システム |
TW201226245A (en) * | 2010-12-31 | 2012-07-01 | Altek Corp | Vehicle apparatus control system and method thereof |
US20140093142A1 (en) * | 2011-05-24 | 2014-04-03 | Nec Corporation | Information processing apparatus, information processing method, and information processing program |
JP5914992B2 (ja) * | 2011-06-02 | 2016-05-11 | ソニー株式会社 | 表示制御装置、表示制御方法、およびプログラム |
US9094539B1 (en) * | 2011-09-22 | 2015-07-28 | Amazon Technologies, Inc. | Dynamic device adjustments based on determined user sleep state |
JP5836095B2 (ja) * | 2011-12-05 | 2015-12-24 | キヤノン株式会社 | 画像処理装置、画像処理方法 |
US20130188825A1 (en) * | 2012-01-19 | 2013-07-25 | Utechzone Co., Ltd. | Image recognition-based startup method |
US20130243077A1 (en) * | 2012-03-13 | 2013-09-19 | Canon Kabushiki Kaisha | Method and apparatus for processing moving image information, and method and apparatus for identifying moving image pattern |
JP5649601B2 (ja) * | 2012-03-14 | 2015-01-07 | 株式会社東芝 | 照合装置、方法及びプログラム |
BR112015002920A2 (pt) * | 2012-08-10 | 2017-08-08 | Honda Access Kk | método e dispositivo de reconhecimento de fala |
JP6181925B2 (ja) * | 2012-12-12 | 2017-08-16 | キヤノン株式会社 | 画像処理装置、画像処理装置の制御方法およびプログラム |
DE102014100364B4 (de) * | 2013-01-18 | 2020-08-13 | Carnegie Mellon University | Verfahren zum Bestimmen, ob eine Augen-abseits-der-Straße-Bedingung vorliegt |
US20140229568A1 (en) * | 2013-02-08 | 2014-08-14 | Giuseppe Raffa | Context-rich communication between a device and a vehicle |
JP6182917B2 (ja) * | 2013-03-15 | 2017-08-23 | ノーリツプレシジョン株式会社 | 監視装置 |
TWI502583B (zh) * | 2013-04-11 | 2015-10-01 | Wistron Corp | 語音處理裝置和語音處理方法 |
US9747900B2 (en) | 2013-05-24 | 2017-08-29 | Google Technology Holdings LLC | Method and apparatus for using image data to aid voice recognition |
EP3007786A1 (en) | 2013-06-14 | 2016-04-20 | Intercontinental Great Brands LLC | Interactive video games |
KR102053820B1 (ko) | 2013-07-02 | 2019-12-09 | 삼성전자주식회사 | 서버 및 그 제어방법과, 영상처리장치 및 그 제어방법 |
WO2015111771A1 (ko) * | 2014-01-24 | 2015-07-30 | 숭실대학교산학협력단 | 음주 판별 방법, 이를 수행하기 위한 기록매체 및 단말기 |
CN104202694B (zh) * | 2014-07-31 | 2018-03-13 | 广东美的制冷设备有限公司 | 语音拾取装置的定向方法和系统 |
US9952675B2 (en) * | 2014-09-23 | 2018-04-24 | Fitbit, Inc. | Methods, systems, and apparatuses to display visibility changes responsive to user gestures |
US9269374B1 (en) | 2014-10-27 | 2016-02-23 | Mattersight Corporation | Predictive video analytics system and methods |
US9535905B2 (en) * | 2014-12-12 | 2017-01-03 | International Business Machines Corporation | Statistical process control and analytics for translation supply chain operational management |
WO2016157642A1 (ja) * | 2015-03-27 | 2016-10-06 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
CN104834222B (zh) * | 2015-04-30 | 2018-11-27 | 广东美的制冷设备有限公司 | 家用电器的控制方法和装置 |
CN106203235B (zh) * | 2015-04-30 | 2020-06-30 | 腾讯科技(深圳)有限公司 | 活体鉴别方法和装置 |
US10008201B2 (en) * | 2015-09-28 | 2018-06-26 | GM Global Technology Operations LLC | Streamlined navigational speech recognition |
DE102015225109A1 (de) | 2015-12-14 | 2017-06-14 | Robert Bosch Gmbh | Verfahren und Vorrichtung zum Klassieren von Augenöffnungsdaten zumindest eines Auges eines Insassen eines Fahrzeugs und Verfahren und Vorrichtung zum Erfassen einer Schläfrigkeit und/oder eines Sekundenschlafes eines Insassen eines Fahrzeugs |
US10255487B2 (en) * | 2015-12-24 | 2019-04-09 | Casio Computer Co., Ltd. | Emotion estimation apparatus using facial images of target individual, emotion estimation method, and non-transitory computer readable medium |
CN106920558B (zh) * | 2015-12-25 | 2021-04-13 | 展讯通信(上海)有限公司 | 关键词识别方法及装置 |
CN107103271A (zh) * | 2016-02-23 | 2017-08-29 | 芋头科技(杭州)有限公司 | 一种人脸检测方法 |
JP6649306B2 (ja) * | 2017-03-03 | 2020-02-19 | 株式会社東芝 | 情報処理装置、情報処理方法及びプログラム |
US10332515B2 (en) | 2017-03-14 | 2019-06-25 | Google Llc | Query endpointing based on lip detection |
CN107910009B (zh) * | 2017-11-02 | 2020-12-01 | 中国科学院声学研究所 | 一种基于贝叶斯推理的码元改写信息隐藏检测方法及系统 |
CN108875535B (zh) * | 2018-02-06 | 2023-01-10 | 北京旷视科技有限公司 | 图像检测方法、装置和系统及存储介质 |
US11361560B2 (en) * | 2018-02-19 | 2022-06-14 | Mitsubishi Electric Corporation | Passenger state detection device, passenger state detection system, and passenger state detection method |
CN109166575A (zh) * | 2018-07-27 | 2019-01-08 | 百度在线网络技术(北京)有限公司 | 智能设备的交互方法、装置、智能设备和存储介质 |
CN109624844A (zh) * | 2018-12-05 | 2019-04-16 | 电子科技大学成都学院 | 一种基于图像识别和语音传控的公交车行车保护系统 |
WO2020157989A1 (ja) * | 2019-02-01 | 2020-08-06 | 日本電気株式会社 | 覚醒度推定装置、覚醒度推定方法、及びコンピュータ読み取り可能な記録媒体 |
CN112101201B (zh) * | 2020-09-14 | 2024-05-24 | 北京数衍科技有限公司 | 行人状态的检测方法、装置及电子设备 |
CN113345472B (zh) * | 2021-05-08 | 2022-03-25 | 北京百度网讯科技有限公司 | 语音端点检测方法、装置、电子设备及存储介质 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2600834B2 (ja) | 1988-08-23 | 1997-04-16 | オムロン株式会社 | 居眠り検出装置 |
JPH07156682A (ja) | 1993-12-03 | 1995-06-20 | Nissan Motor Co Ltd | 覚醒状態検出装置 |
JPH08175218A (ja) | 1994-12-26 | 1996-07-09 | Toyota Motor Corp | 運転状態検出装置 |
JP3710205B2 (ja) | 1996-06-05 | 2005-10-26 | 沖電気工業株式会社 | 音声認識装置 |
US6070098A (en) * | 1997-01-11 | 2000-05-30 | Circadian Technologies, Inc. | Method of and apparatus for evaluation and mitigation of microsleep events |
JP3577882B2 (ja) | 1997-03-31 | 2004-10-20 | 日産自動車株式会社 | 居眠り状態検出装置 |
JP3688879B2 (ja) | 1998-01-30 | 2005-08-31 | 株式会社東芝 | 画像認識装置、画像認識方法及びその記録媒体 |
JPH11352987A (ja) | 1998-06-04 | 1999-12-24 | Toyota Motor Corp | 音声認識装置 |
JP3012226B2 (ja) | 1998-07-24 | 2000-02-21 | マルチメディアシステム事業協同組合 | 居眠り運転防止装置 |
JP4517457B2 (ja) | 2000-06-13 | 2010-08-04 | カシオ計算機株式会社 | 音声認識装置、及び音声認識方法 |
AU2001296459A1 (en) * | 2000-10-02 | 2002-04-15 | Clarity, L.L.C. | Audio visual speech processing |
US7209883B2 (en) * | 2002-05-09 | 2007-04-24 | Intel Corporation | Factorial hidden markov model for audiovisual speech recognition |
EP2204118B1 (en) * | 2002-10-15 | 2014-07-23 | Volvo Technology Corporation | Method for interpreting a drivers head and eye activity |
US7359529B2 (en) * | 2003-03-06 | 2008-04-15 | Samsung Electronics Co., Ltd. | Image-detectable monitoring system and method for using the same |
-
2005
- 2005-05-23 US US11/596,258 patent/US7894637B2/en active Active
- 2005-05-23 EP EP05741463.3A patent/EP1748387B1/en not_active Ceased
- 2005-05-23 JP JP2006513753A patent/JP4286860B2/ja active Active
- 2005-05-23 WO PCT/JP2005/009376 patent/WO2005114576A1/ja not_active Application Discontinuation
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0398078A (ja) * | 1989-09-12 | 1991-04-23 | Seiko Epson Corp | 音声評価システム |
JPH0424503A (ja) * | 1990-05-21 | 1992-01-28 | Nissan Motor Co Ltd | 眼位置検出装置 |
JPH0779937A (ja) * | 1993-09-17 | 1995-03-28 | Nissan Motor Co Ltd | 覚醒度判定装置 |
JPH08145627A (ja) * | 1994-11-17 | 1996-06-07 | Toyota Motor Corp | 顔位置判定装置及び瞬き検出装置 |
JPH11232456A (ja) * | 1998-02-10 | 1999-08-27 | Atr Chino Eizo Tsushin Kenkyusho:Kk | 顔動画像からの表情抽出方法 |
JP2002157596A (ja) * | 2000-11-17 | 2002-05-31 | Sony Corp | ロボット装置及び顔識別方法 |
JP2002288670A (ja) * | 2001-03-22 | 2002-10-04 | Honda Motor Co Ltd | 顔画像を使用した個人認証装置 |
JP2003158643A (ja) * | 2001-11-20 | 2003-05-30 | Shibasoku:Kk | 信号処理方法及び信号処理装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP1748387A4 * |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007236488A (ja) * | 2006-03-06 | 2007-09-20 | Toyota Motor Corp | 覚醒度推定装置及びシステム並びに方法 |
JP2008171107A (ja) * | 2007-01-10 | 2008-07-24 | Matsushita Electric Ind Co Ltd | 顔状況判定処理装置および撮像装置 |
JP2008171108A (ja) * | 2007-01-10 | 2008-07-24 | Matsushita Electric Ind Co Ltd | 顔状況判定処理装置および撮像装置 |
US8400313B2 (en) | 2007-01-19 | 2013-03-19 | Asahi Kasei Kabushiki Kaisha | Vehicle driver sleep state classification generating device based on Hidden Markov Model, sleep state classification device and warning device |
WO2008088070A1 (ja) | 2007-01-19 | 2008-07-24 | Asahi Kasei Kabushiki Kaisha | 覚醒状態判定モデル生成装置、覚醒状態判定装置及び警告装置 |
EP2363067A1 (en) | 2007-01-19 | 2011-09-07 | Asahi Kasei Kabushiki Kaisha | Arousal state classification model generating device, arousal state classifying device, and warning device |
JP4805358B2 (ja) * | 2007-01-19 | 2011-11-02 | 旭化成株式会社 | 覚醒状態判定モデル生成装置、覚醒状態判定装置及び警告装置 |
JP2009279099A (ja) * | 2008-05-20 | 2009-12-03 | Asahi Kasei Corp | 瞬目種別識別装置、瞬目種別識別方法、及び瞬目種別識別プログラム |
JP2010074399A (ja) * | 2008-09-17 | 2010-04-02 | Sony Corp | 構図判定装置、構図判定方法、画像処理装置、画像処理方法、プログラム |
CN102837702B (zh) * | 2011-06-24 | 2016-05-25 | 株式会社普利司通 | 路面状态判断方法及路面状态判断装置 |
CN102837702A (zh) * | 2011-06-24 | 2012-12-26 | 株式会社普利司通 | 路面状态判断方法及路面状态判断装置 |
JP2014092931A (ja) * | 2012-11-02 | 2014-05-19 | Sony Corp | 画像表示装置並びに情報入力装置 |
CN104269172A (zh) * | 2014-07-31 | 2015-01-07 | 广东美的制冷设备有限公司 | 基于视频定位的语音控制方法和系统 |
US10264210B2 (en) | 2015-08-03 | 2019-04-16 | Ricoh Company, Ltd. | Video processing apparatus, method, and system |
CN107123423A (zh) * | 2017-06-07 | 2017-09-01 | 微鲸科技有限公司 | 语音拾取装置及多媒体设备 |
JP7098265B2 (ja) | 2018-03-06 | 2022-07-11 | 三菱電機株式会社 | 運転支援装置 |
WO2019171452A1 (ja) * | 2018-03-06 | 2019-09-12 | 三菱電機株式会社 | 運転支援装置、運転支援方法及び運転支援装置を備えた運転支援システム |
JPWO2019171452A1 (ja) * | 2018-03-06 | 2020-10-22 | 三菱電機株式会社 | 運転支援装置、運転支援方法及び運転支援装置を備えた運転支援システム |
JP2020091848A (ja) * | 2018-12-04 | 2020-06-11 | 三星電子株式会社Samsung Electronics Co.,Ltd. | 映像処理方法及び装置 |
JP7419017B2 (ja) | 2018-12-04 | 2024-01-22 | 三星電子株式会社 | 映像処理方法及び装置 |
WO2021114224A1 (zh) * | 2019-12-13 | 2021-06-17 | 华为技术有限公司 | 语音检测方法、预测模型的训练方法、装置、设备及介质 |
US12094468B2 (en) | 2019-12-13 | 2024-09-17 | Huawei Technologies Co., Ltd. | Speech detection method, prediction model training method, apparatus, device, and medium |
JP7415611B2 (ja) | 2020-01-30 | 2024-01-17 | 富士通株式会社 | 計算プログラム、計算方法及び計算装置 |
JP2021120820A (ja) * | 2020-01-30 | 2021-08-19 | 富士通株式会社 | 計算プログラム、計算方法及び計算装置 |
US20220415003A1 (en) * | 2021-06-27 | 2022-12-29 | Realtek Semiconductor Corp. | Video processing method and associated system on chip |
WO2023032283A1 (ja) * | 2021-09-02 | 2023-03-09 | 株式会社トランストロン | 通報装置、通報方法及び通報プログラム |
Also Published As
Publication number | Publication date |
---|---|
US20080037837A1 (en) | 2008-02-14 |
JPWO2005114576A1 (ja) | 2008-07-31 |
EP1748387A1 (en) | 2007-01-31 |
JP4286860B2 (ja) | 2009-07-01 |
EP1748387B1 (en) | 2018-12-05 |
EP1748387A4 (en) | 2015-04-29 |
US7894637B2 (en) | 2011-02-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4286860B2 (ja) | 動作内容判定装置 | |
JP5323770B2 (ja) | ユーザ指示取得装置、ユーザ指示取得プログラムおよびテレビ受像機 | |
US11854550B2 (en) | Determining input for speech processing engine | |
US8635066B2 (en) | Camera-assisted noise cancellation and speech recognition | |
JP4633043B2 (ja) | 画像処理装置 | |
KR100820141B1 (ko) | 음성 구간 검출 장치 및 방법 그리고 음성 인식 시스템 | |
US20100332229A1 (en) | Apparatus control based on visual lip share recognition | |
US20040122675A1 (en) | Visual feature extraction procedure useful for audiovisual continuous speech recognition | |
JP2001092974A (ja) | 話者認識方法及びその実行装置並びに音声発生確認方法及び装置 | |
Hassanat | Visual speech recognition | |
CN114202604A (zh) | 一种语音驱动目标人视频生成方法、装置及存储介质 | |
JP6819633B2 (ja) | 個人識別装置および特徴収集装置 | |
Navarathna et al. | Multiple cameras for audio-visual speech recognition in an automotive environment | |
Huang et al. | Audio-visual speech recognition using an infrared headset | |
JP2002312796A (ja) | 主被写体推定装置、撮像装置、撮像システム、主被写体推定方法、撮像装置の制御方法、及び制御プログラムを提供する媒体 | |
US11315362B2 (en) | Emotion-recognition-based service provision apparatus for vehicle and method of controlling the same | |
JP7347511B2 (ja) | 音声処理装置、音声処理方法、およびプログラム | |
Hassanat et al. | Visual words for lip-reading | |
Yoshinaga et al. | Audio-visual speech recognition using new lip features extracted from side-face images | |
Heckmann | Inter-speaker variability in audio-visual classification of word prominence. | |
Ibrahim | A novel lip geometry approach for audio-visual speech recognition | |
Lucey | Lipreading across multiple views | |
KR102535244B1 (ko) | 음성인식 및 안면 일부 랜드마크를 이용한 신원확인 시스템 및 그 방법 | |
Yau et al. | Visual speech recognition using dynamic features and support vector machines | |
JP4645301B2 (ja) | 顔形状変化情報抽出装置、顔画像登録装置および顔画像認証装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DPEN | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006513753 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005741463 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11596258 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2005741463 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 11596258 Country of ref document: US |