JP5529103B2 - Face direction detection method and information processing device - Google Patents


Info

Publication number
JP5529103B2
Authority
JP
Japan
Prior art keywords
face
face recognition
direction
image data
display
Prior art date
Legal status
Active
Application number
JP2011252153A
Other languages
Japanese (ja)
Other versions
JP2013109430A (en)
Inventor
享 下遠野
潤 杉山
英輝 柏山
Original Assignee
Lenovo (Singapore) Pte. Ltd. (レノボ・シンガポール・プライベート・リミテッド)
Priority date
Filing date
Publication date
Application filed by Lenovo (Singapore) Pte. Ltd.
Priority to JP2011252153A
Publication of JP2013109430A
Application granted
Publication of JP5529103B2

Description

  The present invention relates to a technique for detecting the direction of a face from an image of a person captured by a camera while reducing the probability of false positive recognition, and further relates to a technique for controlling power consumption based on the detected face direction.

  In the field of driving support technology for vehicles, techniques for recognizing the direction of the driver's face have been introduced. Patent Document 1 discloses a face image processing apparatus that detects the face direction of the driver of an automobile. It describes generating a group of face models corresponding to various orientations based on the driver's front face, comparing the face model group with the image captured by the camera, and detecting the face orientation by selecting the best-matching face model. Patent Document 2 discloses a technique for estimating the face direction by calculating the center of gravity of facial feature points, such as the mouth and eyes, detected from image data.

  Patent Document 3 discloses a technique for calculating a face probability using a histogram for each block in a window and generating a face probability value or a log-likelihood value. Patent Document 4 discloses a technique in which a digital camera determines the orientation of a face. It describes detecting feature points such as the eyes, nose, ears, eyebrows, mouth, and hair from the face image as contour information, and detecting movement of the line of sight to determine that the face has moved.

  Patent Document 5 discloses a mobile device that processes an image captured by a camera to detect the operator's face before accepting input from an input unit, thereby preventing input while the operator's face is not facing the display screen. Patent Document 6 discloses a transaction apparatus that uses face recognition of the user to determine whether a medium has been left behind. It describes judging that a face is present when the user is turned slightly sideways, and that no face is present when the upper body is turned sideways or backwards. Non-Patent Documents 1 to 3 disclose face recognition techniques based on the Viola-Jones method.

JP 2003-308533 A
International Publication WO 02/007095
JP-T-2006-508463
JP 2010-177859 A
JP 2011-91749 A
JP 2011-86002 A

Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," Accepted Conference on Computer Vision and Pattern Recognition, 2001
Akira Hidaka, "Research on a face tracking method by maximizing the discrimination score obtained from a face detector using rectangular features," University of Tsukuba Graduate School, February 2006
Shinji Hayashi and Osamu Hasegawa, "Face Detection from Low Resolution," Tokyo Institute of Technology graduate paper, Journal of the Institute of Image Electronics Engineers of Japan, Vol. 34, 2005

  Computers execute programs for long periods, either as batch processing such as image data rendering and statistical calculation, or as interactive processing responding to input from an input device operating on the screen shown on the display. A computer that runs batch processing automatically may show a progress screen on the display, while a computer performing interactive processing must display a screen on the display.

  However, while the computer is processing, the user is not always looking at the display. When reading a document placed in front of the computer, thinking while gazing into the distance, or talking with other people, the user's attention is diverted from the display. In both batch processing and interactive processing, stopping the display while the user is looking away causes no problem in using the computer. Even though each glance away is short, these intervals add up to a long time in total, so if the display can be stopped at such a fine granularity, power consumption can be reduced considerably.

  It has previously been considered to stop the display when the user's face is not in front of the camera; however, detecting that the user is looking away from the display while the face is still in front of the camera, and stopping the display in that state, was considered difficult. The main reason is that when the face direction is judged from the camera image, pattern recognition of analog facial feature information cannot sufficiently exclude so-called false positives, in which a user who is in fact looking at the display is judged to be looking away, so the display would be stopped at an unacceptable frequency while it is being viewed.

  Specifically, when the user rests a cheek on a hand while looking at the display, strokes the face with a hand, or adjusts the position of the glasses, pattern recognition judges that the eyes have been diverted. Furthermore, given that computer users are unspecified and numerous, a face recognition technique is needed that can detect the face direction of every user who might use the machine. Therefore, in order to control the operation of the display by judging the face direction, a method must be devised that reduces the probability of false positives and that does not hinder use of the computer even when a false positive does occur.

  Accordingly, an object of the present invention is to provide a method for controlling the power consumption of an information processing device by judging the direction of a face relative to the front direction of a display. It is a further object of the present invention to provide a method for detecting the face direction while reducing the probability of false positives. A further object of the present invention is to provide an information processing device, a face direction recognition device, and a computer program that realize such methods.

  In one aspect of the present invention, a method is provided for controlling power consumption in an information processing device capable of communicating with a camera module and a display. The camera module and the display may be physically coupled to or separate from the information processing device. First, the information processing device transitions to the power-on state. The information processing device judges the direction of a person's face from image data periodically received from the camera module, and in response to judging that the face direction has shifted away from the front of the display, the information processing device transitions from the power-on state to a low power state.

  The direction in which the face shifts may be left-right, up-down, or diagonal, as long as it is a direction away from the front. When the direction of the user's face has shifted from the front of the display, it can be assumed that the user has at least interrupted work that is done while looking at the display, so the device transitions to a low power state in which consumption is lower than in the power-on state.

  The low power state can be realized by stopping the backlight of the display, or by causing the main processor or a dedicated image processor housed in the information processing device to transition to a low power state. The information processing device can judge the direction of the face by calculating, from each frame of image data, a face recognition intensity value that takes its maximum when the face direction is a predetermined direction relative to the front of the camera module and decreases as the face shifts away from that direction.

  The probability of false positives can be reduced by comparing the current image data with past image data at each elapsed time and judging whether the amount of change per unit time of the face recognition intensity value calculated from the current image data exceeds a predetermined value. The face recognition intensity value changes more slowly when the user operating the information processing device turns the neck to shift the face away from the front than when the user touches the face while still facing the front. Exploiting this difference in change speed, the current image data is excluded from the face-direction judgment when the amount of change per unit time exceeds the predetermined value, thereby reducing the probability of false positives.
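As an illustration only (not part of the patent text), the exclusion rule just described can be sketched in Python. The class name, threshold, and units are hypothetical:

```python
class FaceIntensityFilter:
    """Excludes frames whose face recognition intensity changes too fast.

    A sudden change caused by a hand briefly covering the face alters the
    intensity value much faster than an actual head turn does, so frames
    whose change per unit time exceeds a threshold are excluded from the
    face-direction decision. Threshold and units are illustrative.
    """

    def __init__(self, max_change_per_second=1.5):
        self.max_rate = max_change_per_second
        self.prev = None  # (timestamp, intensity) of the last accepted frame

    def accept(self, timestamp, intensity):
        """Return True if this frame may be used for the direction decision."""
        if self.prev is None:
            self.prev = (timestamp, intensity)
            return True
        t0, v0 = self.prev
        dt = timestamp - t0
        rate = abs(intensity - v0) / dt if dt > 0 else float("inf")
        if rate > self.max_rate:
            # Change too fast: likely a hand over the face, not a head turn.
            return False
        self.prev = (timestamp, intensity)
        return True
```

A frame rejected here is simply skipped; the next frame is again compared against the last accepted one.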

  The predetermined direction in which the face recognition intensity value becomes maximum can be the front direction of the display, or a plurality of directions each shifted by a predetermined angle from the front direction of the display. Even when no face is recognized from the image data, the user may still be present if the input device is being operated, so a false positive may have occurred; for example, the information processing device may be being operated from an unnatural position outside the shooting range. In this case, use of the information processing device can be ensured by suspending the transition to the low power state while the input device continues to be operated within a predetermined time.

  When the input device is operated after the transition to the low power state, the device can return to the power-on state because the user is certainly using the information processing device. If the user operates the input device immediately after the face direction was detected and the device transitioned to the low power state, there is a high possibility that a false positive occurred and that the accuracy of face recognition was not sufficiently assured. In this case, it is desirable to suspend face-direction power control, maintaining the power-on state until it is judged that the face has continuously faced the front of the display for a predetermined time after the return to the power-on state, so that a stable face can be detected. Further, when it is judged that the face is facing the front direction of the display after the transition to the low power state, the device can return from the low power state to the power-on state.
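The state transitions described in the preceding paragraphs can be sketched as follows. This is an illustrative model with hypothetical names and a hypothetical stabilization period, not the patent's implementation:

```python
class DisplayPowerController:
    """Sketch of the power-state transitions described above.

    States: 'on' (backlight on) and 'low' (backlight off). A return to 'on'
    triggered by an input-device event suggests a false positive just
    occurred, so the controller suspends face-based control until the face
    has faced the display front continuously for `stabilize_secs`.
    """

    def __init__(self, stabilize_secs=5.0):
        self.state = "on"
        self.stabilize_secs = stabilize_secs
        self.suspended_until_stable = False
        self.front_since = None

    def on_face_direction(self, facing_front, now):
        if self.suspended_until_stable:
            # Control is suspended: wait for a continuously front-facing face.
            if facing_front:
                if self.front_since is None:
                    self.front_since = now
                elif now - self.front_since >= self.stabilize_secs:
                    self.suspended_until_stable = False
            else:
                self.front_since = None
            return
        if self.state == "on" and not facing_front:
            self.state = "low"   # face shifted away: stop the backlight
        elif self.state == "low" and facing_front:
            self.state = "on"    # face back to front: restore the backlight

    def on_input_event(self):
        if self.state == "low":
            # The user is certainly present: return to power-on and suspend
            # face-based control until a stable front face is observed.
            self.state = "on"
            self.suspended_until_stable = True
            self.front_since = None
```

The suspension after an input-driven return is what prevents a fresh false positive from immediately turning the backlight off again.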

  In another aspect of the present invention, a method is provided for judging the direction of a face relative to a camera from image data acquired from the camera while reducing the probability of false positives. Image data is acquired periodically from the camera, and for each frame a face recognition intensity value is calculated that takes its maximum when the face direction relative to the camera is a predetermined direction and decreases as the face shifts away from it. For each frame, it is judged whether the amount of change per unit time of the face recognition intensity value is at or below a predetermined value, and the face direction is determined by comparing the face recognition intensity value calculated from image data satisfying this condition with a threshold value.

  The face recognition intensity value can be configured to have a characteristic with a single peak centered on the front direction of the camera, or a characteristic with a plurality of peaks. When it has a plurality of peaks, it can be configured to include a peak when the face is turned by a predetermined angle in the horizontal direction and a peak when the face is turned by a predetermined angle in the vertical direction. Various controls that differ depending on the predetermined direction of the face can then be performed.

  According to the present invention, it is possible to provide a method for controlling the power consumption of an information processing device by judging the face direction with respect to the front direction of the display. Furthermore, according to the present invention, it is possible to provide a method for detecting the face direction while reducing the probability of false positives. Furthermore, according to the present invention, an information processing device, a face direction recognition device, and a computer program that realize such methods can be provided.

FIG. 1 is a functional block diagram showing the hardware and software configuration of the notebook PC 10 according to the present embodiment.
FIG. 2 is a diagram explaining the method of calculating the face recognition intensity absolute value.
FIG. 3 is a diagram explaining the method of calculating the face recognition intensity absolute value.
FIG. 4 is a diagram explaining the procedure by which the strong classifier 250 calculates the face recognition intensity absolute value from the image data of a recognition target captured by the camera module 11.
FIG. 5 is a diagram schematically showing how the face recognition intensity relative value changes with the direction of the face.
FIG. 6 is a diagram explaining the method of reducing the probability of false positives when recognizing the direction of the face.
FIG. 7 is a flowchart showing the procedure by which the face recognition unit 15 judges whether an image captured by the camera module 11 is a front face (range X), an oblique face (range Y), or a non-face (range Z).
FIG. 8 is a flowchart showing the procedure for judging the direction of the face and controlling the power consumption of the notebook PC 10.
FIG. 9 is a state transition diagram for judging the direction of the face and controlling the power consumption of the notebook PC.
FIG. 10 is a diagram explaining the configuration of the strong classifier 500 in which a front face and two oblique faces each output the maximum face recognition intensity absolute value.
FIG. 11 is a diagram showing the characteristics of the face recognition intensity absolute value output by another strong classifier.

[Computer configuration]
FIG. 1 is a functional block diagram showing the hardware and software configuration of a notebook personal computer (notebook PC) 10 according to the present embodiment. In FIG. 1, the camera module 11, the input device 13, the display 25, and the backlight attached to the display are hardware; the other elements are realized by cooperation between hardware resources, such as the processor and main memory, and software resources. The software resources include known programs such as an OS and device drivers, as well as the novel face detection program and power control program according to the present embodiment.

  The camera module 11 includes an image sensor, an image signal processor, a USB interface, and the like. The camera module 11 transfers image data in a format such as VGA (640 × 480), QVGA (320 × 240), WVGA (800 × 480), or WQVGA (400 × 240) to the face recognition unit 15, at a transfer rate of 30 fps as an example. As will be described later, in this embodiment the power consumption of the notebook PC 10 is controlled by detecting the direction of the user's face and turning the backlight 26 of the display 25 on or off. Image transfer continues even in the low power state with the backlight 26 turned off.

  The camera module 11 is attached to the top of the housing that houses the display 25, so that when the user's face is facing the front direction of the display 25, the face direction with respect to the camera module 11 is also substantially frontal. However, physically attaching the camera module 11 to the notebook PC 10 is not an essential element of the present invention; the camera module 11 may instead be arranged in the front direction of the display 25 and connected to the notebook PC 10 by a wireless interface. The front direction of the camera module 11 and the front direction of the display 25 as seen from the user differ slightly in the vertical direction, but in this specification they are not distinguished.

  The input device 13 is a device on which the user performs input operations, such as a keyboard, a touch panel, or a pointing device. The input device 13 accepts input even when the notebook PC 10 is in the low power state. The face recognition unit 15 detects the face direction by calculating the face recognition intensity absolute value from the image captured by the camera module 11, or by calculating the face recognition intensity relative value from the absolute value. The face recognition intensity absolute value and relative value will be described later.

  The face recognition unit 15 provides a registration function that registers, in the face image data registration unit 17, the maximum face recognition intensity absolute value obtained from the camera module 11 for normalization; a face direction detection function that detects the face direction relative to the display 25 from the face recognition intensity relative value; and a power control function that controls the operation of the backlight 26 by instructing the power control unit 23 according to the detected face direction and the operation of the input device 13.

  The face image data registration unit 17 registers the face recognition intensity absolute value, calculated by the face recognition unit 15, of a user who is operating the input device 13 while looking at the display 25. The registered value is close to the maximum for that user's face image. When the face direction is recognized, the maximum face recognition intensity absolute value is used to obtain the face recognition intensity relative value by normalizing absolute values that differ with individual differences and image brightness.
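As a minimal sketch of the normalization just described (the function name and the clamping to 1.0 are assumptions, not from the patent):

```python
def relative_intensity(absolute_value, registered_max):
    """Normalize a face recognition intensity absolute value to [0, 1].

    `registered_max` is the per-user maximum registered while the user was
    demonstrably looking at the display (e.g. while operating a GUI object).
    Dividing by it removes individual differences and lighting conditions
    from the comparison against a common threshold.
    """
    if registered_max <= 0:
        raise ValueError("registered maximum must be positive")
    return min(absolute_value / registered_max, 1.0)
```

The resulting relative value can then be compared with a threshold that is the same for every user.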

  The GUI object monitoring unit 19 notifies the face recognition unit 15 of the timing at which the user operates the input device 13 to access an object displayed on the display 25, in order to obtain the maximum face recognition intensity absolute value. The input device monitoring unit 21 notifies the face recognition unit 15 of window messages generated from input events when the input device 13 is operated. The power control unit 23 turns the backlight 26 of the display 25 on or off according to instructions from the face recognition unit 15.

[Face recognition intensity absolute value]
Next, the method by which the face recognition unit 15 generates the face recognition intensity absolute value from image data will be described. The face recognition intensity absolute value is a probabilistic characteristic value indicating the degree to which feature information extracted from the image captured by the camera module 11 matches feature information extracted from image data of a plurality of sample faces. The face recognition intensity absolute value is maximal when the face direction is the reference direction, and decreases as the face shifts from the reference direction in any direction: left-right about the vertical axis of the head, up-down about the horizontal axis of the head, or diagonally. As an example, the reference direction can be the front direction of the camera module 11.

  As an example, the face recognition intensity absolute value can be calculated using the Viola-Jones method. FIGS. 2 and 3 are diagrams explaining the method of calculating the face recognition intensity absolute value. FIG. 2A illustrates the structure of the cascaded strong classifier 100 included in the face recognition unit 15. FIG. 2B explains the four rectangular features constituting each of the weak classifiers C1 to Cn. FIG. 2C shows identification patterns 200 composed of the rectangular feature 101a.

  The strong classifier 100 is configured by cascading n stages of weak classifiers C1 to Cn. Each of the weak classifiers C1 to Cn judges whether the input image data is a face (T: True) or a non-face (F: False). If it is judged F, the image data is discarded. When the final-stage weak classifier Cn judges T, the strong classifier 100 judges that the image data input to the weak classifier C1 contains a face.

  When the image data input to the weak classifier C1 is discarded by any of the weak classifiers C1 to Cn-1, or when the weak classifier Cn recognizes a face, processing ends at that point and the face recognition intensity absolute value is calculated. The later the stage an image passes, the higher the probability that it contains a face. As will be described later, the face recognition unit 15 calculates the face recognition intensity absolute value from the face recognition probabilities held by the weak classifiers through which the input image data has passed, regardless of whether the strong classifier 100 finally recognized a face.
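The cumulative calculation described here can be sketched as follows. The callable-based interface is a hypothetical simplification of the cascade; the per-stage probabilities correspond to the γ1 to γn described later:

```python
def face_recognition_intensity(window, weak_classifiers, probabilities):
    """Run a cascade over one sub-window and return its intensity.

    `weak_classifiers` are callables returning True (face-like) or False,
    and `probabilities` are the per-stage face recognition probabilities
    (gamma_1..gamma_n obtained from the sample images). The intensity is
    the running sum of gamma for every stage the window passes, whether or
    not the final stage is reached.
    """
    intensity = 0.0
    for classify, gamma in zip(weak_classifiers, probabilities):
        if not classify(window):
            break           # window rejected: stop, keep the sum so far
        intensity += gamma  # stage passed: accumulate its probability
    return intensity
```

A window rejected at the first stage thus yields zero, while one that passes all stages yields the maximum possible sum.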

  The weak classifiers C1 to Cn use rectangular features 101 to judge, from the luminance of regions set in the recognition target image, whether the image contains a human face. Each of the rectangular features 101a to 101d consists of small rectangular areas shown in black and white. Each weak classifier applies a rectangular feature 101 to a small window obtained by dividing the recognition target image, calculates the difference between the sum of the gradation values in the black small rectangular areas and the sum of the gradation values in the white small rectangular areas, compares this difference with a threshold learned and set in advance, and outputs T or F. When applied to a small window, a rectangular feature 101 has four degrees of freedom: the number of small rectangular areas, their arrangement, their coordinates, and their size.
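A rectangular feature of this kind is typically evaluated in constant time using an integral image (summed-area table), the standard device in the Viola-Jones method. The following sketch assumes a two-rectangle left/right layout and is illustrative only, not the patent's code:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def rect_sum(ii, y, x, h, w):
    """Sum of pixel values in the h-by-w rectangle at (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """Difference between a white left half and a black right half,
    one of the four Viola-Jones rectangle layouts."""
    half = w // 2
    white = rect_sum(ii, y, x, h, half)
    black = rect_sum(ii, y, x + half, h, half)
    return white - black
```

A weak classifier would compare such a difference against its learned threshold and output T or F accordingly.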

  In the Viola-Jones method, to enable high-speed calculation, the number and arrangement of the small rectangular areas are limited to the four patterns shown as rectangular features 101a to 101d, and the small rectangular areas within a feature have the same size. The face recognition algorithm constituting the weak classifiers C1 to Cn can therefore choose the two remaining degrees of freedom: the size and coordinates of the rectangular features 101a to 101d within the small window. FIG. 2C illustrates four identification patterns 200a to 200d for the rectangular feature 101a, differing in position and size when applied to a small window; the number of identification patterns is not limited to four.

  A unique face recognition algorithm is obtained by choosing the identification patterns 200 of the rectangular features 101 and their number so that face recognition is as accurate as possible. Identification patterns are created similarly for the rectangular features 101b to 101d, and each identification pattern is assigned to one weak classifier. If five identification patterns are created from each of the four rectangular features 101a to 101d, a total of 20 identification patterns can be formed, so a strong classifier 100 comprising 20 weak classifiers can be created.

  The identification pattern 200 assigned to each of the weak classifiers C1 to Cn can be determined using learning images of a predetermined size. The learning images consist of a plurality of face images and a plurality of non-face images. Each identification pattern is applied to every learning image to set a threshold and calculate an error rate. The identification pattern with the minimum error rate over the input learning images is chosen as the identification pattern used by the first-stage weak classifier C1. The learning images judged T by the weak classifiers C1 to Cn include face images recognized correctly and non-face images recognized as false positives.

  Let α1 to αn be the ratio of face images judged T to the total number of input face images, and β1 to βn the ratio of non-face images judged T to the total number of input non-face images. β1 to βn correspond to false positive probabilities, and the error rate is calculated from them. The identification pattern used by the second-stage weak classifier C2 is chosen to minimize the error rate over the learning images that passed the first-stage weak classifier C1. The same procedure is then repeated to add n weak classifiers in series.

  Next, the strong classifier 100 created in this way is applied to a plurality of face images (sample images) selected to cover the characteristics of people who might use the computer 10, such as age, gender, race, glasses, hairstyle, and hair color. As an example, all sample images are front faces facing the front direction of the camera. For each weak classifier C1 to Cn, the number of sample images it recognizes as a face (T), divided by the number of sample images it received from the previous weak classifier, is assigned to that classifier as its face recognition probability γ1 to γn.

  FIG. 3 illustrates the configuration of the strong classifier 250, in which the face recognition probabilities γ1 to γn are assigned to the weak classifiers C1 to Cn. FIG. 4 illustrates the procedure by which the strong classifier 250 calculates the face recognition intensity absolute value from the recognition target image 151 captured by the camera module 11. As an example, the resolution of the image 151 is 320 × 240 pixels. The face recognition unit 15 divides the pixel area of the image 151 into 16 small windows 155 of the same size and applies the strong classifier 250 to each small window in turn.

  The size of the small window 155 can be set from a minimum size based on the size of a recognizable face (for example, 32 × 32 pixels in view of the image resolution) up to a maximum size in which the entire image 151 is one small window. Although FIG. 4 shows the small windows 155 dividing the image 151 into 20 planes, when the strong classifier 250 is applied, each small window 155 is positioned over the entire image 151 so as to overlap, shifting in units of one pixel or several pixels. Thus, with 32 × 32 pixel windows shifted one pixel at a time vertically and horizontally, there are 289 positions in the horizontal direction and 209 in the vertical direction, 60401 in total.
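The window-position count above follows from simple arithmetic; a small helper (illustrative only) reproduces it:

```python
def window_count(img_w, img_h, win, stride=1):
    """Number of win-by-win sub-window positions when sliding by `stride`.

    Along each axis there are (dimension - win) // stride + 1 positions.
    """
    nx = (img_w - win) // stride + 1
    ny = (img_h - win) // stride + 1
    return nx, ny, nx * ny

# For a 320 x 240 image and 32 x 32 windows shifted one pixel at a time:
# 289 horizontal positions, 209 vertical positions, 60401 in total.
```

Doubling the stride to 2 pixels already cuts the total to 145 × 105 = 15225 positions, which motivates the stride adjustments described below.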

  The size of the small window 155 can be increased sequentially. As its size grows, the total number of positions of the small window 155 on the image 151 at each size decreases, until finally the entire image 151 becomes a single small window. The increment as the small window 155 grows from the minimum size need not be one pixel; in one example, the size is increased by a factor of 1.25 per dimension.

  Furthermore, considering the shift amount relative to the size of the small window 155, the number of pixels shifted can be varied with the window size. For example, when the small window 155 reaches about twice the minimum size, the number of pixels shifted is increased from 1 to 2; as the size grows further, the shift is increased so as to stay roughly proportional to the window size. This limits the growth in the number of small-window positions.
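One possible size-and-stride schedule matching this description can be sketched as follows. Only the 1.25× growth and the proportional-stride idea come from the text; the exact policy and names are assumptions:

```python
def window_sizes_and_strides(min_size, max_size, scale=1.25):
    """Enumerate window sizes growing by `scale` per step, with a stride
    kept roughly proportional to the window size (illustrative policy)."""
    schedule = []
    size = float(min_size)
    while size <= max_size:
        win = int(round(size))
        # Stride grows with the window: 1 px near the minimum size,
        # 2 px once the window is about twice the minimum, and so on.
        stride = max(1, win // min_size)
        schedule.append((win, stride))
        size *= scale
    return schedule
```

For a 32-pixel minimum and a 100-pixel maximum this yields windows of 32, 40, 50, 62, 78, and 98 pixels with strides of 1, 1, 1, 1, 2, and 3 pixels respectively.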

  The weak classifiers C1 to Cn-1 apply the identification patterns 200 to small windows 155 of various sizes located at various coordinates on the image 151 and calculate the gradation-value differences in turn. If a small window 155 is judged T, processing passes to the next weak classifier; if any small window 155 is judged F, processing for that window ends at that point. Processing also ends when the window passes the last weak classifier Cn. Subsequently, while gradually increasing the size of the small window 155, each of the weak classifiers C1 to Cn-1 similarly applies the identification patterns 200 to the small windows 155 at various coordinates. When every judgment and passing process has completed, the face recognition unit 15 calculates the face recognition intensity absolute value.

  Finally, the size of the small window 155 reaches the size of the image 151, and the judgment and passing process is completed. The face recognition intensity absolute value of each small window is calculated as the total of the face recognition probabilities held by the weak classifiers it passed. For example, a window that passes through the weak classifier C4 has a face recognition intensity absolute value of γ1 + γ2 + γ3 + γ4, the sum of the probabilities held by the weak classifiers C1 to C4. The small window holding the maximum face recognition intensity absolute value among all small windows 155, provided that value exceeds a certain threshold, generally corresponds in size and coordinate position to the face image to be recognized. This maximum is the face recognition intensity absolute value detected in the image 151.

  The face recognition intensity absolute value is zero when the image data does not pass the first-stage weak classifier C1 and reaches its maximum when the image data passes the last-stage weak classifier Cn; the more weak classifiers passed, the larger the value. In other words, with the front face as the reference, the more of the weak classifiers C1 to Cn an image passes, the more face-like it can be said to be. However, even when the final weak classifier Cn determines T, a probability of positive misrecognition remains in the final result.

  When the learning images and sample images are front faces, the face recognition intensity absolute value is maximal when the face points in the front direction of the camera module 11, and becomes smaller as the face turns in the vertical, horizontal, or diagonal direction and the front-face feature information in the image data decreases. The strong classifier 250 thus outputs a face recognition intensity absolute value that peaks for the front face and decreases as the front-face features diminish. In the present embodiment, the direction of the face relative to the front direction of the display 25 is determined using the characteristic that the face recognition intensity absolute value decreases as the deviation of the user's face from the front direction increases.

[Relative value of face recognition intensity]
The face recognition intensity absolute value varies among individuals and also changes with the brightness of the image captured by the camera module 11. When the operation of the display 25 is controlled using this absolute value, neither the user nor the brightness of the captured image can be restricted. In the present embodiment, the face recognition intensity absolute value is therefore normalized, eliminating the influence of individual differences and image brightness so that the face direction can be recognized by a common algorithm.

  In the present embodiment, normalization can be performed without a user's conscious operation. When the user starts using the notebook PC 10, the user accesses the object displayed on the display. Specifically, the user moves the cursor displayed on the display 25 and clicks the target object, or inputs a character at the cursor position.

  In most cases, since the user accessing these objects is gazing at the display 25, the image taken by the camera module 11 at that timing can be regarded as the front face of the user. The GUI object monitoring unit 19 monitors an input event generated by the input device 13 for a predetermined time. The GUI object monitoring unit 19 captures a window message corresponding to the input event and notifies the face recognition unit 15 of the window message.

  The camera module 11 transfers the image data at a timing instructed by the face recognition unit 15 until preparation for normalization is completed. Each time the face recognition unit 15 receives a window message, it instructs the camera module 11 to acquire one frame of image data. The face recognition unit 15 calculates a face recognition intensity absolute value from the received image data. Since the image captured at the timing when the GUI object monitoring unit 19 detects the input event is a front face, the calculated absolute value of the face recognition intensity is close to the maximum value given to the user by the strong classifier 250.

  The face recognition unit 15 calculates the absolute value of the face recognition intensity from the image data acquired at the timings of a plurality of input events, and further calculates the average value. Further, the face recognition unit 15 calculates the average luminance value of the entire image 151 and the average luminance value of the face area from each image data, and further calculates the average luminance value for each image. The face area for which the average luminance value is calculated can be the area of the small window 155 having the size and coordinates from which the maximum face recognition intensity absolute value is obtained in the image 151. In the following description, the average value of the registered face recognition intensity absolute values is simply referred to as the face recognition intensity absolute value, and the average luminance value of the entire image or the average luminance value of the face area is simply referred to as the luminance value. The face recognition unit 15 records the face recognition intensity absolute value and the luminance value as a set in the face image data registration unit 17. At this point, preparation for normalization is completed, and thereafter, the camera module 11 periodically transfers image data to the face recognition unit 15.

  Note that the luminance value of images of the user of the notebook PC 10 changes over time. During the period from power-on to shutdown of the notebook PC 10, whenever the luminance value changes by more than a predetermined amount or a positive misrecognition occurs, the face recognition unit 15 registers, at the timing of an input event generated by the input device 13, a new face recognition intensity absolute value and luminance value in the face image data registration unit 17. The face recognition unit 15 normalizes by dividing the face recognition intensity absolute value calculated from the image 151 whose face direction is to be recognized by the face recognition intensity absolute value registered in the face image data registration unit 17. For this division, the registered absolute value paired with the luminance value closest to the luminance value of the recognition-target image 151 is used. The quotient of this division is the face recognition intensity relative value.
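  A minimal sketch of this normalization step, assuming the registered data are simply kept as (absolute value, luminance) pairs; the function name and data layout are illustrative, not taken from the patent.

```python
def relative_intensity(abs_value, luminance, registered):
    """Normalize a face recognition intensity absolute value.

    registered: list of (absolute value, luminance) sets recorded at
    input-event timings. The pair whose luminance is closest to the
    current image's luminance is used as the divisor.
    """
    ref_abs, _ = min(registered, key=lambda pair: abs(pair[1] - luminance))
    return abs_value / ref_abs  # close to 1.0 for a front face
```

Because the registered value is close to the user's maximum under similar lighting, the quotient stays near 1.0 for a front face and falls as the face turns away.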

  The set of the face recognition intensity absolute value and the brightness value registered in the face image data registration unit 17 is deleted when the notebook PC 10 is in the power-off state. The face recognition unit 15 newly stores a set of face recognition intensity absolute value and luminance value in the face image data registration unit at the timing of an input event every time the notebook PC 10 is in a power-on state. When the same user uses the notebook PC 10, it may be used for power control from the next time without deleting the absolute value of the face recognition intensity.

  FIG. 5 is a diagram schematically showing how the face recognition intensity relative value changes with the face direction. The horizontal axis in FIG. 5 is the angle θ of the user's face rotating horizontally about the vertical axis, and the vertical axis is the face recognition intensity relative value. At the zero-angle position the user faces the front direction of the display 25, and the face recognition intensity relative value P takes its maximum value. The relative value decreases as the face turns away from the zero-angle direction, whether to the left or to the right. |θ| < θ1 indicates the range X in which the user is determined to be looking at the display 25. A face pointing within the range X is referred to as a front face. The line of sight referred to here is not the direction of the eyeballs but the direction of the face itself.

  θ1 ≤ |θ| < θ2 indicates the range Y in which it is determined that the user's face is present in front of the display 25 but the line of sight is away from the display 25. A face pointing within the range Y is referred to as an oblique face. |θ| ≥ θ2 indicates the range Z in which no face is determined to exist, because the face recognition intensity relative value is very small: the user is absent from the front of the camera module 11 or faces completely sideways or backward. A face pointing within the range Z, including the case where no face exists, is referred to as a non-face.

  The method of calculating the face recognition intensity relative value has so far been described using the Viola-Jones method, but the present invention is not limited to it. The rectangular features of the cascaded weak classifiers therefore need not be limited to the four patterns in the figure. As described below, in order to reduce the probability of positive misrecognition, it is desirable that the amount of change in the face recognition intensity relative value per unit time be small when the face transitions from a front face to an oblique face.

  In other words, if the relative value falls along a gentle curve as the front face changes to an oblique face, the probability of positive misrecognition is easier to reduce. The face recognition probabilities γ1 to γn of the weak classifiers or the identification pattern 200 can therefore be selected so that the face recognition intensity relative value changes gently. The present invention can also employ other recognition methods that quantify face-likeness, which is an analog quantity, or that can use information indicating face-likeness as information indicating the direction of the face.

  As an example, the degree to which the face points in a given direction can be judged from the ratios of distances connecting principal parts that define the left and right ends of the face, such as the ends of the eyebrows, the ears, the eyes, and the mouth. It is also possible to calculate the face recognition intensity absolute value and relative value from the total of scores obtained by weighting each part of interest. In addition, by combining the face recognition intensity relative values obtained from several recognition methods, a method with a lower error rate can be constructed by exploiting the highly accurate part of each method.

[Suppression of positive recognition errors]
When the direction of the face is detected and the operation of the backlight 26 is controlled accordingly, practical use is difficult unless the probability of positive misrecognition is reduced, that is, the probability of recognizing that the user faces the range Y or the range Z (is looking away) even though the user is actually looking at the display 25 from the front. Next, a procedure for reducing this probability is described. FIG. 6 is a diagram for explaining a method of reducing the probability of positive misrecognition when recognizing the face direction. FIG. 6 plots, against time, the face recognition intensity relative value calculated from each frame of image data received from the camera module 11 at a constant frame rate. FIGS. 6A to 6C plot both the actual measured value of the face recognition intensity relative value calculated from each frame and the moving average calculated from three actual measurements: the current one and the two preceding it.

  In FIGS. 6A to 6C, the mark of the actual measurement value is outlined and the mark of the moving average value is filled. The moving average value is intended to further reduce the probability of positive misrecognition by eliminating variations in individual measured values. FIG. 6 (A) shows a situation when the user naturally faces the lateral direction from the front direction, and FIG. 6 (B) shows a situation when the user touches a part of the face while facing the front direction. FIG. 6C shows a situation in which the user suddenly stands up and no face appears in the camera module 11. The threshold value W1 corresponds to the angle ± θ1 in FIG. 5 which is the boundary between the front face and the oblique face. The threshold value W2 corresponds to the angle ± θ2 in FIG. 5 that is the boundary between the oblique face and the non-face.

  FIG. 7 is a flowchart showing the procedure by which the face recognition unit 15 determines whether the image captured by the camera module 11 is a front face (range X), an oblique face (range Y), or a non-face (range Z). In the following procedure, the face recognition unit 15 calculates a face recognition intensity relative value from the image data periodically received from the camera module 11 and determines which of the three the face direction is, while reducing the probability of positive misrecognition. In block 301, the face recognition unit 15 acquires one frame of image data from the camera module 11, and in block 303 it calculates the actual measured value and the moving average value of the face recognition intensity relative value.

  In block 305, as an example, Δt is three frame-transfer periods, ΔP is the difference between the moving average three samples earlier and the current moving average, and the amount of change ΔP/Δt of the face recognition intensity relative value P per unit time Δt is calculated. When the change amount ΔP/Δt exceeds a predetermined value, the process proceeds to block 307, the current face recognition intensity relative value is discarded without being used to determine the face direction, and the process returns to block 301. A main cause of positive misrecognition when judging the face direction from the magnitude of the face recognition intensity relative value is the drop in the relative value that occurs when the user touches a part of the face, as described above. If the user is thinking while looking at the display 25 with a hand placed on part of the face, the face remains in the image but the face recognition intensity relative value may fall into the range Y; turning off the backlight 26 in that situation is not desirable.

  In the present embodiment, the probability of positive misrecognition is reduced by exploiting the fact that when a hand touches the face, or when the user suddenly stands up and the face leaves the shooting range of the camera module 11, the drop in the face recognition intensity relative value occurs over a shorter time than when the face is turned by rotating the neck. If ΔP/Δt is smaller than the predetermined value, the face recognition unit 15 determines that the current change in the face recognition intensity relative value is valid and proceeds to block 309. In the determination of block 305, the face recognition unit 15 adopts the change in the face recognition intensity relative value corresponding to the moving average value 171 in FIG. 6A as data for determining the face direction, and discards the changes corresponding to the moving average value 173 in FIG. 6B and the moving average value 175 in FIG. 6C.

  That is, a sudden change does not cause the state transitions of FIG. 9, described later. By not reacting to the rapid changes that often accompany positive misrecognition, its occurrence itself is prevented. With this method, looking away while turning the neck abruptly is not recognized as legitimate looking away. However, from normal user behavior, the frequency of such rapid rotation is judged to be sufficiently smaller than the frequency of sudden changes in the face recognition intensity relative value caused by unconsciously placing a hand on the face. Admittedly, not recognizing an abrupt head turn as looking away forfeits some of the power saving opportunities described later, but considering the frequency of occurrence, the benefit obtained by reducing positive misrecognition is large enough to outweigh the loss.

  In block 309, it is determined whether the current moving average value P satisfies P > W1. When the change amount ΔP/Δt is equal to or smaller than the threshold value and P > W1, as with the current moving average value 177, the process proceeds to block 311, the face recognition unit 15 determines that the face points in the range X, and the process returns to block 301 to wait for the next image data. When P > W1 does not hold, the process proceeds to block 313, and it is determined whether the current moving average value P satisfies W2 < P ≤ W1.

  When the change amount ΔP/Δt is equal to or smaller than the threshold value and W2 < P ≤ W1, as with the current moving average value 179, the process proceeds to block 315, the face recognition unit 15 determines that the face points in the range Y, and the process returns to block 301 to wait for the next image data. When ΔP/Δt is equal to or smaller than the threshold value and P ≤ W2, as with the current moving average value 181, the process proceeds to block 317, the face recognition unit 15 determines that the face points in the range Z, and the process returns to block 301 to wait for the next image data.
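  The determination of FIG. 7 can be sketched as follows. The thresholds W1 and W2 and the change limit are illustrative constants, not the embodiment's actual parameters.

```python
from collections import deque

def classify_stream(relative_values, w1=0.8, w2=0.4, max_delta=0.3):
    """Yield 'X' (front), 'Y' (oblique), 'Z' (non-face) or None (discarded)
    for each face recognition intensity relative value in the stream."""
    window = deque(maxlen=3)    # three measurements for the moving average
    averages = deque(maxlen=4)  # current moving average plus three previous
    for p in relative_values:
        window.append(p)
        avg = sum(window) / len(window)
        averages.append(avg)
        # blocks 305/307: discard samples whose change per unit time is too fast
        if len(averages) == 4 and abs(averages[-1] - averages[0]) > max_delta:
            yield None
        elif avg > w1:
            yield 'X'           # block 311: front face
        elif avg > w2:
            yield 'Y'           # block 315: oblique face
        else:
            yield 'Z'           # block 317: non-face
```

A sudden drop, such as a hand covering the face, exceeds the change limit and is discarded rather than misclassified as looking away.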

[Power control procedure]
FIG. 8 is a flowchart showing a procedure for controlling the operation of the backlight 26 by determining the direction of the face, and FIG. 9 is a state transition diagram at that time. The numbers described in the states or transitions between the states in FIG. 9 correspond to the block numbers in FIG. In FIG. 9, a state 451 indicates a state in which the backlight 26 is turned on. This state corresponds to a state in which the face recognition unit 15 recognizes that the face recognition intensity relative value calculated from the image data received from the camera module 11 falls within the range X. In state 451, since the user's face direction is recognized as the front direction of the display 25, the backlight 26 is turned on.

  The state 453 is a state in which the face recognition unit 15 recognizes that the face recognition intensity relative value is in the range Z, but the backlight 26 of the display 25 remains lit because the user is operating the input device 13 in front of the notebook PC 10. The state 453 keeps the backlight 26 on, giving priority to the fact that the user is actually operating the input device 13 even though the face recognition unit 15 detects a non-face. The state 453 occurs when the user operates the notebook PC 10 from an unnatural position outside the shooting range of the camera module 11.

  A state 455 indicates a state in which the face recognition unit 15 recognizes that the face recognition intensity relative value is within the range Y. In state 455, since the face direction of the user deviates from the front direction of the display 25, the backlight 26 is turned off. In the state 457, the face recognition unit 15 recognizes that the relative value of the face recognition intensity is in the range Z. However, unlike the state 453, there is no input from the input device 13, so it is determined that the user does not exist in front of the notebook PC 10. Then, the backlight 26 is turned off.

  The state 459 is a state in which, after the face recognition unit 15 has determined from the face recognition intensity relative value that the face direction is in the range Y (state 455) or the range Z (state 457) and the backlight 26 has been turned off, the user operates the input device 13. A backlight 26 turned off on the basis of the face recognition intensity relative value is turned on again when the input device 13 is operated. In general, when a positive misrecognition occurs (a misrecognition that the user is looking away when not), the state 451 erroneously transitions to the state 455 or 457 and the backlight is turned off. When the user, expecting the backlight 26 to come back on, performs input from the input device 13 in response to this erroneous operation, a transition to the state 459 follows.

  In FIG. 8, when the notebook PC 10 is powered on in block 401 and the device starts operating, the backlight 26 of the display 25 is turned on. Each time the user operates the input device 13, the face recognition unit 15 registers a set of the face recognition intensity absolute value and the luminance value in the face image data registration unit 17 in cooperation with the GUI object monitoring unit 19. The registered face recognition intensity absolute value is the maximum value, or a value close to it, for that user under that luminance. In block 402, the face recognition unit 15 normalizes the face recognition intensity absolute value calculated from the image data acquired from the camera module 11 with the registered face recognition intensity absolute value to obtain the face recognition intensity relative value, and further calculates its moving average.

  In block 403, the face recognition unit 15 processes the image data of one frame received from the camera module 11 according to the procedure shown in FIG. 7 and determines the face direction with respect to the front direction of the display 25. When the face recognition unit 15 recognizes a non-face because the face recognition intensity relative value is within the range Z, the process proceeds to block 421. In block 421, the input device monitoring unit 21 monitors an input event of the input device 13, and in block 423, the face recognition unit 15 monitors a time interval at which the input event occurs.

  The input device monitoring unit 21 notifies the face recognition unit 15 every time it detects an input event. In blocks 421 and 423, as long as input events are received within a predetermined time interval, the face recognition unit 15 returns to block 402 to calculate a new face recognition intensity absolute value while keeping the backlight 26 lit. This procedure corresponds to not turning off the backlight 26 when, even though the face recognition unit 15 has determined a non-face, the user is operating the input device 13 in an unnatural posture or is temporarily looking away outside the shooting range of the camera module 11.

  When the face recognition unit 15 receives no input event from the input device monitoring unit 21 for a predetermined time in block 423, the process proceeds to block 407, and the face recognition unit 15 instructs the power control unit 23 to turn off the backlight 26. In block 407, the power control unit 23 that has received the instruction turns off the backlight 26. This procedure corresponds to acting on the determination that the non-face detected by the face recognition unit 15 is accurate. When moving from block 421 to block 402, the recognition accuracy may have decreased, so the face recognition unit 15 may newly register a set of the face recognition intensity absolute value and the luminance value in the face image data registration unit 17 and use it to normalize image data processed thereafter.

  Returning to block 403, when the face recognition unit 15 recognizes that the face recognition intensity relative value is not in the range Z, that is, it is in the range X or Y, the process proceeds to block 405. When the face recognition unit 15 recognizes in block 405 that the relative value is in the range X, the process returns to block 402. This procedure corresponds to keeping the backlight 26 on when the face recognition unit 15 recognizes that the current image data contains a front face, and means that the system remains in the state 451.

  When the face recognition unit 15 recognizes in block 405 that the face recognition intensity relative value is in the range Y, the process proceeds to block 407. This procedure corresponds to turning off the backlight 26 of the display 25, which is not needed, when the face recognition unit 15 determines that the current image data contains an oblique face. In the present embodiment, as shown in the procedures of blocks 403 to 407 and blocks 421 and 423, the backlight 26 is turned off only when the face recognition unit 15 recognizes an oblique face, or when it recognizes a non-face and the absence of the user is confirmed through monitoring of input events from the input device 13.

  In block 408, following block 407, even after the backlight 26 is turned off, the face recognition unit 15 continues to receive new image data and recognize the direction of the face, and the input device monitoring unit 21 continues its input-event detection operation. When an input event is detected in block 409, the input device monitoring unit 21 notifies the face recognition unit 15 of the corresponding window message, and the process proceeds to block 431. In block 431, the face recognition unit 15 instructs the power control unit 23 to turn on the backlight 26. This procedure reflects the judgment that if the input device 13 is operated in block 408, regardless of the face direction, the user needs the display 25, so the display 25 is made usable.

  Since the user normally operates the input device 13 while gazing at the display 25, a transition from block 409 to block 431 while the range Y or the range Z is being recognized suggests that the accuracy of the face recognition unit 15 is insufficient, and it is highly likely that the face recognition unit 15 made a positive misrecognition in block 408. In other words, when the backlight 26 is turned off while the user is gazing at the display, the user immediately operates the input device 13 to turn the backlight back on.

  As described with reference to FIGS. 6 and 7, the face recognition unit 15 incorporates an algorithm that suppresses positive erroneous recognition, but corresponds to the absolute value of the face recognition intensity registered in the face image data registration unit 17. When the difference between the luminance value and the luminance value of the image data acquired during recognition increases, or the user at the time of registration changes, the recognition accuracy may decrease. When a positive misrecognition occurs due to such a cause, the backlight 26 is turned on in a block 431 with priority given to user workability. This operation corresponds to the transition from the state 455 or 457 to the state 459 in FIG.

  In block 433, following block 431, the face recognition unit 15 determines whether the range X has been recognized continuously for a predetermined time. When it is recognized that the user has faced the front direction of the display 25 continuously for a certain period, the recognition accuracy is judged to be stable, and the process returns to block 402. In this case, recognition accuracy can be improved by newly registering a set of the face recognition intensity absolute value and the luminance value in the face image data registration unit 17 and using it to normalize newly acquired image data. On the other hand, if the user operated the input device 13 in block 409 and the range Y or the range Z is recognized within the predetermined time in block 433, it is determined that a positive misrecognition occurred (the user was erroneously determined to be looking away when not), and the backlight 26 is kept lit.

  When the input device 13 is not operated in block 409, the process moves to block 411, and the face recognition unit 15 determines whether the face recognition intensity relative value calculated from the received image data is in the range X. When the face recognition unit 15 recognizes that the relative value is in the range X, the process proceeds to block 413, where the face recognition unit 15 instructs the power control unit 23 to turn on the backlight 26 and returns to block 402. When the face recognition unit 15 recognizes in block 411 that the relative value is in the range Y or the range Z, the backlight 26 is kept off, and the process returns to block 408, where the face recognition unit 15 performs recognition processing on the next image data.
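  The decisions of blocks 403 to 431 can be condensed into a single control step per frame. The following sketch is one interpretation of the flowchart with illustrative names; it omits the re-registration and timing details.

```python
def next_backlight(backlight_on, direction, input_event, input_recent):
    """One step of the FIG. 8 backlight control.

    direction:    'X', 'Y' or 'Z' from the FIG. 7 determination
    input_event:  an input-device event occurred at this step
    input_recent: an event occurred within the predetermined time interval
    """
    if input_event:
        return True                 # blocks 409/431: the user needs the display
    if direction == 'X':
        return True                 # range X: front face, keep lit or relight
    if direction == 'Y':
        return False                # range Y: oblique face, turn off
    # range Z: non-face; stay lit only while input events keep arriving
    return backlight_on and input_recent
```

In this reading, an input event always wins (states 453 and 459), a front face keeps or restores the light, an oblique face turns it off, and a non-face turns it off only after the input time-out expires.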

  In the procedure so far, power consumption has been reduced by turning off the backlight 26 when the user is not looking at the display 25, but the reduction of power consumption in the present invention is not limited to control of the backlight 26. In the present invention, the clock and voltage of the CPU and the GPU, whose performance can be lowered while the user is not looking at the display 25, may be controlled simultaneously with, or independently of, the control of the backlight 26.

  In the present embodiment, a method of realizing power saving by recognizing the face direction while suppressing positive misrecognition has been described. The face recognition unit 15 may also commit a so-called negative misrecognition, determining a front face despite the user actually presenting an oblique face or a non-face, that is, failing to detect looking away. However, a negative misrecognition only causes a lost opportunity, in which power that could have been saved is not saved; it does not lead to the fatal problem of making the user uncomfortable through an erroneous operation. The power saving effect need only reduce power consumption as a whole.

  So far, the strong classifier 250, whose face recognition intensity absolute value is maximal for a front face whose reference direction is the front direction of the camera module 11, has been described. It is also possible to configure a strong classifier whose face recognition intensity absolute value is maximal for an oblique face turned to the right and an oblique face turned to the left, or, in addition, one whose absolute value is also maximal for the front face. FIG. 10 is a diagram illustrating the configuration of a strong classifier 500 that outputs a face recognition intensity absolute value maximized for the front face and the two oblique faces.

  The strong classifier 500 includes three strong classifiers 501, 503, and 505 and a comparator 507. Each of the strong classifiers 501 to 505 includes a plurality of weak classifiers cascaded in the same manner as in the strong classifier 250 described above. The strong classifier 501 is configured to output the maximum face recognition intensity absolute value when the face is turned to the left by the angle θ1. The strong classifier 503 has the same configuration as the strong classifier 250 and outputs the maximum absolute value for the front direction. The strong classifier 505 outputs the maximum absolute value when the face is turned to the right by the angle θ1.

  The angle ± θ1 can be an angle indicating the boundary between the front face and the oblique face. In the strong classifiers 501 and 505, the rectangular feature and the identification pattern are selected so as to output the maximum face recognition intensity absolute value for the image data of the diagonal face in the left direction or the diagonal face in the right direction. The left and right diagonal faces are also used for the learning image and the sample image.

  One frame of image data input to the strong classifier 500 is always processed in parallel by the three strong classifiers 501 to 505. The comparator 507 compares the face recognition intensity absolute values S1 to S5 output from the three strong classifiers 501 to 505 and outputs the maximum as the face recognition intensity absolute value Pz. The characteristic of Pz in this case has three peaks, as shown in the figure. The comparator 507 selects the output of the strong classifier 503 in the range A, the output of the strong classifier 501 in the range B, and the output of the strong classifier 505 in the range C. When the strong classifiers 501 and 505 detect an absolute value corresponding to |θ| > θ2, it can be determined that the face is a non-face.
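  The comparator stage can be sketched in a few lines; the classifier callables here are placeholders standing in for the cascades of the strong classifiers 501 to 505.

```python
def combined_intensity(image, directional_classifiers):
    """Comparator 507: output the maximum face recognition intensity
    absolute value among parallel strong classifiers, each tuned for
    a different face direction (e.g. left, front, right)."""
    return max(classifier(image) for classifier in directional_classifiers)
```

Running the directional cascades in parallel and taking the maximum yields the three-peak characteristic described above, since whichever classifier best matches the current face direction dominates the output.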

  The face image data registration unit 17 stores the face recognition intensity absolute value obtained when the user faces the front direction as the maximum value. For the same user under the same light conditions, however, the face recognition intensity absolute values obtained from the strong classifiers when facing the specified left, right, and front directions are almost the same. The face recognition intensity absolute value Pz is therefore normalized for all cases, including the left and right directions. Because the face recognition intensity absolute value has a peak value even at the boundary between the front face and the oblique face, the strong classifier 500 with these characteristics can distinguish the front face from the oblique face with higher accuracy.
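The normalization described above can be sketched as a simple division by the registered maximum. This is an assumption-laden sketch: the function name and the guard for a zero registered value are not part of the patent.

```python
def normalize_intensity(measured_abs, registered_max_abs):
    """Normalize a measured face recognition intensity absolute value by the
    maximum value registered for this user (front face, same light
    conditions), yielding a value comparable across users and lighting.
    The zero guard is a defensive assumption, not from the source."""
    if registered_max_abs <= 0:
        return 0.0
    return measured_abs / registered_max_abs
```

A normalized value near 1.0 then indicates a face orientation close to one of the trained directions, regardless of the absolute scale of the classifier output.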

  In another example, two strong classifiers with directivity in the horizontal direction of the face and two strong classifiers with directivity in the vertical direction can be connected in parallel, and a strong classifier whose outputs are compared by a comparator can be configured. FIG. 11 shows the characteristic of the face recognition intensity absolute value in this case. In the left-right direction, the angle ±θ1 indicates the boundary between the front face and a horizontal oblique face, and in the vertical direction, the angle ±θ3 indicates the boundary between the front face and a vertical oblique face. The output of any one of the strong classifiers can also be used not to stop the operation of the display but to maintain it.

  For example, when work is performed while frequently comparing a document on a desk with the display screen, the user may not want the display to stop when the face is turned toward the desk surface. In this case, if the face recognition intensity absolute value of the strong classifier with downward directivity is adopted for the downward-facing oblique face, the display operation continues regardless of the outputs of the other strong classifiers. This prevents the power-saving control of the display from being triggered by the movement of the face between the desk surface direction and the display front direction.
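A hedged sketch of this override: if the classifier with downward directivity produces the winning score (the user is glancing at the desk surface), the display is kept on regardless of the other outputs. The score dictionary, direction labels, and threshold are all hypothetical.

```python
def display_should_stay_on(scores, keep_directions=("down",), front_min=0.5):
    """Decide whether to keep the display operating.

    scores: mapping of direction label -> face recognition intensity value
    from the parallel directional classifiers. If a direction listed in
    keep_directions wins, the display stays on no matter what the other
    classifiers output; otherwise it stays on only for a confident front
    face. All names and the 0.5 threshold are illustrative assumptions."""
    best = max(scores, key=scores.get)
    if best in keep_directions:
        return True
    return best == "front" and scores[best] >= front_min
```

With this policy, alternating between the desk surface and the display front never triggers the power-saving transition.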

  The power consumption reduction method of the present invention is not limited to a notebook PC; it can be widely applied to information processing equipment, such as desktop personal computers, tablet computers, and cash dispensers, that has a display and can capture image data by photographing a user's face with a camera. The camera module and the display need not be physically coupled to the information processing device as long as they can communicate with it.

  Furthermore, the method of recognizing the face direction while reducing the probability of positive misrecognition (false positives) can be used not only to reduce the power consumption of information processing equipment. It can also be applied, for example, to warning a crew member who is driving a vehicle or aircraft and has turned sideways to watch the display when important information is displayed on its screen.

  Although the present invention has been described with reference to the specific embodiments shown in the drawings, it is not limited to those embodiments; it goes without saying that any known configuration may be adopted as long as the effects of the present invention are achieved.

10 Notebook PC
100, 250, 500 Strong classifier
101, 101a to 101d Rectangular feature
151 Recognition target image
155 Small window
200, 200a to 200d Discrimination pattern
C1 to Cn Weak classifier
P Face recognition intensity relative value
Pz, Py Face recognition intensity absolute value
S1 to S5 Face recognition intensity absolute value

Claims (22)

  1. A method for controlling power consumption in an information processing device capable of communicating with a camera module and a display,
    The information processing device transitioning to a power-on state;
    Calculating, from image data periodically received from the camera module, a face recognition intensity value having a characteristic that the maximum value is obtained when the face direction with respect to the camera module is a predetermined direction and that decreases as the face is shifted from the predetermined direction;
    Determining a face direction from the face recognition intensity value calculated from image data in which the amount of change of the face recognition intensity value per unit time is a predetermined value or less;
    Responsive to determining that the face direction is shifted from the front direction of the display, the information processing device transitions from the power-on state to the low-power state.
  2. The method according to claim 1, wherein the predetermined direction in which the face recognition intensity value is maximum is the front direction of the display.
  3. The method according to claim 1, wherein the step of determining the direction of the face includes the step of setting the face direction indicated by the image data received from the camera module as a front face at the timing when the input device generates an input event.
  4. The method according to claim 1, wherein the predetermined direction in which the face recognition intensity value is maximum includes a direction shifted by a predetermined angle with respect to the front direction of the display.
  5. Determining whether an input device has been operated when a face is not recognized from the image data;
    The method of claim 1, further comprising the step of maintaining the power-on state while the input device is operated within a predetermined period of time.
  6. Determining whether an input device has been operated while transitioning to the low power state; and
    The method according to claim 1, further comprising the step of returning to the power-on state when it is determined that the input device has been operated.
  7. Determining whether the face has faced the front of the display continuously for a predetermined time after returning to the power-on state;
    The method according to claim 1, further comprising the step of maintaining the power-on state while it is not determined that the face has faced the front direction of the display continuously for the predetermined time.
  8. Determining whether a face is facing the front of the display while transitioning to the low power state;
    The method of claim 1, further comprising the step of returning from the low power state to the power-on state when it is determined that the face is facing the front direction of the display.
  9. The method according to claim 1, wherein the low power state is a state where a backlight of the display is turned off.
  10. A method for determining a face direction relative to the camera from image data acquired from the camera,
    Periodically acquiring image data from the camera;
    Calculating from each image data a face recognition intensity value having a characteristic that becomes a maximum value when the face direction with respect to the camera is a predetermined direction and decreases as it shifts from the predetermined direction;
    Determining whether the amount of change per unit time of the face recognition intensity value for each image data is equal to or less than a predetermined value;
    Determining a face direction from the face recognition intensity value calculated from the image data for which the amount of change is determined to be equal to or less than the predetermined value.
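The stability check of claim 10, using only frames whose face recognition intensity changes by at most a predetermined amount per unit time so that frames captured while the head is still moving are ignored, might be sketched as follows. The function and parameter names are hypothetical.

```python
def stable_directions(intensities, dt, max_rate, classify):
    """Determine the face direction only from frames whose face recognition
    intensity value changed by at most max_rate per unit time.

    intensities: per-frame face recognition intensity values
    dt: frame interval (unit time)
    classify: callable mapping a stable intensity value to a direction label
    All names are illustrative assumptions, not the patent's terms."""
    results = []
    for prev, cur in zip(intensities, intensities[1:]):
        if abs(cur - prev) / dt <= max_rate:  # change per unit time check
            results.append(classify(cur))
    return results
```

Frames whose intensity is still swinging (the head mid-turn) are skipped, so only settled orientations drive the power-state decision.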
  11. The method according to claim 10, wherein the face recognition intensity value has a characteristic including a plurality of peak values.
  12. The method according to claim 11, wherein the face recognition intensity value includes a peak value when the face is turned by a predetermined angle in the left-right direction and a peak value when the face is turned by a predetermined angle in the vertical direction.
  13. An information processing device capable of communicating with a camera module and a display,
    A face recognition unit that calculates, from image data periodically received from the camera module, a face recognition intensity value having a characteristic that the maximum value is obtained when the face direction with respect to the camera module is a predetermined direction and that decreases as the face is shifted from the predetermined direction, and that determines, from the face recognition intensity value calculated from image data whose amount of change per unit time is a predetermined value or less, whether the image data includes a front face, an oblique face, or a non-face, and
    An information processing apparatus comprising: a power control unit that causes the display to transition to a low power state according to an instruction from the face recognition unit when the face recognition unit determines that the image data includes an oblique face or a non-face.
  14. An input device operated by the user;
    An input device monitoring unit that detects an input from the input device and sends an input event to the face recognition unit;
    While the face recognition unit receives the input event from the input device monitoring unit within a predetermined time, the face recognition unit stops the transition to the low power state even if it detects a non-face. The information processing apparatus according to claim 13.
  15. The information processing device according to claim 14, wherein the face recognition unit operates the display when a front face is detected while the operation of the display is stopped.
  16. The information processing apparatus according to claim 14, wherein the face recognition unit operates the display when receiving the input event from the input device monitoring unit while the display is transitioning to the low power state.
  17. The information processing device according to claim 16, wherein the face recognition unit maintains the operation of the display unless the information processing device detects a front face continuously for a certain period of time after the display is operated.
  18. A face direction recognition device capable of communicating with a camera module and a display,
    A data acquisition unit for acquiring image data at a predetermined frame rate from the camera module;
    A face recognition intensity value calculation unit that calculates from each image data a face recognition intensity value having a characteristic that becomes a maximum value when the face direction with respect to the camera module is a predetermined direction and decreases with a shift from the predetermined direction;
    A change amount calculation unit for calculating a change amount per unit time of the face recognition intensity value for each image data;
    A face direction recognition apparatus comprising: a determination unit that determines a face direction from the face recognition intensity value calculated from image data for which the change amount is determined to be equal to or less than a predetermined value.
  19. To a computer that can communicate with the camera module,
    Acquiring image data at a predetermined frame rate from the camera module;
    Calculating from each image data a face recognition intensity value having a characteristic that becomes a maximum value when the face direction with respect to the camera module is a predetermined direction and decreases with shifting from the predetermined direction;
    Calculating a change amount per unit time of the face recognition intensity value for each image data;
    A computer program for executing a process including a step of determining a face direction from the face recognition intensity value calculated from the image data for which the change amount is determined to be equal to or less than the predetermined value.
  20.   A method of controlling power consumption in an information processing device capable of communicating with a camera module and a display,
      The information processing device transitioning to a power-on state;
      Registering a face recognition intensity value calculated from image data received from the camera module at the timing of an input by an input device, the value having a characteristic of becoming a maximum value when the face direction with respect to the camera module is a predetermined direction and decreasing as the face is shifted from the predetermined direction;
      Calculating a face recognition intensity value from image data periodically received from the camera module;
      Normalizing a face recognition intensity value calculated from the periodically received image data with the registered face recognition intensity value;
      In response to determining from the normalized face recognition intensity value that the face direction is shifted from the front direction of the display, the information processing device transitions from the power-on state to the low-power state. Step and
    Having a method.
  21. The method according to claim 20, wherein the registering step includes registering a luminance value of the image data from which the face recognition intensity value was calculated, paired with the face recognition intensity value, and
      the normalizing step includes normalizing with the face recognition intensity value paired with the luminance value closest to the luminance value of the periodically received image data.
  22. An information processing device capable of communicating with a camera module and a display,
      An input device;
      A face image data registration unit that registers a face recognition intensity value calculated from image data received from the camera module at the timing of an input by the input device, the value having a characteristic of becoming a maximum value when the face direction with respect to the camera module is a predetermined direction and decreasing as the face is shifted from the predetermined direction;
      A face recognition unit that calculates the face recognition intensity value from image data periodically received from the camera module, normalizes the face recognition intensity value calculated from the periodically received image data with the registered face recognition intensity value, and determines from the normalized face recognition intensity value that the face direction has shifted from the front direction of the display;
      In response to the face recognition unit determining the shift, a power control unit that causes the information processing device to transition from a power-on state to a low-power state;
    Information processing equipment having
JP2011252153A 2011-11-18 2011-11-18 Face direction detection method and information processing device Active JP5529103B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011252153A JP5529103B2 (en) 2011-11-18 2011-11-18 Face direction detection method and information processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011252153A JP5529103B2 (en) 2011-11-18 2011-11-18 Face direction detection method and information processing device

Publications (2)

Publication Number Publication Date
JP2013109430A JP2013109430A (en) 2013-06-06
JP5529103B2 true JP5529103B2 (en) 2014-06-25

Family

ID=48706145

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011252153A Active JP5529103B2 (en) 2011-11-18 2011-11-18 Face direction detection method and information processing device

Country Status (1)

Country Link
JP (1) JP5529103B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6074122B1 (en) * 2013-12-09 2017-02-01 ゼンソモトリック インストゥルメンツ ゲゼルシャフト ヒューア イノベイティブ ゼンソリック エムベーハーSENSOMOTORIC INSTRUMENTS Gesellschaft fur innovative Sensorik mbH Eye tracking device operating method and active power management eye tracking device
JP6280412B2 (en) * 2014-03-26 2018-02-14 株式会社メガチップス Object detection device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11288259A (en) * 1998-02-06 1999-10-19 Sanyo Electric Co Ltd Method and device for power saving control
JP4640825B2 (en) * 2006-02-23 2011-03-02 富士フイルム株式会社 Specific orientation face determination method, apparatus, and program
JP5399880B2 (en) * 2009-12-11 2014-01-29 レノボ・シンガポール・プライベート・リミテッド Power control apparatus, power control method, and computer-executable program

Also Published As

Publication number Publication date
JP2013109430A (en) 2013-06-06


Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20131017

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20131029

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20140113

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20140415

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20140416

R150 Certificate of patent or registration of utility model

Ref document number: 5529103

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313113

R360 Written notification for declining of transfer of rights

Free format text: JAPANESE INTERMEDIATE CODE: R360

R371 Transfer withdrawn

Free format text: JAPANESE INTERMEDIATE CODE: R371

R360 Written notification for declining of transfer of rights

Free format text: JAPANESE INTERMEDIATE CODE: R360

S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313113

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350
