US20140369553A1 - Method for triggering signal and in-vehicle electronic apparatus - Google Patents

Method for triggering signal and in-vehicle electronic apparatus

Info

Publication number
US20140369553A1
Authority
US
United States
Prior art keywords
face
information
shut
central point
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/971,840
Other languages
English (en)
Inventor
Chia-Chun Tsou
Chih-Heng Fang
Po-tsung Lin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Antares Pharma IPL AG
Utechzone Co Ltd
Original Assignee
Utechzone Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Assigned to ANTARES PHARMA, IPL, AG. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KRAUS, HOLGER; WOTTON, PAUL; SADOWSKI, PETER L.
Application filed by Utechzone Co Ltd filed Critical Utechzone Co Ltd
Assigned to UTECHZONE CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TSOU, CHIA-CHUN; FANG, CHIH-HENG; LIN, PO-TSUNG
Publication of US20140369553A1
Legal status: Abandoned


Classifications

    • G06K 9/00315
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G06V 40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present invention generally relates to an image processing technique, and more particularly, to a method for triggering a signal through a face recognition technique and an in-vehicle electronic apparatus.
  • Face recognition technology plays a very important role among image recognition technologies and is one of the most actively studied technologies today. Face recognition techniques are usually applied to human-computer interfaces, home video surveillance, biometric detection, security and customs checks, public video surveillance, personal computers, and even security monitoring in bank vaults.
  • the present invention is directed to a signal triggering method and an in-vehicle electronic apparatus, in which whether a specific signal is triggered is determined according to whether an action of a driver matches a threshold information.
  • the present invention provides an in-vehicle electronic apparatus.
  • the in-vehicle electronic apparatus includes an image capturing unit and an operating device coupled to the image capturing unit.
  • the image capturing unit captures a plurality of images of a driver.
  • the operating device executes an image recognition procedure on each of the images to obtain a face motion information or an eyes open-shut information by detecting a face motion or an eyes open/shut action of the driver.
  • the operating device triggers a distress signal and transmits the distress signal to a wireless communication unit.
  • the image capturing unit is disposed in front of a driver's seat in a vehicle for capturing the images of the driver.
  • the image capturing unit further has an illumination element and performs a light compensation operation through the illumination element.
  • the operating device executes the image recognition procedure on each of the images to detect a nostrils position information of a face in the image, and the operating device obtains the face motion information or the eyes open-shut information according to the nostrils position information.
  • the face motion information includes a head turning number, a head nodding number, and a head circling number of the driver, and the eyes open-shut information includes an eyes shut number of the driver.
  • the present invention provides a signal triggering method adapted to an in-vehicle electronic apparatus.
  • the signal triggering method includes the following steps. A plurality of images is continuously captured, where each of the images includes a face. A nostril area on the face is detected to obtain a nostrils position information. Whether the face turns is determined according to the nostrils position information, so as to obtain a face motion information. The face motion information is compared with a threshold information. When the face motion information matches the threshold information, a specific signal is triggered.
  • the nostrils position information includes a first central point and a second central point of two nostrils.
  • the step of determining whether the face turns according to the nostrils position information includes the following steps.
  • a horizontal gauge is performed according to the first central point and the second central point to locate a first boundary point and a second boundary point of the face.
  • a central point of the first boundary point and the second boundary point is calculated and serves as a reference point.
  • the reference point is compared with the first central point to determine whether the face turns towards a first direction.
  • the reference point is compared with the second central point to determine whether the face turns towards a second direction.
  • a number that the face turns towards the first direction and a number that the face turns towards the second direction during a predetermined period are calculated to obtain the face motion information.
  • the step of determining whether the face turns according to the nostrils position information further includes the following steps.
  • a turning angle is obtained according to a straight line formed by the first central point and the second central point and a datum line.
  • the turning angle is compared with a first predetermined angle to determine whether the face turns towards the first direction.
  • the turning angle is compared with a second predetermined angle to determine whether the face turns towards the second direction.
  • a number that the face turns towards the first direction and a number that the face turns towards the second direction during a predetermined period are calculated to obtain the face motion information.
  • the signal triggering method further includes the following steps.
  • An eye search frame is estimated according to the nostrils position information to detect an eye object within the eye search frame. Whether the eye object is shut is determined according to the size of the eye object, so as to obtain the eyes open-shut information.
  • the face motion information and the eyes open-shut information are compared with the threshold information. When the face motion information and the eyes open-shut information match the threshold information, the specific signal is triggered.
  • the step of determining whether the eye object is shut according to the size of the eye object includes the following steps. When the height of the eye object is smaller than a height threshold and the width of the eye object is greater than a width threshold, it is determined that the eye object is shut. An eyes shut number of the eye object within the predetermined period is calculated to obtain the eyes open-shut information.
  • the specific signal is further transmitted to a specific device through a wireless communication unit.
  • the present invention provides another signal triggering method adapted to an in-vehicle electronic apparatus.
  • the signal triggering method includes the following steps. A plurality of images is continuously captured, where each of the images includes a face. A nostril area on the face is detected to obtain a nostrils position information. An eye search frame is estimated according to the nostrils position information to detect an eye object within the eye search frame. Whether the eye object is shut is determined according to the size of the eye object, so as to obtain an eyes open-shut information. The eyes open-shut information is compared with a threshold information. When the eyes open-shut information matches the threshold information, a specific signal is triggered.
  • the present invention provides yet another signal triggering method adapted to an in-vehicle electronic apparatus.
  • the signal triggering method includes the following steps. A plurality of images is continuously captured, where each of the images includes a face. A nostril area on the face is detected to obtain a nostrils position information. Whether the face turns is determined according to the nostrils position information, so as to obtain a face motion information. An eye search frame is estimated according to the nostrils position information to detect an eye object within the eye search frame. Whether the eye object is shut is determined according to the size of the eye object, so as to obtain an eyes open-shut information. The face motion information and the eyes open-shut information are compared with a threshold information. When the face motion information and the eyes open-shut information match the threshold information, a specific signal is triggered.
  • whether the action of a driver matches a threshold information is determined according to a nostrils position information, so as to determine whether to trigger a specific signal. Because the characteristic information of the nostrils is used, the operation load is reduced and misjudgement is avoided.
  • FIG. 1 is a diagram of an in-vehicle electronic apparatus according to a first embodiment of the present invention.
  • FIG. 2 is a flowchart of a signal triggering method according to the first embodiment of the present invention.
  • FIG. 3 is a diagram of an image with a frontal face according to the first embodiment of the present invention.
  • FIG. 4A and FIG. 4B are diagrams of images with a turning face according to the first embodiment of the present invention.
  • FIG. 5A and FIG. 5B are diagrams of a nostril area according to the first embodiment of the present invention.
  • FIG. 6 is a flowchart of a signal triggering method according to a second embodiment of the present invention.
  • FIG. 7 is a diagram illustrating how an eye search frame is estimated according to the second embodiment of the present invention.
  • FIG. 8A and FIG. 8B are diagrams of an eye image area according to the second embodiment of the present invention.
  • FIG. 9 is a flowchart of a signal triggering method according to a third embodiment of the present invention.
  • FIG. 1 is a diagram of an in-vehicle electronic apparatus according to the first embodiment of the present invention.
  • the in-vehicle electronic apparatus 100 includes an image capturing unit 110 and an operating device 10 .
  • the image capturing unit 110 is coupled to the operating device 10 .
  • the operating device 10 includes a processing unit 120 , a storage unit 130 , and a wireless communication unit 140 .
  • the processing unit 120 is respectively coupled to the image capturing unit 110 , the storage unit 130 , and the wireless communication unit 140 .
  • the image capturing unit 110 captures a plurality of images of a driver.
  • the image capturing unit 110 may be a video camera or a camera with a charge coupled device (CCD) lens, a complementary metal-oxide-semiconductor (CMOS) lens, or an infrared lens.
  • the image capturing unit 110 is disposed in front of the driver's seat in a vehicle for capturing the images of the driver.
  • the image capturing unit 110 transmits the captured images to the operating device 10 , and the operating device 10 executes an image recognition procedure on each of the images to obtain a face motion information or an eyes open-shut information by detecting a face motion or an eyes open/shut action of the driver.
  • the face motion information includes a head turning number, a head nodding number, and a head circling number of the driver
  • the eyes open-shut information includes an eyes shut number of the driver.
  • the operating device 10 triggers a distress signal and transmits the distress signal to the wireless communication unit 140 .
  • the processing unit 120 triggers a distress signal and transmits the distress signal to the wireless communication unit 140 , and the distress signal is then transmitted to a specific device through the wireless communication unit 140 .
  • Aforementioned specific device may be an electronic apparatus (for example, a cell phone or a computer) of a member in a neighborhood watch association or an electronic apparatus in a vehicle management center.
  • the image capturing unit 110 further has a turning lens (not shown) for adjusting the shooting direction and angle.
  • the lens is adjusted to face the face of the driver so that each captured image contains the face of the driver.
  • the nostrils on a human face present a darker color and therefore can be easily identified, and other features on a human face can be obtained by using the features of the nostrils.
  • the lens of the image capturing unit 110 is further adjusted to face the face of the driver at an elevation of 45°.
  • the nostrils can be clearly shown in each image captured by the image capturing unit 110 , so that the recognition of the nostrils may be enhanced, and the nostrils may be easily detected subsequently.
  • the image capturing unit 110 further has an illumination element.
  • the illumination element is used for performing a light compensation operation when the ambient light is insufficient, such that the clarity of the captured images can be guaranteed.
  • the operating device 10 detects an action of the driver, and the operating device 10 triggers a specific signal and transmits the specific signal to a specific device when the driver's action matches a threshold information.
  • the threshold information is at least one or a combination of a head turning number N1, a head nodding number N2, a head circling number N3, and an eyes shut number N4 of the driver.
  • the threshold information may indicate that the driver turns his head rightwards twice and then leftwards twice during a predetermined period (for example, 3-7 seconds), that the driver blinks his eyes 3 times during a predetermined period (for example, 3 seconds), or that the driver blinks his eyes 3 times and turns his head rightwards twice and then leftwards twice during a predetermined period (for example, 3-7 seconds).
  • the threshold information mentioned above is only an example and is not intended to limit the scope of the present invention. A minimal sketch of how such threshold information could be represented and matched is given below.
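  • The field names and values below are assumptions for illustration, since the embodiments above only give examples of the threshold information:

```python
# Hypothetical representation of a threshold information entry; the patent only
# gives examples (e.g. two rightward turns then two leftward turns, or three
# blinks within a few seconds), so these field names and values are assumptions.
THRESHOLD_INFO = {
    "period_s": 5,                                        # predetermined period
    "head_turn_sequence": ("right", "right", "left", "left"),
    "eyes_shut_number": 3,
}

def matches_threshold(face_motion_sequence, eyes_shut_number, threshold=THRESHOLD_INFO):
    """Return True when the driver's recorded actions match the threshold information."""
    return (tuple(face_motion_sequence) == threshold["head_turn_sequence"]
            and eyes_shut_number >= threshold["eyes_shut_number"])
```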
  • the processing unit 120 may be a central processing unit (CPU) or a microprocessor.
  • the storage unit 130 may be a non-volatile memory, a random access memory (RAM), or a hard disc.
  • the wireless communication unit 140 may be a Third Generation (3G) mobile communication module, a General Packet Radio Service (GPRS) module, or a Wi-Fi module.
  • the storage unit 130 stores a plurality of snippets. These snippets are executed by the processing unit 120 after the snippets are installed.
  • the storage unit 130 includes a plurality of modules. These modules respectively execute a plurality of functions, and each module is composed of one or more snippets.
  • Aforementioned modules include an image processing module, a determination module and a signal triggering module.
  • the image processing module executes an image recognition procedure on each image to detect a face motion or an eyes open/shut action of the driver, so as to obtain the face motion information or the eyes open-shut information.
  • the determination module determines whether the face motion information or the eyes open-shut information matches a threshold information.
  • the signal triggering module triggers a specific signal and transmits the specific signal to a specific device when the face motion information or the eyes open-shut information matches a threshold information.
  • These snippets include a plurality of commands, and the processing unit 120 executes various steps of a signal triggering method through these commands.
  • the in-vehicle electronic apparatus 100 includes only one processing unit 120 . However, in other embodiments, the in-vehicle electronic apparatus 100 may include multiple processing units, and these processing units execute the installed snippets.
  • FIG. 2 is a flowchart of a signal triggering method according to the first embodiment of the present invention.
  • the image capturing unit 110 continuously captures a plurality of images, where each of the images contains a face.
  • a sampling frequency is preset in the in-vehicle electronic apparatus 100 such that the image capturing unit 110 can continuously capture a plurality of images based on this sampling frequency.
  • a start button (a physical button or a virtual button) is disposed in the in-vehicle electronic apparatus 100 , and when the start button is enabled, the image capturing unit 110 is started to capture images and carry out subsequent processing.
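  • A rough sketch of such a continuous capture loop is given below; the sampling frequency, camera index, and OpenCV-based capture are assumptions for illustration only:

```python
import time

import cv2  # assumed OpenCV-based capture; the patent does not name a library

SAMPLING_FREQUENCY_HZ = 10   # assumed value; the patent only says a frequency is preset

def capture_images(handle_frame, duration_s=10):
    """Continuously grab frames from the camera facing the driver's seat."""
    cam = cv2.VideoCapture(0)               # assumed device index of the image capturing unit
    interval = 1.0 / SAMPLING_FREQUENCY_HZ
    end = time.time() + duration_s
    while time.time() < end:
        ok, frame = cam.read()
        if ok:
            handle_frame(frame)             # hand each image to the image recognition procedure
        time.sleep(interval)
    cam.release()
```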
  • In step S210, the processing unit 120 detects a nostril area on the face in the captured images to obtain a nostrils position information.
  • the image capturing unit 110 transmits the images to the processing unit 120 , and the processing unit 120 carries out face recognition in each of the images.
  • the face in each image can be obtained through the AdaBoost algorithm or any other existing face recognition algorithm (for example, the face recognition action can be carried out by using Haar-like features).
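  • For illustration, a minimal face-detection step built on OpenCV's bundled Haar-feature cascade (one possible AdaBoost-based detector of the kind mentioned above) might look like this; the cascade file and detection parameters are assumptions:

```python
import cv2

# Assumed detector: OpenCV's bundled Haar-feature cascade (an AdaBoost-trained model).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray_image):
    """Return the bounding box (x, y, w, h) of the largest detected face, or None."""
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])
```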
  • the processing unit 120 searches for a nostril area (i.e., the position of the two nostrils) on the face.
  • the nostrils position information may be a first central point and a second central point of two nostrils.
  • FIG. 3 is a diagram of an image with a frontal face according to the first embodiment of the present invention.
  • the central point of the right nostril is considered a first central point N1
  • the central point of the left nostril is considered a second central point N2.
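  • Since the disclosure relies on the nostrils appearing darker than the surrounding skin, one possible sketch of nostril localization thresholds dark blobs below the face centre; the search window, threshold value, blob selection, and the N1/N2 mapping below are assumptions, not details from the patent:

```python
import cv2

def find_nostril_centers(gray_face):
    """Locate the two nostril central points inside a grayscale face crop.

    Assumed approach: threshold the darkest pixels in the lower-middle part of the
    face and keep the two largest blobs as the nostrils.
    """
    h, w = gray_face.shape
    y0, x0 = h // 2, w // 4
    roi = gray_face[y0:h * 4 // 5, x0:w * 3 // 4]                 # assumed nose region
    _, dark = cv2.threshold(roi, 60, 255, cv2.THRESH_BINARY_INV)  # assumed darkness threshold
    count, _, stats, centroids = cv2.connectedComponentsWithStats(dark)
    blobs = sorted(range(1, count), key=lambda i: stats[i, cv2.CC_STAT_AREA],
                   reverse=True)[:2]
    if len(blobs) < 2:
        return None
    points = sorted(((centroids[i][0] + x0, centroids[i][1] + y0) for i in blobs),
                    key=lambda p: p[0])
    # Which point corresponds to the first central point N1 (right nostril) depends
    # on the camera orientation; the mapping here is an assumption.
    return points[0], points[1]
```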
  • In step S215, the processing unit 120 determines whether the face turns according to the nostrils position information, so as to obtain the face motion information.
  • Whether the face in the images turns towards a first direction d1 or a second direction d2 is determined by using the first central point N1 and the second central point N2.
  • the rightward direction is considered the first direction d1
  • the leftward direction is considered the second direction d2, as shown in FIG. 3 .
  • the first central point N1 and the second central point N2 are compared with a reference point, and which direction the face turns towards is determined based on the relative position between the first central point N1 and the reference point and the relative position between the second central point N2 and the reference point.
  • the processing unit 120 performs a horizontal gauge according to the first central point N1 and the second central point N2 to locate a first boundary point B1 and a second boundary point B2 of the face.
  • For instance, 2-10 (i.e., totally 4-20) pixel rows are respectively obtained above and below the axis X (i.e., the horizontal axis).
  • the processing unit 120 calculates the central point of the first boundary point B1 and the second boundary point B2 and takes this central point as a reference point R. Namely, assuming the coordinates of the first boundary point B1 to be (B_x1, B_y1) and the coordinates of the second boundary point B2 to be (B_x2, B_y2), the X-coordinate of the reference point R is (B_x1+B_x2)/2, and the Y-coordinate thereof is (B_y1+B_y2)/2.
  • the reference point R is compared with the first central point N1 to determine whether the face turns towards the first direction d1.
  • the reference point R is compared with the second central point N2 to determine whether the face turns towards the second direction d2. For example, when the first central point N1 is at the side of the reference point R towards the first direction d1, it is determined that the face turns towards the first direction d1, and when the second central point N2 is at the side of the reference point R towards the second direction d2, it is determined that the face turns towards the second direction d2.
  • When the reference point R is between the first central point N1 and the second central point N2, it is determined that the face faces forward and does not turn.
  • the processing unit 120 calculates the number that the face turns towards the first direction d1 and the number that the face turns towards the second direction d2 during a predetermined period (for example, 10 seconds), so as to obtain a face motion information.
  • Aforementioned face motion information may be recorded as (d1,d1,d2,d2) to indicate that the face first turns towards the first direction d1 twice and then towards the second direction d2 twice.
  • the implementation described above is only an example and is not intended to limit the scope of the present invention.
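  • A compact sketch of the reference-point comparison and the turn counting described above is given below; the mapping between increasing image x-coordinates and the first/second directions is an assumed convention:

```python
def face_turn_direction(n1, n2, b1, b2):
    """Decide the turn direction from the nostril points N1, N2 and boundary points B1, B2.

    All points are (x, y) tuples.
    """
    ref_x = (b1[0] + b2[0]) / 2       # reference point R: midpoint of the two boundary points
    if n1[0] > ref_x:                 # N1 lies on the d1 side of R
        return "d1"
    if n2[0] < ref_x:                 # N2 lies on the d2 side of R
        return "d2"
    return "forward"                  # R lies between N1 and N2: the face does not turn

def face_motion_info(per_image_directions):
    """Collapse per-image decisions into a motion record such as ('d1', 'd1', 'd2', 'd2').

    A new turn is counted each time the face leaves the forward-facing state.
    """
    motions, previous = [], "forward"
    for direction in per_image_directions:
        if direction != "forward" and previous == "forward":
            motions.append(direction)
        previous = direction
    return tuple(motions)
```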
  • the processing unit 120 compares the face motion information with a threshold information.
  • the threshold information includes two thresholds, where one of the two thresholds is the threshold of the face turning towards the first direction d1 and the other one is the threshold of the face turning towards the second direction d2. Additionally, the sequence in which the face turns towards the first direction d1 and the second direction d2 is also defined in the threshold information.
  • when the face motion information matches the threshold information, a specific signal is triggered.
  • After the processing unit 120 triggers the corresponding specific signal, it further sends the specific signal to a specific device through the wireless communication unit 140.
  • the specific signal may be a distress signal, and the specific device may be an electronic apparatus used by a member of a neighbourhood watch association or an electronic apparatus (for example, a cell phone or a computer) in a vehicle management center. Or, if the in-vehicle electronic apparatus 100 is a cell phone, the driver can preset a phone number.
  • a dialing function can be enabled by the specific signal such that the in-vehicle electronic apparatus 100 can call the specific device corresponding to the preset phone number.
  • FIG. 4A and FIG. 4B are diagrams of images with a turning face according to the first embodiment of the present invention.
  • FIG. 4A illustrates an image 410 of a face turning towards the first direction d1
  • FIG. 4B illustrates an image 420 of the face turning towards the second direction d2.
  • the coordinates of the first central point N1 of the nostrils are (N1_x, N1_y)
  • the coordinates of the second central point N2 of the nostrils are (N2_x, N2_y)
  • the coordinates of the reference point R (i.e., the central point of the first boundary point B1 and the second boundary point B2)
  • FIG. 5A and FIG. 5B are diagrams of a nostril area according to the first embodiment of the present invention.
  • FIG. 5A illustrates a nostril area on a face turning towards the first direction d1
  • FIG. 5B illustrates a nostril area on a face turning towards the second direction d2.
  • a turning angle θ is obtained according to a straight line NL formed by the first central point N1 and the second central point N2 and a datum line RL.
  • the datum line RL is a horizontal axis on the first central point N1
  • the datum line RL is considered 0°.
  • After determining that the face turns towards the first direction d1 or the second direction d2, the processing unit 120 further calculates the number that the face turns towards the first direction d1 and the number that the face turns towards the second direction d2 during a predetermined period, so as to obtain a face motion information.
  • the horizontal axis on the second central point N2 may also serve as the datum line, and the first predetermined angle and the second predetermined angle may be adjusted according to actual requirements, which is not limited herein.
  • the turning direction of the face may also be determined by using only the turning angle.
  • the turning angle θ is obtained according to the straight line NL formed by the first central point N1 and the second central point N2 and the datum line RL. After that, the turning angle θ is compared with the first predetermined angle to determine whether the face turns towards the first direction d1. Besides, the turning angle θ is compared with the second predetermined angle to determine whether the face turns towards the second direction d2. For example, when the turning angle θ is greater than or equal to A° (A is between 2 and 5), it is determined that the face turns towards the first direction d1 (i.e., the face turns rightwards). When the turning angle θ is smaller than or equal to −A°, it is determined that the face turns towards the second direction d2 (i.e., the face turns leftwards).
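  • The angle-based variant can be sketched as follows; the value of A and the sign convention for image coordinates (y grows downwards) are assumptions within the ranges stated above:

```python
import math

PREDETERMINED_ANGLE_DEG = 3.0   # A, assumed within the 2-5 range mentioned above

def turning_direction_by_angle(n1, n2):
    """Classify the turn from the angle between line NL (through N1 and N2) and the datum line RL.

    Returns 'd1' when the turning angle is at least A degrees, 'd2' when it is at
    most -A degrees, and 'forward' otherwise.
    """
    dx, dy = n2[0] - n1[0], n2[1] - n1[1]
    theta = math.degrees(math.atan2(dy, dx))   # angle of NL against the horizontal datum line
    if theta >= PREDETERMINED_ANGLE_DEG:
        return "d1"
    if theta <= -PREDETERMINED_ANGLE_DEG:
        return "d2"
    return "forward"
```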
  • whether the face turns is determined by using the nostrils position information, and a specific signal is triggered when the turning direction and number match a threshold information.
  • FIG. 6 is a flowchart of a signal triggering method according to the second embodiment of the present invention. Below, the signal triggering method will be described with reference to the in-vehicle electronic apparatus 100 illustrated in FIG. 1 .
  • In step S605, the image capturing unit 110 continuously captures a plurality of images, where each of the images contains a face. Then, in step S610, the processing unit 120 detects a nostril area on the face to obtain a nostrils position information. Details of steps S605 and S610 can be referred to steps S205 and S210 described above and therefore will not be repeated herein.
  • an eye search frame is estimated according to the nostrils position information to detect an eye object within the eye search frame.
  • the nostril is easier to identify in an image.
  • an eye search frame is estimated upwards to locate an eye object within the eye search frame, so that the search area can be reduced.
  • FIG. 7 is a diagram illustrating how an eye search frame is estimated according to the second embodiment of the present invention.
  • the processing unit 120 calculates the distance D between the first central point N1 and the second central point N2. Then, the processing unit 120 estimates the central point, length, and width of the eye search frame according to the distance D.
  • an eye search frame 710 is obtained according to pre-defined width w and height h, where the width w is greater than the height h.
  • the width w is 2~22 pixels
  • the height h is 2~42 pixels.
  • the first estimation value k1 is deducted from the X-coordinate towards the first direction d1, and the second estimation value k2 is added to the Y-coordinate upwards, so as to obtain another central point 73 .
  • another eye search frame 730 is obtained according to pre-defined width w and height h.
  • the starting point may also be the central point of the first central point N1 and the second central point N2, which is not limited in the present invention.
  • After obtaining the eye search frames 710 and 730, the processing unit 120 obtains more precise eye image areas 720 and 740 in the eye search frames 710 and 730. Another embodiment will be described below by taking the left eye of the driver as an example.
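  • A sketch of the eye-search-frame estimation from the nostril distance D is given below; the proportionality factors standing in for the estimation values k1 and k2, and the frame placement for each eye, are assumptions:

```python
import math

# Assumed proportionality constants: the patent derives the frame position from the
# nostril distance D but does not state these exact factors.
K1_FACTOR = 0.5   # horizontal offset of the frame centre, as a fraction of D
K2_FACTOR = 1.2   # vertical offset of the frame centre, as a fraction of D

def eye_search_frames(n1, n2, width, height):
    """Estimate one eye search frame per eye from the nostril central points.

    Returns two (x, y, w, h) boxes whose centres sit sideways of and above the
    nostrils; width and height are the pre-defined values w and h.
    """
    d = math.dist(n1, n2)                        # distance D between the nostril centres
    k1, k2 = K1_FACTOR * d, K2_FACTOR * d        # first and second estimation values
    frames = []
    for (nx, ny), sign in ((n1, +1), (n2, -1)):
        cx, cy = nx + sign * k1, ny - k2         # shift sideways and upwards (image y grows down)
        frames.append((int(cx - width / 2), int(cy - height / 2), width, height))
    return frames
```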
  • FIG. 8A and FIG. 8B are diagrams of an eye image area according to the second embodiment of the present invention.
  • FIG. 8A illustrates the eye image area 720 in FIG. 7 in an eye-shut state.
  • the contrast of the eye image area 720 is adjusted to obtain an enhanced image.
  • a gain value and an offset value of the eye image area 720 are adjusted. For example, an average value avg of the grayscales of all pixels in the eye image area 720 is calculated.
  • If the average value avg is smaller than 150, the offset value is set as the negative value of the average value avg (i.e., −avg), and the gain value is set as G1, where G1 is between 2.1 and 3.2. If the average value avg is not smaller than 150, the offset value is set as the negative value of the average value avg (i.e., −avg), and the gain value is set as G2, where G2 is between 1.9 and 2.5.
  • a denoising process is performed on the enhanced image to obtain a denoised image.
  • the denoising process is performed by using a 3×3 matrix in which every element has the value 1.
  • an edge sharpening process is performed on the denoised image to obtain a sharpened image.
  • the edge sharpening process is performed by using an improved Sobel mask having the values (1, 0, 0, 0, −1).
  • a binarization process is performed on the sharpened image to obtain a binarized image.
  • the edge sharpening process is performed on the binarized image again to obtain an eye object 810 , as shown in FIG. 8B .
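  • The enhancement chain of this embodiment (contrast adjustment, 3×3 denoising, edge sharpening, binarization, and a second sharpening pass) can be sketched as follows; the exact gain values and the binarization threshold are assumptions within or beyond the ranges given above:

```python
import cv2
import numpy as np

def extract_eye_object(eye_area):
    """Sketch of the enhancement chain: contrast stretch, denoise, sharpen, binarize, sharpen.

    eye_area is an 8-bit grayscale eye image area.  The gain values and the Otsu
    binarization threshold are assumptions; the text only gives ranges for G1 and G2
    and does not specify the binarization threshold.
    """
    avg = float(eye_area.mean())
    gain = 2.5 if avg < 150 else 2.2                        # assumed G1 / G2 within the stated ranges
    enhanced = cv2.convertScaleAbs(eye_area, alpha=gain, beta=-avg)  # gain and offset (-avg)

    denoised = cv2.blur(enhanced, (3, 3))                   # 3x3 averaging filter as the denoising step

    sharpen_kernel = np.array([[1, 0, 0, 0, -1]], dtype=np.float32)  # (1, 0, 0, 0, -1) mask
    sharpened = cv2.filter2D(denoised, -1, sharpen_kernel)

    _, binary = cv2.threshold(sharpened, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.filter2D(binary, -1, sharpen_kernel)         # second sharpening pass yields the eye object
```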
  • In step S620, the processing unit 120 determines whether the eye object is shut according to the size of the eye object 810, so as to obtain an eyes open-shut information. For example, when the height of the eye object 810 is smaller than a height threshold (for example, between 5 and 7 pixels) and the width of the eye object 810 is greater than a width threshold (for example, between 60 and 80 pixels), the processing unit 120 determines that the eye object 810 is shut. Otherwise, the processing unit 120 determines that the eye object 810 is open. Thereafter, the processing unit 120 calculates an eyes shut number of the eye object during a predetermined period to obtain the eyes open-shut information.
  • In step S625, the eyes open-shut information is compared with a threshold information.
  • the threshold information includes an eye blinking threshold (for example, 3 times).
  • In step S630, when the eyes open-shut information matches the threshold information, a specific signal is triggered. After the processing unit 120 triggers the corresponding specific signal, it further sends the specific signal to a specific device through the wireless communication unit 140.
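  • A sketch of the open/shut decision and the blink counting that drives the trigger is given below; the threshold values are picked from the ranges stated above, and the signal-sending routine is an assumed interface:

```python
HEIGHT_THRESHOLD = 6   # pixels, within the 5-7 range given above
WIDTH_THRESHOLD = 70   # pixels, within the 60-80 range given above
BLINK_THRESHOLD = 3    # eye blinking threshold from the threshold information

def is_eye_shut(eye_object_box):
    """An eye object counts as shut when it is flat and wide (small height, large width)."""
    _, _, w, h = eye_object_box
    return h < HEIGHT_THRESHOLD and w > WIDTH_THRESHOLD

def check_blinks_and_trigger(shut_states, send_signal):
    """Count shut events within the predetermined period and trigger the specific signal.

    shut_states is the per-image True/False (shut/open) sequence for the period;
    send_signal stands for the routine that hands the signal to the wireless
    communication unit (an assumed interface).
    """
    shut_number = sum(1 for previous, current in zip(shut_states, shut_states[1:])
                      if not previous and current)          # open -> shut transitions
    if shut_number >= BLINK_THRESHOLD:
        send_signal("distress")                             # specific signal, e.g. a distress signal
```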
  • an eye object is located by using the nostrils position information to determine whether the driver blinks his eyes, and a specific signal is triggered when the number of blinks matches a threshold.
  • An appropriate eye search frame is obtained by using the feature information of the easily recognized nostrils, and an eye image area is then obtained in the eye search frame for detecting an eye object.
  • FIG. 9 is a flowchart of a signal triggering method according to the third embodiment of the present invention.
  • the signal triggering method in the present embodiment will be described below with reference to both FIG. 1 and FIG. 9 .
  • the image capturing unit 110 continuously captures a plurality of images, where each of the images contains a face.
  • the processing unit 120 detects a nostril area on the face to obtain a nostrils position information.
  • Details of steps S905 and S910 can be referred to steps S205 and S210 described above and therefore will not be repeated herein.
  • the sequence of determining whether the face turns and detecting whether the eye object is shut is only an example for the convenience of description but not intended to limit the scope of the present invention.
  • In step S915, the processing unit 120 determines whether the face turns according to the nostrils position information, so as to obtain a face motion information.
  • the details of step S915 can be referred to step S215 in the first embodiment and therefore will not be repeated herein.
  • In step S920, an eye search frame is estimated according to the nostrils position information to detect an eye object in the eye search frame.
  • the processing unit 120 determines whether the eye object is shut according to the size of the eye object, so as to obtain an eyes open-shut information.
  • Details of steps S920 and S925 can be referred to steps S615 and S620 in the second embodiment and therefore will not be repeated herein.
  • After the face motion information and the eyes open-shut information are obtained, they are compared with a threshold information in step S930.
  • the threshold information includes three thresholds: a blink threshold, a threshold of the face turning towards a first direction, and a threshold of the face turning towards a second direction.
  • the sequence of the face turning towards the first direction and towards the second direction is defined in the threshold information.
  • In step S630, when the face motion information and the eyes open-shut information match the threshold information, a specific signal is triggered.
  • After the processing unit 120 triggers the corresponding specific signal, it further sends the specific signal to a specific device through the wireless communication unit 140.
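  • The combined check of the third embodiment can be sketched as follows, reusing the face motion sequence and the eyes shut number from the earlier sketches; the routine that hands the signal to the wireless communication unit is an assumed interface:

```python
def third_embodiment_check(face_motion_sequence, eyes_shut_number,
                           turn_threshold, blink_threshold, send_signal):
    """Combined check of the third embodiment: the face motion information and the
    eyes open-shut information must both match the threshold information before
    the specific signal is triggered and handed to the wireless communication unit
    (send_signal is an assumed interface)."""
    turns_match = tuple(face_motion_sequence) == tuple(turn_threshold)
    blinks_match = eyes_shut_number >= blink_threshold
    if turns_match and blinks_match:
        send_signal("distress")
```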
  • the action of a driver can be captured through a human computer interface without disturbing or bothering any other people, and a specific signal is triggered when the action of the driver satisfies a specific condition (i.e., a threshold information).
  • a nostril area on a face is first located to obtain a nostrils position information, and whether the driver's action matches a threshold information is then determined according to the nostrils position information, so as to determine whether to trigger a specific signal.
  • the driver can trigger a specific signal by turning his head and/or blinking his eyes, so that the safety of the driver can be protected.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
US13/971,840 2013-06-14 2013-08-21 Method for triggering signal and in-vehicle electronic apparatus Abandoned US20140369553A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW102121160A TWI492193B (zh) 2013-06-14 2013-06-14 Method for triggering a signal and in-vehicle electronic apparatus
TW102121160 2013-06-14

Publications (1)

Publication Number Publication Date
US20140369553A1 (en) 2014-12-18

Family

ID=52019254

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/971,840 Abandoned US20140369553A1 (en) 2013-06-14 2013-08-21 Method for triggering signal and in-vehicle electronic apparatus

Country Status (3)

Country Link
US (1) US20140369553A1 (zh)
CN (1) CN104238733B (zh)
TW (1) TWI492193B (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104924907B (zh) * 2015-06-19 2018-09-14 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Method and device for adjusting vehicle speed


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3962803B2 (ja) * 2005-12-16 2007-08-22 International Business Machines Corporation Head detection device, head detection method, and head detection program
JP2007207009A (ja) * 2006-02-02 2007-08-16 Fujitsu Ltd Image processing method and image processing apparatus
JP4728432B2 (ja) * 2008-01-16 2011-07-20 Asahi Kasei Corporation Face posture estimation device, face posture estimation method, and face posture estimation program
CN102034334B (zh) * 2009-09-28 2012-12-19 Automotive Research & Testing Center Driver monitoring method and monitoring system thereof
CN101916496B (zh) * 2010-08-11 2013-10-02 Wuxi Vimicro Corporation System and method for detecting a driver's driving posture
CN101950355B (zh) * 2010-09-08 2012-09-05 National University of Defense Technology Driver fatigue state detection method based on digital video
TWI418478B (zh) * 2010-12-03 2013-12-11 Automotive Res & Testing Ct And a method and system for detecting the driving state of the driver in the vehicle
CN102324166B (zh) * 2011-09-19 2013-06-12 Shenzhen Hanhua Andao Technology Co., Ltd. Fatigue driving detection method and device
TWM426839U (en) * 2011-11-24 2012-04-11 Utechzone Co Ltd Anti-doze apparatus
CN102982316A (zh) * 2012-11-05 2013-03-20 Anweisi Electronic Technology (Guangzhou) Co., Ltd. Device and method for recognizing abnormal driving behavior of a driver

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6130617A (en) * 1999-06-09 2000-10-10 Hyundai Motor Company Driver's eye detection method of drowsy driving warning system
US6243015B1 (en) * 1999-06-17 2001-06-05 Hyundai Motor Company Driver's drowsiness detection method of drowsy driving warning system
US20040071318A1 (en) * 2002-10-09 2004-04-15 Humphrey Cheung Apparatus and method for recognizing images
US7202792B2 (en) * 2002-11-11 2007-04-10 Delphi Technologies, Inc. Drowsiness detection system and method
US20050163383A1 (en) * 2004-01-26 2005-07-28 Samsung Electronics Co., Ltd. Driver's eye image detecting device and method in drowsy driver warning system
US7746235B2 (en) * 2005-03-10 2010-06-29 Delphi Technologies, Inc. System and method of detecting eye closure based on line angles
US7689008B2 (en) * 2005-06-10 2010-03-30 Delphi Technologies, Inc. System and method for detecting an eye
US20100288573A1 (en) * 2007-11-22 2010-11-18 Toyota Jidosha Kabushiki Kaisha Vehicle driver state detection apparatus
US8724858B2 (en) * 2008-05-12 2014-05-13 Toyota Jidosha Kabushiki Kaisha Driver imaging apparatus and driver imaging method
US8433105B2 (en) * 2008-10-08 2013-04-30 Iritech Inc. Method for acquiring region-of-interest and/or cognitive information from eye image
US8547435B2 (en) * 2009-09-20 2013-10-01 Selka Elektronik ve Internet Urunleri San.ve Tic.A.S Mobile security audio-video recorder with local storage and continuous recording loop
US8587440B2 (en) * 2009-09-22 2013-11-19 Automotive Research & Test Center Method and system for monitoring driver
US20120215403A1 (en) * 2011-02-20 2012-08-23 General Motors Llc Method of monitoring a vehicle driver

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160309329A1 (en) * 2014-05-14 2016-10-20 The Regents Of The University Of California Sensor-assisted user authentication
US9813907B2 (en) * 2014-05-14 2017-11-07 The Regents Of The University Of California Sensor-assisted user authentication
US9323984B2 (en) * 2014-06-06 2016-04-26 Wipro Limited System and methods of adaptive sampling for emotional state determination
CN109116839A (zh) * 2017-06-26 2019-01-01 本田技研工业株式会社 车辆控制系统、车辆控制方法及存储介质
US20190370578A1 (en) * 2018-06-04 2019-12-05 Shanghai Sensetime Intelligent Technology Co., Ltd . Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium
US20190370577A1 (en) * 2018-06-04 2019-12-05 Shanghai Sensetime Intelligent Technology Co., Ltd Driving Management Methods and Systems, Vehicle-Mounted Intelligent Systems, Electronic Devices, and Medium
US10915769B2 (en) * 2018-06-04 2021-02-09 Shanghai Sensetime Intelligent Technology Co., Ltd Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
US10970571B2 (en) * 2018-06-04 2021-04-06 Shanghai Sensetime Intelligent Technology Co., Ltd. Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium
US11195301B1 (en) * 2020-07-26 2021-12-07 Nec Corporation Of America Estimation of head yaw in an image

Also Published As

Publication number Publication date
TW201447827A (zh) 2014-12-16
CN104238733A (zh) 2014-12-24
CN104238733B (zh) 2017-11-24
TWI492193B (zh) 2015-07-11

Similar Documents

Publication Publication Date Title
US20140369553A1 (en) Method for triggering signal and in-vehicle electronic apparatus
CN105844128B (zh) 身份识别方法和装置
US20210303829A1 (en) Face liveness detection using background/foreground motion analysis
US11182592B2 (en) Target object recognition method and apparatus, storage medium, and electronic device
CN110955912B (zh) 基于图像识别的隐私保护方法、装置、设备及其存储介质
KR102299847B1 (ko) 얼굴 인증 방법 및 장치
JP4696857B2 (ja) 顔照合装置
US10127439B2 (en) Object recognition method and apparatus
US11321575B2 (en) Method, apparatus and system for liveness detection, electronic device, and storage medium
US10956715B2 (en) Decreasing lighting-induced false facial recognition
US9501691B2 (en) Method and apparatus for detecting blink
CN108496170B (zh) 一种动态识别的方法及终端设备
US9594958B2 (en) Detection of spoofing attacks for video-based authentication
CN110612530B (zh) 用于选择脸部处理中使用的帧的方法
US11694475B2 (en) Spoofing detection apparatus, spoofing detection method, and computer-readable recording medium
US20120194697A1 (en) Information processing device, information processing method and computer program product
US11978231B2 (en) Wrinkle detection method and terminal device
US20230222842A1 (en) Improved face liveness detection using background/foreground motion analysis
CN109753886B (zh) 一种人脸图像的评价方法、装置及设备
TWI466070B (zh) 眼睛搜尋方法及使用該方法的眼睛狀態檢測裝置與眼睛搜尋裝置
CN115690892B (zh) 一种眯眼识别方法、装置、电子设备及存储介质
CN109214316B (zh) 周界防护方法及装置
CN113011222B (zh) 一种活体检测系统、方法及电子设备
CN110399780B (zh) 一种人脸检测方法、装置及计算机可读存储介质
CN112749642A (zh) 一种识别跌倒的方法和装置

Legal Events

Date Code Title Description
AS Assignment

Owner name: ANTARES PHARMA, IPL, AG, SWITZERLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WOTTON, PAUL;KRAUS, HOLGER;SADOWSKI, PETER L.;SIGNING DATES FROM 20130225 TO 20130304;REEL/FRAME:029957/0197

AS Assignment

Owner name: UTECHZONE CO., LTD., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TSOU, CHIA-CHUN;FANG, CHIH-HENG;LIN, PO-TSUNG;SIGNING DATES FROM 20130715 TO 20130814;REEL/FRAME:031066/0752

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION