CN109598237A - A kind of fatigue state detection method and device - Google Patents
A kind of fatigue state detection method and device Download PDFInfo
- Publication number
- CN109598237A CN109598237A CN201811475389.XA CN201811475389A CN109598237A CN 109598237 A CN109598237 A CN 109598237A CN 201811475389 A CN201811475389 A CN 201811475389A CN 109598237 A CN109598237 A CN 109598237A
- Authority
- CN
- China
- Prior art keywords
- detected value
- image
- value
- motor unit
- tof camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The present invention discloses a kind of fatigue state detection method and device.Fatigue state detection method includes: that the TOF camera adjusted in real time using camera parameter images user, obtains the infrared image sequence and range image sequence of user, corresponds between the infrared image of time shooting and the pixel of depth image;Recognition of face is carried out to infrared image sequence using human face recognition model, obtains face location and motor unit detected value, realizes that fatigue state detects using motor unit detected value;According to the corresponding relationship of infrared image and depth image pixel, the corresponding depth information of face location in range image sequence is obtained, utilizes the parameter of depth information adjustment TOF camera.The present invention adjusts the distance between TOF camera and user in real time, to obtain better imaging effect, and then improves the fatigue state testing result obtained based on image recognition processing.
Description
Technical field
The present invention relates to machine learning techniques field more particularly to a kind of fatigue state detection method and devices.
Background technique
Since fatigue driving is to cause the one of the major reasons of traffic accident, in the market for the demand day of fatigue inspection equipment
Benefit increases, and the movements such as closes one's eyes, yawns, bowing using fatigue detecting equipment monitoring to judge driver fatigue state, Jin Erxuan
Corresponding alert levels are selected, this improves traffic safety important in inhibiting for avoiding traffic accident.
In the prior art, be based on RGB camera or infrared camera and to cooperate machine learning algorithm mostly, by human eye,
The real-time status of face or mouth carries out fatigue state detection.Wherein, it is easy using RGB camera by illumination effect, such as
In the non-uniform situation of daylight, non-uniform light interferes greatly the identification of image, leads to fatigue state testing result
Inaccuracy;And use infrared camera can to avoid illumination effect, but due to infrared camera acquisition picture there is no depth information, nothing
Method obtains the distance between camera and driver, when driver or camera position change, can not adjust brightness according to distance,
And then influence the accuracy of fatigue state testing result.
Summary of the invention
The present invention provides a kind of fatigue state detection method and devices, can not accurately identify driving to solve the prior art
The problem of member's fatigue state.
One aspect of the present invention provides a kind of fatigue state detection method, comprising: the TOF adjusted in real time using camera parameter
Camera images user, obtains the infrared image sequence and range image sequence of user, with the infrared image and depth of time shooting
It spends between the pixel of image and corresponds;Recognition of face is carried out to infrared image sequence using human face recognition model, obtains people
Face position and motor unit detected value realize that fatigue state detects using motor unit detected value;According to infrared image and depth
The corresponding relationship of image slices vegetarian refreshments obtains the corresponding depth information of face location in range image sequence, utilizes depth information tune
The parameter of whole TOF camera.
One aspect of the present invention provides a kind of fatigue state detection device, comprising: image acquisition unit, for utilizing camera
The TOF camera that parameter adjusts in real time images user, obtains the infrared image sequence and range image sequence of user, claps with the time
It is corresponded between the pixel of the infrared image and depth image taken the photograph;Image identification unit, for utilizing human face recognition model
Recognition of face is carried out to infrared image sequence, obtains face location and motor unit detected value, it is real using motor unit detected value
Existing fatigue state detection;Camera adjustment unit obtains deep for the corresponding relationship according to infrared image and depth image pixel
The corresponding depth information of face location in image sequence is spent, the parameter of depth information adjustment TOF camera is utilized.
The present invention obtains infrared figure and depth map simultaneously using TOF camera, is not illuminated by the light environment influence based on infrared imaging,
Fatigue state can be accurately identified, and depth map can calculate the variation of the distance between TOF camera and user, according to TOF camera
The distance between user adjusts brightness in real time, further increases the accuracy of fatigue detecting.
Detailed description of the invention
Fig. 1 is the flow chart of the fatigue state detection method shown in the embodiment of the present invention;
Fig. 2 is the situation of change schematic diagram of the motor unit detected value shown in the embodiment of the present invention;
Fig. 3 is the eye closing motion detection flow chart shown in the embodiment of the present invention;
Fig. 4 is the motion detection flow chart of opening one's mouth shown in the embodiment of the present invention;
Fig. 5 is the structural block diagram of the fatigue state detection device shown in the embodiment of the present invention;
Fig. 6 is the hardware structural diagram of the fatigue state detection device shown in the embodiment of the present invention.
Specific embodiment
To make the object, technical solutions and advantages of the present invention clearer, below in conjunction with attached drawing to embodiment party of the present invention
Formula is described in further detail.
Hereinafter, will be described with reference to the accompanying drawings the embodiment of the present invention.However, it should be understood that these descriptions are only exemplary
, and be not intended to limit the scope of the invention.In addition, in the following description, descriptions of well-known structures and technologies are omitted, with
Avoid unnecessarily obscuring idea of the invention.
Term as used herein is not intended to limit the present invention just for the sake of description specific embodiment.Used here as
Word " one ", " one (kind) " and "the" etc. also should include " multiple ", " a variety of " the meaning, unless in addition context clearly refers to
Out.In addition, the terms "include", "comprise" as used herein etc. show the presence of the feature, step, operation and/or component,
But it is not excluded that in the presence of or add other one or more features, step, operation or component.
There are all terms (including technical and scientific term) as used herein those skilled in the art to be generally understood
Meaning, unless otherwise defined.It should be noted that term used herein should be interpreted that with consistent with the context of this specification
Meaning, without that should be explained with idealization or excessively mechanical mode.
Shown in the drawings of some block diagrams and/or flow chart.It should be understood that some sides in block diagram and/or flow chart
Frame or combinations thereof can be realized by computer program instructions.These computer program instructions can be supplied to general purpose computer,
The processor of special purpose computer or other programmable data processing units, so that these instructions are when executed by this processor can be with
Creation is for realizing function/operation device illustrated in these block diagrams and/or flow chart.
Therefore, technology of the invention can be realized in the form of hardware and/or software (including firmware, microcode etc.).Separately
Outside, technology of the invention can take the form of the computer program product on the machine readable media for being stored with instruction, the meter
Calculation machine program product uses for instruction execution system or instruction execution system is combined to use.In the context of the present invention,
Machine readable media, which can be, can include, store, transmitting, propagating or transmitting the arbitrary medium of instruction.For example, machine readable Jie
Matter can include but is not limited to electricity, magnetic, optical, electromagnetic, infrared or semiconductor system, device, device or propagation medium.It is machine readable
The specific example of medium includes: magnetic memory apparatus, such as tape or hard disk (HDD);Light storage device, such as CD (CD-ROM);It deposits
Reservoir, such as random access memory (RAM) or flash memory;And/or wire/wireless communication link.
One aspect of the present invention provides a kind of fatigue state detection method.
Fig. 1 is the flow chart of the fatigue state detection method shown in the embodiment of the present invention, as shown in Figure 1, the present embodiment
Method includes:
S110, the TOF camera adjusted in real time using camera parameter image user, obtain user infrared image sequence and
Range image sequence is corresponded between the infrared image of time shooting and the pixel of depth image.
TOF camera is the camera using flight time (Time of Flight, TOF) technology, i.e. sensor is issued through adjusting
The near infrared light of system meets object back reflection, and sensor is by calculating light transmitting and reflection interval difference or phase difference, to obtain quilt
Shoot the distance of object, to generate depth information, furthermore shot in conjunction with traditional camera, can by the three-D profile of object with
The topographic map mode that different colours represent different distance shows.
S120 carries out recognition of face to infrared image sequence using human face recognition model, obtains face location and movement is single
First detected value realizes that fatigue state detects using motor unit detected value.
S130 obtains face position in range image sequence according to the corresponding relationship of infrared image and depth image pixel
Corresponding depth information is set, the parameter of the TOF camera is adjusted using depth information.
Wherein, depth information indicates the distance between TOF camera and face.
The present embodiment obtains infrared figure and depth map using TOF camera simultaneously, and infrared imaging is not illuminated by the light environment influence, can
To accurately identify fatigue state, and depth map can calculate the variation of the distance between TOF camera and user, be adjusted according to distance bright
Degree, further increases the accuracy of fatigue detecting.
Fig. 2 is the situation of change schematic diagram of the motor unit detected value shown in the embodiment of the present invention, and Fig. 3 is that the present invention is implemented
The eye closing motion detection flow chart exemplified, Fig. 4 are the motion detection flow chart of opening one's mouth shown in the embodiment of the present invention, below with reference to
As Fig. 2-4 couples of above-mentioned steps S110-S130 are described in detail.
Firstly, executing step S110, i.e., user is imaged using the TOF camera that camera parameter adjusts in real time, obtain user
Infrared image sequence and range image sequence, it is a pair of between the infrared image of time shooting and the pixel of depth image one
It answers.
The present embodiment utilizes the available infrared image sequence of TOF camera and range image sequence, due to infrared image sequence
Column and range image sequence are the infrared image frame and depth image for being generated by the same sensor, therefore being shot with the time
Pixel between frame is one-to-one.The present embodiment can use the infrared image frame and depth map that the same time shoots
As face location in the every frame depth image of corresponding relationship acquisition between frame pixel, and then face is obtained with respect to TOF camera
Distance.
After obtaining infrared image sequence and range image sequence, step S120 is continued to execute, that is, utilizes recognition of face
Model carries out recognition of face to infrared image sequence, obtains face location and motor unit detected value, is detected using motor unit
Value realizes fatigue state detection.
The present embodiment obtains face location and motor unit detected value according to following methods: first with OpenFace face
Linear support vector device in identification model identifies infrared image sequence, obtains the pixel information of face location, with
And the situation of change of motor unit detected value is obtained, the situation of change of motor unit detected value includes the variation of lip motion detected value
Situation and eye motion detected value situation of change, the variation of lip motion detected value indicate whether the movement for occurring to open mouth,
The situation of change of eye motion detected value indicates whether that blink movement occurs;Then according to the situation of change of motor unit detected value
Carry out fatigue state detection;Finally according to yawn movement and/or the eye closing action recognition fatigue state of generation, such as work as generation
Movement and eye closing yawn when acting, identification user is in very fatigue state, when only yawn act when, identification user
In compared with fatigue state, when eye closing movement only occurs, identification user is in general fatigue state.
In the present embodiment, motor unit detected value (i.e. Action Units, AU) computational threads can be opened up, using dynamic
Make the calculating that unit detected value computational threads carry out face location detection and AU value, such as in OpenFace human face recognition model
The existence of AU is predicted using linear support vector device (linear kernel Support VectorMachine),
It is determining there are when AU, is returning (linear kernel Support Vector Regression) using linear support vector
AU value is calculated.It opens up fatigue behaviour detection thread and carries out fatigue detecting.
In OpenFace, the present embodiment AU25 and AU26 belong to lip motion detection, yawn for identification;AU45 belongs to
In blink motion detection, close one's eyes for identification.
It can be calculated in the red image of every frame in OpenFace human face recognition model using linear support vector recurrence
AU45 value, for infrared image sequence, when there are eye closing movement, the situation of change of AU45 value is calculated as shown in Fig. 2,
The value of AU45 can steeply rise, and steadily keep a period of time, just fall after rise rapidly again when opening eyes, then according to this rule
Rule the present embodiment is greater than second time threshold in the time that eye motion detected value is persistently greater than blink characteristic value, and determination is closed
Eye movement is made.
As shown in figure 3, according to the AU45 value being calculated, it can be determined that prominent with the presence or absence of AU45 value in infrared image sequence
The the first infrared image frame Up Frame become, when not having, there are determine that there is no closing one's eyes when the first infrared image frame UpFrame
Movement is in normal Normal state;When there are the first infrared image frame Up Frame, mutation value detection Up is carried out
Detect is determined dynamic there is no closing one's eyes when not detecting that AU45 value is equal to or more than blink characteristic value Hold Frame
Make, is in normal Normal state;When detecting that AU45 value is equal to or more than blink characteristic value, the detection Close that closes one's eyes is carried out
EyeDetect, when the duration Hold on Time that AU45 value is equal to or more than blink characteristic value is greater than second time threshold
It when T2, determines and eye closing movement occurs, otherwise, be equal to or more than the duration Hold on Time of blink characteristic value in AU45 value
When no more than second time threshold T2, determines there is no eye closing movement, be in normal Normal state.
Likewise, using linear support vector recurrence, that every frame can be calculated is red in OpenFace human face recognition model
AU25 value and AU26 value in outer image, for infrared image sequence, when exist yawn act when, be calculated AU25 value and
The situation of change of AU26 value is as shown in Fig. 2, AU25 value and AU26 value can steeply rise, and when steadily keeping one section
Between, it just falls after rise rapidly when closing mouth, opens one's mouth then being persistently greater than according to this regular the present embodiment in lip motion detected value again
When the time of characteristic value is greater than first time threshold, movement of yawning is determined.
As shown in figure 4, according to the AU25 value and AU26 value that are calculated, it can be determined that whether there is in infrared image sequence
Second infrared image frame Up Frame ' of AU25 value and AU26 value mutation, when not having, there are the second infrared image frame Up Frame '
When determine there is no yawn movement, be in normal Normal state;When there are the second infrared image frame Up Frame ',
Mutation value detection Up Detect is carried out, is not detecting AU25 value and AU26 value equal to or more than the characteristic value Hold that opens one's mouth
When Frame ', determines there is no movement of yawning, be in normal Normal state;Detecting that AU25 value and AU26 value be equal to
Or greater than yawn characteristic value when, yawn and detect Open Mouth Detect, when AU25 value and AU26 value are equal to or greatly
When the duration Hold on Time ' for characteristic value of opening one's mouth is greater than first time threshold T1, movement of yawning is determined, it is no
Then, it is not more than at the first time in AU25 value and AU26 value equal to or more than the duration Hold on Time ' for characteristic value of opening one's mouth
When threshold value T1, determines there is no movement of yawning, be in normal Normal state.
After realizing fatigue state detection, step S130 is continued to execute, i.e., according to infrared image and depth image pixel
The corresponding relationship of point obtains the corresponding depth information of face location in range image sequence, adjusts TOF camera using depth information
Parameter.
Depth information in the present embodiment is each pixel in the corresponding human face region of face location in every frame depth image
The average value of the sum of depth value, camera parameter include gain parameter Gain and laser brightness parameter Pulsecnt.
The present embodiment can open up camera parameter adjustment thread and carry out parameter adjustment, utilize TOF indicated by depth information
The distance between camera and face turn TOF camera down when determining that user and TOF camera distance become close according to depth information
Gain parameter Gain and laser brightness parameter Pulsecnt, keeps brightness dimmed;User and TOF camera are being determined according to depth information
When distance becomes remote, the gain parameter Gain and laser brightness parameter Pulsecnt of TOF camera are tuned up, so that brightness is brightened, to reach
Better imaging effect keeps fatigue detecting more acurrate.
The present embodiment enables multithreading and carries out fatigue state detection, can be improved detection speed, and utilize OpenFace people
Face identification model, the situation of change based on motor unit detected value identify that the states such as close one's eyes, yawn carry out fatigue detecting in turn,
Logic is simple, detection effect accuracy is high.
Another aspect of the present invention provides a kind of fatigue state detection device.
Fig. 5 is the structural block diagram of the fatigue state detection device shown in the embodiment of the present invention, as shown in figure 5, the present embodiment
Control device include:
Image acquisition unit 51, the TOF camera for being adjusted in real time using camera parameter image user, obtain user's
Infrared image sequence and range image sequence, it is a pair of between the infrared image of time shooting and the pixel of depth image one
It answers;
Image identification unit 52 obtains people for carrying out recognition of face to infrared image sequence using human face recognition model
Face position and motor unit detected value realize that fatigue state detects using motor unit detected value;
Camera adjustment unit 53 obtains depth map for the corresponding relationship according to infrared image and depth image pixel
The corresponding depth information of face location as described in sequence utilizes the parameter of depth information adjustment TOF camera.
Depth information in the present embodiment is each pixel depth in the corresponding human face region of face location in depth image
The average value of the sum of value.
In the present embodiment, image identification unit 52 be used for using in OpenFace human face recognition model linearly support to
Measuring device identifies infrared image sequence, obtains the pixel information of face location, and obtain motor unit detected value
Situation of change, the situation of change of motor unit detected value include that lip motion detected value situation of change and eye motion detected value become
Change situation, the variation of lip motion detected value indicates whether the movement for occurring to open mouth, the variation feelings of eye motion detected value
Condition indicates whether that blink movement occurs;Fatigue state detection is carried out according to the situation of change of motor unit detected value.
The time that image identification unit 52 is also used to persistently be greater than in lip motion detected value characteristic value of opening one's mouth is greater than first
When time threshold, movement of yawning is determined;It is greater than the in the time that eye motion detected value is persistently greater than blink characteristic value
Two time thresholds determine and eye closing movement occur;According to yawn movement and/or the eye closing action recognition fatigue state of generation.
In the present embodiment, camera adjustment unit 53 is used to determine that user and TOF camera distance become according to depth information
When close, turn the gain parameter and laser brightness parameter of TOF camera down, keep brightness dimmed;According to depth information determine user with
When TOF camera distance becomes remote, the gain parameter and laser brightness parameter of TOF camera are tuned up, brightness is made to brighten.
For device embodiment, since it corresponds essentially to embodiment of the method, so related place is referring to method reality
Apply the part explanation of example.The apparatus embodiments described above are merely exemplary, wherein described be used as separation unit
The unit of explanation may or may not be physically separated, and component shown as a unit can be or can also be with
It is not physical unit, it can it is in one place, or may be distributed over multiple network units.It can be according to actual
It needs that some or all of the modules therein is selected to achieve the purpose of the solution of this embodiment.Those of ordinary skill in the art are not
In the case where making the creative labor, it can understand and implement.
Fatigue state detection device provided by the invention can also pass through hardware or software and hardware by software realization
In conjunction with mode realize.Taking software implementation as an example, referring to shown in Fig. 6, fatigue state detection device provided by the invention may include
Processor 601, the machine readable storage medium 602 for being stored with machine-executable instruction.Processor 601 and machine readable storage are situated between
Matter 602 can be communicated via system bus 603.Also, by read and execute in machine readable storage medium 602 with control logic
Above-described fatigue state detection method can be performed in corresponding machine-executable instruction, processor 601.
Machine readable storage medium 602 mentioned in the present invention can be any electronics, magnetism, optics or other physics and deposit
Storage device may include or store information, such as executable instruction, data, etc..For example, machine readable storage medium may is that
RAM (Radom Access Memory, random access memory), volatile memory, nonvolatile memory, flash memory, storage are driven
Dynamic device (such as hard disk drive), solid state hard disk, any kind of storage dish (such as CD, DVD) or similar storage are situated between
Matter or their combination.
Disclosed example according to the present invention, the present invention also provides a kind of including machine-executable instruction machine readable deposits
Machine readable storage medium 602 in storage media, such as Fig. 6, machine-executable instruction can be by fatigue state detection devices
Processor 601 is executed to realize above-described fatigue state detection method.
The above description is merely a specific embodiment, under above-mentioned introduction of the invention, those skilled in the art
Other improvement or deformation can be carried out on the basis of the above embodiments.It will be understood by those skilled in the art that above-mentioned tool
Body description only preferably explains that the purpose of the present invention, protection scope of the present invention should be subject to the protection scope in claims.
Claims (10)
1. a kind of fatigue state detection method, which is characterized in that the described method includes:
User is imaged using the TOF camera that camera parameter adjusts in real time, obtains the infrared image sequence and depth image of user
Sequence is corresponded between the infrared image of time shooting and the pixel of depth image;
Recognition of face is carried out to the infrared image sequence using human face recognition model, obtains face location and motor unit detection
Value realizes that fatigue state detects using the motor unit detected value;
According to the corresponding relationship of infrared image and depth image pixel, face location described in the range image sequence is obtained
Corresponding depth information adjusts the parameter of the TOF camera using the depth information.
2. the method according to claim 1, wherein described utilize human face recognition model to the infrared image sequence
Column carry out recognition of face, obtain face location and motor unit detected value, realize tired shape using the motor unit detected value
State detection, comprising:
The infrared image sequence is identified using the linear support vector device in OpenFace human face recognition model, is obtained
The pixel information of face location, and obtain the situation of change of the motor unit detected value, the motor unit detected value
Situation of change include lip motion detected value situation of change and eye motion detected value situation of change, the lip motion detection
The variation of value indicates whether the movement for occurring to open mouth, and the situation of change of the eye motion detected value indicates whether to blink
Eye movement is made;
Fatigue state detection is carried out according to the situation of change of the motor unit detected value.
3. according to the method described in claim 2, it is characterized in that, the situation of change according to the motor unit detected value
Carry out fatigue state detection, comprising:
When the time that the lip motion detected value is persistently greater than characteristic value of opening one's mouth being greater than first time threshold, determines and occur to beat
Yawn movement;
It is greater than second time threshold in the time that the eye motion detected value is persistently greater than blink characteristic value, determination is closed one's eyes
Movement;
According to yawn movement and/or the eye closing action recognition fatigue state of generation.
4. the method according to claim 1, wherein the depth information is face location described in depth image
The average value of the sum of each pixel depth value in corresponding human face region.
5. the method according to claim 1, wherein described adjust the TOF camera according to the depth information
Parameter, comprising:
When determining that user and TOF camera distance become close according to the depth information, turn down TOF camera gain parameter and
Laser brightness parameter keeps brightness dimmed;
When determining that user and TOF camera distance become remote according to the depth information, tune up TOF camera gain parameter and
Laser brightness parameter, makes brightness brighten.
6. a kind of fatigue state detection device, which is characterized in that described device includes:
Image acquisition unit, the TOF camera for being adjusted in real time using camera parameter image user, obtain the infrared figure of user
As sequence and range image sequence, corresponded between the infrared image of time shooting and the pixel of depth image;
Image identification unit obtains face for carrying out recognition of face to the infrared image sequence using human face recognition model
Position and motor unit detected value realize that fatigue state detects using the motor unit detected value;
Camera adjustment unit obtains the depth image for the corresponding relationship according to infrared image and depth image pixel
The corresponding depth information of face location described in sequence, the parameter of the TOF camera is adjusted using the depth information.
7. device according to claim 6, which is characterized in that
Described image recognition unit, for utilizing the linear support vector device in OpenFace human face recognition model to the two dimension
Image sequence is identified, obtains the pixel information of face location, and obtain the variation feelings of the motor unit detected value
Condition, the situation of change of the motor unit detected value include lip motion detected value situation of change and the variation of eye motion detected value
Situation, the variation of the lip motion detected value indicate whether the movement for occurring to open mouth, the eye motion detected value
Situation of change indicates whether that blink movement occurs;Fatigue state inspection is carried out according to the situation of change of the motor unit detected value
It surveys.
8. device according to claim 7, which is characterized in that
Described image recognition unit, the time for being also used to persistently be greater than in the lip motion detected value characteristic value of opening one's mouth are greater than the
When one time threshold, movement of yawning is determined;Persistently it is greater than the time of blink characteristic value in the eye motion detected value
Greater than second time threshold, determines and eye closing movement occurs;According to yawn movement and/or the eye closing action recognition fatigue shape of generation
State.
9. device according to claim 6, which is characterized in that the depth information is face location described in depth image
The average value of the sum of each pixel depth value in corresponding human face region.
10. device according to claim 6, which is characterized in that
The camera adjustment unit, for adjusting when determining that user and TOF camera distance become close according to the depth information
The gain parameter and laser brightness parameter of small TOF camera, keep brightness dimmed;According to the depth information determine user with it is described
When TOF camera distance becomes remote, the gain parameter and laser brightness parameter of TOF camera are tuned up, brightness is made to brighten.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811475389.XA CN109598237A (en) | 2018-12-04 | 2018-12-04 | A kind of fatigue state detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811475389.XA CN109598237A (en) | 2018-12-04 | 2018-12-04 | A kind of fatigue state detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109598237A true CN109598237A (en) | 2019-04-09 |
Family
ID=65961290
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811475389.XA Pending CN109598237A (en) | 2018-12-04 | 2018-12-04 | A kind of fatigue state detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109598237A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110334629A (en) * | 2019-06-26 | 2019-10-15 | 湖北秉正讯腾科技有限公司 | Can multi-faceted detecting distance method, apparatus and readable storage medium storing program for executing |
CN113504890A (en) * | 2021-07-14 | 2021-10-15 | 炬佑智能科技(苏州)有限公司 | ToF camera-based speaker assembly control method, apparatus, device, and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104504856A (en) * | 2014-12-30 | 2015-04-08 | 天津大学 | Fatigue driving detection method based on Kinect and face recognition |
CN105769120A (en) * | 2016-01-27 | 2016-07-20 | 深圳地平线机器人科技有限公司 | Fatigue driving detection method and device |
CN106851123A (en) * | 2017-03-09 | 2017-06-13 | 广东欧珀移动通信有限公司 | Exposal control method, exposure-control device and electronic installation |
CN107333070A (en) * | 2017-07-12 | 2017-11-07 | 江苏集萃有机光电技术研究所有限公司 | Image acquiring method and diagnostic equipment |
CN108537155A (en) * | 2018-03-29 | 2018-09-14 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
-
2018
- 2018-12-04 CN CN201811475389.XA patent/CN109598237A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104504856A (en) * | 2014-12-30 | 2015-04-08 | 天津大学 | Fatigue driving detection method based on Kinect and face recognition |
CN105769120A (en) * | 2016-01-27 | 2016-07-20 | 深圳地平线机器人科技有限公司 | Fatigue driving detection method and device |
CN106851123A (en) * | 2017-03-09 | 2017-06-13 | 广东欧珀移动通信有限公司 | Exposal control method, exposure-control device and electronic installation |
CN107333070A (en) * | 2017-07-12 | 2017-11-07 | 江苏集萃有机光电技术研究所有限公司 | Image acquiring method and diagnostic equipment |
CN108537155A (en) * | 2018-03-29 | 2018-09-14 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110334629A (en) * | 2019-06-26 | 2019-10-15 | 湖北秉正讯腾科技有限公司 | Can multi-faceted detecting distance method, apparatus and readable storage medium storing program for executing |
CN110334629B (en) * | 2019-06-26 | 2022-12-23 | 如隆智能科技(嘉兴)有限公司 | Method and device capable of detecting distance in multiple directions and readable storage medium |
CN113504890A (en) * | 2021-07-14 | 2021-10-15 | 炬佑智能科技(苏州)有限公司 | ToF camera-based speaker assembly control method, apparatus, device, and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111133473B (en) | Camera pose determination and tracking | |
JP7076368B2 (en) | Range gate type depth camera parts | |
CN109670421B (en) | Fatigue state detection method and device | |
Weikersdorfer et al. | Event-based 3D SLAM with a depth-augmented dynamic vision sensor | |
TW201915831A (en) | System and method for entity recognition | |
KR101550474B1 (en) | Method and device for finding and tracking pairs of eyes | |
US9779511B2 (en) | Method and apparatus for object tracking and 3D display based thereon | |
JP2021114307A (en) | Information processing device, information processing method, and program | |
JP2017223648A (en) | Reducing power consumption for time-of-flight depth imaging | |
CN108446585A (en) | Method for tracking target, device, computer equipment and storage medium | |
US20120026335A1 (en) | Attribute-Based Person Tracking Across Multiple Cameras | |
CN102542552B (en) | Frontlighting and backlighting judgment method of video images and detection method of shooting time | |
CN112396116B (en) | Thunder and lightning detection method and device, computer equipment and readable medium | |
JP2017010337A (en) | Pupil detection program, pupil detection method, pupil detection apparatus and line of sight detection system | |
CN110046560A (en) | A kind of dangerous driving behavior detection method and camera | |
CN105469427B (en) | One kind is for method for tracking target in video | |
WO2017165332A1 (en) | 2d video analysis for 3d modeling | |
CN112153363B (en) | Method and system for 3D corneal position estimation | |
US10679376B2 (en) | Determining a pose of a handheld object | |
CN109598237A (en) | A kind of fatigue state detection method and device | |
US10559087B2 (en) | Information processing apparatus and method of controlling the same | |
Su et al. | An efficient human-following method by fusing kernelized correlation filter and depth information for mobile robot | |
TWI618647B (en) | System and method of detection, tracking and identification of evolutionary adaptation of vehicle lamp | |
JP6396051B2 (en) | Area state estimation device, area state estimation method, program, and environment control system | |
JP2020160901A (en) | Object tracking device and object tracking method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20190409 |