CN104573725B - Blind driving detection method based on overlooking features - Google Patents

Blind driving detection method based on overlooking features

Info

Publication number
CN104573725B
CN104573725B CN201510013113.XA
Authority
CN
China
Prior art keywords
face
driver
eye
image
upper eyelid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510013113.XA
Other languages
Chinese (zh)
Other versions
CN104573725A (en)
Inventor
张卡
何佳
尼秀明
徐小伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ANHUI QINGXIN INTERNET INFORMATION TECHNOLOGY Co Ltd
Original Assignee
ANHUI QINGXIN INTERNET INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ANHUI QINGXIN INTERNET INFORMATION TECHNOLOGY Co Ltd filed Critical ANHUI QINGXIN INTERNET INFORMATION TECHNOLOGY Co Ltd
Priority to CN201510013113.XA priority Critical patent/CN104573725B/en
Publication of CN104573725A publication Critical patent/CN104573725A/en
Application granted granted Critical
Publication of CN104573725B publication Critical patent/CN104573725B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a blind driving detection method based on overlooking features. The method uses video image processing technology to monitor the overlooking features of the driver's eyes and judges in real time, according to the length of time the driver remains in the overlooking state, whether blind driving behavior exists. It has the characteristics of high monitoring accuracy, few missed and false detections, little environmental influence, high speed, and low cost.

Description

Blind driving detection method based on overlooking characteristics
Technical Field
The invention relates to the technical field of safe driving, in particular to the technical field of blind driving detection.
Background
With the rapid growth of automobile ownership, the convenience people enjoy in transportation has been accompanied by frequent traffic accidents, causing huge personal and economic losses. Among the many factors causing traffic accidents, "blind driving" — the driver looking down to play with a mobile phone or other handheld terminal — is an important cause. Because it is not prohibited by traffic regulations in the way that drunk driving or driving while making phone calls is, blind driving is a factor generally ignored; however, the harm it causes is far greater. In particular, when playing with a mobile phone, the driver generally looks down, the line of sight deviates seriously from straight ahead, effective observation of the road conditions and surrounding environment is lost, and the driver's ability to react is greatly weakened, so that once an emergency occurs a traffic accident is very likely. At a speed of 65 km/h, looking down at a mobile phone for 2 seconds is equivalent to driving blind for about 36 meters, and in an emergency at least 20 meters of braking distance is needed; if the driver spends 12 seconds opening and reading a microblog on a smartphone, the vehicle travels about 216 meters blind. In addition, playing with a mobile phone while driving easily causes the driver to miss traffic signals or fail to see signboards and other signs.
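The stopping-distance figures quoted above can be checked with a short calculation (illustrative only; not part of the patented method):

```python
# Distance travelled while the driver's eyes are off the road,
# at a given speed in km/h for a given number of seconds.
def blind_distance_m(speed_kmh: float, seconds: float) -> float:
    return speed_kmh / 3.6 * seconds

d_glance = blind_distance_m(65, 2)     # ~36 m for a 2-second glance
d_weibo = blind_distance_m(65, 12)     # ~217 m for 12 seconds reading a microblog
```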
For traffic accidents caused by blind driving, the driving behavior of drivers currently cannot be monitored in real time; supervision departments of some passenger and freight transport enterprises can only rely on after-the-fact inference as the basis for assigning responsibility, and cannot monitor and prevent in advance. Therefore, monitoring the blind driving behavior of drivers in real time and feeding it back to transport enterprise supervision departments in time plays an important role in preventing major traffic accidents.
Disclosure of Invention
In view of the above problems, the technical problem to be solved by the present invention is to provide a blind driving detection method.
The specific technical scheme is as follows:
a blind driving detection method based on overlooking characteristics comprises the following steps:
11 Before the detection starts, a face detection classifier file is loaded;
12 Detecting start, acquiring a head image of a driver in real time and converting the head image into a gray image;
13 Based on the face detection classifier file loaded in the step 11) and the gray level image obtained in the step 12), judging whether a face exists in the current frame, if so, accurately finding the contours of the upper eyelids of the left eye and the right eye of the driver in the gray level image, and acquiring the contour radius of the upper eyelids through a curve fitting algorithm; if not, directly entering step 14);
14) If the face does not exist, judging whether the driver is in a blind driving state based on the time characteristic of face disappearance; if the face exists, judging whether the driver is in a blind driving state based on the upper eyelid contour characteristics of the left and right eyes;
15) When the driver is in the blind driving state, giving an alarm to the driver or sending a real-time video of the driver in the blind driving state to a remote monitoring server.
Further, in step 12) the image is converted into a gray image according to formula [1];
f(x,y)=0.299r(x,y)+0.587g(x,y)+0.114b(x,y) [1]
where f (x, y) is the gray value at pixel (x, y) in the transformed image, and r (x, y), g (x, y), b (x, y) are the values of the three channels red, green, and blue at pixel (x, y) in the original image.
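The conversion in formula [1] uses the standard ITU-R BT.601 luma weighting; a minimal NumPy sketch:

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale per formula [1]:
    f(x,y) = 0.299*r + 0.587*g + 0.114*b."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    f = 0.299 * r + 0.587 * g + 0.114 * b
    return f.astype(np.uint8)
```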
Further, the step 13) includes the steps of:
31 Obtaining an effective area of face detection, wherein if the correct face position is not obtained in the previous frame, the effective area of face detection is a full image area; if the correct face position exists in the previous frame, the effective area is the effective area of face detection formed by respectively expanding the rectangular width of half of the face to the left and the right and respectively expanding the rectangular height of half of the face to the upper and the lower on the basis of the original face rectangular area;
32 Based on the adaboost classifier, performing face detection;
33 Judging whether the human face exists, if so, emptying the time stamp list, entering step 34), and if not, putting the current frame time stamp into the time stamp list, and entering step 14);
34) Based on the "three courts, five eyes" (三庭五眼) layout rule of the human face, respectively obtaining the position sub-areas rect_left and rect_right of the left and right eyes according to formula [2] and formula [3];
wherein rect_face is the face position area in the image;
35 Obtain upper eyelid profile curves for the left and right eyes;
36 Based on least square fitting theoretical formulas [7], [8] and [9], obtaining a circle radius R and a circle center corresponding to the upper eyelid contour curve;
where n is the number of points participating in the fitting, and x_i, y_i are the coordinates of the points participating in the fitting;
37 The final upper eyelid contour radius is obtained by calculating the mean value of the upper eyelid contour radii of the left and right eyes, putting the mean value into a radius list, and putting the ordinate of the center of the upper eyelid contour of the left eye into a center list.
Further, the step 35) specifically includes the following steps:
41 Obtaining an upper eyelid contour curve of a left eye, firstly carrying out image blurring treatment, and carrying out mean value filtering by adopting a template shown in a formula [4 ];
42 Image enhancement, correcting the image based on gamma filtering theory according to formula [5], enhancing the contrast of the eye image, and the effect is as shown in fig. 3;
wherein, g (x, y) is the gray value of the pixel at the enhanced image (x, y), f (x, y) is the gray value of the pixel at the original image (x, y), γ is the gamma filter coefficient, when γ is less than 1, the contrast of the low gray value area can be significantly enhanced, when γ is greater than 1, the contrast of the high gray value area can be significantly enhanced;
43) Based on an improved Laplace operator template, formula [6], acquiring edge characteristics of the eye, and performing binarization based on the maximum between-class variance (Otsu) algorithm;
44 By morphological closing operations, connecting adjacent edges to form connected regions;
45) Selecting the eye connected region by removing connected regions with smaller areas and then selecting the connected region with the largest area as the eye connected region;
46 Obtain an uppermost edge curve of the connected region of the eye as an upper eyelid contour curve of the eye;
47) Obtaining the upper eyelid contour curve of the right eye with reference to steps 41)-46).
Further, the step 45) includes the steps of:
51) Removing the connected regions with smaller areas;
52 Selecting a largest-area connected region as a candidate connected region of the eye;
53 Judging whether other connected regions exist at the similar height of the candidate connected region, and if so, merging the other connected regions into the candidate connected region to form the final eye connected region.
Further, the step 14) includes the steps of:
61 Judging whether the face of the driver in the current frame is in a disappearing state, specifically, according to the number of timestamps of face disappearing in the timestamp list, if the number of the timestamps is more than 0, entering step 62); if not, go to step 63);
62 According to the formula [10], judging whether the driver is in a blind driving state based on the time characteristic of face disappearance,
wherein exist = 1 indicates that the driver is in the blind driving state, timestamp_begin is the timestamp of the beginning of face disappearance in the timestamp list, timestamp_end is the timestamp of the end of face disappearance in the timestamp list, and T_n is a timestamp interval threshold indicating how long the driver's face must seriously deviate from straight ahead for the driver to be judged in a blind driving state;
63 Based on the upper eyelid contour characteristics of the eyes, whether the driver is in the blind driving state is judged, and the specific method is that in unit time T, the number of frames of the driver in the overlooking state is counted according to a formula [11], a formula [12] and a formula [13], and whether the driver is in the blind driving state is judged according to a formula [14 ];
Tr=0.4*R1+0.6*R2 [12]
where exist = 1 indicates that the driver is in the blind driving state, N is the total number of frames in unit time, N_R is the number of frames in the overlooking state in unit time, R[i] is the radius of the ith frame in the radius list, R1 is the upper eyelid radius when looking forward normally, R2 is the upper eyelid radius when looking down, c[i] is the upper eyelid contour circle center position of the ith frame in the circle center list, Htop is the upper edge position of the upper eyelid contour of the left eye, and Hbottom is the lower edge position of the upper eyelid contour of the left eye.
The invention has the following beneficial effects: it adopts video image processing technology to monitor the overlooking features of the driver's eyes, judges in real time whether blind driving behavior exists according to the length of the driver's overlooking time, and has the characteristics of high monitoring accuracy, few missed and false detections, little environmental influence, high speed, low cost and the like.
Drawings
FIG. 1 is a logic flow diagram of the system of the present invention;
FIG. 2 is a local region segmentation effect map of an eye;
FIG. 3 is an enhancement effect map of an eye image;
FIG. 4 is a graph of the edge detection effect of an eye image;
FIG. 5 is a diagram of connected region effects for an eye;
FIG. 6 is a graph of the effect of the upper eyelid profile curve for the eye;
FIG. 7 is a graph of the effect of curve fitting a circle on the upper eyelid profile of an eye;
In fig. 2 to 7, (a) corresponds to the related image during normal forward-looking driving, and (b) corresponds to the related image when the driver looks down.
Detailed Description
In order to more clearly understand the technical solution of the present invention, the present invention will be further described with reference to the accompanying drawings and examples.
Referring to fig. 1, the method of the present embodiment is described based on the whole blind driving detection system, which includes an initialization module, an acquisition module, a detection module, a determination module, and a voice communication module, and the specific implementation steps are as follows:
s1, executing an initialization module;
the initialization module has the function that when the system is started, necessary classifier learning files, mainly face detection classifier files, are loaded;
s2, executing an acquisition module;
the acquisition module has the functions of acquiring a driving state image of a driver in real time, mainly a head image of the driver, and converting the driving state image into a gray image according to a formula [1 ];
f(x,y)=0.299r(x,y)+0.587g(x,y)+0.114b(x,y) [1]
wherein f (x, y) is the gray value of the pixel (x, y) in the transformed image, and r (x, y), g (x, y), b (x, y) are the values of the three channels of red, green and blue at the pixel (x, y) in the original image;
s3, executing a detection module;
The detection module functions to accurately locate the upper eyelid contours of the driver's left and right eyes in the image and to obtain the contour radius of the upper eyelid through a curve fitting algorithm, in preparation for the judgment module. The basis is that, with the acquisition equipment fixed, the upper eyelid contour radius when the driver looks forward normally differs obviously from that when looking down: the lower the gaze drops, the larger the upper eyelid contour radius becomes. The specific steps are as follows:
s31, positioning an eye area, mainly finding out local subregions of the left eye and the right eye, and specifically comprising the following steps:
s311, obtaining a face detection effective area, wherein the effective detection area is a full image area if the correct face position is not obtained in the previous frame, and the effective detection area is an area formed by expanding the width of a half face rectangle to the left and the right and expanding the height of the half face rectangle to the upper and the lower on the basis of the original face rectangle area if the correct face position exists in the previous frame;
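Step S311 can be sketched as follows; clipping the expanded rectangle to the image bounds is an assumption, since the translation does not say how border cases are handled:

```python
def expand_face_region(face, img_w, img_h):
    """Expand the previous-frame face rectangle (x, y, w, h) by half its
    width on each side and half its height above and below, clipped to
    the image, giving the effective face-detection area (step S311)."""
    x, y, w, h = face
    nx = max(0, x - w // 2)
    ny = max(0, y - h // 2)
    nw = min(img_w, x + w + w // 2) - nx
    nh = min(img_h, y + h + h // 2) - ny
    return (nx, ny, nw, nh)
```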
s312, detecting the human face based on an adaboost classifier;
s313, judging whether the human face exists, if so, emptying the time stamp list, and entering the step S314, otherwise, putting the time stamp of the current frame into the time stamp list, and entering the step S4;
S314, respectively obtaining the position sub-areas rect_left and rect_right of the left and right eyes according to formula [2] and formula [3], based on the "three courts, five eyes" (三庭五眼) layout rule of the human face; the local sub-area of the left eye is shown in figure 2;
wherein rect_face is the face position area in the image;
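Formulas [2] and [3] themselves are not reproduced in the translation, so the following sketch of the "three courts, five eyes" eye-localization step uses illustrative fractions that are assumptions, not the patented values:

```python
def eye_subregions(rect_face):
    """Hypothetical reading of the 'three courts, five eyes' rule: the face
    width is divided into five 'eye widths' and the eyes are assumed to sit
    about one third of the way down the face, one eye-width in from each
    side.  All fractions here are illustrative assumptions."""
    x, y, w, h = rect_face
    eye_w, eye_h = w // 5, h // 6
    eye_y = y + h // 3
    rect_left = (x + eye_w, eye_y, eye_w, eye_h)
    rect_right = (x + 3 * eye_w, eye_y, eye_w, eye_h)
    return rect_left, rect_right
```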
s32, obtaining upper eyelid contour curves of the left and right eyes, wherein the obtaining process is described below by taking the obtaining of the upper eyelid contour of the left eye as an example, and the obtaining process of the upper eyelid contour of the right eye is similar to the above, and the specific steps are as follows:
s321, blurring the image, wherein the edge characteristics of the upper eyelid are relatively obvious, and blurring the image can remove the influence of partial fine edges such as skin pores, eyelashes and the like under the condition of keeping the edge characteristics of the upper eyelid, and the method adopts a template as shown in the formula [4] to carry out mean filtering;
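The template of formula [4] is not reproduced in the translation; the sketch below assumes a plain 3 x 3 averaging kernel, the most common mean-filter template:

```python
import numpy as np

def mean_filter_3x3(img: np.ndarray) -> np.ndarray:
    """Mean filtering of a grayscale image (step S321).  A 3 x 3
    averaging kernel is assumed, with edge replication at the border."""
    h, w = img.shape
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + h, dx:dx + w]
    return (out / 9.0).astype(np.uint8)
```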
s322, enhancing the image, correcting the image based on the gamma filtering theory according to the formula [5], enhancing the contrast of the eye image, and achieving the effect as shown in figure 3;
wherein, g (x, y) is the gray value of the pixel at the enhanced image (x, y), f (x, y) is the gray value of the pixel at the original image (x, y), γ is the gamma filter coefficient, when γ is less than 1, the contrast of the low gray value area can be significantly enhanced, when γ is greater than 1, the contrast of the high gray value area can be significantly enhanced;
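Formula [5] is likewise not reproduced; the sketch below assumes the standard gamma mapping g = 255 * (f / 255)^γ, which matches the behavior described (γ < 1 stretches contrast in dark regions, γ > 1 in bright regions):

```python
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float) -> np.ndarray:
    """Gamma correction of a grayscale image (step S322, assumed form).
    Pixel values are normalized to [0, 1], raised to the power gamma,
    and rescaled to [0, 255]."""
    norm = img.astype(np.float64) / 255.0
    return (255.0 * norm ** gamma).astype(np.uint8)
```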
S323, based on the improved Laplace operator template, formula [6], obtaining the edge features of the eye, and performing binarization based on the maximum between-class variance (Otsu) algorithm; the effect is as shown in figure 4;
S324, performing a morphological closing operation to connect adjacent edges into connected regions;
S325, selecting the eye connected region, specifically by removing connected regions with smaller areas and selecting the connected region with the largest area as the eye connected region; the effect is as shown in FIG. 5;
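Steps S324 and S325 amount to connected-component labeling followed by area-based selection; a minimal sketch (4-connectivity and the minimum-area value are assumptions, and the morphological closing itself is omitted):

```python
import numpy as np
from collections import deque

def connected_regions(binary: np.ndarray):
    """4-connected component labeling of a binary edge image.
    Returns a list of regions, each a list of (y, x) pixels."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                label = len(regions) + 1
                labels[sy, sx] = label
                q, pixels = deque([(sy, sx)]), []
                while q:
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = label
                            q.append((ny, nx))
                regions.append(pixels)
    return regions

def select_eye_region(regions, min_area=3):
    """Drop regions smaller than min_area (an assumed value) and keep
    the largest remaining region as the eye region (step S325)."""
    kept = [r for r in regions if len(r) >= min_area]
    return max(kept, key=len) if kept else []
```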
s326, acquiring the uppermost edge curve of the eye communication area as the upper eyelid contour curve of the eye, wherein the effect is as shown in FIG. 6;
s33, obtaining a circle radius R and a circle center corresponding to the upper eyelid contour curve based on a least square fitting theoretical formula [7], a formula [8] and a formula [9], wherein the effect is as shown in figure 7;
where n is the number of points participating in the fitting, and x_i, y_i are the coordinates of the points participating in the fitting;
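Formulas [7]-[9] are not reproduced in the translation; the sketch below uses the standard algebraic (Kasa) least-squares circle fit, the usual closed-form method for fitting a circle center and radius R to contour points:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (Kasa method).  Solves
    x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c) in the least-squares
    sense, then recovers the center (cx, cy) and radius R of the
    upper-eyelid contour circle (step S33)."""
    xs = np.asarray(xs, dtype=np.float64)
    ys = np.asarray(ys, dtype=np.float64)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs ** 2 + ys ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    R = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, R
```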
s34, obtaining the final upper eyelid contour radius, specifically calculating the mean value of the upper eyelid contour radii of the left eye and the right eye, putting the mean value into a radius list, and putting the longitudinal coordinate of the center of the upper eyelid contour of the left eye into a circle center list;
s4, executing a judgment module;
The judgment module functions to judge whether the driver is in an overlooking (looking-down) state for a long time; if so, the driver is judged to be in a blind driving state. The specific steps are as follows:
S41, judging whether the driver's face in the current frame is in a disappearing state, specifically according to the number of timestamps in the timestamp list: if the number of timestamps is greater than 0, go to step S42; otherwise, go to step S43;
s42, judging whether the driver is in a blind driving state or not based on the time characteristic of face disappearance according to a formula [10], and quitting the current module;
wherein exist = 1 indicates that the driver is in the blind driving state, timestamp_begin is the starting timestamp in the timestamp list, timestamp_end is the ending timestamp in the timestamp list, and T_n is a timestamp interval threshold representing how long the driver's face must seriously deviate from straight ahead for the driver to be judged in a blind driving state; in this embodiment, T_n = 2 seconds;
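Formula [10] is not reproduced in the translation; the natural reading, assumed in this sketch, is that exist = 1 when the face has been missing for longer than the threshold T_n:

```python
def blind_by_face_disappearance(timestamps, t_n=2.0):
    """Assumed form of formula [10]: the driver is flagged as blind
    driving (exist = 1) when the span between the first and last
    timestamps in the face-disappearance list exceeds T_n seconds.
    `timestamps` holds the frame times at which no face was detected."""
    if not timestamps:
        return 0
    return 1 if (timestamps[-1] - timestamps[0]) > t_n else 0
```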
s43, judging whether the driver is in a blind driving state or not based on the upper eyelid contour characteristics of the eyes, wherein the specific method comprises the steps of counting the number of frames of the driver in a overlooking state according to a formula [11], a formula [12] and a formula [13] in unit time T, and judging whether the driver is in the blind driving state according to a formula [14], wherein the value range of T is 1.5 seconds to 3 seconds in the embodiment;
Tr=0.4*R1+0.6*R2 [12]
wherein exist = 1 indicates that the driver is in the blind driving state, N is the total number of frames in unit time, N_R is the number of frames in the overlooking state in unit time, R[i] is the radius of the ith frame in the radius list, R1 is the upper eyelid radius when looking forward normally, R2 is the upper eyelid radius when looking down, c[i] is the upper eyelid contour circle center position of the ith frame in the circle center list, Htop is the upper edge position of the upper eyelid contour of the left eye, and Hbottom is the lower edge position of the upper eyelid contour of the left eye;
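Of formulas [11]-[14], only [12] (Tr = 0.4*R1 + 0.6*R2) survives in the translation; the counting and decision rules in the sketch below are therefore assumptions, flagging exist = 1 when more than an assumed fraction of frames in the unit time have a fitted radius above Tr:

```python
def blind_by_eyelid_radius(radii, R1, R2, ratio=0.5):
    """Assumed reading of formulas [11]-[14].  Tr = 0.4*R1 + 0.6*R2 is
    formula [12] from the text; counting a frame as 'overlooking' when
    its fitted upper-eyelid radius exceeds Tr (the description says the
    radius grows as the gaze drops) and the decision fraction `ratio`
    are both illustrative assumptions."""
    tr = 0.4 * R1 + 0.6 * R2
    n = len(radii)                               # N: frames in unit time
    n_r = sum(1 for r in radii if r > tr)        # N_R: overlook frames
    return 1 if n > 0 and n_r / n > ratio else 0
```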
and S44, updating module parameters, and adjusting relevant state parameter values in the module according to the eyelid contour characteristics on the eyes of the current frame and the blind driving state judgment condition.
S5, executing a voice communication module;
The voice communication module functions as follows: when the driver is in a blind driving state, the module promptly issues an alarm reminding the driver of the blind driving state, or sends real-time video of the blind driving state to a remote server, through which the supervision department of the transport enterprise can handle the situation in time; if communication with the transport enterprise is needed, the module can also receive remote commands.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention is subject to the protection scope of the claims.

Claims (5)

1. A blind driving detection method based on overlooking characteristics is characterized by comprising the following steps:
11 Before the detection starts, loading a face detection classifier file;
12 Detection starts, the head image of the driver is collected in real time and converted into a gray image;
13 Based on the face detection classifier file loaded in the step 11) and the gray level image obtained in the step 12), judging whether a face exists in the current frame, if so, accurately finding the contours of the upper eyelids of the left eye and the right eye of the driver in the gray level image, and acquiring the contour radius of the upper eyelids through a curve fitting algorithm; if not, directly entering step 14);
14) If the face does not exist, judging whether the driver is in a blind driving state based on the time characteristic of face disappearance; if the face exists, judging whether the driver is in a blind driving state based on the upper eyelid contour characteristics of the left and right eyes;
15 When the driver is in the blind driving state, giving an alarm to the driver or sending a real-time video of the driver in the blind driving state to a remote monitoring server;
the step 13) comprises the following steps:
31 Obtaining an effective area of face detection, if the correct face position is not obtained in the previous frame, the effective area of face detection is a full image area; if the correct face position exists in the previous frame, the effective area is the effective area of face detection formed by respectively expanding the rectangular width of half face to the left and the right and respectively expanding the rectangular height of half face to the upper and the lower on the basis of the original face rectangular area;
32 Based on the adaboost classifier, face detection is carried out;
33 Judging whether the human face exists, if so, emptying the time stamp list, entering step 34), and if not, putting the current frame time stamp into the time stamp list, and entering step 14);
34) Based on the "three courts, five eyes" (三庭五眼) layout rule of the human face, respectively obtaining the position sub-areas rect_left and rect_right of the left and right eyes according to formula [2] and formula [3];
wherein rect_face is the face position area in the image;
35 Obtaining upper eyelid contour curves for the left and right eyes;
36 Obtaining a circle radius R and a circle center corresponding to the upper eyelid contour curve based on a least square fitting theoretical formula [7], a formula [8] and a formula [9 ];
where n is the number of points participating in the fitting, and x_i, y_i are the coordinates of the points participating in the fitting;
37 The final upper eyelid contour radius is obtained by calculating the mean value of the upper eyelid contour radii of the left and right eyes, putting the mean value into a radius list, and putting the ordinate of the center of the upper eyelid contour of the left eye into a center list.
2. The blind driving detection method based on overlooking characteristics of claim 1, wherein in step 12) the image is converted into a grayscale image according to formula [1];
f(x,y)=0.299r(x,y)+0.587g(x,y)+0.114b(x,y) [1]
where f (x, y) is the gray value at pixel (x, y) in the transformed image, and r (x, y), g (x, y), b (x, y) are the values of the three channels red, green, and blue at pixel (x, y) in the original image.
3. The blind driving detection method based on the overlooking characteristic of claim 1, wherein the step 35) specifically comprises the following steps:
41 Obtaining the upper eyelid contour curve of the left eye, firstly carrying out image blurring treatment, and carrying out mean value filtering by adopting a template shown in the formula [4 ];
42 Image enhancement, correcting the image based on gamma filtering theory according to formula [5], enhancing the contrast of the eye image;
wherein, g (x, y) is the gray value of the pixel at the enhanced image (x, y), f (x, y) is the gray value of the pixel at the original image (x, y), γ is the gamma filter coefficient, when γ is less than 1, the contrast of the low gray value area can be significantly enhanced, when γ is more than 1, the contrast of the high gray value area can be significantly enhanced;
43) Based on an improved Laplace operator template, formula [6], acquiring edge characteristics of the eye, and performing binarization based on the maximum between-class variance (Otsu) algorithm;
44 Connecting adjacent edges to form a connected region by a morphological closing operation;
45) Selecting the eye connected region by removing connected regions with smaller areas and then selecting the connected region with the largest area as the eye connected region;
46 Obtain an uppermost edge curve of the connected region of the eye as an upper eyelid contour curve of the eye;
47) Obtaining the upper eyelid contour curve of the right eye with reference to steps 41)-46).
4. The blind driving detection method based on the overhead view characteristic as claimed in claim 3, wherein the step 45) comprises the following steps:
51) Removing the connected regions with smaller areas;
52 Select the largest area of connected regions as candidate connected regions for the eye;
53 Judging whether other connected regions exist at the similar height of the candidate connected region, and if so, merging the other connected regions into the candidate connected region to form the final eye connected region.
5. The blind driving detection method based on the overhead view characteristic as claimed in claim 1, wherein the step 14) comprises the steps of:
61 Judging whether the face of the driver in the current frame is in a disappearing state, specifically, according to the number of timestamps of face disappearing in the timestamp list, if the number of the timestamps is more than 0, entering step 62); if not, entering step 63);
62 According to the formula [10], judging whether the driver is in a blind driving state based on the time characteristic of face disappearance,
wherein exist = 1 indicates that the driver is in the blind driving state, timestamp_begin is the timestamp of the beginning of face disappearance in the timestamp list, timestamp_end is the timestamp of the end of face disappearance in the timestamp list, and T_n is a timestamp interval threshold indicating how long the driver's face must seriously deviate from straight ahead for the driver to be judged in a blind driving state;
63 Based on the upper eyelid contour characteristics of the eyes, judging whether the driver is in a blind driving state, wherein the specific method comprises the steps of counting the number of frames of the driver in an overlooking state according to a formula [11], a formula [12] and a formula [13] in unit time T, and judging whether the driver is in the blind driving state according to a formula [14 ];
Tr=0.4*R1+0.6*R2 [12]
where exist = 1 indicates that the driver is in the blind driving state, N is the total number of frames in unit time, N_R is the number of frames in the overlooking state in unit time, R[i] is the radius of the ith frame in the radius list, R1 is the upper eyelid radius when looking forward normally, R2 is the upper eyelid radius when looking down, c[i] is the upper eyelid contour circle center position of the ith frame in the circle center list, Htop is the upper edge position of the upper eyelid contour of the left eye, and Hbottom is the lower edge position of the upper eyelid contour of the left eye.
CN201510013113.XA 2015-01-09 2015-01-09 A blind driving detection method based on overhead-view features Active CN104573725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510013113.XA CN104573725B (en) 2015-01-09 2015-01-09 A blind driving detection method based on overhead-view features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510013113.XA CN104573725B (en) 2015-01-09 2015-01-09 A blind driving detection method based on overhead-view features

Publications (2)

Publication Number Publication Date
CN104573725A CN104573725A (en) 2015-04-29
CN104573725B true CN104573725B (en) 2018-02-23

Family

ID=53089745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510013113.XA Active CN104573725B (en) 2015-01-09 2015-01-09 A blind driving detection method based on overhead-view features

Country Status (1)

Country Link
CN (1) CN104573725B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101030316A (en) * 2007-04-17 2007-09-05 北京中星微电子有限公司 Safety driving monitoring system and method for vehicle
CN101090482A (en) * 2006-06-13 2007-12-19 唐琎 Driver fatigue monitoring system and method based on image process and information mixing technology
CN101639894A (en) * 2009-08-31 2010-02-03 华南理工大学 Method for detecting train driver behavior and fatigue state on line and detection system thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7751599B2 (en) * 2006-08-09 2010-07-06 Arcsoft, Inc. Method for driving virtual facial expressions by automatically detecting facial expressions of a face image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Fatigue driving detection method based on eye localization technology" (基于人眼定位技术的疲劳驾驶检测方法); Li Liling (李立凌); China Master's Theses Full-text Database, Information Science and Technology series; 2013-07-31; description, page 5, paragraphs 3-7 *

Also Published As

Publication number Publication date
CN104573725A (en) 2015-04-29

Similar Documents

Publication Publication Date Title
CN104751600B (en) Anti-fatigue-driving safety means and its application method based on iris recognition
CN106611512B (en) Method, device and system for processing starting of front vehicle
CN104021378B (en) Traffic lights real-time identification method based on space time correlation Yu priori
TWI423166B (en) Method for determining if an input image is a foggy image, method for determining a foggy level of an input image and cleaning method for foggy images
CN107292251B (en) Driver fatigue detection method and system based on human eye state
CN110532976A (en) Method for detecting fatigue driving and system based on machine learning and multiple features fusion
CN102306293A (en) Method for judging driver exam in actual road based on facial image identification technology
CN109635656A (en) Vehicle attribute recognition methods, device, equipment and medium neural network based
CN109584507A (en) Driver behavior modeling method, apparatus, system, the vehicles and storage medium
CN105956548A (en) Driver fatigue state detection method and device
Guo et al. Image-based seat belt detection
CN111553214B (en) Method and system for detecting smoking behavior of driver
WO2019126908A1 (en) Image data processing method, device and equipment
CN103996198A (en) Method for detecting region of interest in complicated natural environment
CN103927548B (en) Novel vehicle collision avoiding brake behavior detection method
Katyal et al. Safe driving by detecting lane discipline and driver drowsiness
CN104182769B (en) A kind of detection method of license plate and system
CN101615241B (en) Method for screening certificate photos
CN111626272A (en) Driver fatigue monitoring system based on deep learning
CN108009495A (en) Fatigue driving method for early warning
CN103021179A (en) Real-time monitoring video based safety belt detection method
CN109948433A (en) A kind of embedded human face tracing method and device
CN105976570A (en) Driver smoking behavior real-time monitoring method based on vehicle video monitoring
CN111860210A (en) Method and device for detecting separation of hands from steering wheel, electronic equipment and storage medium
CN112241647B (en) Dangerous driving behavior early warning device and method based on depth camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant