CN104123549B - Eye positioning method for real-time monitoring of fatigue driving - Google Patents

Info

Publication number: CN104123549B (application CN201410369776.0A)
Authority: CN (China)
Prior art keywords: image, eyes, frame, eye, head
Priority/filing date: 2014-07-30
Publication date (application CN104123549A): 2014-10-29
Grant date (CN104123549B): 2017-05-03
Legal status: Expired - Fee Related
Other languages: Chinese (zh)
Inventors: 赵安, 梁万元, 种银保
Current Assignee: Second Affiliated Hospital of TMMU
Original Assignee: Second Affiliated Hospital of TMMU
Application filed by Second Affiliated Hospital of TMMU

Abstract

The invention relates to an eye positioning method for the real-time monitoring of fatigue driving, implemented with Matlab 2012 software. The method comprises the following steps: step 1, initially positioning the face and eyes to obtain a precise eye image; step 2, obtaining the absolute value of the difference between adjacent frames using a consecutive-frame difference method based on a skin color model in the YCbCr color space; step 3, judging from the binary image of the adjacent-frame difference whether the head images of the two frames overlap; step 4, head displacement detection: detecting the lateral head displacement dx and the longitudinal head displacement dy respectively; step 5, predicting the eye candidate region; step 6, correcting the eye candidate region; and step 7, repeating steps 2 to 6 to position the eyes in the next frame. The method reduces the amount of computation for face positioning, raises the eye positioning speed and the image processing frame rate while preserving eye positioning accuracy, and thereby allows driving fatigue to be monitored timely and reliably according to the state of the eyes.

Description

An eye positioning method for the real-time monitoring of fatigue driving
Technical field
The present invention relates to fatigue driving monitoring methods, and in particular to an eye positioning method for the real-time monitoring of fatigue driving.
Background art
With socio-economic development, the automobile has become an indispensable means of transport in people's daily life. While the growing number of automobiles brings convenience to travel and transport, the frequent traffic accidents also inflict huge losses on people's lives and property. Statistics from home and abroad show that in various countries traffic accidents caused by fatigue driving account for 10%~20% of all traffic accidents; fatigue driving is thus one of the principal causes of traffic accidents, and research on fatigue driving monitoring technology has accordingly received more and more attention in recent years.
At present, fatigue driving monitoring technology mainly comprises monitoring based on physiological signals, on driver behavior, and on vehicle state. However, because physiological signals are non-stationary, the sensors are complex and contact-based, and the models are imperfect, good monitoring results have not been obtained. For example, CN 102406507 A discloses an "automobile driver fatigue monitoring method based on physiological signals", comprising a fatigue calibration method and a detection method. The calibration method acquires, through sensors, the pulse peak value and frequency, heart rate and respiratory frequency over n unit intervals to form a fatigue feature calibration matrix, establishes the weight vector of each fatigue feature by principal component analysis, and applies the weights to the calibration matrix to build the fatigue calibration vector. The detection method applies the calibration weights to the fatigue feature vector within a unit interval, computes the Mahalanobis distance between the feature vector and the calibration vector, discriminates the driver's fatigue degree from the dispersion of this distance, and gives an early warning. This monitoring method is based on the theory of traditional Chinese medicine and searches for the driver's fatigue features with modern signal processing methods.
CN103279752A discloses "an eye positioning method based on an improved Adaboost method and facial geometric features", whose concrete steps are: step one, train a face classifier and an eye classifier respectively; step two, determine the face position with the trained face classifier; step three, determine the positions of candidate eye regions with the trained eye classifier on the upper 2/3 of the detected face region; step four, determine the geometric feature coefficient of each candidate eye pair using the inherent geometric properties of the face in a statistical sense; step five, determine the decision metric d of each candidate eye pair; step six, compare the decision metrics of the candidate eye pairs, a smaller decision metric indicating higher confidence in that eye pair; the optimal eye pair, and hence the optimal eye positions, can thus be determined. This positioning method further screens the found eye regions using facial geometric features and can determine the optimal eye positions accurately and effectively. It cannot, however, monitor the degree of eye fatigue.
A more mature technique monitors eye fatigue by video through PERCLOS (Percentage of Eyelid Closure over the Pupil over Time, the percentage of a given time for which the eyes are closed). Current PERCLOS implementations repeat identical positioning steps for every frame. Although frame-by-frame positioning can locate the eyes accurately, it does not exploit the correlation between adjacent frames to simplify the face and eye positioning steps; the amount of computation is large, the eye positioning speed is hard to raise, and real-time performance is poor. Previous studies show that an eye closure generally lasts 0.2~0.3 s, while the PERCLOS index usually takes a one-minute time window for sampling and analysis. If the real-time performance of the eye positioning method cannot meet the requirement of the sampling theorem, the open/closed state of the eyes cannot be measured accurately, fatigue events that occur in an instant are prone to delayed or missed judgment, and accidents are hard to avoid. System latency therefore greatly affects the usability of existing PERCLOS fatigue driving monitoring methods. To obtain good fatigue monitoring and early warning results, eye positioning methods need to be explored further so as to satisfy the real-time requirement of the PERCLOS fatigue decision model.
Summary of the invention
Aiming at the large amount of computation and low speed of existing frame-by-frame eye positioning methods, the present invention proposes an eye positioning method for the real-time monitoring of fatigue driving. The method can reduce the amount of computation for face positioning while preserving eye positioning accuracy, thereby raising the image processing speed, achieving real-time monitoring of the eye open/closed state, and ensuring the reliability of the driving fatigue monitoring system.
Based on the difference between two adjacent frames in the YCbCr color space, the method detects the movement of the eye position within the sampling interval and, combined with the precise eye region located in the previous frame, determines the candidate eye region of the current frame; extraction from the candidate region then yields the precise eye region.
The eye positioning method for the real-time monitoring of fatigue driving of the present invention is implemented with Matlab 2012 software and comprises the following steps:
Step one, initially position the face and eyes to obtain a precise eye image: first shoot a clear face color image with the camera, then segment the face color image to obtain the face width Fw and the face height Fh; next, perform eye positioning on the first frame with an existing eye positioning method to obtain the rectangular region where the eyes are located in the first frame and their exact position, i.e. the precise eye image, and record the eye position parameters {(x, y), w, h}; where (x, y) is the coordinate of the upper-left corner of the rectangle, w is the width of the rectangle, and h is the height of the rectangle;
Step two, find the absolute value of the adjacent-frame difference using the neighboring-frame difference method based on a YCbCr color space skin color model: first transform the two adjacent color frames into the YCbCr color space, obtaining the images of the two frames in the YCbCr color space, the image of the former frame being denoted img1 and that of the latter frame img2; then binarize img1 and img2 with the "skin color model" (Yuan Ying. Research on vision-based driver fatigue detection algorithms. Master's thesis, Shenyang University of Technology, 2010, p. 11), obtaining two binary frames, the former denoted BW1 and the latter BW2; finally subtract BW1 and BW2 and take the absolute value, obtaining the binary image BW of the adjacent-frame difference.
Step three, judge from the binary image of the adjacent-frame difference whether the head images of the two frames overlap: if the head images of the two adjacent frames overlap, perform the next step; if they do not overlap, return to step one;
The overlap of the head images of two adjacent frames is judged as follows: let the area of the region where the pixel value of the former binary frame BW1 is 1 be A1, the area of the region where the pixel value of the latter binary frame BW2 is 1 be A2, and the area of the region where the pixel value of the binary difference image BW is 1 be A3; if 0 ≤ A3 < A1 + A2, the two adjacent frames overlap, otherwise they do not;
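As a minimal MATLAB sketch of this overlap test (variable names follow the text; BW1, BW2 and BW are the logical images defined in step two):

    A1 = sum(BW1(:));                 % area (white-pixel count) of the former frame mask
    A2 = sum(BW2(:));                 % area of the latter frame mask
    A3 = sum(BW(:));                  % area of the frame-difference image
    headsOverlap = (A3 >= 0) && (A3 < A1 + A2);   % 0 <= A3 < A1 + A2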
Step four, head displacement detection: detect the lateral head displacement dx and the longitudinal head displacement dy respectively;
Step five, prediction of the eye candidate region:
According to the head displacement, predict the lateral eye displacement Dx and the longitudinal eye displacement Dy with the "eye displacement prediction model" (given in step five of the embodiment below). Let the rectangular region where the eyes are actually located in the previous frame be {(x, y), w, h}, where (x, y) is the coordinate of the upper-left corner of the rectangle, w is its width and h its height. From the displacements Dx, Dy and the rectangular region of the previous frame's eyes, the candidate eye region of the current frame is determined as {(x-Dx, y-Dy), w+2Dx, h+2Dy}.
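A minimal sketch of this prediction in MATLAB (frameRGB, the current color frame, is an illustrative name; x, y, w, h, Dx and Dy are the scalars defined above; imcrop is the Image Processing Toolbox crop function):

    rect = [x - Dx, y - Dy, w + 2*Dx, h + 2*Dy];   % [xmin ymin width height]
    eyeCandidate = imcrop(frameRGB, rect);          % candidate eye region of the current frame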
Step six, correction of the eye candidate region (a MATLAB sketch of these sub-steps follows the list):
1) convert the eye candidate region image to a grayscale image using the rgb2gray function in Matlab 2012 (rgb2gray is a function of the MATLAB Image Processing Toolbox whose purpose is to convert an RGB color image into a grayscale image I; it is called as I = rgb2gray(RGB));
2) find the threshold T required for gray-level threshold segmentation of the image using the "maximum between-class variance method" (Nobuyuki Otsu. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-9, no. 1, January 1979, pp. 62-66);
3) threshold-segment the image with the threshold T to obtain a binary image;
4) label the connected regions of the binary image obtained from the threshold segmentation using the bwlabel function in Matlab 2012 (a function of the MATLAB Image Processing Toolbox that labels the connected regions in a binary image; called as L = bwlabel(BW, n), it returns a matrix L of the same size as BW containing labels of the connected objects in BW; n is usually chosen as 4 or 8, meaning 4-connectivity or 8-connectivity, and defaults to 8);
The "connected component labeling method" comes from Digital Image Processing, by Gonzalez et al. (USA), Publishing House of Electronics Industry, 1st edition, May 2004, 609 pages, ISBN 9787505398764.
Connected component labeling of a binary image extracts, from a dot-matrix image composed only of "0" pixels (usually background points) and "1" pixels (usually foreground pattern points), the sets of "1" pixels that adjoin each other (in the 4-neighborhood or 8-neighborhood sense).
5) find the two regions with the largest areas in the labeling result; these are the eye regions;
6) crop from the original image of the eye candidate region the image corresponding to the regions of 4); the resulting image is the precise eye region image;
7) record the eye position parameters {(x, y), w, h}, replacing the {(x, y), w, h} of step one.
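The sketch below strings these sub-steps together in MATLAB (a sketch under the assumption that eyeCandidate is the RGB candidate region from step five; graythresh is MATLAB's implementation of Otsu's maximum between-class variance method):

    gray = rgb2gray(eyeCandidate);            % 1) candidate region to grayscale
    T    = graythresh(gray);                  % 2) Otsu threshold (between-class variance)
    bw   = im2bw(gray, T);                    % 3) threshold segmentation -> binary image
    L    = bwlabel(bw, 8);                    % 4) label 8-connected regions
    stats = regionprops(L, 'Area', 'BoundingBox');
    [~, idx] = sort([stats.Area], 'descend'); % 5) two largest regions are the eyes
    eyes = idx(1:min(2, numel(idx)));
    box1 = stats(eyes(1)).BoundingBox;        % 6) crop the corresponding image region
    eyeImage = imcrop(eyeCandidate, box1);    %    -> precise eye region image

The eye position parameters {(x, y), w, h} of sub-step 7) would then be read off from these bounding boxes.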
Step seven, repeat steps two to six to position the eyes in the next frame.
Further, the existing eye positioning method comprises four steps: skin color detection, face segmentation, gray-level projection, and morphological processing.
Further, the skin color model is:

98 ≤ Cb ≤ 127, 133 ≤ Cr ≤ 170

where Cb and Cr are the two chrominance components of the YCbCr color space. Skin color detection is performed on img1 and img2 with this model: the pixel value of points satisfying the model is set to 1 and that of points not satisfying it to 0, yielding the binary images BW1 and BW2; finally BW1 and BW2 are subtracted and the absolute value is taken, giving the binary image BW, i.e. the neighboring-frame difference image in YCbCr space.
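A minimal MATLAB sketch of this binarization and frame difference (img1RGB and img2RGB, the two adjacent uint8 RGB frames, are illustrative names; rgb2ycbcr is the Image Processing Toolbox conversion to YCbCr):

    ycc1 = rgb2ycbcr(img1RGB);   ycc2 = rgb2ycbcr(img2RGB);  % to YCbCr color space
    skin = @(ycc) ycc(:,:,2) >= 98  & ycc(:,:,2) <= 127 & ... % Cb bounds of the model
                  ycc(:,:,3) >= 133 & ycc(:,:,3) <= 170;      % Cr bounds of the model
    BW1 = skin(ycc1);            % binary skin mask of the former frame
    BW2 = skin(ycc2);            % binary skin mask of the latter frame
    BW  = xor(BW1, BW2);         % |BW1 - BW2| for logical images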
Further, head displacement detection is carried out as follows (a MATLAB sketch follows the list):
(1) scan the image from top to bottom to find the first horizontal line on which a white pixel appears, and take the image region of height Fh below this line, denoted p1;
(2) take the upper 2/3 region of p1, denoted p2;
(3) according to the left and right boundaries of the white region in p2, take the region between those boundaries, denoted p3; this is the actual range of lateral head motion; let the width of p3 be W and its height H;
(4) with the horizontal median axis Y = H/2 of p3 as the boundary, take the portions occupying 30% of the image height above and below it, denoted p4; compute the maximum width of consecutive white pixels in each row of p4, denoted dxi, where i ∈ [1, 0.6H]; take the mean of the dxi as the lateral head displacement, denoted dx;
(5) with the vertical median axis X = W/2 of p3 as the boundary, take the portions occupying 30% of the image width to its left and right, denoted p5; compute the maximum width of consecutive white pixels in each column of p5, denoted dyj, where j ∈ [1, 0.6W]; take the mean of the dyj as the longitudinal head displacement, denoted dy.
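As a sketch, the lateral measurement (1)-(4) might look as follows in MATLAB (BW is the adjacent-frame difference image and Fh the face height from step one; the longitudinal displacement dy is obtained symmetrically on the columns of p5):

    rows = find(any(BW, 2));                        % rows that contain white pixels
    p1 = BW(rows(1) : min(rows(1)+Fh-1, end), :);   % (1) height-Fh region below first white row
    p2 = p1(1 : round(2*size(p1,1)/3), :);          % (2) upper 2/3 of p1
    cols = find(any(p2, 1));                        % (3) left/right bounds of the white region
    p3 = p2(:, cols(1) : cols(end));
    [H, W] = size(p3);
    p4 = p3(round(0.2*H)+1 : round(0.8*H), :);      % (4) 30% above and below the axis Y = H/2
    dxi = zeros(size(p4,1), 1);
    for i = 1 : size(p4,1)
        d = diff([0, p4(i,:), 0]);                  % longest run of consecutive white pixels
        dxi(i) = max([find(d == -1) - find(d == 1), 0]);
    end
    dx = sum(dxi) / (0.6*H);                        % mean run width = lateral head displacement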
Beneficial effects of the present invention
The eye positioning method of the present invention is based on modeling the neighboring-frame difference in the YCbCr color space, and uses the features of this difference to detect changes of the eye position within the sampling interval. If the eye displacement is within the detectable range, the candidate eye region of the latter frame can be selected from the precise eye position detected in the former frame, simplifying the positioning steps of conventional eye positioning methods; if the eye displacement is outside the detectable range, the conventional eye positioning method is used instead. Under normal driving, the amplitude of head motion is generally not large, so the probability of the eye displacement falling outside the detectable range is very small, and the overall real-time performance of this positioning method is greatly improved.
Description of the drawings
Fig. 1 is the flow chart of the eye positioning method of the present invention;
Fig. 2 is the flow chart of the existing eye positioning method;
Fig. 3 is the schematic diagram of the eye position parameters;
Fig. 4 is the schematic diagram of the neighboring-frame difference method;
Fig. 5 is the schematic diagram of the correction process of the eye candidate region;
Fig. 6 is the scatter plot and fitted straight line of the manually detected lateral eye displacement Dx of 200 frames against the automatically detected lateral head displacement dx;
Fig. 7 is the scatter plot and fitted straight line of the manually detected longitudinal eye displacement Dy of 200 frames against the automatically detected longitudinal head displacement dy.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the described eye positioning method for the real-time monitoring of fatigue driving is implemented with Matlab 2012 software and comprises the following steps:
Step one, initially position the face and eyes to obtain a precise eye image: first shoot a clear face color image with the camera, then segment the face color image to obtain the face width Fw and the face height Fh; next, perform eye positioning on the first frame with the "existing eye positioning method" to obtain the rectangular region where the eyes are located in the first frame and their exact position, i.e. the precise eye image; and record the eye position parameters {(x, y), w, h} (referring to Fig. 3);
The "existing eye positioning method" (referring to Fig. 2) comprises four steps: skin color detection, face segmentation, gray-level projection, and morphological processing.
Step two, find the absolute value of the adjacent-frame difference using the neighboring-frame difference method based on a YCbCr color space skin color model: first transform the two adjacent color frames into the YCbCr color space, obtaining the images of the two frames in the YCbCr color space, the former denoted img1 and the latter img2; then binarize img1 and img2 with the "skin color model", obtaining two binary frames, the former denoted BW1 and the latter BW2; finally subtract BW1 and BW2 and take the absolute value, obtaining the binary image BW of the adjacent-frame difference;
The skin color model is:

98 ≤ Cb ≤ 127, 133 ≤ Cr ≤ 170

where Cb and Cr are the two chrominance components of the YCbCr color space. Skin color detection is performed on img1 and img2 with this model: the pixel value of points satisfying the model is set to 1 and that of points not satisfying it to 0, yielding the binary image BW1 (see Fig. 4(1)) and the binary image BW2 (see Fig. 4(2)); finally BW1 and BW2 are subtracted and the absolute value is taken, obtaining the binary image BW (see Fig. 4(3)).
Step three, judge from the binary image of the adjacent-frame difference whether the head images of the two frames overlap: if the head images of the two adjacent frames overlap, perform the next step; if they do not overlap, return to step one;
The overlap of the head images of two adjacent frames is judged as follows: let the area of the region where the pixel value of the former binary frame BW1 is 1 be A1, the area of the region where the pixel value of the latter binary frame BW2 is 1 be A2, and the area of the region where the pixel value of the binary difference image BW is 1 be A3; if 0 ≤ A3 < A1 + A2, the two adjacent frames overlap, otherwise they do not;
Step four, head displacement detection: detect the lateral head displacement dx and the longitudinal head displacement dy respectively. The head displacement detection is carried out as follows:
(1) scan the image from top to bottom to find the first horizontal line on which a white pixel appears, and take the image region of height Fh (the face height) below this line, denoted p1;
(2) take the upper 2/3 region of p1, denoted p2;
(3) according to the left and right boundaries of the white region in p2, take the region between those boundaries, denoted p3; this is the actual range of lateral head motion; let the width of p3 be W and its height H;
(4) with the horizontal median axis Y = H/2 of p3 as the boundary, take the portions occupying 30% of the image height above and below it, denoted p4; compute the maximum width of consecutive white pixels in each row of p4, denoted dxi, where i ∈ [1, 0.6H]; take the mean of the dxi as the lateral head displacement, denoted dx;
(5) with the vertical median axis X = W/2 of p3 as the boundary, take the portions occupying 30% of the image width to its left and right, denoted p5; compute the maximum width of consecutive white pixels in each column of p5, denoted dyj, where j ∈ [1, 0.6W]; take the mean of the dyj as the longitudinal head displacement, denoted dy.
Step five, prediction of the eye candidate region: according to the head displacement, predict the lateral eye displacement Dx and the longitudinal eye displacement Dy with the "eye displacement prediction model". The eye displacement prediction model is:

Dx = 1.2dx - 1.4
Dy = 0.9dy + 0.4

Let the rectangular region where the eyes are actually located in the previous frame be {(x, y), w, h}, where (x, y) is the coordinate of the upper-left corner of the rectangle, w is its width and h its height. From the displacements Dx, Dy and the rectangular region of the previous frame's eyes, the candidate eye region of the current frame is determined as:

{(x-Dx, y-Dy), w+2Dx, h+2Dy};
For each specific monitored subject, the position of the eyes on the head is fixed, so it is feasible to detect changes of the eye position from the head displacement. To establish the model that detects eye position changes from the head displacement, the present invention used a camera to continuously acquire 200 color frames of the head of the same person against the same background; during acquisition the head moved randomly left and right, forward and backward within the camera's field of view, and the eyes opened and closed naturally.
First, the 200 acquired frames were processed in groups of two adjacent frames with the YCbCr-space neighboring-frame difference method, giving 199 groups of head position change data (dxi, dyi), where i = 2 ... 200.
Then, for each of the 200 frames, the minimal rectangular region enclosing the eyes was marked by manual region selection, and the center point of the rectangle was taken as the center of the eyes, denoted (xi, yi). Subtracting these pairwise gives 199 groups of eye position change data (Dxi, Dyi) = (xi - xi-1, yi - yi-1), i = 2 ... 200.
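A hedged sketch of how such a model could be fitted in MATLAB (a least-squares line fit via polyfit is an assumption; the patent reports only the scatter plots, fitted lines and correlation coefficients; dxData, DxData, dyData, DyData are illustrative vectors holding the 199 paired measurements):

    px = polyfit(dxData, DxData, 1);   % [slope intercept] of the lateral fit, ~[1.2 -1.4]
    py = polyfit(dyData, DyData, 1);   % [slope intercept] of the longitudinal fit, ~[0.9 0.4]
    Rx = corrcoef(dxData, DxData);     % Rx(1,2) reported as 0.9725
    Ry = corrcoef(dyData, DyData);     % Ry(1,2) reported as 0.9219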
Referring to Fig. 6 and Fig. 7, the two groups of data exhibit a strong linear relationship, with correlation coefficients:

R(dxi, Dxi) = 0.9725
R(dyi, Dyi) = 0.9219
This shows that the adjacent-frame eye position changes detected by the present algorithm are linearly related to those detected manually. From the above experimental results, the eye position change detection model of the present method was established as:

Dx = 1.2dx - 1.4
Dy = 0.9dy + 0.4

where dx and dy are the head displacement values detected by the neighboring-frame difference method, and Dx and Dy are the predicted eye displacements.
Step six, correction of the eye candidate region:
1) convert the eye candidate region image to a grayscale image using the rgb2gray function in Matlab 2012 (see Fig. 5(1));
2) find the threshold T required for gray-level threshold segmentation of the image using the "maximum between-class variance method";
3) threshold-segment the image with the threshold T to obtain a binary image (see Fig. 5(2));
4) label the connected regions of the binary image obtained from the threshold segmentation using the bwlabel function in Matlab 2012;
5) find the two regions with the largest areas in the labeling result; these are the eye regions (see Fig. 5(3));
6) crop from the original image of the eye candidate region the image corresponding to the regions of 4); the resulting image is the precise eye region image (see Fig. 5(4));
7) record the eye position parameters {(x, y), w, h} (referring to Fig. 3), replacing the {(x, y), w, h} of step one.
Because the eye candidate region obtained above is an expansion of the eye rectangle of the previous frame, it may contain non-eye regions such as the eyebrows, so further processing is needed.
Step seven, repeat steps two to six to position the eyes in the next frame.
The present invention was tested by simulation programmed in Matlab 2012 on 200 continuously acquired face images: the average speed of the existing eye positioning method was 0.214 s/frame, while with the method of the present invention the average speed was 0.103 s/frame, i.e. 1/3 to 1/2 of the normal eye closure time; delayed and missed judgments of the eye fatigue state caused by slow image processing can therefore be effectively avoided.

Claims (4)

1. An eye positioning method for the real-time monitoring of fatigue driving, implemented with Matlab 2012 software, comprising the following steps:
Step one, initially position the face and eyes to obtain a precise eye image: first shoot a clear face color image with the camera, then segment the face color image to obtain the face width Fw and the face height Fh; next, perform eye positioning on the first frame with an "existing eye positioning method" to obtain the rectangular region where the eyes are located in the first frame and their exact position, i.e. the precise eye image, and record the eye position parameters {(x, y), w, h}; where (x, y) is the coordinate of the upper-left corner of the rectangle, w is the width of the rectangle, and h is the height of the rectangle;
Step two, find the absolute value of the adjacent-frame difference using the neighboring-frame difference method based on a YCbCr color space skin color model: first transform the two adjacent color frames into the YCbCr color space, obtaining the images of the two frames in the YCbCr color space, the former denoted img1 and the latter img2; then binarize img1 and img2 with the "skin color model", obtaining two binary frames, the former denoted BW1 and the latter BW2; finally subtract BW1 and BW2 and take the absolute value, obtaining the binary image BW of the adjacent-frame difference;
Step three, judge from the binary image of the adjacent-frame difference whether the head images of the two frames overlap: if the head images of the two adjacent frames overlap, perform the next step; if they do not overlap, return to step one;
the overlap of the head images of two adjacent frames being judged as follows: let the area of the region where the pixel value of the former binary frame BW1 is 1 be A1, the area of the region where the pixel value of the latter binary frame BW2 is 1 be A2, and the area of the region where the pixel value of the binary difference image BW is 1 be A3; if 0 ≤ A3 < A1 + A2, the two adjacent frames overlap, otherwise they do not;
Step four, head displacement detection: detect the lateral head displacement dx and the longitudinal head displacement dy respectively;
Step five, prediction of the eye candidate region: according to the head displacement, predict the lateral eye displacement Dx and the longitudinal eye displacement Dy with the "eye displacement prediction model"; the eye displacement prediction model is:

Dx = 1.2dx - 1.4
Dy = 0.9dy + 0.4

let the rectangular region where the eyes are actually located in the previous frame be {(x, y), w, h}, where (x, y) is the coordinate of the upper-left corner of the rectangle, w is the width of the rectangle and h is the height of the rectangle; from the displacements Dx, Dy and the rectangular region of the previous frame's eyes, the candidate eye region of the current frame is determined as:

{(x-Dx, y-Dy), w+2Dx, h+2Dy};
Step six, correction of the eye candidate region:
1) convert the eye candidate region image to a grayscale image using the rgb2gray function in Matlab 2012;
2) find the threshold T required for gray-level threshold segmentation of the image using the "maximum between-class variance method";
3) threshold-segment the image with the threshold T to obtain a binary image;
4) label the connected regions of the binary image obtained from the threshold segmentation using the bwlabel function in Matlab 2012;
5) find the two regions with the largest areas in the labeling result; these are the eye regions;
6) crop from the original image of the eye candidate region the image corresponding to the regions of 4); the resulting image is the precise eye region image;
7) record the eye position parameters {(x, y), w, h}, replacing the {(x, y), w, h} of step one;
Step seven, repeat steps two to six to perform the eye positioning of the next frame.
2. The eye positioning method for the real-time monitoring of fatigue driving according to claim 1, characterized in that: the "existing eye positioning method" of step one comprises four steps: skin color detection, face segmentation, gray-level projection, and morphological processing.
3. The eye positioning method for the real-time monitoring of fatigue driving according to claim 1, characterized in that: the "skin color model" of step two is:

98 ≤ Cb ≤ 127, 133 ≤ Cr ≤ 170

where Cb and Cr are the two chrominance components of the YCbCr color space; skin color detection is performed on img1 and img2 with this model, the pixel value of points satisfying the model being set to 1 and that of points not satisfying it to 0, yielding the former binary frame BW1 and the latter binary frame BW2; finally BW1 and BW2 are subtracted and the absolute value is taken, giving the binary image BW of the adjacent-frame difference, i.e. the neighboring-frame difference image in YCbCr space.
4. The eye positioning method for the real-time monitoring of fatigue driving according to claim 1, characterized in that: the head displacement detection of step four is carried out as follows:
(1) scan the image from top to bottom to find the first horizontal line on which a white pixel appears, and take the image region of the face height Fh below this line, denoted p1;
(2) take the upper 2/3 region of p1, denoted p2;
(3) according to the left and right boundaries of the white region in p2, take the region between those boundaries, denoted p3; this is the actual range of lateral head motion; let the width of p3 be W and its height H;
(4) with the horizontal median axis Y = H/2 of p3 as the boundary, take the portions occupying 30% of the image height above and below it, denoted p4; compute the maximum width of consecutive white pixels in each row of p4, denoted dxi, where i ∈ [1, 0.6H]; take the mean of the dxi as the lateral head displacement, denoted dx:

dx = Σdxi / (0.6H);

(5) with the vertical median axis X = W/2 of p3 as the boundary, take the portions occupying 30% of the image width to its left and right, denoted p5; compute the maximum width of consecutive white pixels in each column of p5, denoted dyj, where j ∈ [1, 0.6W]; take the mean of the dyj as the longitudinal head displacement, denoted dy:

dy = Σdyj / (0.6W).
CN201410369776.0A 2014-07-30 2014-07-30 Eye positioning method for real-time monitoring of fatigue driving Expired - Fee Related CN104123549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410369776.0A CN104123549B (en) 2014-07-30 2014-07-30 Eye positioning method for real-time monitoring of fatigue driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410369776.0A CN104123549B (en) 2014-07-30 2014-07-30 Eye positioning method for real-time monitoring of fatigue driving

Publications (2)

Publication Number Publication Date
CN104123549A CN104123549A (en) 2014-10-29
CN104123549B (en) 2017-05-03

Family

ID=51768954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410369776.0A Expired - Fee Related CN104123549B (en) 2014-07-30 2014-07-30 Eye positioning method for real-time monitoring of fatigue driving

Country Status (1)

Country Link
CN (1) CN104123549B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574820B (en) * 2015-01-09 2017-02-22 安徽清新互联信息科技有限公司 Fatigue drive detecting method based on eye features
CN105354985B (en) * 2015-11-04 2018-01-12 中国科学院上海高等研究院 Fatigue driving monitoring apparatus and method
CN106447651A (en) * 2016-09-07 2017-02-22 遵义师范学院 Traffic sign detection method based on orthogonal Gauss-Hermite moment
CN106682603B (en) * 2016-12-19 2020-01-21 陕西科技大学 Real-time driver fatigue early warning system based on multi-source information fusion
CN106971194B (en) * 2017-02-16 2021-02-12 江苏大学 Driving intention recognition method based on improved HMM and SVM double-layer algorithm
CN107222660B (en) * 2017-05-12 2020-11-06 河南工业大学 Distributed network vision monitoring system
CN107240292A (en) * 2017-06-21 2017-10-10 深圳市盛路物联通讯技术有限公司 A kind of parking induction method and system of technical ability of being stopped based on driver itself
CN107248313A (en) * 2017-06-21 2017-10-13 深圳市盛路物联通讯技术有限公司 A kind of vehicle parking inducible system and method
CN108162893A (en) * 2017-12-25 2018-06-15 芜湖皖江知识产权运营中心有限公司 A kind of running control system applied in intelligent vehicle
CN110738602B (en) * 2019-09-12 2021-01-01 北京三快在线科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN112749604A (en) * 2019-10-31 2021-05-04 Oppo广东移动通信有限公司 Pupil positioning method and related device and product

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007156945A (en) * 2005-12-07 2007-06-21 Sony Corp Image processor and image processing method, program, and data structure
CN102122357A (en) * 2011-03-17 2011-07-13 电子科技大学 Fatigue detection method based on human eye opening and closure state
CN103700217A (en) * 2014-01-07 2014-04-02 广州市鸿慧电子科技有限公司 Fatigue driving detecting system and method based on human eye and wheel path characteristics
CN103839379A (en) * 2014-02-27 2014-06-04 长城汽车股份有限公司 Automobile and driver fatigue early warning detecting method and system for automobile

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on face detection and eye localization methods based on skin color and facial features; Li Shangguo; China Master's Theses Full-text Database, Information Science and Technology (monthly); 2011-05-15 (No. 05); I138-1157 *
Research on skin-color-based face detection methods and eye localization algorithms; Xu Yan; China Doctoral and Master's Dissertations Full-text Database (Master), Information Science and Technology (monthly); 2006-12-15 (No. 12); I138-1207 *

Also Published As

Publication number Publication date
CN104123549A (en) 2014-10-29

Similar Documents

Publication Publication Date Title
CN104123549B (en) Eye positioning method for real-time monitoring of fatigue driving
CN102289660B (en) Method for detecting illegal driving behavior based on hand gesture tracking
CN103714660B (en) System for achieving fatigue driving judgment on basis of image processing and fusion between heart rate characteristic and expression characteristic
CN102324025B (en) Human face detection and tracking method based on Gaussian skin color model and feature analysis
CN103400110B (en) Abnormal face detecting method before ATM cash dispenser
CN108875642A (en) A kind of method of the driver fatigue detection of multi-index amalgamation
CN103310194B (en) Pedestrian based on crown pixel gradient direction in a video shoulder detection method
CN106570486A (en) Kernel correlation filtering target tracking method based on feature fusion and Bayesian classification
CN106781282A (en) A kind of intelligent travelling crane driver fatigue early warning system
CN110728241A (en) Driver fatigue detection method based on deep learning multi-feature fusion
CN111582086A (en) Fatigue driving identification method and system based on multiple characteristics
CN105389554A (en) Face-identification-based living body determination method and equipment
CN108596087B (en) Driving fatigue degree detection regression model based on double-network result
CN104331151A (en) Optical flow-based gesture motion direction recognition method
CN106845328B (en) A kind of Intelligent human-face recognition methods and system based on dual camera
CN105740758A (en) Internet video face recognition method based on deep learning
CN109859241B (en) Adaptive feature selection and time consistency robust correlation filtering visual tracking method
CN104013414A (en) Driver fatigue detecting system based on smart mobile phone
CN104050488A (en) Hand gesture recognition method based on switching Kalman filtering model
CN105678813A (en) Skin color detection method and device
CN107038422A (en) The fatigue state recognition method of deep learning is constrained based on space geometry
CN106682603A (en) Real time driver fatigue warning system based on multi-source information fusion
CN104200199B (en) Bad steering behavioral value method based on TOF camera
CN108197534A (en) A kind of head part's attitude detecting method, electronic equipment and storage medium
CN109902565A (en) The Human bodys' response method of multiple features fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170503

Termination date: 20210730
