CN108256390A - Eye movement capture method based on projection integral and iris recognition - Google Patents

Eye movement capture method based on projection integral and iris recognition

Info

Publication number
CN108256390A
CN108256390A (application CN201611237357.7A)
Authority
CN
China
Prior art keywords: image, eye, iris, projection, integral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611237357.7A
Other languages
Chinese (zh)
Inventor
钟鸿飞 (Zhong Hongfei)
覃争鸣 (Qin Zhengming)
杨旭 (Yang Xu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Yingbo Intelligent Technology Co., Ltd.
Original Assignee
Guangzhou Yingbo Intelligent Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yingbo Intelligent Technology Co., Ltd.
Priority to CN201611237357.7A
Publication of CN108256390A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • G06V10/242 - Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an eye movement capture method based on projection integral and iris recognition, characterized by comprising: S1 facial image acquisition, S2 image preprocessing, S3 projection integral calculation, S4 eye region localization, S5 iris circle localization, and S6 eye movement capture. The scheme combines facial structure features with gray-level projection integrals, computing the gray-level variation of the eye region in both the horizontal and vertical directions to localize the eye region. Within the eye region image determined by the eye localization, the iris center is located using the physiological structure of the iris, and a parameterized template is then used to locate the upper and lower eyelids in the local eye image and recognize their motion. The method overcomes the imaging differences of eye images under different illumination conditions, improves the detection system's adaptability to illumination, and achieves all-weather, robust capture of the driver's eye movements.

Description

Eye movement capture method based on projection integral and iris recognition
Technical field
The invention belongs to the field of visual inspection and relates to an eye movement capture method based on projection integral and iris recognition.
Background technology
Fatigue driving refers to the phenomenon in which, owing to a monotonous driving environment or prolonged, high-intensity driving, the driver's excessive energy expenditure causes physiological and mental function to decline, lowering reaction speed and control efficiency, degrading driving performance, and interfering with normal driving. A fatigued driver's ability to perceive the traffic environment, judge danger, and control the vehicle declines to varying degrees, easily causing traffic accidents.
The mainstream fatigue-driving detection methods assess the driver's mental state with the driver's participation; the most common is the facial-video-based fatigue detection method, in which trained scoring experts assess the driver's fatigue state from facial expression. The accuracy of this method depends on accurate capture of eye movements. Affected by factors such as uneven illumination during real driving, the driver's limb movements, blurred and dim images under night conditions, and eye images hidden under sunglasses conditions, all-weather, highly robust eye movement capture still faces numerous technical bottlenecks.
Under night conditions, in-vehicle illumination is insufficient, which weakens the boundaries between the facial organs in the face image; the image is also easily affected by ambient lighting such as polarized side light, producing non-structural edges (edges that do not correspond to facial physiological features) and seriously hindering capture of the driver's eye movements.
To reduce ultraviolet injury to the eyes and the impact of environmental glare on visual observation, drivers tend to wear sunglasses in strong daylight, which occludes the eye-region image and makes the driver's eye movements impossible to capture.
Summary of the invention
The purpose of the invention is to overcome the deficiencies of the prior art, in particular the failure of eye movement capture in existing facial-video fatigue detection methods caused by uneven illumination during real driving, the driver's limb movements, blurred and dim images under night conditions, and invisible eye images under sunglasses conditions.
To solve the above technical problems, the invention adopts the following technical scheme: an eye movement capture method based on projection integral and iris recognition, the method comprising:
S1 facial image acquisition: a frontal face image is acquired using infrared illumination and infrared filtering;
S2 image preprocessing: the face image captured by the camera is preprocessed;
S3 projection integral calculation: the gray-level variation of the face image is calculated by the integral projection method;
S4 eye region localization: the eye region is localized from the eye-spacing proportions and the integral calculation results;
S5 iris circle localization: within the eye region image determined by the eye localization, the iris center is located using the physiological structure of the iris;
S6 eye movement capture: from the located feature points, the driver's eyelid contour motion is effectively extracted.
Further, the step S2 image preprocessing comprises the operations of image grayscale conversion, image equalization, image binarization, and image negative;
Further, the step S3 projection integral calculation comprises horizontal integral projection and vertical integral projection operations;
Further, in the step S3 projection integral calculation, the image is divided into left and right halves and the projection integrals are calculated separately for each half;
Further, in the step S4 eye region localization, the eye region is obtained from the proportions of the spacing between the person's two eyes;
Further, in the step S6 eye movement capture, a parameterized template is used to locate the upper and lower eyelids in the local eye image.
Compared with the prior art, the invention has the following advantageous effects:
The scheme combines facial structure features with gray-level projection integrals, computing the gray-level variation of the eye region in both the horizontal and vertical directions to localize the eye region. Within the eye region image determined by the eye localization, the iris center is located using the physiological structure of the iris, and a parameterized template is then used to locate the upper and lower eyelids in the local eye image and recognize their motion. This overcomes the imaging differences of eye images under different illumination conditions, improves the detection system's adaptability to illumination, and achieves all-weather, robust capture of the driver's eye movements.
Description of the drawings
Fig. 1 is the flow chart of the eye movement capture method based on projection integral and iris recognition of the embodiment of the invention.
Fig. 2 shows the face image obtained with infrared illumination in the embodiment of the invention.
Fig. 3 shows the face image obtained through the infrared filter in the embodiment of the invention.
Fig. 4 is a schematic diagram of the image binarization result of the embodiment of the invention.
Fig. 5 is a schematic diagram of the image negative result of the embodiment of the invention.
Fig. 6 shows the integral projection calculation curves of the embodiment of the invention.
Fig. 7 is a schematic diagram of the proportional relationships of the human eye used in the embodiment of the invention.
Fig. 8 shows the iris localization result of the embodiment of the invention.
Fig. 9 shows the eyelid contour localization result of the embodiment of the invention.
Specific embodiment
The invention is explained below in further detail, with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here merely explain the invention and do not limit it.
With reference to Fig. 1, an eye movement capture method based on projection integral and iris recognition according to an embodiment of the invention comprises:
S1 facial image acquisition. The driver's face image is acquired with a CCD camera mounted near the vehicle instrument panel. Depending on the vehicle type, the camera focal length is 8 mm (passenger car) or 12 mm (commercial vehicle); this embodiment uses the 8 mm camera to acquire the face image.
With reference to Fig. 2, to overcome the dim light and side light of night conditions while avoiding interference with the driver's normal driving, the face image is acquired under infrared illumination (850 nm).
With reference to Fig. 3, to make the eye image visible under sunglasses conditions, the face image is acquired in the infrared band by infrared filtering. This embodiment uses an 850 nm infrared filter, which passes facial reflected light with wavelengths above 800 nm.
S2, image preprocessing. The detailed process comprises: S21 image grayscale conversion, S22 image equalization, S23 image binarization, and S24 image negative.
S21 image grayscale conversion: the eye image captured by the camera is a color image, which carries a large amount of information and is slow to process. Given the high real-time requirements of human-computer interaction, converting the color image to grayscale is necessary. Grayscale conversion makes the R, G, and B components of each color pixel equal; the gray value in the grayscale image equals the average of the RGB components in the original color image, i.e.
Gray = (R + G + B) / 3  (1)
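As an illustration, formula (1) takes a few lines of Python with NumPy; a minimal sketch, in which the function name and the OpenCV-style BGR channel order are assumptions rather than part of the patent:

```python
import numpy as np

def to_gray(bgr: np.ndarray) -> np.ndarray:
    """Equal-weight grayscale conversion per formula (1): Gray = (R+G+B)/3."""
    # bgr: H x W x 3 uint8 image (OpenCV-style B, G, R channel order);
    # the channel average is unchanged by the channel order.
    return (bgr.astype(np.float32).sum(axis=2) / 3.0).astype(np.uint8)
```

Note that this equal-weight average differs from the perceptual weighting (0.299R + 0.587G + 0.114B) used by cv2.cvtColor; formula (1) weights the channels equally.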
S22 image equalization: histogram equalization stretches the gray-level spacing of the image or evens out its gray-level distribution, increasing contrast and making image detail clear, thereby achieving image enhancement. The specific method is:
First list all gray levels S_k (k = 0, 1, …, L-1) of the original image; then count the number of pixels n_k at each gray level. The histogram of the original image is computed with formula (2), and its cumulative histogram is then computed with formula (3):
P(S_k) = n_k / n, k = 0, 1, …, L-1  (2)
t_k = (L - 1) · Σ_{j=0}^{k} P(S_j)  (3)
where n is the total number of image pixels. Rounding the gray values t_k determines the mapping S_k → t_k; the pixel count n_k at each gray level of the remapped image is then tallied, and the new histogram is finally computed with formula (4):
p(t_k) = n_k / n  (4)
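A minimal NumPy sketch of formulas (2)-(4); the function name and the 256-level default are illustrative assumptions:

```python
import numpy as np

def equalize(gray: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram equalization per formulas (2)-(4)."""
    n = gray.size
    n_k = np.bincount(gray.ravel(), minlength=levels)   # pixels per gray level
    p = n_k / n                                         # (2): P(S_k) = n_k / n
    t_k = np.round(np.cumsum(p) * (levels - 1))         # (3), rounded S_k -> t_k map
    out = t_k.astype(np.uint8)[gray]                    # remap every pixel
    return out                                          # histogram of out gives (4)
```

For 8-bit images, cv2.equalizeHist computes essentially the same mapping.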
S23 image binarization: the image is binarized with the between-class variance maximization (Otsu) method. The process is:
Suppose the image has L gray levels and N pixels in total, of which n_i pixels have gray value i. Normalizing the gray histogram, let
p_i = n_i / N  (5)
Setting a threshold t divides the pixels by gray value into two classes c_0 and c_1. Class c_0 has probability ω_0 and mean μ_0:
ω_0 = Σ_{i=0}^{t} p_i,  μ_0 = Σ_{i=0}^{t} i·p_i / ω_0  (6)
Class c_1 has probability ω_1 and mean μ_1:
ω_1 = 1 - ω_0,  μ_1 = Σ_{i=t+1}^{L-1} i·p_i / ω_1  (7)
where the overall mean is μ = ω_0·μ_0 + ω_1·μ_1  (8). It follows that the between-class variance σ²(t) of c_0 and c_1 is:
σ²(t) = ω_0(μ - μ_0)² + ω_1(μ_1 - μ)²  (9)
Scanning t over all gray levels, the t that maximizes σ²(t) is the optimal threshold, which yields the best binary image; the binarization result is shown in Fig. 4.
S24 image negative: the binary image is inverted, mapping its black parts to white and its white parts to black, so that the originally black eye region is highlighted; the negative result is shown in Fig. 5.
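OpenCV's built-in Otsu mode performs exactly this scan for the maximum of σ²(t); a sketch combining steps S23 and S24 (the function name is an assumption):

```python
import cv2
import numpy as np

def binarize_negative(gray: np.ndarray) -> np.ndarray:
    """S23: Otsu binarization (maximizes sigma^2(t), formula (9));
    S24: negative, so the originally dark eye region becomes white."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_not(bw)
```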
S3, projection integral calculation. Since the face image may be rotated in-plane by some angle, the two eyes are not necessarily on the same horizontal line; a global horizontal projection of the whole face image would therefore give insufficiently accurate eye ordinates. In this embodiment the face image is first split down the middle into left and right halves of equal size, containing the right and left eyes respectively, and the horizontal integral projection is computed for each half separately.
Let I(x, y) be the pixel gray value of image I at location (x, y). The vertical integral projection IPF_v(x) over the interval [y_1, y_2] and the horizontal integral projection IPF_h(y) over the interval [x_1, x_2] are respectively:
IPF_v(x) = ∫_{y_1}^{y_2} I(x, y) dy  (10)
IPF_h(y) = ∫_{x_1}^{x_2} I(x, y) dx  (11)
In this embodiment, the horizontal integral projection is computed separately for the left and right half images; the results are shown in Fig. 6, where the horizontal axis is the position and the vertical axis is the sum of the pixel values at that position. The position of the maximum peak on the horizontal axis of Fig. 6(a) is the ordinate y_R of the right eye; the position of the maximum peak in Fig. 6(b) is the ordinate y_L of the left eye; and the two peak positions of the two wave segments in Fig. 6(c) are the abscissas of the left and right eyes, x_L and x_R, with x_R < x_L.
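A discrete sketch of these projections in NumPy, with sums standing in for the integrals of formulas (10) and (11); picking one peak per half image replaces the two-peak search of Fig. 6(c), a simplification assumed here:

```python
import numpy as np

def eye_centers(bw: np.ndarray):
    """Eye centers from integral projections of the left/right half images.
    bw: preprocessed (binarized and negated) image, eye regions bright.
    The driver's right eye lies in the left image half, hence x_R < x_L."""
    h, w = bw.shape
    half_l, half_r = bw[:, : w // 2], bw[:, w // 2 :]
    y_r = int(np.argmax(half_l.sum(axis=1)))           # IPF_h peak: right-eye row
    y_l = int(np.argmax(half_r.sum(axis=1)))           # IPF_h peak: left-eye row
    x_r = int(np.argmax(half_l.sum(axis=0)))           # IPF_v peak: right-eye column
    x_l = w // 2 + int(np.argmax(half_r.sum(axis=0)))  # IPF_v peak: left-eye column
    return (x_l, y_l), (x_r, y_r)
```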
S4, eye region localization. The eye center coordinates computed by the gray-level projection method are the intersections of the lines through the horizontal and vertical projection maxima: point (x_L, y_L) is the center of the left eye and point (x_R, y_R) the center of the right eye. Referring to Fig. 7, a rectangular eye window is marked out according to the proportional relationships of the human eye; this rectangular window is the localized eye region.
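The Fig. 7 proportions are given only graphically, so the ratios in the sketch below are placeholder assumptions; it cuts the rectangular eye window around a detected eye center:

```python
def eye_window(center, eye_spacing, w_ratio=0.6, h_ratio=0.3):
    """Rectangular eye window around an eye center (x, y).
    eye_spacing: inter-eye distance d = x_L - x_R. The window is sized as a
    fraction of d; the 0.6 / 0.3 ratios stand in for the Fig. 7 proportions."""
    x, y = center
    half_w = int(w_ratio * eye_spacing / 2)
    half_h = int(h_ratio * eye_spacing / 2)
    return x - half_w, y - half_h, x + half_w, y + half_h  # (x0, y0, x1, y1)
```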
S5, iris circle localization. Referring to Fig. 8, the human iris has the physiological property of radial symmetry, so a radial symmetry transform is applied to the gradient image of the eye region to locate the iris center point. Let the gradient of a pixel p(u, v) of the image be G_p = {G_pu, G_pv}. The mapped pixel of p(u, v) is defined as:
p_n(u, v) = p(u, v) + Round( n · G_p / ||G_p|| )  (12)
where n is the mapping radius and Round denotes rounding to the nearest integer. For the current mapping radius n, the gradient direction map θ_n and the gradient magnitude map A_n are computed: each mapped pixel contributes one vote to θ_n and a vote weighted by its gradient magnitude to A_n. The two maps are then merged:
S_n = (A_n / k_n) · (θ_n / k_n)^α  (13)
where k_n is a scale parameter and α is the radial strictness parameter. Gaussian filtering is applied to S_n so that the influence of each mapped pixel spreads isotropically over the scale of the Gaussian filter:
R_n = S_n * F_n  (14)
where F_n is an isotropic Gaussian filter. Varying the mapping radius n and accumulating gives the radial symmetry transform output R_s:
R_s = Σ_n R_n  (15)
The maximum of R_s is then located; its coordinates are the iris center in the eye image.
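A compact sketch of formulas (12)-(15); the radius set, the Gaussian scale, and the choice to vote against the gradient direction (so that a dark iris on a brighter sclera accumulates votes at its center) are assumptions:

```python
import cv2
import numpy as np

def iris_center(eye_gray: np.ndarray, radii=(4, 6, 8, 10), alpha=2.0):
    """Radial symmetry transform over the eye window; the argmax of the
    accumulated response R_s (formula (15)) is taken as the iris center."""
    gx = cv2.Sobel(eye_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(eye_gray, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx * gx + gy * gy) + 1e-6
    h, w = eye_gray.shape
    vs, us = np.mgrid[0:h, 0:w]
    r_s = np.zeros((h, w), np.float32)
    for n in radii:
        # (12): map each pixel n steps against its gradient direction
        mu = np.clip(np.round(us - n * gx / mag).astype(int), 0, w - 1)
        mv = np.clip(np.round(vs - n * gy / mag).astype(int), 0, h - 1)
        theta_n = np.zeros((h, w), np.float32)   # gradient direction map
        a_n = np.zeros((h, w), np.float32)       # gradient magnitude map
        np.add.at(theta_n, (mv, mu), 1.0)        # one vote per mapped pixel
        np.add.at(a_n, (mv, mu), mag)            # magnitude-weighted vote
        k_n = theta_n.max() + 1e-6               # scale parameter k_n
        s_n = (a_n / k_n) * (theta_n / k_n) ** alpha     # (13)
        r_s += cv2.GaussianBlur(s_n, (0, 0), 0.5 * n)    # (14), summed per (15)
    v, u = np.unravel_index(int(np.argmax(r_s)), r_s.shape)
    return int(u), int(v)
```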
S6, eye movement capture. Referring to Fig. 9, a parameterized template is used to locate the upper and lower eyelids in the local eye image, from which the driver's eye state is analyzed.
The parameterized template is characterized as follows:
(1) the iris is represented by a circle with center coordinates (u_i, v_i), determined by the iris spot detection algorithm, so that the relationship between iris movement characteristics and driving fatigue can be analyzed;
(2) the upper and lower eyelids are represented by parabolas with center coordinates (u_o, v_o);
(3) the upper eyelid height is a, the lower eyelid height is c, the half-width of the eyelid parabolas is b, and the eye tilt angle is θ;
(4) the left and right eye corner points are determined by eye corner detection.
Denote the left and right eye corner points in the local eye image by X_c1 and X_c2. The edge energy function is defined as the integral of the edge image φ_e along the two eyelid parabolas:
E_edge = ∮_{f_up} φ_e ds + ∮_{f_down} φ_e ds  (16)
where f_up(u, v; X_c1, X_c2) and f_down(u, v; X_c1, X_c2) are the upper and lower eyelid parabolic equations computed from the eye corner points and the set vertex coordinates. The skin-color energy function is defined as the surface integral of the skin-color image φ_s over the region A enclosed by the upper and lower eyelids:
E_skin = ∬_A φ_s du dv  (17)
The prior energy term is defined as:
E_prior = -(b - 2a)² - (a - 2c)²  (18)
The total energy function of the parameterized template is:
E = E_prior + E_edge + E_skin  (19)
The eye contour (upper and lower eyelids) is located by searching for the extremum of the energy function in the local eye image.
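A simplified sketch of evaluating and searching this energy; the tilt angle θ is fixed to 0, the edge/skin input images, the sign convention for E_skin (an open eye region should contain little skin), and the search ranges are all assumptions, and the eye corners are taken as already detected:

```python
import numpy as np

def template_energy(edge_img, skin_img, corner_l, corner_r, a, c):
    """E = E_prior + E_edge + E_skin (formula (19)) for one (a, c) pair."""
    (u1, v1), (u2, v2) = corner_l, corner_r
    b = (u2 - u1) / 2.0                              # parabola half-width
    uo, vo = (u1 + u2) / 2.0, (v1 + v2) / 2.0        # eyelid center (u_o, v_o)
    us = np.arange(int(u1), int(u2))
    rel = (us - uo) / b
    v_up = (vo - a * (1.0 - rel ** 2)).astype(int)   # f_up (image y grows downward)
    v_dn = (vo + c * (1.0 - rel ** 2)).astype(int)   # f_down
    e_edge = edge_img[v_up, us].sum() + edge_img[v_dn, us].sum()   # (16)
    e_skin = -sum(skin_img[t:bot, u].sum()                         # (17); negative since
                  for u, t, bot in zip(us, v_up, v_dn))            # the open eye is non-skin
    e_prior = -(b - 2 * a) ** 2 - (a - 2 * c) ** 2                 # (18)
    return e_prior + e_edge + e_skin

def fit_eyelids(edge_img, skin_img, corner_l, corner_r, max_h=15):
    """Grid search over eyelid heights (a, c) for the energy maximum."""
    scores = ((template_energy(edge_img, skin_img, corner_l, corner_r, a, c), a, c)
              for a in range(2, max_h) for c in range(2, max_h))
    _, a, c = max(scores)
    return a, c
```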

Claims (6)

1. An eye movement capture method based on projection integral and iris recognition, characterized in that the method comprises:
S1 facial image acquisition: a frontal face image is acquired using infrared illumination and infrared filtering;
S2 image preprocessing: the face image captured by the camera is preprocessed;
S3 projection integral calculation: the gray-level variation of the face image is calculated by the integral projection method;
S4 eye region localization: the eye region is localized from the eye-spacing proportions and the integral calculation results;
S5 iris circle localization: within the eye region image determined by the eye localization, the iris center is located using the physiological structure of the iris;
S6 eye movement capture: from the located feature points, the driver's eyelid contour motion is effectively extracted.
2. The eye movement capture method based on projection integral and iris recognition according to claim 1, characterized in that the step S2 image preprocessing comprises the operations of image grayscale conversion, image equalization, image binarization, and image negative.
3. The eye movement capture method based on projection integral and iris recognition according to claim 1, characterized in that the step S3 projection integral calculation comprises horizontal integral projection and vertical integral projection operations.
4. The eye movement capture method based on projection integral and iris recognition according to claim 1, characterized in that in the step S3 projection integral calculation, the image is divided into left and right halves and the projection integrals are calculated separately.
5. The eye movement capture method based on projection integral and iris recognition according to claim 1, characterized in that in the step S4 eye region localization, the eye region is obtained from the proportions of the spacing between the two eyes.
6. The eye movement capture method based on projection integral and iris recognition according to claim 1, characterized in that in the step S6 eye movement capture, a parameterized template is used to locate the upper and lower eyelids in the local eye image.
CN201611237357.7A 2016-12-29 2016-12-29 Eye movement capture method based on projection integral and iris recognition Pending CN108256390A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611237357.7A CN108256390A (en) 2016-12-29 2016-12-29 Eye movement capture method based on projection integral and iris recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611237357.7A CN108256390A (en) 2016-12-29 2016-12-29 Eye movement capture method based on projection integral and iris recognition

Publications (1)

Publication Number Publication Date
CN108256390A 2018-07-06

Family

ID=62719542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611237357.7A Pending Eye movement capture method based on projection integral and iris recognition

Country Status (1)

Country Link
CN (1) CN108256390A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740512A (en) * 2018-12-29 2019-05-10 山东财经大学 A kind of method for recognizing human eye state for fatigue driving judgement
CN110623629A (en) * 2019-07-31 2019-12-31 毕宏生 Visual attention detection method and system based on eyeball motion


Similar Documents

Publication Publication Date Title
CN108256378A (en) Driver Fatigue Detection based on eyeball action recognition
CN104881955B (en) A kind of driver tired driving detection method and system
Alshaqaqi et al. Driver drowsiness detection system
CN1225375C (en) Method for detecting fatigue driving based on multiple characteristic fusion
CN104091147B (en) A kind of near-infrared eyes positioning and eye state identification method
CN111062292B (en) Fatigue driving detection device and method
CN106250801A (en) Based on Face datection and the fatigue detection method of human eye state identification
CN101059836A (en) Human eye positioning and human eye state recognition method
US20080101659A1 (en) Eye closure recognition system and method
CN106934808A (en) A kind of automobile headlamp taillight recognition and tracking method under visually-perceptible
CN107153816A (en) A kind of data enhancement methods recognized for robust human face
Li et al. Nighttime lane markings recognition based on Canny detection and Hough transform
CN102054163A (en) Method for testing driver fatigue based on monocular vision
CN111553214B (en) Method and system for detecting smoking behavior of driver
CN103268479A (en) Method for detecting fatigue driving around clock
CN106407951B (en) A kind of night front vehicles detection method based on monocular vision
CN107895157B (en) Method for accurately positioning iris center of low-resolution image
CN103218615B (en) Face judgment method
CN111619324A (en) Intelligent anti-dazzling method and system for sight tracking automobile
CN106203338B (en) Human eye state method for quickly identifying based on net region segmentation and threshold adaptive
CN108256397A (en) Localization of iris circle method based on projecting integral
CN103839245B (en) The Retinex colour-image reinforcing method at night of Corpus--based Method rule
CN109886086A (en) Pedestrian detection method based on HOG feature and Linear SVM cascade classifier
CN104992160B (en) A kind of heavy truck night front vehicles detection method
CN113140093A (en) Fatigue driving detection method based on AdaBoost algorithm

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180706