CN108720851A - Driving state detection method, mobile terminal and storage medium - Google Patents

Driving state detection method, mobile terminal and storage medium

Info

Publication number
CN108720851A
CN108720851A (application CN201810498589.0A); granted as CN108720851B
Authority
CN
China
Prior art keywords
eyes
video frame
iris
user
flare
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810498589.0A
Other languages
Chinese (zh)
Other versions
CN108720851B (en)
Inventor
王晓鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wang Xiaopeng
Original Assignee
Release Code Fusion (shanghai) Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Release Code Fusion (shanghai) Mdt Infotech Ltd filed Critical Release Code Fusion (shanghai) Mdt Infotech Ltd
Priority to CN201810498589.0A priority Critical patent/CN108720851B/en
Publication of CN108720851A publication Critical patent/CN108720851A/en
Application granted granted Critical
Publication of CN108720851B publication Critical patent/CN108720851B/en
Legal status: Active (granted); anticipated expiration status also listed.


Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 — Measuring for diagnostic purposes; identification of persons
    • A61B 5/16 — Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state
    • A61B 5/165 — Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/163 — Evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • A61B 5/18 — Evaluating the psychological state of vehicle drivers or machine operators
    • A61B 5/74 — Details of notification to user or communication with user or patient; user input means
    • A61B 5/746 — Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
    • A61B 2503/22 — Evaluating motor-vehicle operators, e.g. drivers, pilots, captains
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/165 — Face detection, localisation, normalisation using facial parts and geometric relationships
    • G06V 40/18 — Eye characteristics, e.g. of the iris

Abstract

The invention discloses a driving state detection method, a mobile terminal and a storage medium. The method executes on a mobile terminal that includes an infrared camera module adapted to continuously capture images of the user's face and generate a video frame sequence. Each time a video frame is captured, the eyes are located in the frame and an eye image is extracted; eyelid segmentation and iris segmentation are performed on the extracted eye image; the eye opening degree is then computed from the segmentation result, where the opening degree is the ratio of the area of the overlap between the region bounded by the upper and lower eyelid curves and the region bounded by the iris outer-contour curve, to the area of the region bounded by the iris outer-contour curve. Whether the eyes are closed is judged from the opening degree, and whether the user is driving while fatigued is judged from the proportion of closed-eye frames among a predetermined number of consecutive video frames ending with the current frame. The scheme is unaffected by individual differences and can judge the driving state accurately.

Description

Driving state detection method, mobile terminal and storage medium
Technical field
The present invention relates to the field of safe-driving technology, and in particular to a driving state detection method, a mobile terminal and a storage medium.
Background art
Fatigue driving refers to the disorder of physiological and mental function that a person develops after driving continuously for a long time. Slowed reactions, impaired judgment and a sluggish rhythm are its main manifestations. Fatigue driving is a major cause of traffic accidents, and the number of accidents it causes is growing rapidly, seriously threatening people's lives and property.
At present, fatigue-driving detection mainly takes two forms: facial-feature detection and physiological-index detection. Using facial features such as eye activity and facial expression requires capturing images of the driver's face; however, owing to constraints such as the driver's head position and the lighting inside the vehicle, the captured images may be unclear, degrading subsequent detection accuracy. Using physiological indices such as electrocardiogram (ECG) data or respiratory rate requires the driver to wear dedicated equipment, which may interfere with normal driving and so reduce driving safety.
Among existing image-processing-based fatigue detection methods, those based on the degree and duration of mouth opening cannot meet the real-time demands of tasks such as driving, because by the time the mouth opens in a yawn the person is already dozing; eye-movement methods that rely on natural illumination cannot work at night or in dim ambient light.
Therefore, a general and simple driving state detection method is needed to improve the accuracy of driving-state judgment.
Summary of the invention
To this end, the present invention provides a driving state detection method, a mobile terminal and a storage medium, in an effort to solve, or at least alleviate, at least one of the problems above.
According to one aspect of the invention, a driving state detection method is provided. The method executes on a mobile terminal that includes an infrared camera module adapted to continuously capture images of the user's face and generate a video frame sequence. The method includes: each time a video frame is captured, locating the eyes in the frame and extracting an eye image; performing eyelid segmentation and iris segmentation on the extracted eye image; computing the eye opening degree from the segmentation result, where the opening degree is the ratio of the area of the overlap between the region bounded by the upper and lower eyelid curves and the region bounded by the iris outer-contour curve, to the area of the region bounded by the iris outer-contour curve; judging from the opening degree whether the eyes are closed; and judging whether the user is driving while fatigued from the proportion of closed-eye frames among a predetermined number of consecutive video frames ending with the current frame.
Because the scheme judges eye closure by a two-dimensional relative opening degree, it is not affected by individual differences in natural eye opening; combining the judgment over consecutive frames improves the accuracy of fatigue-state detection.
Optionally, the method further includes: computing the magnitude of the visual angle from the position of the pupil's reflected spot of infrared light relative to the iris center, the direction of the visual angle being from the center of the reflected spot toward the iris center; judging from the visual angle whether the eyes face forward; and judging whether the user's attention is focused from the proportion of frames, among a predetermined number of consecutive video frames ending with the current frame, in which the eyes do not face forward.
Optionally, the method further includes: storing the eye-closure state and the visual angle determined for each video frame in a circular linked list, so as to count the proportion of frames in which the eyes are closed and/or the proportion of frames in which the eyes do not face forward.
Optionally, the method further includes: after judging that the user is driving while fatigued, issuing first warning information prompting the user that he or she is currently in a fatigued driving state; and/or after judging that the user's attention is not focused, issuing second warning information prompting the user to concentrate.
Optionally, the position of the user's eyes is determined in the video frame from the position of the pupil's reflected spot of infrared light, and the eye image is extracted from the video frame according to the determined eye position.
Optionally, based on the gray-level distribution of the reflected spot relative to the pupil and iris region, the saturated region of the reflected spot is determined by binarizing the video frame; the saturated region is filtered with a horizontal-gradient filter and a vertical-gradient filter to determine the position of the reflected spot; and the position of the user's eyes is determined from the position of the reflected spot.
Optionally, the upper and lower eyelid curves are fitted by polynomials based on the vertical gradient between the eyelids and the sclera, and the iris outer-contour curve is fitted by a circle based on the horizontal gradient between the two sides of the iris and the sclera.
Optionally, the method further includes: obtaining the user's eye opening degree in a non-fatigued state as the initial opening degree.
Optionally, a predetermined threshold is determined from the initial opening degree as the product of the initial opening degree and a predetermined coefficient; when the eye opening degree is below this threshold, the eyes are judged closed.
Optionally, the visual angle is computed from the distance between the center of the reflected spot and the iris center, and the radius of the iris.
Optionally, the visual angle is computed by a formula in terms of α, the visual angle; R, the iris radius; and d, the distance between the center of the reflected spot and the iris center.
According to another aspect of the invention, a mobile terminal is provided, including one or more processors; a memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for executing the driving state detection method.
According to yet another aspect of the invention, a computer-readable storage medium storing one or more programs is provided. The one or more programs include instructions that, when executed by a mobile terminal, cause the mobile terminal to execute the driving state detection method.
The above scheme captures a continuous video frame sequence with an infrared camera module, performs face recognition and eye localization, determines the upper and lower eyelid curves and the iris outer-contour curve by eyelid and iris segmentation of the eye image, then judges eye closure and gaze direction from the opening degree and the visual angle, and judges the driving state and issues warnings from the combined proportions of closed-eye frames and not-facing-forward frames over consecutive frames. Individual differences are accounted for when computing the opening degree: the ratio of the area of the overlap between the region bounded by the eyelid curves and the region bounded by the iris outer contour, to the area of the region bounded by the iris outer contour, serves as the opening degree. The method needs no additional hardware; it only requires an application installed on the mobile terminal to execute the corresponding instructions. The driving state detection method is thus general and simple, and improves the accuracy of driving-state detection.
Description of the drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features and advantages of the disclosure will become apparent from the following detailed description read in conjunction with the drawings. Throughout the disclosure, the same reference numerals generally refer to the same components or elements.
Fig. 1 shows a structural diagram of a mobile terminal 100 according to an embodiment of the invention;
Fig. 2 shows a schematic flow chart of a driving state detection method according to an embodiment of the invention;
Fig. 3 shows a schematic diagram of the upper and lower eyelid curves and the iris curve according to an embodiment of the invention;
Fig. 4 shows a schematic diagram of the visual angle defined by the relative positions of the reflected-spot center and the iris center, according to an embodiment of the invention;
Fig. 5 shows a schematic flow chart of driving state detection according to an embodiment of the invention;
Fig. 6 shows a schematic diagram of the monitoring state of a mobile terminal applying the driving state detection method, according to an embodiment of the invention.
Detailed description of embodiments
Exemplary embodiments of the disclosure are described more fully below with reference to the drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope conveyed fully to those skilled in the art.
A typical fatigue-monitoring system infers the driver's fatigue state from the driver's facial features, eye signals, head movement and the like, and issues an early warning. The present invention performs driving-state detection with a mobile terminal that captures infrared images — for example a mobile phone with active infrared illumination — obtains an image of the eye region, and uses the ratio of the iris area between the upper and lower eyelids to the entire iris area as the eye opening degree. This ratio is unaffected by individual differences in natural eye opening, so the driver's state can be analyzed simply and effectively and a driver in a fatigued state can be alerted.
Fig. 1 shows a structural diagram of a mobile terminal 100 according to an embodiment of the invention. The mobile terminal 100 may include a memory interface 102, one or more data processors, image processors and/or central processing units 104, a display screen (not shown in Fig. 1), and a peripheral interface 106.
The memory interface 102, the one or more processors 104 and/or the peripheral interface 106 may be discrete components or may be integrated in one or more integrated circuits. In the mobile terminal 100, the various elements may be coupled by one or more communication buses or signal lines. Sensors, devices and subsystems may be coupled to the peripheral interface 106 to help implement a variety of functions.
For example, a motion sensor 110, a light sensor 112 and a distance sensor 114 may be coupled to the peripheral interface 106 to facilitate functions such as orientation, illumination and ranging. Other sensors 116, such as a positioning system (e.g. a GPS receiver), a temperature sensor, a biometric sensor or other sensing devices, may likewise be connected to the peripheral interface 106 to help implement related functions.
A camera subsystem 120 and an optical sensor 122 — for example a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) optical sensor — may be used to implement camera functions such as recording photographs and video clips. Communication functions may be implemented through one or more wireless communication subsystems 124, which may include radio-frequency receivers and transmitters and/or optical (e.g. infrared) receivers and transmitters. The particular design and embodiment of the wireless communication subsystem 124 may depend on the one or more communication networks supported by the mobile terminal 100. For example, the mobile terminal 100 may include a communication subsystem 124 designed to support LTE, 3G, GSM, GPRS, EDGE, Wi-Fi or WiMAX networks and Bluetooth™ networks.
An audio subsystem 126 may be coupled to a loudspeaker 128 and a microphone 130 to help implement voice-enabled functions such as speech recognition, speech reproduction, digital recording and telephony. An I/O subsystem 140 may include a touch-screen controller 142 and/or one or more other input controllers 144. The touch-screen controller 142 may be coupled to a touch screen 146; the touch screen 146 and the touch-screen controller 142 may detect contact and movement or pauses using any of a variety of touch-sensing technologies, including but not limited to capacitive, resistive, infrared and surface-acoustic-wave technologies. The one or more other input controllers 144 may be coupled to other input/control devices 148, such as one or more buttons, rocker switches, thumbwheels, infrared ports, USB ports and/or pointing devices such as a stylus. The one or more buttons (not shown) may include up/down buttons for controlling the volume of the loudspeaker 128 and/or the microphone 130.
The memory interface 102 may be coupled to a memory 150. The memory 150 may include high-speed random-access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g. NAND, NOR). The memory 150 may store an operating system 172, for example Android, iOS or Windows Phone, which may include instructions for handling basic system services and performing hardware-dependent tasks. The memory 150 may also store one or more programs 174. When the mobile device runs, the operating system 172 is loaded from the memory 150 and executed by the processor 104; a program 174 is likewise loaded from the memory 150 at run time and executed by the processor 104. The programs 174 run on top of the operating system and use the interfaces provided by the operating system and the underlying hardware to implement various user-desired functions, such as instant messaging, web browsing and picture management. A program 174 may be provided independently of the operating system or be bundled with it, and installing a program 174 on the mobile terminal 100 may also add a driver module to the operating system. A program 174 may be arranged to have its instructions executed on the operating system by the one or more processors 104. In some embodiments, the mobile terminal 100 is configured to execute the driving state detection method according to the invention, and the one or more programs 174 of the mobile terminal 100 include instructions for executing that method.
The mobile terminal 100 may be, but is not limited to, a portable electronic device such as a smartphone or tablet computer. Specifically, the camera subsystem 120 and optical sensor 122 of the mobile terminal 100 form an infrared camera module capable of continuously capturing infrared images of the user's face and generating a video frame sequence. The infrared camera module includes an active infrared emitter, and its capture frame rate is generally no lower than 15 frames per second. The mobile terminal can be fixed in front of the driver so that face images can be captured; video frames in which no face is captured are discarded based on the overall intensity of the image, and if no face image is collected over several consecutive frames a corresponding prompt is issued. During driving, the driving-state judgment relies on the continuous face video frames captured by the infrared camera module of the mobile terminal.
Fig. 2 shows a schematic flow chart of a driving state detection method according to an embodiment of the invention. As shown in Fig. 2, in step S200, each time the infrared camera module captures a video frame, the position of the eyes is located in the frame and an eye image is extracted.
Different parts of the eye reflect and refract infrared light differently. According to the corneal-reflection principle, infrared light is reflected by the anterior corneal surface and forms a small bright area — a reflected light spot — on the image. In the grayscale face image, the pupil is the darkest region, the reflected spot corresponds to the brightest point in the eye image, and the remaining parts have gray values in between. The spot position can therefore be located from the gray-level distribution pattern of the pupil's infrared reflection relative to its surroundings, which indirectly locates the eyes.
The position of the user's eyes can thus be determined in the video frame from the position of the pupil's reflected infrared spot, and the eye image extracted from the frame according to the determined eye position.
As can be seen in the video frame, the gray values of the reflected-spot area have a large gradient relative to the pupil and iris region, and the gray level at the spot center is essentially saturated — the pixel values at the reflected spot are saturated or nearly saturated, while the surrounding pixel values are much smaller.
According to an embodiment of the invention, the saturated region of the reflected spot can be determined by binarizing the image, based on the gray-level distribution of the spot relative to the surrounding pupil and iris region. For example, the grayscale image is binarized by taking a certain gray level as the dividing line: pixels at or above it are set to 255 (pure white) and pixels below it to 0, so each pixel takes one of the two values. Because noise interference may yield several saturated regions, these regions must then be filtered.
For example, vertical and horizontal filters are used to keep only the saturated regions whose edge gradient exceeds a given threshold and whose area and height-to-width ratio are within reasonable ranges, thereby locating the spot position. Edge detection with pixel-gradient filters can locate the contour of the reflected spot; edge detection determines whether a pixel lies on an object boundary by examining the state of the pixel and its neighborhood. Edge-detection algorithms are based mainly on the first and second derivatives of image intensity, but derivative computation is very sensitive to noise, so a filter can be convolved with the image to reduce noise-related errors in the detected edges. For example, a vertical filter extracts the upper and lower contours of the spot and a horizontal filter extracts its left and right contours, thereby determining the spot's aspect ratio. Small interfering spots that may arise at the eye corners or eyelashes can be rejected by filtering on spot area and aspect ratio. The spot locates the approximate center of the eye, from which the eye image is extracted for eyelid segmentation and iris segmentation.
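The glint-localization steps described above — binarize near saturation, group bright pixels into candidate regions, then reject regions with implausible area or height-to-width ratio — can be sketched as follows. This is an illustrative reconstruction, not code from the patent: all numeric thresholds (`sat_thresh`, the area and aspect limits) are assumed values, and connected regions are found with a simple flood fill rather than the gradient filters the text mentions.

```python
import numpy as np
from collections import deque

def find_glint(gray, sat_thresh=250, min_area=2, max_area=200, max_aspect=3.0):
    """Locate the IR reflected spot ("glint"): threshold near saturation,
    collect connected bright blobs, reject blobs whose area or aspect
    ratio is implausible, and return the centroid of the largest survivor."""
    mask = gray >= sat_thresh
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = None  # (area, row_centroid, col_centroid)
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # BFS flood fill over one connected bright blob
                q, pix = deque([(y, x)]), []
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    pix.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys = [p[0] for p in pix]
                xs = [p[1] for p in pix]
                area = len(pix)
                hh = max(ys) - min(ys) + 1
                ww = max(xs) - min(xs) + 1
                aspect = max(hh, ww) / min(hh, ww)
                if min_area <= area <= max_area and aspect <= max_aspect:
                    if best is None or area > best[0]:
                        best = (area, sum(ys) / area, sum(xs) / area)
    return None if best is None else (best[1], best[2])  # (row, col)
```

In a real pipeline the flood fill would be replaced by a vectorized connected-components routine; the filtering logic is what matters here.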
In step S210, eyelid segmentation and iris segmentation can be performed on the extracted eye image.
Because there is a large gradient between the eyelids and the sclera, and between the left and right sides of the iris and the sclera, the upper and lower eyelid curves can be fitted by polynomials based on the vertical gradient between the eyelids and the sclera, and the iris outer-contour curve can be determined by least-squares circle fitting based on the horizontal gradient between the iris and the sclera.
For example, a gradient filter locates discrete, noise-contaminated segments of the upper-eyelid curve. Because only parts of the curve are located, curve fitting is needed to obtain the complete eyelid curve. Geometrically, polynomial curve fitting seeks the curve that minimizes the sum of squared distances to the given points. For example, the segments with large vertical gradient are determined and connected; since a quadratic function better matches the actual shape — and to avoid a higher-order polynomial overfitting to noise — the curve formed by these segments can be fitted with a downward-opening quadratic convex function. The lower-eyelid curve is fitted similarly and is not described again here.
Because the iris contour is occluded by the upper and lower eyelids, its upper and lower portions are sometimes hard to locate accurately. A horizontal-gradient filter can be used to locate discrete segments of the iris contour: segments with a large horizontal gradient at the left and right iris boundaries are determined and connected, the parameters are estimated by least squares, and a circle is fitted to these curve segments, thereby determining the iris outer contour. Other curve-fitting methods may also be used; this scheme does not limit them.
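The two fitting steps above can be sketched in a few lines. The patent only says "fit a polynomial" and "fit a circle by least squares"; the concrete choices below — a degree-2 polynomial via `numpy.polyfit` and the Kåsa algebraic circle fit — are standard techniques standing in for the unspecified details, not the patent's own algorithm.

```python
import numpy as np

def fit_eyelid(xs, ys, degree=2):
    """Fit an eyelid curve through discrete edge points with a quadratic
    polynomial, as the description suggests (degree 2 avoids overfitting
    to noise).  Returns coefficients, highest power first."""
    return np.polyfit(xs, ys, degree)

def fit_iris_circle(xs, ys):
    """Least-squares circle fit (Kåsa method) to iris boundary points.

    Solves the linearized system x^2 + y^2 = 2a*x + 2b*y + c for the
    center (a, b); the radius is sqrt(c + a^2 + b^2)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a * a + b * b)
    return (a, b), radius
```

Because the eyelids occlude the top and bottom of the iris, the circle fit is typically driven by the left/right boundary points only, which the least-squares formulation handles naturally.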
Fig. 3 shows a schematic diagram of the upper and lower eyelid curves and the iris curve according to an embodiment of the invention.
In step S220, the eye opening degree can be computed from the segmentation result, where the opening degree is the ratio of the area of the overlap between the region bounded by the upper and lower eyelid curves and the region bounded by the iris outer-contour curve, to the area of the region bounded by the iris outer-contour curve. As shown in Fig. 3, the shaded area is that overlap.
According to an embodiment of the invention, the user's eye opening degree in a non-fatigued state can be obtained as the initial opening degree.
The opening degree measures how wide open the eyes are. For a given individual, the opening degree in a non-fatigued state is relatively large, i.e. the shaded area is large. Before driving-state detection begins, an initialization can be performed in the user's non-fatigued state: with the eyes normally open, let S0 be the area of the overlap between the region bounded by the eyelid curves and the region bounded by the iris outer contour, and S the area of the region bounded by the iris outer contour. The initial opening degree is then a0 = S0/S, which serves as the reference value for judging eye closure.
In step S230, whether the eyes are closed can be judged from the opening degree.
According to an embodiment of the invention, a predetermined threshold can be determined from the initial opening degree as the product of the initial opening degree and a predetermined coefficient; when the opening degree falls below this threshold, the eyes are judged closed.
At a given time t, the ratio at = St/S is computed from the extracted eye image, where St is the area of the overlap between the region bounded by the eyelid curves and the region bounded by the iris outer-contour curve, and S is the area of the region bounded by the iris outer-contour curve. The predetermined coefficient can be adjusted to the actual situation; empirically it can be taken as 0.5. If at < 0.5·a0, the eyes are considered closed; otherwise they are considered open.
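The opening-degree calibration and per-frame closure test reduce to two small functions. This is a direct transcription of the ratios just described (a0 = S0/S, at = St/S, closed when at < k·a0), with the coefficient k = 0.5 as the patent's example value:

```python
def eye_opening(overlap_area, iris_area):
    """Opening degree a = S_overlap / S_iris: area of the region bounded
    by the eyelid curves that overlaps the iris disc, normalized by the
    full iris-disc area.  Being a ratio, it is insensitive to individual
    differences in absolute eye size."""
    return overlap_area / iris_area

def is_closed(a_t, a0, k=0.5):
    """Frame-level closure decision: closed when the current opening a_t
    falls below k times the calibrated open-eye baseline a0."""
    return a_t < k * a0
```

The baseline a0 would be measured once, with the user alert and eyes normally open, before monitoring starts.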
The eye-closure state determined for each video frame can be stored in a circular linked list, so as to count the proportion of frames in which the eyes are closed.
In step S240, whether the user is driving while fatigued can be judged from the proportion of closed-eye frames among a predetermined number of consecutive video frames ending with the current frame.
For example, the eye-closure states of the most recent 30 frames are examined: the 30-frame history is traversed and the number of frames judged closed (at < 0.5·a0) is counted; if that number exceeds 15, the driver is judged to be in a fatigued driving state. After judging the user fatigued, first warning information can be issued, prompting the user that he or she is currently fatigued and needs to stop and rest.
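The sliding-window accounting just described can be sketched as follows. A `deque` with a fixed maximum length stands in for the patent's circular linked list (both give constant-size, overwrite-oldest storage); the window of 30 frames and the threshold of 15 closed frames are the example values from the description.

```python
from collections import deque

class FatigueMonitor:
    """Keeps the closure state of the last `window` frames and flags
    fatigue when more than `max_closed` of them were eyes-closed."""

    def __init__(self, window=30, max_closed=15):
        self.states = deque(maxlen=window)  # oldest frame drops out automatically
        self.max_closed = max_closed

    def update(self, closed):
        """Record one frame's closure decision; return True if the user
        should be warned of fatigue based on the current window."""
        self.states.append(bool(closed))
        return sum(self.states) > self.max_closed
```

At 15 fps a 30-frame window covers roughly two seconds, so more than 15 closed frames means the eyes were shut for over half of that interval.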
According to one embodiment of the invention, it is also based on center and iris center of the pupil to the flare of infrared light Relative position calculates visual angle, and the direction at visual angle is that the direction at iris center is directed toward at the center of flare.
Fig. 4 shows flare center according to an embodiment of the invention and the visual angle of iris center relative position Schematic diagram.As shown in figure 4, O is iris center, P is the center of flare, and view directions are that the center P of flare is directed toward The direction of iris center O.
In the case that head is static, when eye movement, the relative position of pupil center and spot center can change, The situation of change of eye sight line direction and blinkpunkt can be obtained by relative position relation.In eyes gray level image, pupil Part colours are most deep, and flare corresponds to a most bright point in eye image, the gray value of rest part between the two it Between.Gray level image progress binary conversion treatment is obtained into bianry image, is recorded after the maximum hot spot in bianry image is extracted Facula position, the center of geometric center, that is, flare.Since the gray value of pupil image is relatively low, the gray value of iris image Higher, gray value is widely varied in the two adjacent edges, and the edge detection method based on gradient filter may be used Iris boundary point is extracted, then is justified by least square fitting and determines iris outer profile, so that it is determined that iris central point.
When a person faces straight ahead, the flare falls at the iris center; when the gaze is averted, the flare falls on the side of the iris opposite the gaze direction, and the larger the deflection angle, the closer the flare is to the iris contour on that opposite side.
Therefore, the visual angle can be calculated based on the distance between the flare center and the iris center, together with the iris radius:
where α is the visual angle, R is the iris radius, and d is the distance between the flare center P and the iris center O (as shown in Fig. 4). When detecting gaze deflection in the horizontal direction, d is the horizontal component of that distance. The horizontal deflection is of primary concern here, because vertical gaze deflection corresponds to raising or lowering the head, in which case the eye aperture is normally small and the user can already be considered to be in a fatigue state.
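The patent's actual formula is not reproduced in this text. One geometrically plausible relation consistent with the stated variables — α grows with d and reaches its maximum as the flare approaches the iris contour (d → R) — is α = arcsin(d/R); the sketch below uses that assumed relation purely for illustration:

```python
import math

def visual_angle_deg(d, R):
    """Illustrative only: the patent's formula is not reproduced in this text.
    Assumed relation alpha = arcsin(d / R), clamped so that d <= R."""
    return math.degrees(math.asin(min(d / R, 1.0)))

print(visual_angle_deg(0.0, 6.0))  # 0.0 (flare at iris center: facing front)
print(visual_angle_deg(3.0, 6.0))  # ~30 degrees
```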
The visual angle determined for each video frame can be stored in a history record; the history is maintained in a circular linked list holding the visual angles of a predetermined number of video frames, so as to count the proportion of frames in which the eyes do not face front. The advantage of the circular linked list structure is that no additional storage needs to be allocated; only the linking of the list changes slightly, making list processing more convenient and flexible.
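A fixed-capacity ring buffer has the property claimed for the circular linked list — once constructed, no new storage is needed and the oldest entry is simply overwritten in place — and serves as a practical stand-in for it; the class and field names are illustrative:

```python
class CircularHistory:
    """Fixed-capacity ring over per-frame records: a practical stand-in for the
    circular linked list in the text. push() overwrites the oldest slot, so no
    storage is allocated after construction."""

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0

    def push(self, record):
        self.slots[self.head] = record
        self.head = (self.head + 1) % len(self.slots)

    def proportion(self, predicate):
        """Fraction of stored frames satisfying predicate (e.g. not facing front)."""
        filled = [r for r in self.slots if r is not None]
        return sum(predicate(r) for r in filled) / max(len(filled), 1)

history = CircularHistory(30)
for angle in [0.0] * 18 + [12.0] * 12:    # 12 of 30 frames look away from front
    history.push({"visual_angle": angle})
print(history.proportion(lambda r: r["visual_angle"] > 0))  # 0.4
```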
Whether the eyes face front can be judged from the visual angle; and whether the user's attention is focused can be judged from the proportion of frames, among a predetermined number of consecutive video frames including the current frame, in which the eyes do not face front. If the visual angle is greater than zero, the eyes are considered not to face front. If, traversing the visual angles of the most recent 30 frames in the circular linked list, the number of frames not facing front exceeds a predetermined threshold, for example 10 frames, the user may not be driving attentively, possibly due to fatigue; warning information can then be issued to prompt the user to concentrate.
Fig. 5 is a schematic flow chart of driving state detection according to an embodiment of the invention. First, the video stream captured by the infrared camera module is read and it is judged whether a face image has been captured. Eye positioning is performed on video frames containing a face image, followed by eyelid segmentation and iris segmentation. From the fitted upper and lower eyelid curves and the fitted iris outer contour curve, the eye aperture is calculated to judge whether the eyes are closed: the eye aperture is the ratio of the area of the overlap between the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer contour, to the area of the figure enclosed by the iris outer contour, and comparing it with the initial aperture determines whether the eyes are in a closed state. The visual angle of the eyes is calculated from the relative position of the flare center and the iris center. Then, over a predetermined number of consecutive video frames including the current frame, the proportion of frames in which the eyes do not face front and the proportion in which the eyes are closed are combined to judge the driving state; different driving states, such as an awake state, a mild fatigue driving state, and a severe fatigue driving state, can be determined according to different proportion thresholds, and different warning information is issued for each state. The history record in the circular linked list is updated every predetermined time interval or every predetermined number of frames.
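The final accounting step — mapping the two window proportions to a graded driving state — might look like the sketch below; the cutoff values are illustrative assumptions, since the patent names the states but does not give numeric thresholds for them:

```python
def driving_state(closed_ratio, off_front_ratio):
    """Map the closed-eye and not-facing-front window proportions to a coarse
    state. The cutoffs (0.5, 0.25, 1/3) are illustrative assumptions only."""
    if closed_ratio > 0.5:
        return "severe fatigue driving"
    if closed_ratio > 0.25 or off_front_ratio > 1 / 3:
        return "mild fatigue driving"
    return "awake"

print(driving_state(0.6, 0.0))   # severe fatigue driving
print(driving_state(0.1, 0.1))   # awake
```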
Fig. 6 is a schematic diagram of a mobile terminal monitoring driving state using the driving state detection method according to an embodiment of the invention. As shown in Fig. 6, the mobile terminal can recognize the face and, based on the positioning and analysis of the eyes, judge the current driving state. When fatigue driving is detected, a warning can be issued prompting the user not to drive while tired; at the same time, the terminal can display the continuous driving time and statistics of the driving state during that time, and can provide personalized services according to user settings.
According to the scheme of the present invention, a continuous video frame sequence is captured with an infrared camera module and face recognition and eye positioning are performed; eyelid segmentation and iris segmentation of the eye image determine the upper and lower eyelid curves and the iris outer contour curve; the eye-closed state and whether the eyes face front are then judged from the eye aperture and the visual angle; and the proportion of closed-eye frames and the proportion of non-front-facing frames over consecutive frames are analyzed together to judge the driving state and issue warning information. When calculating the eye aperture, individual differences are taken into account: whether the eyes are closed is judged against a reference aperture specific to each user. The method requires no additional device; it is only necessary to install an application on the mobile terminal and execute the corresponding instructions. The driving state detection method is thus general and simple, and can improve the accuracy of driving state detection.
A7. The method of A6, wherein the step of judging whether the eyes are closed according to the eye aperture comprises: determining a predetermined threshold based on the initial aperture, the predetermined threshold being the product of the initial aperture and a predetermined coefficient; and judging the eyes to be in a closed state when the eye aperture is less than the predetermined threshold.
A8. The method of A2, wherein the step of calculating the visual angle based on the relative position of the center of the infrared flare on the pupil and the iris center comprises: calculating the visual angle based on the distance between the flare center and the iris center and the radius of the iris.
A9. The method of A8, wherein the visual angle is calculated by the following formula:
where α is the visual angle, R is the iris radius, and d is the distance between the flare center and the iris center.
It should be appreciated that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art should understand that the modules, units, or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or further divided into multiple submodules.
Those skilled in the art will appreciate that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and may furthermore be divided into multiple submodules, subunits, or subcomponents. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various techniques described herein may be implemented in hardware or software, or a combination of both. The methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy disks, CD-ROMs, hard disk drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code; the processor is configured to execute the method of the present invention according to the instructions in the program code stored in the memory.
By way of example and not limitation, computer-readable media comprise computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules, or other data. Communication media generally embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer-readable media.
In addition, some of the embodiments are described herein as methods, or combinations of method elements, that can be implemented by a processor of a computer system or by other devices carrying out the described functions. Thus, a processor having the necessary instructions for implementing such a method or method element forms a device for implementing the method or method element. Furthermore, an element described herein of an apparatus embodiment is an example of a device for carrying out the function performed by the element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. It should also be noted that the language used in this specification has been principally selected for readability and instructional purposes, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative and not restrictive, the scope of the invention being defined by the appended claims.

Claims (10)

1. A driving state detection method, executed in a mobile terminal, the mobile terminal comprising an infrared camera module adapted to continuously capture face images of a user to generate a video frame sequence, the method comprising:
each time a video frame is captured, locating the eyes in the video frame and extracting an eye image;
performing eyelid segmentation and iris segmentation on the extracted eye image;
calculating an eye aperture according to the segmentation results, wherein the eye aperture is the ratio of the area of the overlapping part of the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer contour curve, to the area of the figure enclosed by the iris outer contour curve;
judging whether the eyes are closed according to the eye aperture; and
judging whether the user is driving fatigued according to the proportion of video frames, among a predetermined number of consecutive video frames including the current video frame, in which the eyes are in a closed state.
2. The method of claim 1, further comprising:
calculating a visual angle based on the relative position of the center of the flare produced on the pupil by the infrared light and the iris center, the direction of the visual angle being the direction from the flare center toward the iris center;
judging whether the eyes face front according to the visual angle; and
judging whether the user's attention is focused according to the proportion of video frames, among a predetermined number of consecutive video frames including the current video frame, in which the eyes do not face front.
3. The method of claim 1, wherein the step of locating the eyes in each captured video frame and extracting an eye image comprises:
determining the position of the user's eyes based on the position of the infrared flare on the pupil in the video frame; and
extracting an eye image from the video frame according to the determined eye position.
4. The method of claim 3, wherein the step of determining the position of the user's eyes based on the position of the infrared flare on the pupil in the video frame comprises:
determining a saturated region of the flare by binarizing the video frame, based on the gray-level distributions of the flare and of the pupil-iris region;
filtering the saturated region with a horizontal gradient filter and a vertical gradient filter to determine the position of the flare; and
determining the position of the user's eyes based on the position of the flare.
5. The method of claim 1, wherein the step of performing eyelid segmentation and iris segmentation on the extracted eye image comprises:
fitting the upper and lower eyelid curves by polynomial fitting, based on the vertical gradient between the upper and lower eyelids and the sclera; and
fitting the iris outer contour curve with a circle, based on the horizontal gradient between the left and right sides of the iris and the sclera.
6. The method of claim 1, further comprising:
obtaining the user's eye aperture in a non-fatigued state as the initial aperture.
7. The method of claim 2, further comprising:
storing the eye-closed state and the visual angle determined for each video frame in a circular linked list, so as to count the proportion of video frames in which the eyes are closed and/or the proportion of video frames in which the eyes do not face front.
8. The method according to any one of claims 2-7, further comprising:
after judging that the user is driving fatigued, issuing first warning information prompting the user that they are currently in a fatigued driving state; and/or
after judging that the user's attention is not focused, issuing second warning information prompting the user to concentrate.
9. A mobile terminal, comprising:
one or more processors; and
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any one of the methods according to claims 1-8.
10. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by a mobile terminal, cause the mobile terminal to perform any one of the methods according to claims 1-8.
CN201810498589.0A 2018-05-23 2018-05-23 Driving state detection method, mobile terminal and storage medium Active CN108720851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810498589.0A CN108720851B (en) 2018-05-23 2018-05-23 Driving state detection method, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN108720851A true CN108720851A (en) 2018-11-02
CN108720851B CN108720851B (en) 2021-06-29

Family

ID=63935042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810498589.0A Active CN108720851B (en) 2018-05-23 2018-05-23 Driving state detection method, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN108720851B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765911A (en) * 2019-10-14 2020-02-07 武汉虹识技术有限公司 Alarm clock turning-off method and device
CN111152653A (en) * 2018-11-07 2020-05-15 行为科技(北京)有限公司 Fatigue driving detection method based on multi-information fusion
CN111540208A (en) * 2020-05-12 2020-08-14 济南浪潮高新科技投资发展有限公司 Method for preventing driving without license and fatigue driving based on block chain technology
CN111832344A (en) * 2019-04-17 2020-10-27 深圳熙卓科技有限公司 Dynamic pupil detection method and device
CN111882827A (en) * 2020-07-27 2020-11-03 复旦大学 Fatigue driving monitoring method, system and device and readable storage medium
CN111913561A (en) * 2019-05-07 2020-11-10 中移(苏州)软件技术有限公司 Display method and device based on eye state, display equipment and storage medium
CN112052770A (en) * 2020-08-31 2020-12-08 北京地平线信息技术有限公司 Method, apparatus, medium, and electronic device for fatigue detection
CN112329643A (en) * 2020-11-06 2021-02-05 重庆第二师范学院 Learning efficiency detection method, system, electronic device and medium
CN112381871A (en) * 2020-10-16 2021-02-19 华东交通大学 Method for realizing locomotive alertness device based on face recognition
CN112580464A (en) * 2020-12-08 2021-03-30 北京工业大学 Method and device for judging iris occlusion of upper eyelid
CN115366909A (en) * 2022-10-21 2022-11-22 四川省公路规划勘察设计研究院有限公司 Dynamic early warning method and device for driver accidents in long and large longitudinal slope section and electronic equipment
CN116974370A (en) * 2023-07-18 2023-10-31 深圳市本顿科技有限公司 Anti-addiction child learning tablet computer control method and system
CN117523649A (en) * 2024-01-04 2024-02-06 成都科瑞特电气自动化有限公司 Mining iris safety recognition detection method, system, medium and terminal
CN116974370B (en) * 2023-07-18 2024-04-16 深圳市本顿科技有限公司 Anti-addiction child learning tablet computer control method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050100191A1 (en) * 2003-11-11 2005-05-12 Harbach Andrew P. Imaging system and method for monitoring an eye
WO2009138828A1 (en) * 2008-05-12 2009-11-19 Toyota Jidosha Kabushiki Kaisha Driver imaging apparatus and driver imaging method
CN102436715A (en) * 2011-11-25 2012-05-02 大连海创高科信息技术有限公司 Detection method for fatigue driving
CN103136519A (en) * 2013-03-22 2013-06-05 中国移动通信集团江苏有限公司南京分公司 Sight tracking and positioning method based on iris recognition
CN107016381A (en) * 2017-05-11 2017-08-04 南宁市正祥科技有限公司 A kind of driven fast person's fatigue detection method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Shi Chunlei et al., "Research on Iris Localization Algorithms", Computer Science *
Zhou Hong et al., "Gaze Direction Discrimination of Human Eyes for Driver State Monitoring", Journal of Circuits and Systems *
Yuan Weiqi et al., "Research on PERCLOS-Based Eye Openness Detection Algorithms", Microcomputer Information *


Also Published As

Publication number Publication date
CN108720851B (en) 2021-06-29

Similar Documents

Publication Publication Date Title
CN108720851A Driving state detection method, mobile terminal and storage medium
KR102209595B1 (en) Detailed eye shape model for robust biometric applications
CN106295533B (en) A kind of optimization method, device and the camera terminal of self-timer image
CN114399831A (en) Blue light modulation for biosafety
Kurylyak et al. Detection of the eye blinks for human's fatigue monitoring
CN108073889A (en) The method and apparatus of iris region extraction
KR102322029B1 (en) Method and Apparatus for acquiring a biometric information
KR102469720B1 (en) Electronic device and method for determining hyperemia grade of eye using the same
Ghosh et al. Real time eye detection and tracking method for driver assistance system
CN106529436A (en) Identity consistency authentication method and device, and mobile terminal
KR101758754B1 (en) Apparatus for recognizing iris and operating method thereof
US11625952B2 (en) Iris recognition system, iris recognition method, and storage medium
US11670066B2 (en) Iris recognition system, iris recognition method, and storage medium
CN110222597B (en) Method and device for adjusting screen display based on micro-expressions
CN107710221A (en) A kind of method, apparatus and mobile terminal for being used to detect live subject
CN111353404A (en) Face recognition method, device and equipment
KR20200134160A (en) Living-body detection method and apparatus for face, electronic device ad computer readable medium
CN111259757B (en) Living body identification method, device and equipment based on image
JP7298720B2 (en) Image processing system, image processing method and storage medium
CN110363782B (en) Region identification method and device based on edge identification algorithm and electronic equipment
CN107368783A (en) Living body iris detection method, electronic installation and computer-readable recording medium
CN108255452B (en) Blue light filtering method and electronic equipment
WO2020222785A1 (en) Facial action unit detection
CN111723636B (en) Fraud detection using optokinetic responses
CN107273847B (en) Iris acquisition method and apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20211210

Address after: 541000 building D2, HUTANG headquarters economic Park, Guimo Avenue, Qixing District, Guilin City, Guangxi Zhuang Autonomous Region

Patentee after: Guangxi Code Interpretation Intelligent Information Technology Co.,Ltd.

Address before: 201207 2 / F, building 13, 27 Xinjinqiao Road, Pudong New Area pilot Free Trade Zone, Shanghai

Patentee before: SHIMA RONGHE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230822

Address after: No. 8, Row 9, Fatou Dongli Middle Yard, Chaoyang District, Beijing, 100000

Patentee after: Wang Xiaopeng

Address before: 541000 building D2, HUTANG headquarters economic Park, Guimo Avenue, Qixing District, Guilin City, Guangxi Zhuang Autonomous Region

Patentee before: Guangxi Code Interpretation Intelligent Information Technology Co.,Ltd.