CN108830240A - Fatigue driving state detection method, device, computer equipment and storage medium - Google Patents

Fatigue driving state detection method, device, computer equipment and storage medium

Info

Publication number
CN108830240A
CN108830240A (application CN201810649253.XA)
Authority
CN
China
Prior art keywords
key point
fatigue driving
driving state
eyes
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810649253.XA
Other languages
Chinese (zh)
Inventor
邢映彪
黄海涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Tongda Auto Electric Co Ltd
Original Assignee
Guangzhou Tongda Auto Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Tongda Auto Electric Co Ltd filed Critical Guangzhou Tongda Auto Electric Co Ltd
Priority to CN201810649253.XA priority Critical patent/CN108830240A/en
Publication of CN108830240A publication Critical patent/CN108830240A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/50 — Context or environment of the image
    • G06V 20/59 — Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 — Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 — Detection; Localisation; Normalisation
    • G06V 40/165 — Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

This application relates to a fatigue driving state detection method, system, computer equipment and storage medium. The fatigue driving state detection method includes: obtaining a monitoring image of the driving process and segmenting the image to obtain a corresponding region set; comparing the similarity of adjacent regions in the region set to obtain a target candidate area in the monitoring image, and identifying the face area in the target candidate area; and identifying the mouth key points and eye key points in the face area and judging whether the driver is in a fatigue driving state according to how the positions of the mouth key points and eye key points change over time. The above fatigue driving state detection method can improve the accuracy of judging whether the driver is in a fatigue driving state.

Description

Fatigue driving state detection method, device, computer equipment and storage medium
Technical field
This application relates to the technical field of image detection, and in particular to a fatigue driving state detection method, device, computer equipment and storage medium.
Background technique
With economic and social development, transportation has expanded and the number of motor vehicles grows day by day. Road traffic accidents caused by driver fatigue have increased accordingly. Detecting whether a driver is in a fatigue driving state can help prevent such traffic accidents.
When detecting whether a driver is in a fatigue driving state, the various driving data collected while the vehicle is moving can be used to judge whether the driving behavior is abnormal and thereby identify driving fatigue. However, because many environmental disturbance factors affect the driving data, methods that judge whether the driver is in a fatigue driving state based on driving data have low accuracy.
Summary of the invention
In view of the above technical problems, it is necessary to provide a fatigue driving state detection method, device, computer equipment and storage medium that can improve the accuracy of judging whether a driver is in a fatigue driving state.
A fatigue driving state detection method, the method including:
obtaining a monitoring image of the driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set, and identifying a face area in the target candidate area;
identifying mouth key points and eye key points in the face area, and judging whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time, where the eye key points are coordinate points characterizing eye positions and the mouth key points are coordinate points characterizing mouth positions.
In one embodiment of the fatigue driving state detection method, obtaining the target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set includes:
obtaining each similar area in the monitoring image by calculating the similarity of adjacent regions in the region set;
determining the similar areas whose area exceeds a specified threshold as target candidate areas.
In one embodiment of the fatigue driving state detection method, obtaining each similar area in the monitoring image by calculating the similarity of adjacent regions in the region set includes:
calculating the similarity of adjacent regions in the region set;
merging adjacent regions whose similarity meets a preset condition, and setting the merged region as a similar area.
In one embodiment of the fatigue driving state detection method, the similarity includes texture similarity;
the step of comparing the texture similarity of adjacent regions in the region set includes:
converting the monitoring image to grayscale to obtain a grayscale image;
calculating the binary pattern feature of each of the two adjacent regions, and obtaining the vector corresponding to the binary pattern feature;
calculating the texture similarity of the two adjacent regions according to the vectors.
In one embodiment of the fatigue driving state detection method, identifying the mouth key points and eye key points in the face area includes:
matching the face area with a template, and obtaining the mouth key points and eye key points in the face area according to the mouth key points and eye key points in the template.
In one embodiment of the fatigue driving state detection method, judging whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time includes:
judging whether the driver is yawning according to how the positions of the mouth key points change over time;
judging whether the driver is closing his or her eyes according to how the positions of the eye key points change over time;
judging whether the driver is in a fatigue driving state according to whether the driver is yawning and whether the driver is closing his or her eyes.
In one embodiment of the fatigue driving state detection method, judging whether the driver is in a fatigue driving state according to whether the driver is yawning and whether the driver is closing his or her eyes includes:
obtaining the frequency at which the driver yawns;
obtaining the proportion of a specified time during which the pupil is covered by the eyelid while the driver's eyes are closed;
judging whether the driver is in a fatigue driving state according to the yawning frequency and the proportion of the specified time during which the pupil is covered by the eyelid.
A fatigue driving state detection device, including:
an obtaining module, configured to obtain a monitoring image of the driving process and segment the image to obtain a corresponding region set;
an identification module, configured to obtain a target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set, and identify a face area in the target candidate area;
a judgment module, configured to identify mouth key points and eye key points in the face area, and judge whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time, where the eye key points are coordinate points characterizing eye positions and the mouth key points are coordinate points characterizing mouth positions.
A computer equipment, including a memory, a processor and a computer program stored in the memory and runnable on the processor, where the processor implements the following steps when executing the computer program:
obtaining a monitoring image of the driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set, and identifying a face area in the target candidate area;
identifying mouth key points and eye key points in the face area, and judging whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time, where the eye key points are coordinate points characterizing eye positions and the mouth key points are coordinate points characterizing mouth positions.
A computer-readable storage medium on which a computer program is stored, where the computer program implements the following steps when executed by a processor:
obtaining a monitoring image of the driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set, and identifying a face area in the target candidate area;
identifying mouth key points and eye key points in the face area, and judging whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time, where the eye key points are coordinate points characterizing eye positions and the mouth key points are coordinate points characterizing mouth positions.
With the fatigue driving state detection method, device, computer equipment and storage medium in the embodiments of the present application, the monitoring image of the driving process is segmented to obtain a corresponding region set; a target candidate area is obtained by comparing the similarity of adjacent regions in the region set; the face area in the target candidate area is identified; and whether the driver is in a fatigue driving state is judged according to how the positions of the mouth key points and eye key points in the face area change over time. This can improve the accuracy of judging whether the driver is in a fatigue driving state.
Detailed description of the invention
Fig. 1 is a diagram of the application environment of the fatigue driving state detection method in one embodiment;
Fig. 2 is a schematic flow chart of the fatigue driving state detection method in one embodiment;
Fig. 3 is a schematic diagram of binary pattern feature extraction from an image in one embodiment;
Fig. 4 is a structural block diagram of the fatigue driving state detection device in one embodiment;
Fig. 5 is a diagram of the internal structure of the computer equipment in one embodiment.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the application, not to limit it.
The fatigue driving state detection method provided by the application can be applied in the application environment shown in Fig. 1, in which a terminal 102 communicates with a server 104 over a network. The terminal 102 can be, but is not limited to, a personal computer, laptop, smartphone, tablet computer or portable wearable device, and the server 104 can be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a fatigue driving state detection method is provided. Taking the method applied to the server in Fig. 1 as an example, it includes the following steps:
Step 202: obtain a monitoring image of the driving process and segment the image to obtain a corresponding region set.
In this step, the monitoring image can be a video frame from monitoring the driver, and efficient graph-based image segmentation can be used to divide the monitoring image into an original region set R = {r1, ..., rn}, where n is an integer indicating the number of regions in the original region set.
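As a minimal sketch, assuming Python with scikit-image (whose felzenszwalb function implements this kind of graph-based segmentation), the initial region set could be produced as follows; the scale, sigma and min_size values are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def initial_regions(frame):
    """Segment a monitoring frame into an initial region set R = {r1, ..., rn}.

    Returns a label map (one region index per pixel) and the array of region ids;
    the number of ids corresponds to n in the description above.
    """
    labels = felzenszwalb(frame, scale=100, sigma=0.8, min_size=50)
    return labels, np.unique(labels)
```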
Step 204: obtain a target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set, and identify the face area in the target candidate area.
In this step, the similarity can include color similarity, texture similarity, size similarity and fill similarity, and the final similarity of two adjacent regions can be a combination of the color similarity, texture similarity, size similarity and fill similarity. When identifying the face area in the target candidate area, the following steps can be performed: use Haar features to describe the shared attributes of faces; build a representation known as the integral image, from which many different rectangular features can be computed quickly; train with an iterative (boosting) algorithm; and build a cascade classifier.
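A minimal sketch of cascade-based face detection on a candidate area, assuming Python with OpenCV; it uses OpenCV's bundled pretrained frontal-face Haar cascade instead of a cascade trained as described above, so the model file is an assumption for illustration only:

```python
import cv2

# Pretrained Haar cascade shipped with OpenCV (the patent trains its own cascade).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(candidate_area_bgr):
    """Return (x, y, w, h) of the first face found in a candidate area, or None."""
    gray = cv2.cvtColor(candidate_area_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else None
```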
Step 206: identify the mouth key points and eye key points in the face area, and judge whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time, where the eye key points are coordinate points characterizing eye positions and the mouth key points are coordinate points characterizing mouth positions.
The eye and mouth key points can be detected using a face detection library.
In the above embodiment, the monitoring image of the driving process is segmented to obtain a corresponding region set; a target candidate area is obtained by comparing the similarity of adjacent regions in the region set; the face area in the target candidate area is identified; and whether the driver is in a fatigue driving state is judged according to how the positions of the mouth key points and eye key points in the face area change over time. This can improve the accuracy of judging whether the driver is in a fatigue driving state.
In one embodiment, the target candidate area in the monitoring image can be obtained by the following steps: obtaining each similar area in the monitoring image by calculating the similarity of adjacent regions in the region set; and determining the similar areas whose area exceeds a specified threshold as target candidate areas.
In the above embodiment, the final similarity of two adjacent regions can be a combination of the color similarity, texture similarity, size similarity and fill similarity. For example, with Scolour(ri, rj) denoting the color similarity of the two adjacent regions, Stexture(ri, rj) the texture similarity, Ssize(ri, rj) the size similarity and Sfill(ri, rj) the fill similarity, the final similarity can be expressed by the following formula:
S(ri, rj) = a1·Scolour(ri, rj) + a2·Stexture(ri, rj) + a3·Ssize(ri, rj) + a4·Sfill(ri, rj)
where each ai is a constant and ai ∈ {0, 1}.
With the above embodiment, each similar area in the monitoring image is obtained by calculating the similarity of adjacent regions in the region set, and the similar areas whose area exceeds a specified threshold are determined as target candidate areas. The target candidate area can therefore be located accurately, which in turn improves the accuracy of judging whether the driver is in a fatigue driving state.
In one embodiment, the similarity includes the texture similarity Stexture(ri, rj). The texture similarity of two adjacent regions in the region set can be compared by the following steps: converting the monitoring image to grayscale to obtain a grayscale image; calculating the binary pattern feature of each of the two adjacent regions and obtaining the vector corresponding to the binary pattern feature; and calculating the texture similarity of the two adjacent regions according to the vectors.
In the above embodiment, the image can be converted to a grayscale image and the local binary patterns (LBP) feature of the image can then be calculated with the following formula:
LBP(xc, yc) = Σ (p = 0 to 7) s(ip − ic) · 2^p
where (xc, yc) is the coordinate of the center pixel, p indexes the p-th pixel of the neighborhood (eight pixels in total), ip is the gray value of a neighborhood pixel, ic is the gray value of the center pixel, and s(x) is the sign function:
s(x) = 1 if x ≥ 0, and s(x) = 0 otherwise.
This is illustrated in Fig. 3: the numbers in the grid are pixel gray values, and the left side is the original image. Within the 3×3 grid, each neighbor is thresholded against the center pixel: neighbors greater than or equal to the center pixel are labeled 1, and neighbors smaller than the center pixel are labeled 0. The binary number formed around the center pixel is then converted to a decimal number to obtain the LBP value. The LBP feature can be represented with a histogram of 26 bins, giving a 26-dimensional vector.
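A minimal sketch of this LBP extraction in Python (NumPy only); the clockwise 8-neighbour ordering is an assumption, and the 26-bin histogram size is taken as stated above rather than verified:

```python
import numpy as np

def lbp_histogram(gray, bins=26):
    """Compute an LBP code for every interior pixel and return its normalized histogram."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # 8 neighbours of the center pixel
    h, w = gray.shape
    center = gray[1:-1, 1:-1].astype(np.int32)
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    for p, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int32)
        codes += (neighbour >= center).astype(np.int32) << p   # s(ip - ic) * 2^p
    hist, _ = np.histogram(codes, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)
```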
The texture similarity can then be calculated with the following formula (a histogram intersection of the two regions' LBP histogram vectors):
Stexture(ri, rj) = Σ (k = 1 to n) min(ti_k, tj_k)
where ti_k is the k-th entry of the LBP histogram vector of region ri.
To calculate a color histogram, the color space needs to be divided into several small color intervals, i.e. histogram bins; the color histogram is obtained by counting the pixels whose color falls within each small interval. The more bins there are, the finer the histogram's resolution of color.
The color similarity Scolour(ri, rj) can be calculated according to the following formula:
Scolour(ri, rj) = Σ (k = 1 to n) min(ci_k, cj_k)
where Ci = (ci_1, ..., ci_n) is the multi-dimensional color histogram vector corresponding to each region and n is the dimension of Ci.
The size similarity Ssize(ri, rj) can be calculated according to the following formula:
Ssize(ri, rj) = 1 − (size(ri) + size(rj)) / size(im)
where size(ri) is the number of pixels in region ri and size(im) is the number of pixels in the entire picture.
The fill similarity Sfill(ri, rj) can be calculated according to the following formula:
Sfill(ri, rj) = 1 − (size(BBij) − size(ri) − size(rj)) / size(im)
where BBij is the minimum bounding box containing both region ri and region rj.
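A minimal sketch in Python of the four similarity terms and their weighted combination, assuming each region is described by a dict with its normalized color histogram, normalized LBP histogram, pixel count and bounding box given as (x0, y0, x1, y1); the histogram-intersection form of the color and texture terms follows the formulas above:

```python
import numpy as np

def s_colour(ri, rj):
    """Histogram intersection of the two regions' normalized color histograms."""
    return float(np.minimum(ri["colour_hist"], rj["colour_hist"]).sum())

def s_texture(ri, rj):
    """Histogram intersection of the two regions' normalized LBP histograms."""
    return float(np.minimum(ri["lbp_hist"], rj["lbp_hist"]).sum())

def s_size(ri, rj, image_size):
    """1 - (size(ri) + size(rj)) / size(im): favours merging small regions first."""
    return 1.0 - (ri["size"] + rj["size"]) / image_size

def s_fill(ri, rj, image_size):
    """1 - (size(BBij) - size(ri) - size(rj)) / size(im): favours merges that fill gaps."""
    x0 = min(ri["bbox"][0], rj["bbox"][0]); y0 = min(ri["bbox"][1], rj["bbox"][1])
    x1 = max(ri["bbox"][2], rj["bbox"][2]); y1 = max(ri["bbox"][3], rj["bbox"][3])
    bb_size = (x1 - x0) * (y1 - y0)  # area of the minimum bounding box BBij
    return 1.0 - (bb_size - ri["size"] - rj["size"]) / image_size

def combined_similarity(ri, rj, image_size, weights=(1, 1, 1, 1)):
    """S(ri, rj) = a1*Scolour + a2*Stexture + a3*Ssize + a4*Sfill, each ai in {0, 1}."""
    a1, a2, a3, a4 = weights
    return (a1 * s_colour(ri, rj) + a2 * s_texture(ri, rj)
            + a3 * s_size(ri, rj, image_size) + a4 * s_fill(ri, rj, image_size))
```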
In the above embodiment, the local binary patterns feature is extracted when calculating the texture similarity, which greatly reduces the computation required for feature extraction while maintaining accuracy.
In one embodiment of the fatigue driving state detection method, obtaining each similar area in the monitoring image by calculating the similarity of adjacent regions in the region set includes: calculating the similarity of adjacent regions in the region set; merging adjacent regions whose similarity meets a preset condition; and setting the merged region as a similar area.
In the above embodiment, after the similarities S = {s1, ..., sn} of all adjacent regions in the original region set R = {r1, ..., rn} have been calculated, a region-merging step is executed: find the two regions ri and rj with the highest similarity, merge them into a new region rt, and add rt to R; then remove from S all data related to ri and rj, and calculate the similarities s(rt, r*) between the new region rt and all regions adjacent to it. The region-merging step is repeated until the set S is empty. When S is empty, the regions that have been merged together are the similar areas; a monitoring image may contain multiple similar areas.
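A minimal sketch of this greedy merge loop in Python; the `neighbours`, `merge` and `similarity` callables are assumed helpers (the last one can be the weighted sum sketched above), and regions are referred to by hashable ids:

```python
def hierarchical_merge(region_ids, similarities, neighbours, merge, similarity):
    """Repeatedly merge the most similar adjacent regions until no pair is left.

    `similarities` maps each adjacent pair (ri, rj) of region ids to S(ri, rj);
    `merge(ri, rj)` returns the id of the new region rt; `neighbours(rt)` yields
    the ids adjacent to rt; `similarity(a, b)` scores two regions. The merged
    ids appended to the returned list are the "similar areas".
    """
    S = dict(similarities)
    R = list(region_ids)
    while S:
        ri, rj = max(S, key=S.get)          # most similar adjacent pair
        rt = merge(ri, rj)                  # new region rt
        R.append(rt)
        # Remove every entry related to ri or rj.
        S = {pair: s for pair, s in S.items() if ri not in pair and rj not in pair}
        # Add the similarities between rt and its adjacent regions.
        for r in neighbours(rt):
            S[(rt, r)] = similarity(rt, r)
    return R
```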
In the above embodiment, the monitoring image of the driving process is segmented to obtain a corresponding region set; a target candidate area is obtained by comparing the similarity of adjacent regions in the region set; the face area in the target candidate area is identified; and whether the driver is in a fatigue driving state is judged according to how the positions of the mouth key points and eye key points in the face area change over time. This can improve the accuracy of judging whether the driver is in a fatigue driving state.
In one embodiment, the mouth key points and eye key points in the face area can be identified in the following manner: match the face area with a template, and obtain the mouth key points and eye key points in the face area according to the mouth key points and eye key points in the template.
In the above embodiment, the template can be the Dlib library, which can detect 68 key points of the face: points 36 to 41 are the key points corresponding to the right eye, points 42 to 47 are the key points corresponding to the left eye, and points 48 to 67 are the key points corresponding to the mouth.
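A minimal sketch of extracting these key points with the Dlib library in Python; the file name is Dlib's published 68-point shape predictor, and its local availability is an assumption:

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Pretrained 68-point landmark model distributed for Dlib (path assumed).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_landmarks(gray_image):
    """Return the 68 (x, y) landmark tuples of the first detected face, or None."""
    faces = detector(gray_image, 1)
    if not faces:
        return None
    shape = predictor(gray_image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]

# Per the description above: right eye 36-41, left eye 42-47, mouth 48-67.
```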
In the above embodiment, the monitoring image of the driving process is segmented to obtain a corresponding region set; a target candidate area is obtained by comparing the similarity of adjacent regions in the region set; the face area in the target candidate area is identified; and whether the driver is in a fatigue driving state is judged according to how the positions of the mouth key points and eye key points in the face area change over time. This can improve the accuracy of judging whether the driver is in a fatigue driving state.
In one embodiment, whether the driver is in a fatigue driving state can be judged by the following steps: judge whether the driver is yawning according to how the positions of the mouth key points change over time; judge whether the driver is closing his or her eyes according to how the positions of the eye key points change over time; and judge whether the driver is in a fatigue driving state according to whether the driver is yawning and whether the driver is closing his or her eyes.
In the above embodiment, whether the driver is yawning can be judged in the following manner. The detected mouth key points are converted into relative coordinates, with mouth key point 48 taken as the origin, according to the following formulas:
x′ = x − xo
y′ = y − yo
where (xo, yo) is the origin.
For each point, the difference from the same point in the previous frame is calculated from the relative coordinates to obtain the shift length:
Dt = sqrt((x′t − x′t−1)² + (y′t − y′t−1)²)
where Dt is the shift length at time t, x′t is the X-axis relative coordinate at time t, and x′t−1 is the X-axis relative coordinate in the previous frame (and likewise for the Y axis). The shift-length data of all the points are then accumulated over 5 frames, i.e. Σt Σi Dt_i, where i is the serial number of a key point in the region (for example, the mouth key points are 48 to 67) and t is the time. When the accumulated value exceeds the corresponding threshold, the driver is set to the yawning state.
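A minimal sketch of this yawn check in Python; the Euclidean displacement, the 5-frame window length and the threshold value are illustrative assumptions:

```python
import math

def yawn_detected(mouth_frames, threshold=60.0):
    """Accumulate the frame-to-frame displacement of the mouth key points.

    `mouth_frames` is a list of consecutive frames (e.g. 5); each frame is a list
    of (x, y) coordinates of mouth key points 48-67, already made relative to
    point 48. Returns True when the accumulated displacement exceeds the threshold.
    """
    total = 0.0
    for prev, cur in zip(mouth_frames, mouth_frames[1:]):
        for (xp, yp), (xc, yc) in zip(prev, cur):
            total += math.hypot(xc - xp, yc - yp)   # Dt for one key point
    return total > threshold
```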
The closed state of the eyes can be judged by the following steps. Calculate the Y-axis differences of the corresponding eye-area points:
Right eye: E = (y37 − y41) + (y38 − y40);
Left eye: E = (y43 − y47) + (y44 − y46).
When E is less than a threshold, the eye is set to the blink state; when both eyes are closed, the driver is considered to be in the closed-eye state.
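A minimal sketch of this eye-closure test in Python, using the 68-point landmark list from above; taking the absolute value and the threshold of 6 pixels are interpretive assumptions (in image coordinates the y axis grows downward):

```python
def eyes_closed(landmarks, threshold=6):
    """Return True when both eyes are judged closed.

    `landmarks` is the list of 68 (x, y) tuples; right-eye points are 36-41 and
    left-eye points are 42-47, as in the description above.
    """
    y = [pt[1] for pt in landmarks]
    e_right = (y[37] - y[41]) + (y[38] - y[40])
    e_left = (y[43] - y[47]) + (y[44] - y[46])
    # A small |E| means the upper and lower eyelids are nearly touching.
    return abs(e_right) < threshold and abs(e_left) < threshold
```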
In the above embodiment, the monitoring image of the driving process is segmented to obtain a corresponding region set; a target candidate area is obtained by comparing the similarity of adjacent regions in the region set; the face area in the target candidate area is identified; and whether the driver is in a fatigue driving state is judged according to how the positions of the mouth key points and eye key points in the face area change over time. This can improve the accuracy of judging whether the driver is in a fatigue driving state.
In one embodiment, whether the driver is in a fatigue driving state can be judged by the following steps: obtain the frequency at which the driver yawns; obtain the proportion of a specified time during which the pupil is covered by the eyelid while the driver's eyes are closed; and judge whether the driver is in a fatigue driving state according to the yawning frequency and the proportion of the specified time during which the pupil is covered by the eyelid.
The proportion of a specified time during which the pupil is covered by the eyelid while the driver's eyes are closed (Percentage of Eyelid Closure over the Pupil, PERCLOS) can be calculated with the following formula:
PERCLOS = (time the pupil is covered by the eyelid within the specified time) / (specified time) × 100%
When the PERCLOS value exceeds a threshold and yawning occurs frequently, the driver is identified as being in a fatigue driving state.
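A minimal sketch of this final decision in Python; computing PERCLOS from per-frame closed-eye flags and the two threshold values are assumptions consistent with the description:

```python
def fatigue_state(closed_flags, yawn_count, window_seconds,
                  perclos_threshold=0.4, yawns_per_minute_threshold=3.0):
    """Combine PERCLOS with the yawning frequency to flag fatigue driving.

    `closed_flags` holds one boolean per frame in the evaluation window
    (True = pupil covered by the eyelid); `yawn_count` is the number of yawns
    detected in the same window of `window_seconds` seconds.
    """
    perclos = sum(closed_flags) / max(len(closed_flags), 1)
    yawns_per_minute = yawn_count / (window_seconds / 60.0)
    return perclos > perclos_threshold and yawns_per_minute >= yawns_per_minute_threshold
```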
In the above embodiment, the monitoring image of the driving process is segmented to obtain a corresponding region set; a target candidate area is obtained by comparing the similarity of adjacent regions in the region set; the face area in the target candidate area is identified; and whether the driver is in a fatigue driving state is judged according to how the positions of the mouth key points and eye key points in the face area change over time. This can improve the accuracy of judging whether the driver is in a fatigue driving state.
It should be understood that although the steps in the flow chart of Fig. 2 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they can be executed in other orders. Moreover, at least some of the steps in Fig. 2 may include multiple sub-steps or stages, which are not necessarily completed at the same moment and may be executed at different times; the execution order of these sub-steps or stages is also not necessarily sequential, and they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 4, a fatigue driving state detection device is provided, including:
an obtaining module 402, configured to obtain a monitoring image of the driving process and segment the image to obtain a corresponding region set;
an identification module 404, configured to obtain a target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set, and identify a face area in the target candidate area;
a judgment module 406, configured to identify mouth key points and eye key points in the face area, and judge whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time, where the eye key points are coordinate points characterizing eye positions and the mouth key points are coordinate points characterizing mouth positions.
For specific limitations on the fatigue driving state detection device, refer to the limitations on the fatigue driving state detection method above; details are not repeated here. Each module in the above fatigue driving state detection device can be implemented in whole or in part by software, hardware or a combination thereof. The above modules can be embedded in or independent of the processor in the computer equipment in the form of hardware, or stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
The terms "including" and "having" and any variations thereof in the embodiments of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device containing a series of steps or (module) units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product or device.
Reference herein to "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to independent or alternative embodiments that are mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
"Multiple" as used herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B can mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
In one embodiment, a computer equipment is provided, which can be a server whose internal structure can be as shown in Fig. 5. The computer equipment includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer equipment provides computing and control capabilities. The memory of the computer equipment includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer equipment is used to store fatigue driving state detection data. The network interface of the computer equipment is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a fatigue driving state detection method.
Those skilled in the art can understand that the structure shown in Fig. 5 is only a block diagram of part of the structure related to the solution of the application and does not constitute a limitation on the computer equipment to which the solution of the application is applied; a specific computer equipment may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer equipment is provided, including a memory, a processor and a computer program stored in the memory and runnable on the processor, where the processor implements the following steps when executing the computer program:
obtaining a monitoring image of the driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set, and identifying a face area in the target candidate area;
identifying mouth key points and eye key points in the face area, and judging whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time, where the eye key points are coordinate points characterizing eye positions and the mouth key points are coordinate points characterizing mouth positions.
In one embodiment, the processor further implements the following steps when executing the computer program: obtaining each similar area in the monitoring image by calculating the similarity of adjacent regions in the region set; and determining the similar areas whose area exceeds a specified threshold as target candidate areas.
In one embodiment, the processor further implements the following steps when executing the computer program: calculating the similarity of adjacent regions in the region set; merging adjacent regions whose similarity meets a preset condition; and setting the merged region as a similar area.
In one embodiment, the processor further implements the following steps when executing the computer program: the similarity includes texture similarity, and the step of comparing the texture similarity of adjacent regions in the region set includes: converting the monitoring image to grayscale to obtain a grayscale image; calculating the binary pattern feature of each of the two adjacent regions and obtaining the vector corresponding to the binary pattern feature; and calculating the texture similarity of the two adjacent regions according to the vectors.
In one embodiment, the processor further implements the following steps when executing the computer program: matching the face area with a template, and obtaining the mouth key points and eye key points in the face area according to the mouth key points and eye key points in the template.
In one embodiment, the processor further implements the following steps when executing the computer program: judging whether the driver is yawning according to how the positions of the mouth key points change over time; judging whether the driver is closing his or her eyes according to how the positions of the eye key points change over time; and judging whether the driver is in a fatigue driving state according to whether the driver is yawning and whether the driver is closing his or her eyes.
In one embodiment, the processor further implements the following steps when executing the computer program: obtaining the frequency at which the driver yawns; obtaining the proportion of a specified time during which the pupil is covered by the eyelid while the driver's eyes are closed; and judging whether the driver is in a fatigue driving state according to the yawning frequency and the proportion of the specified time during which the pupil is covered by the eyelid.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, where the computer program implements the following steps when executed by a processor:
obtaining a monitoring image of the driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set, and identifying a face area in the target candidate area;
identifying mouth key points and eye key points in the face area, and judging whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time, where the eye key points are coordinate points characterizing eye positions and the mouth key points are coordinate points characterizing mouth positions.
In one embodiment, the computer program further implements the following steps when executed by the processor: obtaining each similar area in the monitoring image by calculating the similarity of adjacent regions in the region set; and determining the similar areas whose area exceeds a specified threshold as target candidate areas.
In one embodiment, the computer program further implements the following steps when executed by the processor: calculating the similarity of adjacent regions in the region set; merging adjacent regions whose similarity meets a preset condition; and setting the merged region as a similar area.
In one embodiment, the computer program further implements the following steps when executed by the processor: the similarity includes texture similarity, and the step of comparing the texture similarity of adjacent regions in the region set includes: converting the monitoring image to grayscale to obtain a grayscale image; calculating the binary pattern feature of each of the two adjacent regions and obtaining the vector corresponding to the binary pattern feature; and calculating the texture similarity of the two adjacent regions according to the vectors.
In one embodiment, the computer program further implements the following steps when executed by the processor: matching the face area with a template, and obtaining the mouth key points and eye key points in the face area according to the mouth key points and eye key points in the template.
In one embodiment, the computer program further implements the following steps when executed by the processor: judging whether the driver is yawning according to how the positions of the mouth key points change over time; judging whether the driver is closing his or her eyes according to how the positions of the eye key points change over time; and judging whether the driver is in a fatigue driving state according to whether the driver is yawning and whether the driver is closing his or her eyes.
In one embodiment, the computer program further implements the following steps when executed by the processor: obtaining the frequency at which the driver yawns; obtaining the proportion of a specified time during which the pupil is covered by the eyelid while the driver's eyes are closed; and judging whether the driver is in a fatigue driving state according to the yawning frequency and the proportion of the specified time during which the pupil is covered by the eyelid.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, the computer program may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided by this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the application, and these all fall within the protection scope of the application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A fatigue driving state detection method, characterized by including:
obtaining a monitoring image of the driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set, and identifying a face area in the target candidate area;
identifying mouth key points and eye key points in the face area, and judging whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time, where the eye key points are coordinate points characterizing eye positions and the mouth key points are coordinate points characterizing mouth positions.
2. The fatigue driving state detection method according to claim 1, characterized in that obtaining the target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set includes:
obtaining each similar area in the monitoring image by calculating the similarity of adjacent regions in the region set;
determining the similar areas whose area exceeds a specified threshold as target candidate areas.
3. The fatigue driving state detection method according to claim 2, characterized in that obtaining each similar area in the monitoring image by calculating the similarity of adjacent regions in the region set includes:
calculating the similarity of adjacent regions in the region set;
merging adjacent regions whose similarity meets a preset condition, and setting the merged region as a similar area.
4. The fatigue driving state detection method according to claim 1, characterized in that the similarity includes texture similarity;
and the step of comparing the texture similarity of adjacent regions in the region set includes:
converting the monitoring image to grayscale to obtain a grayscale image;
calculating the binary pattern feature of each of the two adjacent regions, and obtaining the vector corresponding to the binary pattern feature;
calculating the texture similarity of the two adjacent regions according to the vectors.
5. The fatigue driving state detection method according to any one of claims 1 to 4, characterized in that identifying the mouth key points and eye key points in the face area includes:
matching the face area with a template, and obtaining the mouth key points and eye key points in the face area according to the mouth key points and eye key points in the template.
6. The fatigue driving state detection method according to any one of claims 1 to 4, characterized in that judging whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time includes:
judging whether the driver is yawning according to how the positions of the mouth key points change over time;
judging whether the driver is closing his or her eyes according to how the positions of the eye key points change over time;
judging whether the driver is in a fatigue driving state according to whether the driver is yawning and whether the driver is closing his or her eyes.
7. The fatigue driving state detection method according to claim 6, characterized in that judging whether the driver is in a fatigue driving state according to whether the driver is yawning and whether the driver is closing his or her eyes includes:
obtaining the frequency at which the driver yawns;
obtaining the proportion of a specified time during which the pupil is covered by the eyelid while the driver's eyes are closed;
judging whether the driver is in a fatigue driving state according to the yawning frequency and the proportion of the specified time during which the pupil is covered by the eyelid.
8. A fatigue driving state detection device, characterized by including:
an obtaining module, configured to obtain a monitoring image of the driving process and segment the image to obtain a corresponding region set;
an identification module, configured to obtain a target candidate area in the monitoring image by comparing the similarity of adjacent regions in the region set, and identify a face area in the target candidate area;
a judgment module, configured to identify mouth key points and eye key points in the face area, and judge whether the driver is in a fatigue driving state according to how the positions of the mouth key points and the eye key points change over time, where the eye key points are coordinate points characterizing eye positions and the mouth key points are coordinate points characterizing mouth positions.
9. A computer equipment, including a memory, a processor and a computer program stored in the memory and runnable on the processor, characterized in that the processor implements the steps of the fatigue driving state detection method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program implements the steps of the fatigue driving state detection method according to any one of claims 1 to 7 when executed by a processor.
CN201810649253.XA 2018-06-22 2018-06-22 Fatigue driving state detection method, device, computer equipment and storage medium Pending CN108830240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810649253.XA CN108830240A (en) 2018-06-22 2018-06-22 Fatigue driving state detection method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN108830240A true CN108830240A (en) 2018-11-16

Family

ID=64143257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810649253.XA Pending CN108830240A (en) 2018-06-22 2018-06-22 Fatigue driving state detection method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108830240A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950355A (en) * 2010-09-08 2011-01-19 中国人民解放军国防科学技术大学 Method for detecting fatigue state of driver based on digital video
CN102254151A (en) * 2011-06-16 2011-11-23 清华大学 Driver fatigue detection method based on face video analysis
CN102436715A (en) * 2011-11-25 2012-05-02 大连海创高科信息技术有限公司 Detection method for fatigue driving
CN104574820A (en) * 2015-01-09 2015-04-29 安徽清新互联信息科技有限公司 Fatigue drive detecting method based on eye features
CN104732251A (en) * 2015-04-23 2015-06-24 郑州畅想高科股份有限公司 Video-based method of detecting driving state of locomotive driver
CN105844248A (en) * 2016-03-29 2016-08-10 北京京东尚科信息技术有限公司 Human face detection method and human face detection device
CN105844252A (en) * 2016-04-01 2016-08-10 南昌大学 Face key part fatigue detection method
CN106372621A (en) * 2016-09-30 2017-02-01 防城港市港口区高创信息技术有限公司 Face recognition-based fatigue driving detection method
CN106778633A (en) * 2016-12-19 2017-05-31 江苏慧眼数据科技股份有限公司 A kind of pedestrian recognition method based on region segmentation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KING NGI NGAN et al.: "Video Segmentation and Its Applications", 30 April 2014 *
曾龙龙: "Research on Real-time Face Detection and Tracking Algorithms Based on Video Surveillance", China Masters' Theses Full-text Database, Information Science and Technology *
王琳琳: "Research on Face Detection Based on a Skin Color Model and the AdaBoost Algorithm", China Masters' Theses Full-text Database, Information Science and Technology *
雷万军 et al.: "Laboratory Guide for Biomedical Engineering", 30 September 2012 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111241874A (en) * 2018-11-28 2020-06-05 中国移动通信集团有限公司 Behavior monitoring method and device and computer readable storage medium
CN109948434A (en) * 2019-01-31 2019-06-28 平安科技(深圳)有限公司 Method, apparatus, computer equipment and the storage medium for demographics of going on board
CN109948434B (en) * 2019-01-31 2023-07-21 平安科技(深圳)有限公司 Method, device, computer equipment and storage medium for boarding number statistics
CN111797654A (en) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 Driver fatigue state detection method and device, storage medium and mobile terminal
CN110723072A (en) * 2019-10-09 2020-01-24 卓尔智联(武汉)研究院有限公司 Driving assistance method and device, computer equipment and storage medium
CN110723072B (en) * 2019-10-09 2021-06-01 卓尔智联(武汉)研究院有限公司 Driving assistance method and device, computer equipment and storage medium
CN112052770A (en) * 2020-08-31 2020-12-08 北京地平线信息技术有限公司 Method, apparatus, medium, and electronic device for fatigue detection
CN112699768A (en) * 2020-12-25 2021-04-23 哈尔滨工业大学(威海) Fatigue driving detection method and device based on face information and readable storage medium
CN117935231A (en) * 2024-03-20 2024-04-26 杭州臻稀生物科技有限公司 Non-inductive fatigue driving monitoring and intervention method
CN117935231B (en) * 2024-03-20 2024-06-07 杭州臻稀生物科技有限公司 Non-inductive fatigue driving monitoring and intervention method

Similar Documents

Publication Publication Date Title
CN108830240A (en) Fatigue driving state detection method, device, computer equipment and storage medium
CN110490202B (en) Detection model training method and device, computer equipment and storage medium
CN110852285B (en) Object detection method and device, computer equipment and storage medium
CN112801008B (en) Pedestrian re-recognition method and device, electronic equipment and readable storage medium
US20180260793A1 (en) Automatic assessment of damage and repair costs in vehicles
CN111680746B (en) Vehicle damage detection model training, vehicle damage detection method, device, equipment and medium
CN101482923B (en) Human body target detection and sexuality recognition method in video monitoring
CN110569721A (en) Recognition model training method, image recognition method, device, equipment and medium
CN109035295B (en) Multi-target tracking method, device, computer equipment and storage medium
CN111626123A (en) Video data processing method and device, computer equipment and storage medium
CN111178245A (en) Lane line detection method, lane line detection device, computer device, and storage medium
Zare et al. Possibilistic fuzzy local information c-means for sonar image segmentation
CN111192277A (en) Instance partitioning method and device
CN112560796A (en) Human body posture real-time detection method and device, computer equipment and storage medium
CN112241952B (en) Brain midline identification method, device, computer equipment and storage medium
CN107844742A (en) Facial image glasses minimizing technology, device and storage medium
CN109002776B (en) Face recognition method, system, computer device and computer-readable storage medium
CN106778634B (en) Salient human body region detection method based on region fusion
CN111860582B (en) Image classification model construction method and device, computer equipment and storage medium
CN111539320A (en) Multi-view gait recognition method and system based on mutual learning network strategy
CN111914668A (en) Pedestrian re-identification method, device and system based on image enhancement technology
CN104751485A (en) GPU adaptive foreground extracting method
CN112001378A (en) Lane line processing method and device based on feature space, vehicle-mounted terminal and medium
CN103034840A (en) Gender identification method
Silva et al. A SOM combined with KNN for classification task

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181116