CN109117810A - Fatigue driving behavior detection method, apparatus, computer equipment and storage medium - Google Patents

Fatigue driving behavior detection method, apparatus, computer equipment and storage medium

Info

Publication number
CN109117810A
CN109117810A (application CN201810974266.4A)
Authority
CN
China
Prior art keywords
image
facial image
driving
eye
fatigue
Prior art date
Legal status
Pending
Application number
CN201810974266.4A
Other languages
Chinese (zh)
Inventor
曹阳
Current Assignee
Shenzhen Guomai Travel Polytron Technologies Inc
Original Assignee
Shenzhen Guomai Travel Polytron Technologies Inc
Priority date
Filing date
Publication date
Application filed by Shenzhen Guomai Travel Polytron Technologies Inc filed Critical Shenzhen Guomai Travel Polytron Technologies Inc
Priority to CN201810974266.4A
Publication of CN109117810A

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris

Abstract

The present invention relates to a fatigue driving behavior detection method and apparatus, computer equipment, and a storage medium. The method includes: obtaining a driving image of the driver; performing image segmentation on the driving image to obtain a facial image; performing eye localization according to the facial image to obtain an eye image; calculating the proportion of time within a set time during which the eyes are closed in the eye image; and determining the fatigue degree of the driving behavior according to that proportion. By using classifiers to locate and acquire the facial image and the eye image from the driving image, the invention improves the accuracy of face and eye localization; and by computing statistics on the eye area obtained from the eye image to derive a PERCLOS value and judging fatigue driving behavior from that value, it reduces false positives caused by a driver momentarily closing his or her eyes, improves detection accuracy, and can effectively reduce the probability of traffic accidents caused by fatigue driving.

Description

Fatigue driving behavior detection method, apparatus, computer equipment and storage medium
Technical field
The present invention relates to driving behavior detection methods, and more specifically to a fatigue driving behavior detection method and apparatus, computer equipment, and a storage medium.
Background technique
Fatigue driving is a key factor in road traffic accidents. Analysis of a large number of domestic traffic accident cases over the years shows that road traffic accidents caused by fatigued drivers account for roughly 15% to 20% of the total, and with the continuing development of transportation this proportion may continue to rise. Traffic accidents cause enormous economic losses and casualties for the country, have severe consequences for individuals, and add to the factors destabilizing society; research into methods for preventing and detecting fatigue is therefore of great and far-reaching significance.
Current fatigue driving detection methods generally acquire facial images through a camera and judge from multiple facial images whether the driver is yawning or closing his or her eyes. However, because this approach can only detect obvious yawning or eye closing, a driver who briefly closes his or her eyes merely out of personal habit may also be judged to be driving fatigued, so detection accuracy is not high and the traffic accident rate cannot be greatly reduced.
Therefore, it is necessary to design a new method that accurately detects the driver's fatigue state and greatly reduces the traffic accident rate.
Summary of the invention
An object of the present invention is to overcome the deficiencies of the prior art and provide a fatigue driving behavior detection method and apparatus, computer equipment, and a storage medium.
To achieve the above object, the invention adopts the following technical scheme. A fatigue driving behavior detection method comprises:
obtaining the driving image of the driver;
performing image segmentation on the driving image to obtain a facial image;
performing eye localization according to the facial image to obtain an eye image;
calculating the proportion of time within a set time during which the eyes are closed in the eye image; and
determining the fatigue degree of the driving behavior according to the proportion of time the eyes are closed within the set time.
In a further technical solution, performing image segmentation on the driving image to obtain a facial image comprises:
judging whether the driving image is suitable for skin color segmentation;
if so, performing skin color segmentation on the driving image to obtain a preliminary facial image;
performing image preprocessing on the preliminary facial image; and
performing pattern recognition on the preprocessed preliminary facial image to obtain the facial image;
if not, performing pattern recognition on the driving image to obtain the facial image.
In a further technical solution, performing skin color segmentation on the driving image to obtain the preliminary facial image comprises:
performing skin color segmentation with set pixel chromaticity thresholds on the driving image to obtain a skin color map;
grouping the skin color map into skin color regions; and
classifying the skin color regions with a classifier to obtain the preliminary facial image.
In a further technical solution, classifying the skin color regions with a classifier to obtain the preliminary facial image comprises:
classifying the skin color regions with a classifier built from Haar-like features to obtain the preliminary facial image.
In a further technical solution, performing image preprocessing on the preliminary facial image comprises:
converting the preliminary facial image to a single channel; and
equalizing the converted preliminary facial image.
In a further technical solution, performing eye localization according to the facial image to obtain an eye image comprises:
inserting even rows and even columns into the facial image;
performing Gaussian convolution on the facial image with the inserted even rows and columns; and
classifying the facial image after Gaussian convolution with a cascade classifier to obtain the eye image.
In a further technical solution, calculating the proportion of time within a set time during which the eyes are closed in the eye image comprises:
obtaining the eye area in the eye image;
counting the frequency of the eye area at each stage to obtain the frequency sum of each stage's eye area; and
obtaining the ratio of each stage's frequency sum to the frequency sum over all eye areas, thereby obtaining the proportion of time within the set time during which the eyes are closed in the eye image.
The present invention also provides a fatigue driving behavior detection device, comprising:
a driving image acquiring unit for obtaining the driving image of the driver;
a facial image acquiring unit for performing image segmentation on the driving image to obtain a facial image;
an eye image acquiring unit for performing eye localization according to the facial image to obtain an eye image;
a ratio value computing unit for calculating the proportion of time within a set time during which the eyes are closed in the eye image; and
a fatigue degree acquiring unit for determining the fatigue degree of the driving behavior according to the proportion of time the eyes are closed within the set time.
The present invention also provides a computer device comprising a memory and a processor, the memory storing a computer program; when the processor executes the computer program, it implements the above fatigue driving behavior detection method.
The present invention also provides a storage medium storing a computer program which, when executed by a processor, implements the above fatigue driving behavior detection method.
Compared with the prior art, the invention has the following advantages: by using classifiers to locate and acquire the facial image and the eye image from the driving image, the invention improves the accuracy of face and eye localization; and by computing statistics on the eye area obtained from the eye image to derive a PERCLOS value and judging fatigue driving behavior from that value, it reduces false positives caused by a driver momentarily closing his or her eyes, improves detection accuracy, and can effectively reduce the probability of traffic accidents caused by fatigue driving.
The invention will be further described below with reference to the drawings and specific embodiments.
Detailed description of the invention
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of the application scenario of the fatigue driving behavior detection method provided by an embodiment of the present invention;
Fig. 2 is a schematic flow chart of the fatigue driving behavior detection method provided by an embodiment of the present invention;
Fig. 3 is a schematic sub-flow chart of the fatigue driving behavior detection method provided by an embodiment of the present invention;
Fig. 4 is a schematic sub-flow chart of the fatigue driving behavior detection method provided by an embodiment of the present invention;
Fig. 5 is a schematic sub-flow chart of the fatigue driving behavior detection method provided by an embodiment of the present invention;
Fig. 6 is a schematic sub-flow chart of the fatigue driving behavior detection method provided by an embodiment of the present invention;
Fig. 7 is a schematic sub-flow chart of the fatigue driving behavior detection method provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of the probability distribution of skin pixels provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of the relationship between eye opening and time provided by an embodiment of the present invention;
Fig. 10 is a schematic block diagram of the fatigue driving behavior detection device provided by an embodiment of the present invention;
Fig. 11 is a schematic block diagram of the computer device provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be understood that when used in this specification and the appended claims, the terms "comprising" and "including" indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or collections thereof.
It should also be understood that the terminology used in this description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the invention and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Please refer to Fig. 1 and Fig. 2. Fig. 1 is a schematic diagram of the application scenario of the fatigue driving behavior detection method provided by an embodiment of the present invention, and Fig. 2 is a schematic flow chart of the method. The fatigue driving behavior detection method is applied in a server 20 and exists in the form of a detection platform; the server 20 exchanges data with a user terminal 10. The driving image of the driver is shot with the user terminal 10, input as data from the detection APP on the user terminal 10, and transmitted to the server 20; after the server performs fatigue driving behavior detection, the detection result is fed back to the user terminal 10.
Fig. 2 is a flow diagram of the fatigue driving behavior detection method provided by an embodiment of the present invention. As shown, the method includes the following steps S110 to S150.
S110: obtain the driving image of the driver.
In this embodiment, the driving image is an image containing the driver, shot while the driver is driving. Specifically, it can be obtained with a camera installed in the vehicle, or shot with a user terminal 10 that integrates a camera.
Of course, the driving image can also be extracted from video captured by the camera or the user terminal 10; in that case the camera needs to be configured, that is, the attributes and variables of the video need to be defined.
S120: perform image segmentation on the driving image to obtain a facial image.
In this embodiment, the facial image is an image with the background removed, containing only the face. Image segmentation is performed on the driving image specifically in order to distinguish the face from the background and extract only the face region, so that the facial image can subsequently be analyzed to determine the fatigue driving situation.
In one embodiment, as shown in Fig. 3, the above step S120 may include steps S121 to S125.
S121: judge whether the driving image is suitable for skin color segmentation.
Skin color segmentation applies only to driving images in which the colors of the background objects differ markedly from the color of the face; it cannot be used when they are very similar. A preliminary judgment is therefore needed, which improves detection efficiency.
S122: if so, perform skin color segmentation on the driving image to obtain a preliminary facial image.
Since skin color is important facial information, it does not depend on facial details and remains applicable under rotation, expression changes, and the like; it is relatively stable and distinct from the colors of most background objects, so skin color segmentation is used to separate the background objects from the face.
Face detection by skin color segmentation can use a compact clustering method, and the YCbCr space is selected because the clustering algorithm is easy to implement there. Analysis shows that skin color follows a two-dimensional Gaussian distribution: for the chromaticity of a pixel, C = [Cb, Cr]^T, the probability distribution of skin pixels is shown in Fig. 8, where Cb and Cr denote the blue and red chrominance offset components, respectively.
In one embodiment, as shown in Fig. 4, the above step S122 may include steps S122a to S122c.
S122a: perform skin color segmentation with set pixel chromaticity thresholds on the driving image to obtain a skin color map.
Specifically, skin color segmentation is performed on each acquired frame using 0.4 < Cb < 0.6 and 0.5 < Cr < 0.7 to obtain the skin color map.
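The thresholds above can be sketched in pure Python. This is a minimal sketch, not the patent's implementation: the RGB-to-YCbCr conversion uses the common JPEG coefficients, and dividing the 8-bit chrominance by 255 to reach the 0 to 1 scale of the stated 0.4 < Cb < 0.6 and 0.5 < Cr < 0.7 ranges is an assumption.

```python
def rgb_to_cbcr(r, g, b):
    """Convert an 8-bit RGB pixel to normalized Cb, Cr in [0, 1].

    Uses the common JPEG/JFIF YCbCr coefficients; the division by 255
    (an assumption) maps the 8-bit chrominance onto the 0-1 scale of
    the thresholds 0.4 < Cb < 0.6 and 0.5 < Cr < 0.7.
    """
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb / 255.0, cr / 255.0

def is_skin(r, g, b):
    """True if the pixel chromaticity falls inside the skin cluster."""
    cb, cr = rgb_to_cbcr(r, g, b)
    return 0.4 < cb < 0.6 and 0.5 < cr < 0.7

def skin_mask(image):
    """Binary skin color map for an image given as rows of (R, G, B) tuples."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

frame = [[(200, 140, 120), (0, 0, 255)],
         [(210, 150, 130), (30, 90, 40)]]
print(skin_mask(frame))  # skin pixels only in the left column
```

Applying the mask to each frame yields the skin color map of step S122a; the connected regions of 1-pixels then form the skin color regions of step S122b.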
S122b: group the skin color map into skin color regions.
Specifically, the different parts of the skin color map are divided into regions, forming the individual skin color regions.
S122c: classify the skin color regions with a classifier to obtain the preliminary facial image.
In this embodiment, the skin color regions are classified with a classifier built from Haar-like features to obtain the preliminary facial image.
When a classifier is built from Haar-like features, it must first be trained. The process is as follows:
Obtain 2000 positive and 2000 negative 20x20 face pictures, and build the positive sample set with the CreateSamples program. Train with the HaarTraining program to obtain the final classifier model (an XML file). During training, the Haar features are created and the positive and negative samples are loaded; while the false alarm rate misses its target and the specified number of strong classifiers has not yet been trained, the Haar feature values are computed, a strong classifier is trained, the strong classifier information is saved to a temporary file, and the process returns to loading the samples. When the false alarm rate reaches its target or training is complete, the cascade of strong classifiers is saved to the XML file and the final classifier performance is tested; a relatively good cascade classifier is thus obtained. Once the classifier is trained, it can be applied to the detection of regions of interest in an input image, that is, preliminary face region detection can be performed on the input skin color map. The classifier outputs 1 when a target region (a car or a face) is detected, and 0 otherwise. To detect a whole image, the search window can be moved across the image so that every position is checked for a possible target. In order to search for objects of different sizes, the classifier is designed so that its size can be changed, which is more efficient than resizing the image to be checked. Therefore, to detect a target object of unknown size in the image, the scanning program usually needs to scan the picture several times with search windows of different scales. The "cascade" in the classifier means that the final classifier is composed of several cascaded simple classifiers.
In image detection, the tested window passes through each classifier stage in turn, so most candidate regions are excluded in the first few stages of detection; a region that passes every stage is a target region. The basic classifier is a decision tree classifier with at least two leaf nodes.
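The stage-by-stage rejection described above can be illustrated with a toy sketch. The stage predicates here are hypothetical stand-ins invented for illustration, not the trained Haar stages of the patent; only the control flow (reject on the first failing stage, accept only after passing all stages) reflects the text.

```python
def run_cascade(stages, window):
    """Pass a candidate window through cascaded stages.

    Each stage is a predicate; the window is rejected as soon as any
    stage outputs 0, so most candidates are discarded by the cheap
    early stages and never reach the more expensive later ones.
    """
    for stage in stages:
        if not stage(window):
            return 0          # rejected early: not a target region
    return 1                  # survived every stage: target region

# Hypothetical stages of increasing strictness on a 1-D "window".
mean = lambda w: sum(w) / len(w)
stages = [
    lambda w: mean(w) > 50,           # cheap stage: rejects dark windows
    lambda w: mean(w) > 100,          # stricter brightness stage
    lambda w: max(w) - min(w) > 30,   # requires some contrast
]

for name, w in [("dark", [10, 20, 15]),
                ("flat", [120, 125, 122]),
                ("face-like", [90, 160, 140])]:
    print(name, run_cascade(stages, w))
```

Only the "face-like" window survives all three stages; the other two are discarded after at most two cheap checks, which is the point of the cascade.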
Of course, in other embodiments, the skin color regions may instead be classified with a classifier trained by the AdaBoost learning algorithm to obtain the preliminary facial image. In this embodiment, the learning process of the AdaBoost algorithm can be understood as a "greedy feature selection process". For a given problem, it makes its judgment through a weighted voting mechanism, using a weighted combination of a large number of classification functions. The key of the algorithm is that when a classifier classifies certain samples correctly, the weights of those samples are reduced; when it misclassifies them, their weights are increased, so that in subsequent rounds the learning algorithm concentrates on the harder training samples and finally obtains a classifier with ideal recognition accuracy. Later classifiers thus reinforce the training on misclassified samples. Finally, all the weak classifiers are combined into a strong classifier, and images are detected by comparing the weighted sum of the weak classifiers' votes with the average voting result. Each rectangular feature corresponds to a weak classifier; generating a strong classifier with the AdaBoost learning algorithm amounts to finding the rectangular features that best distinguish faces from non-faces and combining their corresponding weak classifiers into a strong classifier whose discrimination of faces is optimal. The meaning of the training process can be stated as follows: each iteration finds the weak classifier with the minimum error rate under the current probability distribution, then adjusts the distribution, increasing the probability of the samples the current weak classifier misclassified and decreasing the probability of the samples it classified correctly, so that the next iteration is more directed at these harder, misclassified samples and they receive further attention. In this way, the classifiers extracted later strengthen the training on the misclassified samples, and a strong classifier composed of the important features is generated by the AdaBoost algorithm. A strong classifier of 200 features can be used for face detection, but because the detection process must scan every window at every scale and position of the image to be detected, there are very many windows to check; under these conditions, if the 200 feature values were computed for every window, the whole detection would take a great deal of time. In actual face detection, the light-before-heavy idea of a cascade classifier can therefore be adopted: structurally simpler strong classifiers, built from the most important features, first exclude non-face windows; as the importance of the features gradually decreases, the number of classifiers grows, but the number of windows still to be checked shrinks at the same time.
Each layer of the cascade classifier is a strong classifier obtained by successive rounds of AdaBoost training. The threshold of each layer is set so that most faces pass while as many negative samples as possible are discarded. Layers further back are more complex, contain more weak classifiers, and thus have stronger classification ability. This is done because the more layers a non-face sample passes, the more face-like it is and the closer it lies to the classification boundary. The cascade classifier works like a series of sieves with decreasing mesh size: each step screens out some negative samples that leaked through the previous sieves, and a sample that passes all the sieves is finally accepted as a face. The number of stages in the cascade depends on the required error rate and response speed of the system. The first few strong classifier layers are usually simple in structure (often only one or two weak classifiers per layer), but these structurally simple strong classifiers can reach a detection rate close to 100% early on, even though their false detection rate is also very high; they can be used to quickly screen out the sub-windows that are obviously not faces, greatly reducing the number of sub-windows needing subsequent processing.
Assume the input data set is D = {(x1, y1), (x2, y2), ..., (xm, ym)}, where yi = 0, 1 indicate negative and positive samples respectively, and the number of learning rounds is T. Initialize the sample weights: for samples with yi = 0, 1, initialize the weights as ω1,i = 1/m, 1/l respectively, where m and l are the numbers of negative and positive samples. For t = 1, ..., T: normalize the weights; for each feature j, train a weak classifier hj and compute its weighted error rate εj; among the trained weak classifiers, find the weak classifier ht with the minimum error εt; update the weight of each sample. The strong classifier is finally formed.
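The loop above can be sketched as a minimal discrete AdaBoost, with one-dimensional threshold stumps standing in for the Haar-feature weak classifiers. This is a toy sketch under assumptions: the data, the stump form, and the small epsilon floor guarding against a zero error rate are illustrations, not the patent's trained features; the weight initialization follows the text (1/m for negatives, 1/l for positives).

```python
import math

def train_stump(xs, ys, ws):
    """Find the threshold/polarity stump with minimum weighted error."""
    best = None
    for thresh in xs:
        for polarity in (1, -1):
            preds = [1 if polarity * x > polarity * thresh else 0 for x in xs]
            err = sum(w for w, p, y in zip(ws, preds, ys) if p != y)
            if best is None or err < best[0]:
                best = (err, thresh, polarity)
    return best

def adaboost(xs, ys, rounds):
    """Discrete AdaBoost: reweight toward misclassified samples each round."""
    m = sum(1 for y in ys if y == 0)          # number of negative samples
    l = sum(1 for y in ys if y == 1)          # number of positive samples
    ws = [1.0 / m if y == 0 else 1.0 / l for y in ys]
    stumps = []
    for _ in range(rounds):
        total = sum(ws)
        ws = [w / total for w in ws]          # normalize the weights
        err, thresh, pol = train_stump(xs, ys, ws)
        err = max(err, 1e-10)                 # guard against a perfect stump
        beta = err / (1.0 - err)
        alpha = math.log(1.0 / beta)
        preds = [1 if pol * x > pol * thresh else 0 for x in xs]
        # correctly classified samples are down-weighted; errors keep theirs
        ws = [w * (beta if p == y else 1.0) for w, p, y in zip(ws, preds, ys)]
        stumps.append((alpha, thresh, pol))
    return stumps

def predict(stumps, x):
    """Strong classifier: weighted vote of the weak classifiers."""
    vote = sum(a for a, t, p in stumps if p * x > p * t)
    return 1 if vote >= 0.5 * sum(a for a, _, _ in stumps) else 0

xs, ys = [1, 2, 3, 4, 5, 6], [0, 0, 0, 1, 1, 1]
model = adaboost(xs, ys, rounds=3)
print([predict(model, x) for x in xs])  # [0, 0, 0, 1, 1, 1]
```

In Viola-Jones face detection the stumps would be thresholds on Haar-like rectangular feature values rather than on a raw scalar, but the weight-update and voting mechanics are the same.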
S123: perform image preprocessing on the preliminary facial image.
Face detection requires pattern recognition on a single-channel image, but the preliminary facial image passed in at this point is still a skin color region, and the skin color region is a three-channel RGB image. Therefore, the image must be preprocessed before face detection is performed.
In one embodiment, as shown in Fig. 5, the above step S123 may include steps S123a to S123b.
S123a: convert the preliminary facial image to a single channel;
S123b: equalize the converted preliminary facial image.
In this embodiment, the conversion formula from a three-channel image to a single-channel image is as follows:
Y = 0.299R + 0.587G + 0.114B, where R is the red channel, G is the green channel, and B is the blue channel.
Since single-channel luminance information may strongly affect the detection process, the preprocessing also needs to equalize the single-channel image. The effect of equalization is image enhancement. Specifically, the number of occurrences of each gray value (0 to 255) is counted, i.e., the histogram is computed; the cumulative distribution of each gray value is then computed from the histogram, namely the number of all points from 0 up to that gray level; and the new value of each original gray value after equalization is computed from the normalized cumulative distribution s(k) as new gray value = 255 * (s(k) / (n*m)). This completes the equalization of the single-channel image and improves its contrast: the relatively narrow range of image gray values is stretched according to a rule to a larger range (the whole gray-level range), yielding an image whose gray values are uniformly distributed over the whole range.
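The conversion and equalization just described can be sketched in pure Python from the stated formulas. A minimal sketch only: a real implementation would typically use a library routine such as OpenCV's cvtColor and equalizeHist, and the tiny two-tone test image is invented for illustration.

```python
def to_gray(image):
    """Three-channel to single-channel: Y = 0.299R + 0.587G + 0.114B."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in image]

def equalize(gray):
    """Histogram equalization: new value = 255 * s(k) / (n*m)."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1                # occurrences of each gray value
    total = sum(hist)                   # n*m, the number of pixels
    cdf, running = [], 0
    for count in hist:                  # cumulative distribution s(k)
        running += count
        cdf.append(running)
    return [[int(255 * cdf[v] / total) for v in row] for row in gray]

rgb = [[(255, 255, 255), (0, 0, 0)], [(255, 255, 255), (0, 0, 0)]]
gray = to_gray(rgb)                     # [[255, 0], [255, 0]]
print(equalize(gray))                   # [[255, 127], [255, 127]]
```

On this degenerate two-tone image the stretch is already maximal, so equalization changes the values only slightly; on a low-contrast face region the same mapping spreads the narrow gray range over 0 to 255.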
S124: perform pattern recognition on the preprocessed preliminary facial image to obtain the facial image.
The preliminary facial image is first obtained by skin color segmentation, which locates quickly but is not very accurate. For this reason, pattern recognition is performed within the small region after skin color detection, yielding a very accurate face region. If the region identified by skin color is smaller than a set value, the skin color detection is deemed to have failed, and pattern recognition is instead performed on the whole region to obtain the face region. To detect a whole image, the search window can be moved across the image, checking every position for a possible target. To search for targets of different sizes, the classifier is designed so that its size can be changed. To detect a target object of unknown size in the image, the scanning program usually needs to scan the picture several times with search windows of different scales; since the size of the face region is unknown, the program scales the detected skin color region at a 1:3 ratio and scans it several times, finally determining whether the skin color region still contains a face region, and if so returning the relevant information of the face region, including its size and location.
In this embodiment, pattern recognition refers to recognition of the facial contour and the like, so as to obtain the facial image.
S125: if not, perform pattern recognition on the driving image to obtain the facial image.
S130: perform eye localization according to the facial image to obtain an eye image.
Detecting the eye region is similar to detecting the face region: training on 2000 positive and 2000 negative 20x20 pictures yields a relatively good cascade classifier. Once the classifier is trained, it can be applied to the detection of regions of interest in the input image.
In one embodiment, as shown in Fig. 6, the above step S130 may include steps S131 to S133.
S131: insert even rows and even columns into the facial image;
S132: perform Gaussian convolution on the facial image with the inserted even rows and columns;
S133: classify the facial image after Gaussian convolution with a cascade classifier to obtain the eye image.
Since the face size is unknown, the region of interest in the incoming facial image (namely the eye region) may be relatively small, causing the eye region to go undetected. The face region image is therefore first up-sampled using Gaussian pyramid decomposition: even rows and even columns are inserted into the image, and the resulting image is then convolved with the specified filter, the filter being multiplied by 4 for interpolation. The output image is thus 4 times the size of the original input face region, which improves the accuracy of eye localization and hence the accuracy of fatigue driving detection.
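The up-sampling step can be sketched in pure Python. A minimal sketch under assumptions: a 3x3 binomial kernel stands in for the unspecified Gaussian filter (OpenCV's pyrUp, for comparison, uses a 5x5 kernel), and zero padding at the borders is assumed.

```python
def pyr_up(img):
    """Double an image: zero-stuff new rows/columns, then smooth.

    Original pixels land at even row/column indices; the inserted
    rows and columns start as zeros. The smoothing kernel is scaled
    by 4 so the interpolated image keeps the original brightness.
    """
    h, w = len(img), len(img[0])
    up = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            up[2 * i][2 * j] = float(img[i][j])
    k = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]      # 3x3 binomial kernel / 16
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < 2 * h and 0 <= xx < 2 * w:
                        acc += k[dy + 1][dx + 1] * up[yy][xx]
            out[y][x] = acc * 4 / 16.0          # filter multiplied by 4
    return out

face = [[1.0] * 4 for _ in range(4)]            # a constant 4x4 "face region"
big = pyr_up(face)
print(len(big), len(big[0]))                    # 8 8
```

For a constant image the interior of the output stays at the same value, which is exactly what the factor-of-4 scaling of the kernel guarantees: each output pixel averages over the zero-stuffed grid, where only a quarter of the positions are nonzero.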
The above cascade classifier can be obtained in the same way as the classifier in step S122c, except that the training samples are 2000 positive and 2000 negative 20×20 pictures of human eyes; for convenience and brevity of description, details are not repeated here.
S140, calculating the time scale value shared by eyes closed in the setting time in the eye image.
The time scale value shared by eyes closed in the setting time refers to the PERCLOS (Percentage of Eyelid Closure Over the Pupil Over Time) value, a physical quantity that measures fatigue/drowsiness; the PERCLOS value has the characteristics of real-time, non-contact detection. Referring to Fig. 9, once the values t1~t4 are measured, the PERCLOS value can be calculated by the formula f = (t3 − t2)/(t4 − t1) × 100%, where t1 is the instant at which the eye has closed from its maximum to 80% of pupil size; t2 is the instant at which closure reaches 20%; t3 is the instant at which the eye reopens to 20%; t4 is the instant at which the eye reopens to 80% of pupil size; and f, the percentage of a given period during which the eyes are closed, is the PERCLOS value. In actual tests there are three measures: P70, P80, and EYEMEA (EM). P70 is the percentage of time the eye is more than 70% closed; P80 is the percentage of time the eye is more than 80% closed, and P80 is the most commonly used.
In one embodiment, as shown in fig. 7, above-mentioned step S140 may include step S141~step S143.
S141, obtaining the human eye area in the eye image;
S142, counting the frequency of the human eye area at each stage to obtain the frequency summation of the human eye area at each stage;
S143, obtaining the ratio of the frequency summation of each stage's human eye area to the frequency summation of all human eye areas, so as to obtain the time scale value shared by eyes closed in the setting time in the eye image.
Calculating the human eye area is the premise of obtaining the PERCLOS value. To obtain the human eye area as accurately as possible, all moments of the eye image are calculated: the spatial moments of the eye image are extracted and its center of gravity computed; the eye image is binarized and the eye contour extracted; the specified region of the eye image is eroded and then summed to obtain the area. The obtained human eye areas are presented as a waveform in which the vertical axis represents the human eye area and the horizontal axis represents time. The waveform contains a certain amount of noise, but during a blink the human eye area is at its minimum and appears as a trough in the waveform, while the value that occurs most frequently in the waveform corresponds to the eye in its normally open state.
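A deliberately reduced sketch of the area step: the patent's full pipeline (spatial moments, contour extraction, erosion) is condensed here to a single binarize-and-sum operation, and the threshold of 60 is an assumed value rather than one taken from the text:

```python
def eye_area(gray, threshold=60):
    """Hedged sketch of the area computation above: binarize the
    grayscale eye image (dark pupil/eyelid pixels count as eye) and
    sum the binary mask to get an area in pixels. `gray` is a list
    of rows of intensity values 0..255; `threshold` is assumed."""
    return sum(1 for row in gray for p in row if p < threshold)
```

Fed with one frame per time step, this produces exactly the area-versus-time waveform described above, troughs marking blinks.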
The human eye areas are recorded in a buffer; when 300 records have accumulated in the buffer, frequency statistics are carried out: areas between 4000 and 11500 are counted in intervals of 500. Since noise is present during collection, the area with the maximum frequency is taken as the maximum (fully open) human eye area. The records greater than 80% of the maximum area and those less than 20% of the maximum area are then counted separately: let the frequency greater than 80% of the maximum area be N1, the frequency less than 20% of the maximum area be N2, and the frequency summation be N; the PERCLOS value then follows from these counts (for example, as the proportion N2/N of closed-eye records). By performing statistics over 300 human eye areas, errors caused by noise and the like are reduced, making the calculated result more reliable; to further reduce errors, five PERCLOS values may be averaged and the average taken as the PERCLOS value.
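One plausible reading of the buffered-area statistic above is PERCLOS as the share N2/N of "closed" frames, i.e. frames whose area is below 20% of the typical open-eye area found at the histogram peak. The sketch below, an assumption rather than the patent's code, implements that reading; the bin width of 500 over the 4000–11500 range follows the text, everything else is illustrative:

```python
from collections import Counter

def perclos_from_areas(areas, bin_width=500, lo=4000, hi=11500):
    """Assumed interpretation of the buffered-area statistic:
    `areas` holds ~300 per-frame eye areas from the waveform.
    Returns (perclos, n1, n2), where perclos = n2 / len(areas)."""
    # Histogram the areas between lo and hi into bin_width-wide bins
    # and take the most frequent bin as the typical open-eye area.
    bins = Counter((a - lo) // bin_width for a in areas if lo <= a < hi)
    peak_bin = max(bins, key=bins.get)
    max_area = lo + peak_bin * bin_width + bin_width / 2
    n = len(areas)
    n1 = sum(1 for a in areas if a > 0.8 * max_area)  # clearly open
    n2 = sum(1 for a in areas if a < 0.2 * max_area)  # clearly closed
    return n2 / n, n1, n2
```

Taking the histogram peak rather than the raw maximum makes the reference area robust to the noise spikes the text mentions.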
S150, obtaining the degree of fatigue of the driving behavior according to the time scale value shared by eyes closed in the setting time.
In the present embodiment, if the PERCLOS value > 0.15 (this threshold can be another value, depending on actual conditions), the person under test is in a fatigued state, and the server 20 will carry out relevant processing, such as issuing a reminder notification through the user terminal 10.
In addition, the server 20 uses an infinite-loop strategy for both reading video and displaying video. If these loops were placed directly on the main thread, the entire application would appear to hang and all system operations would fail. A multi-threading strategy is therefore adopted: reading video is placed in a new thread, and, to improve the speed with which the system processes video, separate new threads are also opened for image processing, for recording the human eye area, and for calculating the PERCLOS value. In the execution function of one thread, a custom message is sent to another thread for the purpose of communication; sending a message from one thread to another is realized through the operating system. Using the message-driven mechanism of the Windows operating system, when a thread issues a message, the operating system first receives the message and then forwards it to the target thread, but the thread that receives the message must already have established a message loop.
pRecordThread->PostThreadMessage(WM_USER_NEWPARAMETER, (int)param, NULL) sends a WM_USER_NEWPARAMETER message to the thread pointed to by the pRecordThread pointer; on receiving this message, that thread carries out the specified operation and writes the parameter param into the cache, completing one inter-thread communication.
When multiple threads exist in the server 20, the threads must be organized and managed. Within the system, threads are controlled by messages. Since inter-thread communication uses pointer passing, after a message has been delivered, multiple threads may access the same memory space at the same time, causing access conflicts. To solve this problem, when a thread receives data, it copies the image into another address of its own and locks that space, preventing interference from other threads, so that the server 20 can realize its functions smoothly. Managing threads through message loops not only makes the threads safer but also makes thread management simpler, enhancing the scalability of the server 20.
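The copy-on-receive message loop described above is Windows/MFC-specific; as a hedged, platform-neutral analogue, the same pattern can be sketched with Python's `queue` module (the message id, payload, and function names are illustrative, not from the patent):

```python
import queue
import threading

WM_USER_NEWPARAMETER = 0x0400 + 1  # illustrative custom message id

def worker(inbox, results):
    """Message loop analogous to a per-thread Windows message queue:
    each received frame is copied into the thread's own buffer, so
    the sender's memory is never shared and no access conflict can
    occur."""
    while True:
        msg, payload = inbox.get()
        if msg is None:                    # sentinel: quit the loop
            break
        if msg == WM_USER_NEWPARAMETER:
            local_copy = bytes(payload)    # private copy, no aliasing
            results.append(len(local_copy))

inbox, results = queue.Queue(), []
t = threading.Thread(target=worker, args=(inbox, results))
t.start()
inbox.put((WM_USER_NEWPARAMETER, bytearray(b"frame-data")))  # post a message
inbox.put((None, None))                                      # ask loop to exit
t.join()
```

The queue plays the role the operating system plays in the Windows design: it delivers the message to the target thread's loop, and the explicit copy replaces the lock-the-buffer step.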
In the above fatigue driving behavior detection method, the facial image and the eye image are located and acquired from the driving image using classifiers, which can improve the accuracy of face and eye positioning; statistics such as counting are performed on the human eye areas obtained from the eye image to obtain the PERCLOS value, and fatigue driving behavior is judged using the PERCLOS value. This reduces misjudgments caused by a driver momentarily closing the eyes, improves detection accuracy, and can effectively reduce the probability of traffic accidents caused by fatigue driving.
Figure 10 is a schematic block diagram of a fatigue driving behavior detection device 300 provided in an embodiment of the present invention. As shown in Figure 10, corresponding to the above fatigue driving behavior detection method, the present invention also provides a fatigue driving behavior detection device 300. The fatigue driving behavior detection device 300 includes units for executing the above fatigue driving behavior detection method, and the device can be configured in the server 20. Specifically, referring to Fig. 10, the fatigue driving behavior detection device 300 includes:
Driving image acquiring unit 301, for obtaining a driving image of the driver;
Facial image acquiring unit 302, for performing image segmentation processing on the driving image to obtain a facial image;
Eye image acquiring unit 303, for performing human eye positioning according to the facial image to obtain an eye image;
Ratio value computing unit 304, for calculating the time scale value shared by eyes closed in the setting time in the eye image;
Degree of fatigue acquiring unit 305, for obtaining the degree of fatigue of the driving behavior according to the time scale value shared by eyes closed in the setting time.
In one embodiment, above-mentioned facial image acquiring unit 302 includes:
Judgment subelement, for judging whether the driving image can be subjected to skin color segmentation;
Skin color segmentation subelement, for performing skin color segmentation processing on the driving image to obtain a preliminary facial image if so;
Preprocessing subelement, for performing image preprocessing on the preliminary facial image;
First pattern recognition subelement, for performing pattern recognition on the preliminary facial image after image preprocessing to obtain the facial image;
Second pattern recognition subelement, for performing pattern recognition on the driving image to obtain the facial image if not.
In one embodiment, above-mentioned skin color segmentation subelement includes:
Broca scale obtaining module, for performing skin color segmentation with a set pixel chromaticity on the driving image to obtain a broca scale;
Compartmentalization module, for regionalizing the broca scale to form a skin color area;
Classification processing module, for performing classification processing on the skin color area with a classifier to obtain the preliminary facial image.
In one embodiment, the preprocessing subelement includes:
Conversion module, for converting the preliminary facial image to a single channel;
Equalization module, for equalizing the converted preliminary facial image.
In one embodiment, the eye image acquiring unit 303 includes:
Insertion subelement, for inserting even rows and even columns into the facial image;
Convolution subelement, for performing Gaussian convolution on the facial image with the inserted even rows and columns;
Classification subelement, for classifying the facial image after Gaussian convolution with a cascade classifier to obtain the eye image.
In one embodiment, the ratio value computing unit 304 includes:
Area obtaining subelement, for obtaining the human eye areas in the eye image;
Counting subelement, for counting the frequency of the human eye area at each stage to obtain the frequency summation of the human eye area at each stage;
Accounting obtaining subelement, for obtaining the ratio of the frequency summation of each stage's human eye area to the frequency summation of all human eye areas, so as to obtain the time scale value shared by eyes closed in the setting time in the eye image.
It should be noted that, as is clear to those skilled in the art, for the specific implementation process of the above fatigue driving behavior detection device 300 and of each unit, reference can be made to the corresponding description in the foregoing method embodiment; for convenience and brevity of description, details are not repeated here.
The above fatigue driving behavior detection device 300 can be implemented in the form of a computer program, and the computer program can run on a computer device as shown in Fig. 11.
Referring to Fig. 11, Fig. 11 is a schematic block diagram of a computer device provided by an embodiment of the present application. The computer device 500 can be the server 20.
Referring to Fig. 11, the computer device 500 includes a processor 502, a memory, and a network interface 505 connected through a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032. The computer program 5032 includes program instructions which, when executed, can cause the processor 502 to execute a fatigue driving behavior detection method.
The processor 502 is used to provide computing and control capabilities to support the operation of the entire computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503; when the computer program 5032 is executed by the processor 502, the processor 502 can be caused to execute a fatigue driving behavior detection method.
The network interface 505 is used for network communication with other devices. Those skilled in the art can understand that the structure shown in Fig. 11 is only a block diagram of the partial structure related to the solution of the present application and does not constitute a limitation on the computer device 500 to which the solution of the present application is applied; a specific computer device 500 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
The processor 502 is configured to run the computer program 5032 stored in the memory, so as to implement the following steps:
Obtaining a driving image of the driver;
Performing image segmentation processing on the driving image to obtain a facial image;
Performing human eye positioning according to the facial image to obtain an eye image;
Calculating the time scale value shared by eyes closed in the setting time in the eye image;
Obtaining the degree of fatigue of the driving behavior according to the time scale value shared by eyes closed in the setting time.
In one embodiment, when implementing the step of performing image segmentation processing on the driving image to obtain the facial image, the processor 502 specifically implements the following steps:
Judging whether the driving image can be subjected to skin color segmentation;
If so, performing skin color segmentation processing on the driving image to obtain a preliminary facial image;
Performing image preprocessing on the preliminary facial image;
Performing pattern recognition on the preliminary facial image after image preprocessing to obtain the facial image;
If not, performing pattern recognition on the driving image to obtain the facial image.
In one embodiment, when implementing the step of performing skin color segmentation processing on the driving image to obtain the preliminary facial image, the processor 502 specifically implements the following steps:
Performing skin color segmentation with a set pixel chromaticity on the driving image to obtain a broca scale;
Regionalizing the broca scale to form a skin color area;
Performing classification processing on the skin color area with a classifier to obtain the preliminary facial image.
In one embodiment, when implementing the step of performing classification processing on the skin color area with a classifier to obtain the preliminary facial image, the processor 502 specifically implements the following step:
Performing classification processing on the skin color area with a classifier constructed from Haar-like features to obtain the preliminary facial image.
In one embodiment, when implementing the step of performing image preprocessing on the preliminary facial image, the processor 502 specifically implements the following steps:
Converting the preliminary facial image to a single channel;
Equalizing the converted preliminary facial image.
In one embodiment, when implementing the step of performing human eye positioning according to the facial image to obtain the eye image, the processor 502 specifically implements the following steps:
Inserting even rows and even columns into the facial image;
Performing Gaussian convolution on the facial image with the inserted even rows and columns;
Classifying the facial image after Gaussian convolution with a cascade classifier to obtain the eye image.
In one embodiment, when implementing the step of calculating the time scale value shared by eyes closed in the setting time in the eye image, the processor 502 specifically implements the following steps:
Obtaining the human eye areas in the eye image;
Counting the frequency of the human eye area at each stage to obtain the frequency summation of the human eye area at each stage;
Obtaining the ratio of the frequency summation of each stage's human eye area to the frequency summation of all human eye areas, so as to obtain the time scale value shared by eyes closed in the setting time in the eye image.
It should be understood that, in the embodiments of the present application, the processor 502 can be a central processing unit (CPU), and the processor can also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor can be a microprocessor, or the processor can also be any conventional processor or the like.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program. The computer program includes program instructions and can be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to realize the process steps of the above method embodiments.
Therefore, the present invention also provides a storage medium. The storage medium can be a computer-readable storage medium. The storage medium stores a computer program which, when executed by a processor, causes the processor to execute the following steps:
Obtaining a driving image of the driver;
Performing image segmentation processing on the driving image to obtain a facial image;
Performing human eye positioning according to the facial image to obtain an eye image;
Calculating the time scale value shared by eyes closed in the setting time in the eye image;
Obtaining the degree of fatigue of the driving behavior according to the time scale value shared by eyes closed in the setting time.
In one embodiment, when executing the computer program to implement the step of performing image segmentation processing on the driving image to obtain the facial image, the processor specifically implements the following steps:
Judging whether the driving image can be subjected to skin color segmentation;
If so, performing skin color segmentation processing on the driving image to obtain a preliminary facial image;
Performing image preprocessing on the preliminary facial image;
Performing pattern recognition on the preliminary facial image after image preprocessing to obtain the facial image;
If not, performing pattern recognition on the driving image to obtain the facial image.
In one embodiment, when executing the computer program to implement the step of performing skin color segmentation processing on the driving image to obtain the preliminary facial image, the processor specifically implements the following steps:
Performing skin color segmentation with a set pixel chromaticity on the driving image to obtain a broca scale;
Regionalizing the broca scale to form a skin color area;
Performing classification processing on the skin color area with a classifier to obtain the preliminary facial image.
In one embodiment, when executing the computer program to implement the step of performing classification processing on the skin color area with a classifier to obtain the preliminary facial image, the processor specifically implements the following step:
Performing classification processing on the skin color area with a classifier constructed from Haar-like features to obtain the preliminary facial image.
In one embodiment, when executing the computer program to implement the step of performing image preprocessing on the preliminary facial image, the processor specifically implements the following steps:
Converting the preliminary facial image to a single channel;
Equalizing the converted preliminary facial image.
In one embodiment, when executing the computer program to implement the step of performing human eye positioning according to the facial image to obtain the eye image, the processor specifically implements the following steps:
Inserting even rows and even columns into the facial image;
Performing Gaussian convolution on the facial image with the inserted even rows and columns;
Classifying the facial image after Gaussian convolution with a cascade classifier to obtain the eye image.
In one embodiment, when executing the computer program to implement the step of calculating the time scale value shared by eyes closed in the setting time in the eye image, the processor specifically implements the following steps:
Obtaining the human eye areas in the eye image;
Counting the frequency of the human eye area at each stage to obtain the frequency summation of the human eye area at each stage;
Obtaining the ratio of the frequency summation of each stage's human eye area to the frequency summation of all human eye areas, so as to obtain the time scale value shared by eyes closed in the setting time in the eye image.
The storage medium can be any of various computer-readable storage media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
Those of ordinary skill in the art may realize that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described generally by function in the above description. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled technicians can use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed device and method can be realized in other ways. For example, the device embodiments described above are merely exemplary: the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for instance, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
The steps in the embodiments of the present invention can be adjusted in order, merged, and deleted according to actual needs. The units in the devices of the embodiments of the present invention can be combined, divided, and deleted according to actual needs. In addition, the functional units in the embodiments of the present invention can be integrated in one processing unit, or each unit can exist alone physically, or two or more units can be integrated in one unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which can be a personal computer, a terminal, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention.
The above descriptions are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and these modifications or replacements shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A fatigue driving behavior detection method, characterized by comprising:
obtaining a driving image of a driver;
performing image segmentation processing on the driving image to obtain a facial image;
performing human eye positioning according to the facial image to obtain an eye image;
calculating a time scale value shared by eyes closed in a setting time in the eye image;
obtaining a degree of fatigue of driving behavior according to the time scale value shared by eyes closed in the setting time.
2. The fatigue driving behavior detection method according to claim 1, characterized in that the performing image segmentation processing on the driving image to obtain a facial image comprises:
judging whether the driving image can be subjected to skin color segmentation;
if so, performing skin color segmentation processing on the driving image to obtain a preliminary facial image;
performing image preprocessing on the preliminary facial image;
performing pattern recognition on the preliminary facial image after image preprocessing to obtain the facial image;
if not, performing pattern recognition on the driving image to obtain the facial image.
3. The fatigue driving behavior detection method according to claim 2, characterized in that the performing skin color segmentation processing on the driving image to obtain a preliminary facial image comprises:
performing skin color segmentation with a set pixel chromaticity on the driving image to obtain a broca scale;
regionalizing the broca scale to form a skin color area;
performing classification processing on the skin color area with a classifier to obtain the preliminary facial image.
4. The fatigue driving behavior detection method according to claim 3, characterized in that the performing classification processing on the skin color area with a classifier to obtain the preliminary facial image comprises:
performing classification processing on the skin color area with a classifier constructed from Haar-like features to obtain the preliminary facial image.
5. The fatigue driving behavior detection method according to claim 4, characterized in that the performing image preprocessing on the preliminary facial image comprises:
converting the preliminary facial image to a single channel;
equalizing the converted preliminary facial image.
6. The fatigue driving behavior detection method according to any one of claims 1 to 5, characterized in that the performing human eye positioning according to the facial image to obtain an eye image comprises:
inserting even rows and even columns into the facial image;
performing Gaussian convolution on the facial image with the inserted even rows and columns;
classifying the facial image after Gaussian convolution with a cascade classifier to obtain the eye image.
7. The fatigue driving behavior detection method according to claim 6, characterized in that the calculating a time scale value shared by eyes closed in a setting time in the eye image comprises:
obtaining human eye areas in the eye image;
counting the frequency of the human eye area at each stage to obtain a frequency summation of the human eye area at each stage;
obtaining a ratio of the frequency summation of each stage's human eye area to the frequency summation of all human eye areas, so as to obtain the time scale value shared by eyes closed in the setting time in the eye image.
8. A fatigue driving behavior detection device, characterized by comprising:
a driving image acquiring unit, for obtaining a driving image of a driver;
a facial image acquiring unit, for performing image segmentation processing on the driving image to obtain a facial image;
an eye image acquiring unit, for performing human eye positioning according to the facial image to obtain an eye image;
a ratio value computing unit, for calculating a time scale value shared by eyes closed in a setting time in the eye image;
a degree of fatigue acquiring unit, for obtaining a degree of fatigue of driving behavior according to the time scale value shared by eyes closed in the setting time.
9. A computer device, characterized in that the computer device comprises a memory and a processor, a computer program is stored on the memory, and the processor, when executing the computer program, implements the fatigue driving behavior detection method according to any one of claims 1 to 7.
10. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the fatigue driving behavior detection method according to any one of claims 1 to 7.
CN201810974266.4A 2018-08-24 2018-08-24 Fatigue driving behavioral value method, apparatus, computer equipment and storage medium Pending CN109117810A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810974266.4A CN109117810A (en) 2018-08-24 2018-08-24 Fatigue driving behavioral value method, apparatus, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN109117810A true CN109117810A (en) 2019-01-01

Family

ID=64860804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810974266.4A Pending CN109117810A (en) 2018-08-24 2018-08-24 Fatigue driving behavioral value method, apparatus, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109117810A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263730A (en) * 2019-06-24 2019-09-20 北京达佳互联信息技术有限公司 Image-recognizing method, device, electronic equipment and storage medium
CN110633665A (en) * 2019-09-05 2019-12-31 卓尔智联(武汉)研究院有限公司 Recognition method, device and storage medium
CN111209833A (en) * 2019-12-31 2020-05-29 广东科学技术职业学院 Fatigue driving detection method and unmanned driving equipment
CN111645695A (en) * 2020-06-28 2020-09-11 北京百度网讯科技有限公司 Fatigue driving detection method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102028577A (en) * 2010-10-27 2011-04-27 冠捷显示科技(厦门)有限公司 Intelligent eye vision protection system
CN104021370A (en) * 2014-05-16 2014-09-03 浙江传媒学院 Driver state monitoring method based on vision information fusion and driver state monitoring system based on vision information fusion
CN106156780A (en) * 2016-06-29 2016-11-23 南京雅信科技集团有限公司 The method getting rid of wrong report on track in foreign body intrusion identification
CN107169437A (en) * 2017-05-11 2017-09-15 南宁市正祥科技有限公司 The method for detecting fatigue driving of view-based access control model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Qiang: "Research on PERCLOS-Based Train Driver Fatigue Detection", China Master's Theses Full-text Database (Engineering Science & Technology II) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263730A (en) * 2019-06-24 2019-09-20 北京达佳互联信息技术有限公司 Image-recognizing method, device, electronic equipment and storage medium
US11341376B2 (en) 2019-06-24 2022-05-24 Beijing Dajia Internet Information Technology Co., Ltd. Method and device for recognizing image and storage medium
CN110633665A (en) * 2019-09-05 2019-12-31 卓尔智联(武汉)研究院有限公司 Recognition method, device and storage medium
CN110633665B (en) * 2019-09-05 2023-01-10 卓尔智联(武汉)研究院有限公司 Identification method, device and storage medium
CN111209833A (en) * 2019-12-31 2020-05-29 广东科学技术职业学院 Fatigue driving detection method and unmanned driving equipment
CN111209833B (en) * 2019-12-31 2023-06-30 广东科学技术职业学院 Fatigue driving detection method and unmanned equipment
CN111645695A (en) * 2020-06-28 2020-09-11 北京百度网讯科技有限公司 Fatigue driving detection method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107423700B (en) Method and device for person and ID-photo verification
CN110363182B (en) Deep learning-based lane line detection method
CN108090902B (en) Non-reference image quality objective evaluation method based on multi-scale generation countermeasure network
US11176418B2 (en) Model test methods and apparatuses
CN103136504B (en) Face identification method and device
CN109117810A (en) Fatigue driving behavioral value method, apparatus, computer equipment and storage medium
US8320643B2 (en) Face authentication device
CN110349136A (en) Tampered image detection method based on deep learning
CN111507426B (en) Non-reference image quality grading evaluation method and device based on visual fusion characteristics
CN109948616A (en) Image detecting method, device, electronic equipment and computer readable storage medium
CN108876756A (en) Image similarity measurement method and device
CN117011563B (en) Road damage inspection cross-domain detection method and system based on semi-supervised federal learning
US20060257017A1 (en) Classification methods, classifier determination methods, classifiers, classifier determination devices, and articles of manufacture
CN109871845A (en) Certificate image extracting method and terminal device
CN110717554A (en) Image recognition method, electronic device, and storage medium
CN103839033A (en) Face identification method based on fuzzy rule
CN116541545A (en) Method, device, equipment and storage medium for identifying recaptured images
CN114255403A (en) Optical remote sensing image data processing method and system based on deep learning
WO2022156214A1 (en) Liveness detection method and apparatus
CN111582057A (en) Face verification method based on local receptive field
CN110858304A (en) Method and equipment for identifying identity card image
CN114373213A (en) Juvenile identity recognition method and device based on face recognition
Anila et al. An efficient preprocessing technique for face recognition under difficult lighting conditions
CN111062338A (en) Certificate portrait consistency comparison method and system
CN110147824A (en) Automatic image classification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190101