CN107832721A - Method and apparatus for output information - Google Patents
Method and apparatus for output information
- Publication number
- CN107832721A CN107832721A CN201711139967.8A CN201711139967A CN107832721A CN 107832721 A CN107832721 A CN 107832721A CN 201711139967 A CN201711139967 A CN 201711139967A CN 107832721 A CN107832721 A CN 107832721A
- Authority
- CN
- China
- Prior art keywords
- face
- status identifier
- class identifier
- convolutional neural network
- above-mentioned
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
An embodiment of the present application discloses a method and apparatus for outputting information. One embodiment of the method includes: obtaining a preprocessed certificate photograph of a driver and a set of class identifiers; determining a target class identifier based on the certificate photograph and the first vector corresponding to each class identifier; and performing the following processing steps: acquiring a face image of the driver; for each preset facial part identifier, determining whether the facial part in the corresponding part region of the face image is in the state indicated by the status identifier corresponding to that facial part identifier; if the current time has reached a specified time, determining whether the driver is fatigued based on the fatigue threshold corresponding to the target class identifier and the states of the part regions determined within a preset period preceding the current time, and outputting a prompt message if the driver is determined to be fatigued; if the current time has not reached the specified time, continuing the processing steps. This embodiment achieves targeted information output.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, specifically to the field of Internet technology, and more particularly to a method and apparatus for outputting information.
Background technology
A driver in a fatigued state generally cannot concentrate and reacts more slowly when measures must be taken in an emergency, so fatigued drivers are prone to accidents, causing casualties and financial losses. Existing fatigue-driving monitoring methods are typically effective only in preset scenarios, and in actual use it is difficult for them to monitor a driver's fatigue state correctly.
Summary of the invention
The embodiment of the present application proposes the method and apparatus for output information.
In a first aspect, an embodiment of the present application provides a method for outputting information, the method including: obtaining a preprocessed certificate photograph of a driver; obtaining a preset set of class identifiers, where each class identifier has a corresponding fatigue threshold and first vector, and each component of the first vector characterizes the location of a different facial part within face images belonging to the class indicated by that class identifier; determining a target class identifier in the class identifier set based on the certificate photograph and the first vectors corresponding to the class identifiers in the set; and performing the following processing steps: acquiring a face image captured facing the driver; for each facial part identifier in a preset set of facial part identifiers, extracting from the face image a part region containing the facial part indicated by that identifier, where each facial part identifier corresponds to a status identifier in a preset set of status identifiers; analyzing the extracted part region to determine whether the facial part it contains is in the state indicated by the status identifier corresponding to that facial part identifier; if the current time has reached a specified time, determining whether the driver is fatigued based on the fatigue threshold corresponding to the target class identifier and the states of the facial parts in the part regions extracted within a preset period preceding the current time, and, in response to determining that the driver is fatigued, outputting a prompt message; if the current time has not reached the specified time, continuing to perform the processing steps.
In some embodiments, determining a target class identifier in the class identifier set based on the certificate photograph and the first vectors corresponding to the class identifiers includes: analyzing the certificate photograph to generate a second vector, where each component of the second vector characterizes the location of a different facial part within the certificate photograph; computing the similarity between the second vector and the first vector corresponding to each class identifier in the set; and taking the class identifier whose first vector has the highest similarity to the second vector as the target class identifier.
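The similarity comparison above could, for example, use cosine similarity between the second vector and each first vector; the metric itself is an assumption, since the source does not name one, and the vector contents below are illustrative.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two part-location vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pick_target_class(second_vector, first_vectors):
    """first_vectors: class identifier -> first vector; returns the class
    identifier whose first vector is most similar to the second vector."""
    return max(first_vectors,
               key=lambda cid: cosine_similarity(second_vector, first_vectors[cid]))
```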
In some embodiments, each status identifier also has a corresponding score, and determining whether the driver is fatigued based on the fatigue threshold corresponding to the target class identifier and the states of the facial parts in the part regions extracted within the preset period preceding the current time includes: selecting target part regions from the part regions extracted within the preset period preceding the current time, where the facial parts in the target part regions are in states indicated by status identifiers in the status identifier set; summing the scores corresponding to the status identifiers of those states; and determining whether the sum is less than the fatigue threshold corresponding to the target class identifier, and if not, determining that the driver is fatigued.
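The score-summation decision reduces to a few lines; the state names and score values below are illustrative assumptions, not taken from the source.

```python
def is_fatigued(observed_states, state_scores, fatigue_threshold):
    """observed_states: status identifiers recorded for the target part
    regions in the preceding preset period; state_scores: identifier -> score.
    The driver is judged fatigued when the sum is not less than the threshold."""
    total = sum(state_scores[s] for s in observed_states)
    return total >= fatigue_threshold
```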
In some embodiments, each class identifier also has a corresponding group of pre-trained convolutional neural networks. Each network in the group corresponds to a different status identifier in the status identifier set and has a corresponding probability threshold. Any such convolutional neural network characterizes the correspondence between a part region and the probability that the facial part it contains is in the state indicated by the status identifier corresponding to that network.
In some embodiments, analyzing the extracted part region to determine whether the facial part it contains is in the state indicated by the corresponding status identifier includes: inputting the part region into a target convolutional neural network to obtain the probability that the facial part in the region is in the state indicated by the status identifier corresponding to that network, where the target network is the network, within the group corresponding to the target class identifier, that corresponds to the status identifier of the facial part identifier; and determining whether the obtained probability is less than the probability threshold of the target network, and if not, determining that the facial part is in the state indicated by the corresponding status identifier.
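The threshold test on the network's output probability is a single comparison; in this sketch `cnn` is any callable standing in for the target convolutional neural network, and the probability values are illustrative.

```python
def state_decision(region, cnn, probability_threshold):
    """cnn: a callable returning the probability that the facial part shown
    in `region` is in the state the network was trained for (a stand-in for
    the target convolutional neural network described above). The part is
    judged to be in the state when the probability is not less than the
    network's probability threshold."""
    probability = cnn(region)
    return probability >= probability_threshold
```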
In some embodiments, for the convolutional neural network corresponding to any status identifier within the group corresponding to any class identifier, the network is trained by the following steps: obtaining preset training samples corresponding to both the class identifier and the status identifier, where each training sample includes a sample image showing the facial part indicated by the facial part identifier corresponding to the status identifier, together with an annotation of the sample image containing a data label indicating whether the facial part in the image is in the state indicated by the status identifier; and training the convolutional neural network by machine learning, based on the sample images, the data labels, a preset classification loss function, and the back-propagation algorithm.
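The source trains a convolutional neural network with a classification loss and back-propagation. As a compact runnable stand-in, and with the convolutional layers assumed away, the sketch below fits a logistic-regression classifier on flattened sample images by gradient descent on the cross-entropy loss, which is the same loss-plus-gradient recipe in its simplest form; data, learning rate, and epoch count are all illustrative.

```python
import math

def train_classifier(samples, labels, lr=0.5, epochs=200):
    """samples: flattened sample images (lists of floats); labels: 1 if the
    facial part is in the state, else 0. Minimises cross-entropy by
    stochastic gradient descent."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the cross-entropy loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, x):
    """Probability that the facial part in `x` is in the trained-for state."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```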
In some embodiments, for any class identifier, the sample images in the training samples corresponding to both that class identifier and a status identifier in the status identifier set are generated by the following steps: obtaining the preset driver identifier set corresponding to the class identifier, where the driver identifier set has a corresponding group of video-label pairs, each pair corresponding to a different status identifier in the status identifier set; each pair includes a first video label and a second video label, where the images in the video indicated by the first video label show the facial part indicated by the facial part identifier corresponding to the status identifier in the state indicated by that status identifier, and the images in the video indicated by the second video label show the facial part not in that state; and, for each status identifier in the status identifier set, extracting, from the images of the videos indicated by the first and second video labels corresponding to the status identifier, part regions containing the facial part indicated by the corresponding facial part identifier, and generating sample images from the extracted part regions.
In some embodiments, the driver identifier set belongs to a preset group of driver identifier sets, which is generated by the following steps: obtaining a set of preprocessed certificate photographs, where each certificate photograph has a corresponding driver identifier; analyzing each certificate photograph in the set to generate a corresponding feature vector, where each component of the feature vector characterizes the location of a different facial part within that certificate photograph; clustering the certificate photographs based on the generated feature vectors to obtain at least one cluster, and setting a class identifier for each cluster; grouping the driver identifiers of the certificate photographs in each cluster into one driver identifier set, and establishing the correspondence between the cluster's class identifier and that set; and combining the resulting driver identifier sets into the group of driver identifier sets.
In some embodiments, the first vector corresponding to a class identifier is generated by the following steps: for the cluster corresponding to the class identifier, averaging the values of corresponding components of the feature vectors of the certificate photographs in the cluster, and generating the first vector of the class identifier from the resulting averages.
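The clustering and averaging steps can be sketched as one cluster-assignment step plus the component-wise mean that yields a first vector; a full clustering method (k-means, for instance) would alternate these two steps until assignments stabilise. Data shapes and the squared-distance metric are assumptions.

```python
def assign_clusters(vectors, centroids):
    """Assign each feature vector to its nearest centroid (squared distance)."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return [min(range(len(centroids)), key=lambda k: d2(v, centroids[k]))
            for v in vectors]

def mean_vector(vectors):
    """Component-wise mean of the feature vectors in one cluster: the first
    vector generated for that cluster's class identifier."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]
```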
In a second aspect, an embodiment of the present application provides an apparatus for outputting information, the apparatus including: a first obtaining unit, configured to obtain a preprocessed certificate photograph of a driver; a second obtaining unit, configured to obtain a preset set of class identifiers, where each class identifier has a corresponding fatigue threshold and first vector, and each component of the first vector characterizes the location of a different facial part within face images belonging to the class indicated by that class identifier; a determining unit, configured to determine a target class identifier in the class identifier set based on the certificate photograph and the first vectors corresponding to the class identifiers in the set; a first processing unit, configured to perform the following processing steps: acquiring a face image captured facing the driver; for each facial part identifier in a preset set of facial part identifiers, extracting from the face image a part region containing the facial part indicated by that identifier, where each facial part identifier corresponds to a status identifier in a preset set of status identifiers; analyzing the extracted part region to determine whether the facial part it contains is in the state indicated by the corresponding status identifier; if the current time has reached a specified time, determining whether the driver is fatigued based on the fatigue threshold corresponding to the target class identifier and the states of the facial parts in the part regions extracted within a preset period preceding the current time, and, in response to determining that the driver is fatigued, outputting a prompt message; and a second processing unit, configured to continue performing the processing steps if the current time has not reached the specified time.
In some embodiments, the determining unit includes: a generating subunit, configured to analyze the certificate photograph and generate a second vector, where each component of the second vector characterizes the location of a different facial part within the certificate photograph; and a first determining subunit, configured to compute the similarity between the second vector and the first vector corresponding to each class identifier in the set, and to take the class identifier whose first vector has the highest similarity to the second vector as the target class identifier.
In some embodiments, each status identifier also has a corresponding score, and the first processing unit includes: a calculating subunit, configured to select target part regions from the part regions extracted within the preset period preceding the current time, where the facial parts in the target part regions are in states indicated by status identifiers in the status identifier set, and to sum the scores corresponding to the status identifiers of those states; and a second determining subunit, configured to determine whether the sum is less than the fatigue threshold corresponding to the target class identifier, and if not, to determine that the driver is fatigued.
In some embodiments, each class identifier also has a corresponding group of pre-trained convolutional neural networks. Each network in the group corresponds to a different status identifier in the status identifier set and has a corresponding probability threshold. Any such convolutional neural network characterizes the correspondence between a part region and the probability that the facial part it contains is in the state indicated by the status identifier corresponding to that network.
In some embodiments, the first processing unit includes: an input subunit, configured to input the part region into a target convolutional neural network and obtain the probability that the facial part in the region is in the state indicated by the status identifier corresponding to that network, where the target network is the network, within the group corresponding to the target class identifier, that corresponds to the status identifier of the facial part identifier; and a third determining subunit, configured to determine whether the obtained probability is less than the probability threshold of the target network, and if not, to determine that the facial part is in the state indicated by the corresponding status identifier.
In some embodiments, for the convolutional neural network corresponding to any status identifier within the group corresponding to any class identifier, the network is trained by the following steps: obtaining preset training samples corresponding to both the class identifier and the status identifier, where each training sample includes a sample image showing the facial part indicated by the facial part identifier corresponding to the status identifier, together with an annotation of the sample image containing a data label indicating whether the facial part in the image is in the state indicated by the status identifier; and training the convolutional neural network by machine learning, based on the sample images, the data labels, a preset classification loss function, and the back-propagation algorithm.
In some embodiments, for any class identifier, the sample images in the training samples corresponding to both that class identifier and a status identifier in the status identifier set are generated by the following steps: obtaining the preset driver identifier set corresponding to the class identifier, where the driver identifier set has a corresponding group of video-label pairs, each pair corresponding to a different status identifier in the status identifier set; each pair includes a first video label and a second video label, where the images in the video indicated by the first video label show the facial part indicated by the facial part identifier corresponding to the status identifier in the state indicated by that status identifier, and the images in the video indicated by the second video label show the facial part not in that state; and, for each status identifier in the status identifier set, extracting, from the images of the videos indicated by the first and second video labels corresponding to the status identifier, part regions containing the facial part indicated by the corresponding facial part identifier, and generating sample images from the extracted part regions.
In some embodiments, the driver identifier set belongs to a preset group of driver identifier sets, which is generated by the following steps: obtaining a set of preprocessed certificate photographs, where each certificate photograph has a corresponding driver identifier; analyzing each certificate photograph in the set to generate a corresponding feature vector, where each component of the feature vector characterizes the location of a different facial part within that certificate photograph; clustering the certificate photographs based on the generated feature vectors to obtain at least one cluster, and setting a class identifier for each cluster; grouping the driver identifiers of the certificate photographs in each cluster into one driver identifier set, and establishing the correspondence between the cluster's class identifier and that set; and combining the resulting driver identifier sets into the group of driver identifier sets.
In some embodiments, the first vector corresponding to a class identifier is generated by the following steps: for the cluster corresponding to the class identifier, averaging the values of corresponding components of the feature vectors of the certificate photographs in the cluster, and generating the first vector of the class identifier from the resulting averages.
In a third aspect, an embodiment of the present application provides an electronic device including: one or more processors; and a storage device for storing one or more programs. When the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in any implementation of the first aspect.
In the method and apparatus for outputting information provided by the embodiments of the present application, a preprocessed certificate photograph of a driver and a preset set of class identifiers are obtained, and a target class identifier is determined in the set based on the certificate photograph and the first vectors corresponding to the class identifiers. The following processing steps are then performed so that a prompt message is output when the driver is detected to be fatigued: acquiring a face image captured facing the driver; for each facial part identifier in a preset set of facial part identifiers, extracting from the face image a part region containing the facial part indicated by that identifier, where each facial part identifier corresponds to a status identifier in a preset set of status identifiers; analyzing the extracted part region to determine whether the facial part it contains is in the state indicated by the corresponding status identifier; if the current time has reached a specified time, determining whether the driver is fatigued based on the fatigue threshold corresponding to the target class identifier and the states of the facial parts in the part regions extracted within a preset period preceding the current time, and outputting a prompt message in response to determining that the driver is fatigued; and, if the current time has not reached the specified time, continuing the processing steps. By making effective use of the determination of the target class identifier and of the states of the facial parts in the part regions extracted within the preset period, the embodiments detect whether the driver is fatigued and output a prompt message when fatigue is detected, thereby achieving targeted information output.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for outputting information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to the present application;
Fig. 4 is a structural diagram of one embodiment of the apparatus for outputting information according to the present application;
Fig. 5 is a structural diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention and do not limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, where no conflict arises, the embodiments in the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method or apparatus for outputting information of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include an image acquisition device 101, a network 102, and an information processing device 103. The network 102 provides the medium of a communication link between the image acquisition device 101 and the information processing device 103, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
The image acquisition device 101 may be a device for capturing images, such as a camera. It may be mounted in the vehicle driven by the driver and positioned in front of the driver. The image acquisition device 101 may capture, in real time and facing the driver, a face image containing the driver's face, and send the face image to the information processing device 103.
The information processing device 103 may be an in-vehicle terminal that receives the face image sent by the image acquisition device 101 and processes it, for example by analysis.
It should be noted that the method for outputting information provided by the embodiments of the present application is generally performed by the information processing device 103, and accordingly the apparatus for outputting information is generally disposed in the information processing device 103.
It should be understood that the numbers of image acquisition devices, networks, and information processing devices in Fig. 1 are merely illustrative; there may be any number of each, as required by the implementation.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for outputting information according to the present application is shown. The flow 200 of the method for outputting information includes the following steps:
Step 201: obtain a preprocessed certificate photograph of the driver.
In this embodiment, the electronic device on which the method for outputting information runs (for example, the information processing device 103 shown in Fig. 1) may obtain the preprocessed certificate photograph of the driver from a connected server via a wired or wireless connection. Of course, the electronic device may also obtain the certificate photograph locally; this embodiment places no restriction on this.
Here, above-mentioned certificate photograph can be the photo that the certificates such as identity card or the driver's license of above-mentioned driver use.Separately
Outside, pretreatment can include at least one of following:Cut according to pre-set dimension, noise reduction, binaryzation.Wherein, the two of image
Value, the gray value of the pixel on image is exactly arranged to 0 or 255, that is, whole image is showed obvious
There is black and white visual effect.
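The binarization described above can be sketched as follows; the threshold value of 128 is an illustrative assumption, as the patent does not specify one.

```python
import numpy as np

def binarize(gray_image, threshold=128):
    """Set each pixel's gray value to 0 or 255 depending on a gray-level
    threshold, giving the whole image the black-and-white appearance
    described above. The threshold of 128 is an assumed example value."""
    img = np.asarray(gray_image)
    return np.where(img >= threshold, 255, 0).astype(np.uint8)
```

A certificate photograph converted to grayscale could be passed through this function as the final preprocessing step, after cropping and noise reduction.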
In some optional implementations of the present embodiment, the electronic device may obtain the certificate photograph before the vehicle driven by the driver is started.
Step 202: obtain a preset category identifier set.

In the present embodiment, the electronic device may obtain the preset category identifier set locally or from the server. A category identifier is an identifier of a category to which images containing faces may belong. Each category identifier may have a corresponding fatigue threshold and a corresponding first vector. For the first vector corresponding to any category identifier, each component of that first vector may characterize the position, within an image containing a face of the category indicated by the identifier, of a different face part (e.g., left eyebrow, right eyebrow, left eye, right eye, nose, mouth, face contour).
Step 203: determine a target category identifier in the category identifier set based on the certificate photograph and the first vectors corresponding to the category identifiers in the set.

In the present embodiment, the electronic device may determine the target category identifier in the category identifier set based on the certificate photograph and the first vectors corresponding to the category identifiers in the set. As an example, the certificate photograph may have a corresponding photograph identifier, and the electronic device may locally store in advance a correspondence table characterizing the relationship between photograph identifiers and first vectors. The electronic device may look up, in that table, the first vector associated with the photograph identifier of the certificate photograph, and take the category identifier corresponding to the found first vector as the target category identifier.
In some optional implementations of the present embodiment, the electronic device may first analyze the certificate photograph and generate a second vector, each component of which characterizes the position of a different face part within the certificate photograph. The electronic device may then determine the similarity between the second vector and the first vector corresponding to each category identifier in the category identifier set, and take the category identifier corresponding to the first vector with the highest similarity to the second vector as the target category identifier. Here, the second vector may have the same dimensionality as the first vectors, and a component of the second vector may correspond to the same face part as the component at the same position in any first vector.
It should be pointed out that the electronic device may use a preset face detection algorithm (e.g., an Adaboost algorithm based on Haar features) to detect the face region in the certificate photograph, then locate the eyes, eyebrows, nose, mouth, and face contour within that region, and then generate the second vector from the localization results. Here, Haar features may also be called rectangular features. Adaboost is an iterative algorithm, and implementations of it operate on the rectangular features of the input image; since Adaboost is by now a widely studied and applied known technique, it is not described further here.

In addition, the electronic device may use any algorithm for computing the similarity between vectors (e.g., cosine similarity or Euclidean distance) to determine the similarity between the second vector and any first vector.
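The similarity-based selection of the target category identifier can be sketched with cosine similarity, one of the measures mentioned above; the function and identifier names are illustrative, not from the patent.

```python
import numpy as np

def pick_target_category(second_vector, first_vectors):
    """Return the category identifier whose first vector has the highest
    cosine similarity to the second vector derived from the certificate
    photograph. `first_vectors` maps category identifier -> first vector;
    all vectors are assumed to share the same dimensionality."""
    v = np.asarray(second_vector, dtype=float)
    best_id, best_sim = None, -2.0  # cosine similarity is always >= -1
    for cat_id, fv in first_vectors.items():
        f = np.asarray(fv, dtype=float)
        sim = float(v @ f / (np.linalg.norm(v) * np.linalg.norm(f)))
        if sim > best_sim:
            best_id, best_sim = cat_id, sim
    return best_id
```

Swapping in Euclidean distance would only require replacing the similarity expression and inverting the comparison.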
Step 204: perform the processing steps.

In the present embodiment, after determining the target category identifier, the electronic device may perform the processing steps, which may comprise steps 2041-2045:

Step 2041: obtain a face image, containing a face, captured facing the driver.

Step 2042: for each face-part identifier in a preset face-part identifier set, extract from the face image a face-part region containing the face part indicated by that identifier.

Step 2043: analyze each extracted face-part region and determine whether the face part it contains is in the state indicated by the state identifier corresponding to its face-part identifier.

Step 2044: if the current moment reaches a specified time, determine whether the driver is in a fatigue state based on the fatigue threshold corresponding to the target category identifier and the states of the face parts contained in the face-part regions extracted during the preset time period (e.g., the previous minute) preceding the current moment.

Step 2045: in response to determining that the driver is in a fatigue state, output prompt information.
In step 2041, the electronic device may obtain face images in real time from a connected image acquisition device (e.g., the image acquisition device 101 shown in Fig. 1), which may capture images facing the driver in real time while the vehicle is travelling.

In step 2042, the face part indicated by a face-part identifier may be the eyes, the mouth, or the face. In addition, each face-part identifier may correspond to one or more state identifiers in a preset state identifier set, where a state identifier may indicate one of the following states: eyes closed, mouth open, head raised, head lowered. The state identifier corresponding to the face-part identifier whose indicated part is the eyes may indicate eyes closed; the state identifier corresponding to the face-part identifier whose indicated part is the mouth may indicate mouth open; and the face-part identifier whose indicated part is the face may correspond to at least one state identifier, which may include a state identifier indicating head lowered and/or a state identifier indicating head raised.

It should be pointed out that the electronic device may use the face detection algorithm described above to first detect the face region in the face image, then locate the eyes, mouth, and face contour within that region, and then extract face-part regions from the face region based on the localization results, e.g., a face-part region containing the eyes, a face-part region containing the mouth, and a face-part region containing the face.
In step 2043, the electronic device may analyze each extracted face-part region and determine whether the face part it contains is in the state indicated by the associated state identifier (i.e., the state identifier corresponding to the face-part identifier of that face part). As an example, the electronic device may locally store in advance a correspondence table characterizing the relationship between face-part images and state identifiers, where the face part shown in each face-part image is the part indicated by the corresponding face-part identifier. The electronic device may look up in that table a target face-part image matching the face-part region, and determine the state indicated by the state identifier corresponding to the target face-part image as the state of the face part contained in the region.
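The table lookup in step 2043 can be sketched as nearest-reference matching; the distance measure (mean absolute pixel difference) and the table layout are assumptions for illustration, since the patent does not specify how "matching" is computed.

```python
import numpy as np

def match_state(region, reference_table):
    """Find the reference face-part image closest to `region` (here by
    mean absolute pixel difference, an illustrative choice) and return
    the state identifier stored for it, mirroring the correspondence
    table of step 2043. `reference_table` is a list of
    (reference_image, state_identifier) pairs of equal-sized images."""
    r = np.asarray(region, dtype=float)
    best_state, best_dist = None, float("inf")
    for ref_image, state_id in reference_table:
        d = float(np.abs(r - np.asarray(ref_image, dtype=float)).mean())
        if d < best_dist:
            best_state, best_dist = state_id, d
    return best_state
```

In practice the regions would first be normalized to a common size so that the pixel-wise comparison is meaningful.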
In step 2044, the length of the preset time period may be, for example, one minute, and the electronic device may, for example, determine once every minute whether the driver is in a fatigue state. The specified time may be the starting moment of the time period following the current one. It should be pointed out that the fatigue threshold may be a count threshold. The electronic device may compare, against this count threshold, the number of times during the preset time period that the driver exhibited a state indicated by a state identifier in the state identifier set; if that number is not less than the count threshold, the electronic device may determine that the driver is in a fatigue state. Here, the electronic device may select target face-part regions from the face-part regions extracted during the preset time period and take the total number of target face-part regions as that count, where a target face-part region is one whose contained face part is in a state indicated by a state identifier in the state identifier set.
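The count variant of step 2044 reduces to a simple comparison; a minimal sketch, with hypothetical names:

```python
def fatigued_by_count(state_flags, count_threshold):
    """Step 2044, count variant: `state_flags` holds, for each face-part
    region extracted during the preceding preset period, whether its face
    part was in one of the states from the state identifier set (eyes
    closed, mouth open, head lowered, ...). The driver is judged fatigued
    when the number of such target regions is not less than the fatigue
    threshold, interpreted here as a count threshold."""
    return sum(1 for flag in state_flags if flag) >= count_threshold
```

With a one-minute window and one face image per second, `state_flags` would hold a few hundred entries (several face parts per frame).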
In step 2045, the prompt information output by the electronic device may be voice prompt information or text prompt information; the present embodiment places no restriction on this. By outputting the prompt information, the electronic device can remind the driver to drive carefully, helping the driver avoid accidents and other dangerous situations as far as possible.

In some optional implementations of the present embodiment, after performing step 2045, the electronic device may return to step 2041 and continue the subsequent flow.

In some optional implementations of the present embodiment, after the vehicle driven by the driver stops, the electronic device may stop performing the processing steps, i.e., end the information processing flow.
In some optional implementations of the present embodiment, each state identifier may also have a corresponding score, and the fatigue threshold may be a sum threshold. The electronic device may select the target face-part regions from the face-part regions extracted during the preset time period and compute the sum of the scores of the state identifiers corresponding to the states of the face parts contained in those target regions. The electronic device may then determine whether that sum is less than the fatigue threshold corresponding to the target category identifier; if not, the electronic device may determine that the driver is in a fatigue state. Here, the score corresponding to a state identifier may be set according to how dangerous the state is for the driver. For example, eyes closed and head lowered are more dangerous, while mouth open (which may indicate that the driver is yawning) is somewhat less so, so the scores corresponding to the eyes-closed and head-lowered state identifiers may each be 3, and the score corresponding to the mouth-open state identifier may be 2. Of course, the scores corresponding to state identifiers may be adjusted as actually needed; the present embodiment places no restriction on this.
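The score variant can be sketched as follows, using the example score values above (3 for eyes closed or head lowered, 2 for mouth open); the names are illustrative.

```python
def fatigued_by_score(observed_states, state_scores, score_threshold):
    """Score variant of step 2044: each state identifier carries a score
    reflecting how dangerous the state is; the driver is judged fatigued
    when the summed scores of the states observed in the target face-part
    regions are not less than the fatigue threshold, interpreted here as
    a sum threshold."""
    total = sum(state_scores[s] for s in observed_states)
    return total >= score_threshold
```

The sum threshold itself would come from the target category identifier determined in step 203.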
In addition, the fatigue threshold corresponding to a category identifier may also be set based on the fatigue patterns, over a certain period, of a large number of drivers whose driver identifiers are associated with that category identifier.
In some optional implementations of the present embodiment, each category identifier may also have a corresponding pre-trained group of convolutional neural networks, each network in the group corresponding to a different state identifier in the state identifier set. Each convolutional neural network may also have a corresponding probability threshold. Any one of these convolutional neural networks may be used to characterize the correspondence between a face-part region and the probability that the face part it contains is in the state indicated by the state identifier corresponding to that network. It should be pointed out that the probability threshold corresponding to a convolutional neural network may be set, during training, according to the probabilities the network outputs when predicting on negative sample images (sample images whose contained face part is not in the state indicated by the state identifier corresponding to the network).

In practice, a convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within part of their coverage area, and which performs outstandingly on image processing. It should be noted that any convolutional neural network in any of these groups may be obtained by supervised training of an existing deep convolutional neural network (e.g., DenseBox, VGGNet, ResNet, SegNet) using machine learning methods and training samples.
It should be pointed out that any convolutional neural network in any of these groups may include at least one convolutional layer, at least one max pooling layer, at least one dropout layer, and at least one fully connected layer. For example, such a network may include 3 convolutional layers, 3 max pooling layers, 3 dropout layers, and 2 fully connected layers, where one of the two fully connected layers is used to output probability values and may be provided with a classification function (e.g., a Softmax classification function). Here, the dropout layers may serve to prevent overfitting of the network. It should be noted that the 3 convolutional layers may consist of a first convolutional layer with 3 × 3 kernels and depth 32, a second convolutional layer with 3 × 3 kernels and depth 64, and a third convolutional layer with 3 × 3 kernels and depth 64; each max pooling layer may be a 2 × 2 max pooling layer; each convolutional layer may be followed by one of the 3 max pooling layers, and each max pooling layer may be followed by one of the 3 dropout layers.
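The example layer stack above can be traced shape by shape. Assuming 'same' padding for the convolutions (the patent does not state the padding), the feature-map sizes evolve as this sketch computes:

```python
def trace_shapes(height, width):
    """Trace feature-map shapes through the example stack above: three
    3x3 convolutions of depth 32, 64 and 64, each followed by a 2x2 max
    pooling layer (dropout layers do not change shape). 'Same' padding
    for the convolutions is an assumption, not stated in the patent."""
    shapes = []
    h, w = height, width
    for depth in (32, 64, 64):
        # a 3x3 conv with 'same' padding keeps h x w and sets the depth;
        # the following 2x2 max pooling halves each spatial dimension
        h, w = h // 2, w // 2
        shapes.append((h, w, depth))
    return shapes
```

For a 64 × 64 face-part region this yields 32 × 32 × 32, then 16 × 16 × 64, then 8 × 8 × 64 before the fully connected layers.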
In some optional implementations of the present embodiment, the electronic device may input a face-part region extracted from the face image into a target convolutional neural network to obtain the probability that the face part contained in the region is in the state indicated by the state identifier corresponding to that network. Here, the target convolutional neural network may be the network, within the group corresponding to the target category identifier, that corresponds to the state identifier associated with the face-part identifier of that face part. The electronic device may determine whether the obtained probability is less than the probability threshold corresponding to the target network; if not, the electronic device may determine that the face part contained in the region is in the state indicated by that state identifier.
In some optional implementations of the present embodiment, for the convolutional neural network corresponding to any state identifier in the group corresponding to any category identifier, that network may be obtained by the electronic device or the server performing the following training step: obtain the preset training samples corresponding to both the category identifier and the state identifier, where a training sample may include a sample image showing the face part indicated by the face-part identifier corresponding to the state identifier, together with a label for the sample image; the label may include a data mark indicating whether the face part in the sample image is in the state indicated by the state identifier (e.g., the digit 0 or 1, where 0 may mean the face part is not in the state and 1 may mean it is); then, using machine learning methods, train the convolutional neural network based on the sample images, the data marks, a preset classification loss function, and the backpropagation algorithm. The classification loss function may be any loss function used for classification (e.g., the Hinge loss or the Softmax loss). During training, the classification loss function constrains how and in which direction the convolution kernels are modified, the goal of training being to minimize the value of the classification loss function.
It should be noted that the backpropagation algorithm (Back Propagation Algorithm, BP algorithm) may also be called the error backpropagation (Error Back Propagation, BP) algorithm. The learning process of the BP algorithm consists of two phases: forward propagation of the signal and backward propagation of the error. In a feed-forward network, the input signal enters through the input layer, is computed through the hidden layers, and is output by the output layer; the output value is compared with the label value, and if there is an error, the error is propagated backwards from the output layer towards the input layer. During this process, a gradient descent algorithm may be used to adjust the neuron weights (e.g., the parameters of the convolution kernels in the convolutional layers). Here, the classification loss function may be used to characterize the error between the output value and the label value.
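The forward-pass / backward-pass / gradient-descent cycle described above can be illustrated on a deliberately tiny stand-in: a single logistic neuron with a cross-entropy-style loss. This is a toy sketch of the BP procedure, not the patent's CNN.

```python
import math

def train_step(weight, bias, x, label, lr=0.1):
    """One forward/backward pass on a single logistic neuron: the forward
    pass computes the output, the loss gradient is propagated back, and
    gradient descent adjusts the parameters, mirroring the BP scheme
    described above on a minimal model."""
    # forward propagation
    z = weight * x + bias
    p = 1.0 / (1.0 + math.exp(-z))
    # backward propagation: for the cross-entropy loss, dL/dz = p - label
    grad = p - label
    # gradient descent update of the weights
    return weight - lr * grad * x, bias - lr * grad
```

Repeating the step drives the output toward the label, which is the "minimize the classification loss" goal stated above in miniature.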
In some optional implementations of the present embodiment, for any category identifier, the sample images in the training samples corresponding to both that category identifier and a state identifier in the state identifier set may be generated by the electronic device or the server performing the following steps. Obtain the preset driver identifier set corresponding to the category identifier, where each driver identifier may have a corresponding group of video-identifier pairs, each pair in the group corresponding to a different state identifier in the state identifier set. Each pair may include a first video identifier and a second video identifier, where the images in the video indicated by the first identifier may show the face part, indicated by the face-part identifier corresponding to the pair's state identifier, in the state indicated by that state identifier, and the images in the video indicated by the second identifier may show that face part not in that state. Then, for each state identifier in the state identifier set, extract from the images of the videos indicated by the corresponding first and second video identifiers the face-part regions containing the face part indicated by the face-part identifier corresponding to that state identifier, and generate the sample images based on the extracted face-part regions.

Here, a face-part region may serve as a face-part image. To increase the number of sample images, at least one face-part image may be processed (e.g., its brightness adjusted) to generate new face-part images. The electronic device or the server may use both the face-part regions and the new face-part images as sample images.

It should be pointed out that the video indicated by either video identifier of any pair may be a video whose frame count exceeds a preset value (e.g., 50000).
In some optional implementations of the present embodiment, the driver identifier set may belong to a preset group of driver identifier sets, which may be generated by the electronic device or the server performing the following steps.

First, obtain a set of preprocessed certificate photographs, where each certificate photograph may have a corresponding driver identifier. The preprocessing may include at least one of the following: cropping to the preset size, noise reduction, and binarization. Here, a certificate photograph may be a photograph used on a certificate such as an identity card or a driver's license. Certificate photographs are used because of their higher quality: they are relatively standardized, the face parts are clear, and all parts are in a normal state when the photograph is taken.
Next, analyze each certificate photograph in the set and generate a feature vector corresponding to it, each component of which may characterize the position of a different face part within that certificate photograph. For the method of generating this feature vector, refer to the method of generating the second vector described above; it is not repeated here.
Then, based on the generated feature vectors, cluster the certificate photographs in the set to obtain at least one class cluster, and set a category identifier for each class cluster. Here, the electronic device or the server may use any clustering method, such as agglomerative hierarchical clustering (AHC) or K-means, to cluster the certificate photographs. AHC is a hierarchical clustering method whose basic idea is to treat each individual item as its own class and then merge classes by various methods, so that the number of classes gradually decreases until a single class, or the required number of classes, remains. The K-means algorithm is a hard clustering algorithm and a typical representative of prototype-based objective-function clustering methods: it takes a certain distance from the data points to the prototypes as the objective function to be optimized, and derives the update rules of the iterative computation by seeking the function's extremum. It should be pointed out that the electronic device or the server may be provided in advance with a thread, possessing the necessary software and hardware resources, for generating class-cluster category identifiers; the electronic device or the server may use this thread to generate a category identifier and set it as the category identifier of a class cluster.
Then, group the driver identifiers corresponding to the certificate photographs in the same class cluster into the same driver identifier set, and establish the correspondence between the category identifier of that class cluster and the driver identifier set. As an example, the electronic device or the server may generate an identifier using a preset identifier generation algorithm and set it as the identifier of the driver identifier set, and may characterize the correspondence between the category identifier and the driver identifier set with information containing both the set's identifier and the category identifier.

Finally, combine the resulting driver identifier sets into the group of driver identifier sets.
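The clustering step above can be sketched with a minimal K-means over the certificate-photo feature vectors; initialization, iteration count, and the two-cluster toy data are illustrative choices, and a production system could equally use AHC as the text notes.

```python
import numpy as np

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal K-means over certificate-photo feature vectors: photos
    whose vectors land in the same cluster form one class cluster, and
    the driver identifiers of a cluster's photos then make up one
    driver identifier set."""
    x = np.asarray(vectors, dtype=float)
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest cluster centre
        dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        # recompute each centre as the mean of its assigned vectors
        centers = np.array([x[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels
```

Each returned label indexes a class cluster, for which a category identifier would then be generated.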
In some optional implementations of the present embodiment, the first vector corresponding to a category identifier may be generated by the following step: for the class cluster corresponding to the category identifier, average the values of the components at each position across the feature vectors corresponding to the certificate photographs in the cluster, and generate the first vector for the category identifier from the resulting averages, where each average occupies the same position in the first vector as the components from which it was computed occupy in the feature vectors.
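The position-by-position averaging described above is a component-wise mean; a minimal sketch:

```python
import numpy as np

def first_vector_for_cluster(cluster_feature_vectors):
    """Generate a category's first vector by averaging, position by
    position, the feature vectors of all certificate photographs in the
    class cluster, as described above."""
    return np.asarray(cluster_feature_vectors, dtype=float).mean(axis=0)
```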
Step 205: if the current moment has not reached the specified time, continue performing the processing steps.

In the present embodiment, if the current moment has not reached the specified time, the electronic device may continue performing the processing steps described above.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to the present embodiment. In the application scenario of Fig. 3, the length of the preset time period may be one minute. As indicated by reference numeral 301, the information processing device may locally obtain a preprocessed certificate photograph of driver A. Then, as indicated by reference numeral 302, the information processing device may locally obtain a preset category identifier set, where each category identifier has a corresponding fatigue threshold and first vector. Next, as indicated by reference numeral 303, the information processing device may determine a target category identifier B in the category identifier set based on the certificate photograph and the first vectors corresponding to the category identifiers in the set. Then, as indicated by reference numeral 304, the information processing device may perform the processing steps to determine whether driver A is in a fatigue state during the preset time period, and output prompt information upon determining that driver A is in a fatigue state during that period. The processing steps may include: obtaining a face image, containing a face, captured facing driver A by the connected image acquisition device; for each face-part identifier in a preset face-part identifier set, extracting from the face image a face-part region containing the face part indicated by that identifier, where the face-part identifier corresponds to a state identifier in a preset state identifier set; analyzing each extracted face-part region to determine whether the face part it contains is in the state indicated by the corresponding state identifier; if the current moment reaches the specified time, determining whether driver A is in a fatigue state based on the fatigue threshold corresponding to target category identifier B and the states of the face parts contained in the face-part regions extracted during the preset time period (i.e., the previous minute) preceding the current moment; and, in response to determining that driver A is in a fatigue state, outputting prompt information. As indicated by reference numeral 305, if the current moment has not reached the specified time, the information processing device may continue performing the processing steps.
The method provided by the above embodiment of the present application makes effective use of the determination of the target category identifier and of the states of the face parts contained in the face-part regions extracted during the preset time period, thereby detecting whether the driver is in a fatigue state and outputting prompt information when the driver is detected to be fatigued, enabling highly targeted information output.
With further reference to Fig. 4, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for outputting information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied in various electronic devices.
As shown in Fig. 4, the apparatus 400 for outputting information of the present embodiment includes a first acquisition unit 401, a second acquisition unit 402, a determination unit 403, a first processing unit 404, and a second processing unit 405. The first acquisition unit 401 is configured to obtain a preprocessed certificate photograph of the driver. The second acquisition unit 402 is configured to obtain a preset category identifier set, where each category identifier has a corresponding fatigue threshold and first vector, each component of the first vector characterizing the position of a different face part within an image containing a face of the category indicated by the category identifier. The determination unit 403 is configured to determine a target category identifier in the category identifier set based on the certificate photograph and the first vectors corresponding to the category identifiers in the set. The first processing unit 404 is configured to perform the following processing steps: obtain a face image, containing a face, captured facing the driver; for each face-part identifier in a preset face-part identifier set, extract from the face image a face-part region containing the face part indicated by that identifier, where the face-part identifier corresponds to a state identifier in a preset state identifier set; analyze each extracted face-part region to determine whether the face part it contains is in the state indicated by the state identifier corresponding to its face-part identifier; if the current moment reaches a specified time, determine whether the driver is in a fatigue state based on the fatigue threshold corresponding to the target category identifier and the states of the face parts contained in the face-part regions extracted during the preset time period preceding the current moment; and, in response to determining that the driver is in a fatigue state, output prompt information. The second processing unit 405 is configured to continue performing the processing steps if the current moment has not reached the specified time.
In this embodiment, for the specific processing of the first acquisition unit 401, the second acquisition unit 402, the determining unit 403, the first processing unit 404 and the second processing unit 405 of the apparatus 400 for outputting information, and for the technical effects they produce, reference may be made to the descriptions of steps 201, 202, 203, 204 and 205 in the embodiment corresponding to Fig. 2 respectively, which will not be repeated here.
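The interplay between the first and second processing units — repeat the per-frame processing steps until the specified time is reached, then decide — can be sketched as a simple loop. This is a minimal simulation only: the tick-based clock, the stubbed frame analysis and the count-based fatigue rule are hypothetical stand-ins, not the patented implementation.

```python
def run_until(specified_time, fatigue_threshold):
    """Repeat the processing steps until the simulated clock reaches the
    specified time, then decide fatigue from the states collected so far."""
    clock = iter(range(100))               # simulated current time, in ticks
    observed = []
    while next(clock) < specified_time:    # second processing unit: keep looping
        # First processing unit: extract face part regions and analyze their
        # states; stubbed here as one constant observation per pass.
        observed.append("eyes_closed")
    if len(observed) >= fatigue_threshold: # simplified stand-in decision rule
        return "prompt: driver may be fatigued"
    return None

print(run_until(specified_time=4, fatigue_threshold=3))  # -> prompt: driver may be fatigued
```

Under this sketch the prompt is emitted only once the window closes, matching the scheme above in which the fatigue decision is deferred to the specified time.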
In some optional implementations of this embodiment, the determining unit 403 may include: a generation subunit (not shown), configured to analyze the certificate photo and generate a second vector, where each component of the second vector may be used to characterize the location, within the certificate photo, of a different face part; and a first determination subunit (not shown), configured to determine the similarity between the second vector and the first vector corresponding to each classification identifier in the classification identifier set, and to determine the classification identifier corresponding to the first vector with the highest similarity to the second vector as the target classification identifier.
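The patent leaves the similarity measure open; cosine similarity is one common choice and serves for a sketch of the selection step. All names and component values below are illustrative.

```python
import math

def cosine_similarity(u, v):
    # Dot product divided by the product of the vector norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def pick_target_class(second_vector, first_vectors):
    """Return the classification identifier whose first vector is most
    similar to the second vector derived from the certificate photo."""
    return max(first_vectors,
               key=lambda cid: cosine_similarity(second_vector, first_vectors[cid]))

first_vectors = {
    "class_a": [0.30, 0.52, 0.71],  # e.g. eye / nose / mouth locations
    "class_b": [0.10, 0.45, 0.90],
}
print(pick_target_class([0.29, 0.50, 0.72], first_vectors))  # -> class_a
```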
In some optional implementations of this embodiment, each state identifier may also have a corresponding score. The first processing unit 404 may include: a computation subunit (not shown), configured to select target face part regions from the face part regions extracted during the preset time period preceding the current time, and to calculate the sum of the scores corresponding to the state identifiers of the states that the face parts contained in the target face part regions are in, where the state of a face part contained in a target face part region may be a state indicated by a state identifier in the state identifier set; and a second determination subunit (not shown), configured to determine whether the sum is less than the fatigue threshold corresponding to the target classification identifier and, if not, to determine that the driver is fatigued.
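The score-summation decision can be sketched directly. The concrete states, scores and threshold below are hypothetical; the patent only fixes the rule that a sum not less than the class's fatigue threshold means the driver is fatigued.

```python
# Hypothetical per-state scores; the patent leaves concrete values open.
STATE_SCORES = {"eyes_closed": 3, "mouth_yawning": 2}

def is_fatigued(observed_states, fatigue_threshold):
    """Sum the scores of the fatigue-related states observed in the
    preceding time window and compare against the class threshold."""
    total = sum(STATE_SCORES.get(s, 0) for s in observed_states)
    return total >= fatigue_threshold       # "not less than" => fatigued

window = ["eyes_closed", "eyes_closed", "mouth_yawning"]  # states in the window
print(is_fatigued(window, fatigue_threshold=6))  # -> True (3 + 3 + 2 = 8 >= 6)
```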
In some optional implementations of this embodiment, each classification identifier may also have a corresponding group of pre-trained convolutional neural networks. Each convolutional neural network in the group may correspond to a different state identifier in the state identifier set, and each convolutional neural network may also have a corresponding probability threshold. Any one of these convolutional neural networks may be used to characterize the correspondence between a face part region and the probability that the face part contained in that region is in the state indicated by the state identifier corresponding to that network.
In some optional implementations of this embodiment, the first processing unit 404 may include: an input subunit (not shown), configured to input the face part region into a target convolutional neural network to obtain the probability that the face part contained in the region is in the state indicated by the state identifier corresponding to that network, where the target convolutional neural network may be the network, in the convolutional neural network group corresponding to the target classification identifier, that corresponds to the state identifier associated with the face part identifier; and a third determination subunit (not shown), configured to determine whether the obtained probability is less than the probability threshold corresponding to the target convolutional neural network and, if not, to determine that the face part contained in the region is in the state indicated by the state identifier corresponding to the target convolutional neural network.
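The per-class, per-state thresholding can be sketched as a lookup plus comparison. The networks themselves are not modeled; the probability argument stands in for a network's output, and every name and value is illustrative.

```python
# Hypothetical layout: one probability threshold per (class, state) network,
# mirroring the per-class CNN groups described above.
THRESHOLDS = {
    ("class_a", "eyes_closed"): 0.70,
    ("class_a", "mouth_yawning"): 0.60,
}

def part_in_state(target_class, state_id, probability):
    """`probability` stands in for the target CNN's output for one extracted
    face part region; the part is in the state iff it meets the threshold."""
    return probability >= THRESHOLDS[(target_class, state_id)]

print(part_in_state("class_a", "eyes_closed", 0.83))    # -> True
print(part_in_state("class_a", "mouth_yawning", 0.55))  # -> False
```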
In some optional implementations of this embodiment, for the convolutional neural network group corresponding to any classification identifier, the convolutional neural network corresponding to any state identifier may be obtained by the following training steps: acquire a preset training sample corresponding to both the classification identifier and the state identifier, where the training sample may include a sample image showing the face part indicated by the face part identifier corresponding to the state identifier, together with an annotation of the sample image, and where the annotation may include a data label indicating whether the face part in the sample image is in the state indicated by the state identifier; then, using a machine learning method, train the convolutional neural network based on the sample images, the data labels, a preset classification loss function and the back-propagation algorithm.
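The training recipe — a classification loss minimized via gradient back-propagation — can be illustrated at toy scale. A real convolutional network is out of scope here, so this sketch back-propagates the binary cross-entropy loss through a single sigmoid unit, standing in for the network's final layer; the features, labels and hyperparameters are all illustrative.

```python
import numpy as np

# Toy "eye patch" features: [openness, darkness] (illustrative), label 1 = closed.
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]])
y = np.array([0.0, 0.0, 1.0, 1.0])

w = np.zeros(2)   # weights of the stand-in output layer
b = 0.0
lr = 1.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # forward pass (sigmoid)
    err = p - y                              # gradient of BCE w.r.t. logits
    w -= lr * (X.T @ err) / len(y)           # back-propagated weight update
    b -= lr * err.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print((pred == y).all())  # -> True
```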
In some optional implementations of this embodiment, for any classification identifier, the sample images in the training samples corresponding to both that classification identifier and a state identifier in the state identifier set may be generated through the following steps. First, acquire a preset driver identifier set corresponding to the classification identifier, where each driver identifier may have a corresponding group of video identifier pairs, each video identifier pair in the group may correspond to a different state identifier in the state identifier set, and each pair may include a first video identifier and a second video identifier: the images in the video indicated by the first video identifier may show the face part indicated by the face part identifier corresponding to the pair's state identifier in the state indicated by that state identifier, while the images in the video indicated by the second video identifier may show that face part not in that state. Then, for each state identifier in the state identifier set, extract, from the images contained in the videos respectively indicated by the corresponding first and second video identifiers, face part regions containing the face part indicated by the face part identifier corresponding to that state identifier, and generate sample images based on the extracted face part regions.
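Turning a video identifier pair into labeled samples amounts to labeling frames from the first video positive and frames from the second negative. Everything below is a hypothetical sketch: the frame lookup table stands in for video decoding and face part cropping.

```python
# Hypothetical frame source: each video identifier maps to a list of
# already-cropped face part regions (stand-ins for decoded frames).
FRAMES = {
    "vid_pos": ["closed_1", "closed_2"],   # face part IS in the state
    "vid_neg": ["open_1", "open_2"],       # face part is NOT in the state
}

def build_samples(video_pair):
    """Turn a (first, second) video identifier pair into labeled training
    samples: label 1 from the first video, label 0 from the second."""
    first, second = video_pair
    samples = [(frame, 1) for frame in FRAMES[first]]
    samples += [(frame, 0) for frame in FRAMES[second]]
    return samples

print(build_samples(("vid_pos", "vid_neg")))
# -> [('closed_1', 1), ('closed_2', 1), ('open_1', 0), ('open_2', 0)]
```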
In some optional implementations of this embodiment, the driver identifier set may belong to a preset group of driver identifier sets, and that group may be generated through the following steps. Acquire a set of pre-processed certificate photos, where each certificate photo may have a corresponding driver identifier. Analyze each certificate photo in the set and generate a corresponding feature vector, where each component of the feature vector may be used to characterize the location, within the certificate photo, of a different face part contained in it. Based on the generated feature vectors, cluster the certificate photos in the set to obtain at least one class cluster, and set a classification identifier for each class cluster. Place the driver identifiers corresponding to the certificate photos in the same class cluster into the same driver identifier set, and establish the correspondence between the classification identifier of that cluster and the driver identifier set. The resulting driver identifier sets form the group of driver identifier sets.
In some optional implementations of this embodiment, the first vector corresponding to a classification identifier may be generated through the following steps: for the class cluster corresponding to the classification identifier, average the values of the components at corresponding positions across the feature vectors respectively corresponding to the certificate photos in the cluster, and generate the first vector for the classification identifier from the obtained averages.
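The averaging step is a component-wise mean over the cluster, i.e. the cluster centroid. The feature values below are illustrative.

```python
# Feature vectors of the certificate photos in one class cluster; each
# component encodes where a face part sits in the photo (values illustrative).
cluster = [
    [0.30, 0.50, 0.70],
    [0.32, 0.48, 0.74],
    [0.28, 0.52, 0.72],
]

def first_vector(feature_vectors):
    """Component-wise mean across the cluster's feature vectors."""
    n = len(feature_vectors)
    return [sum(column) / n for column in zip(*feature_vectors)]

print(first_vector(cluster))  # approximately [0.30, 0.50, 0.72]
```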
In some optional implementations of this embodiment, the first processing unit may continue executing the above processing steps after the prompt message is output.
In some optional implementations of this embodiment, after the vehicle driven by the driver is parked, the apparatus 400 may stop running the first processing unit, thereby ending the information processing flow.
The apparatus provided by the above embodiment of the application makes effective use of the determination of the target classification identifier and of the determination of the states of the face parts contained in the face part regions extracted during the preset time period, thereby detecting whether the driver is fatigued and outputting a prompt message when the driver is detected to be fatigued, which enables well-targeted information output.
Referring now to Fig. 5, a structural schematic diagram of a computer system 500 suitable for implementing the electronic device of the embodiments of the application is shown. The electronic device shown in Fig. 5 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the application.
As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503. The RAM 503 also stores the various programs and data required for the operation of the system 500. The CPU 501, the ROM 502 and the RAM 503 are connected to one another via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse and the like; an output portion 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; the storage portion 508 including a hard disk and the like; and a communication portion 509 including a network interface card such as a LAN card or a modem. The communication portion 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read from it can be installed into the storage portion 508 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above-described functions defined in the system of the application are performed.
It should be noted that the computer-readable medium described in the application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus or device. In the application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of the application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including a first acquisition unit, a second acquisition unit, a determining unit, a first processing unit and a second processing unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the first acquisition unit may also be described as "a unit for acquiring a pre-processed certificate photo of a driver".
As another aspect, the application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a pre-processed certificate photo of a driver; acquire a preset set of classification identifiers, where each classification identifier has a corresponding fatigue threshold and first vector, and each component of the first vector is used to characterize the location, within an image containing a face, of a different part of the face under the class indicated by the classification identifier; determine a target classification identifier in the classification identifier set based on the certificate photo and the first vectors corresponding to the classification identifiers in the set; perform the following processing steps: acquire a face image captured of the driver; for each face part identifier in a preset set of face part identifiers, extract from the face image a face part region containing the face part indicated by that identifier, where the face part identifier corresponds to a state identifier in a preset set of state identifiers; analyze the extracted face part region to determine whether the face part it contains is in the state indicated by the corresponding state identifier; if the current time has reached a specified time, determine whether the driver is fatigued based on the fatigue threshold corresponding to the target classification identifier and the states of the face parts contained in the face part regions extracted during a preset time period preceding the current time; and, in response to determining that the driver is fatigued, output a prompt message; and, if the current time has not reached the specified time, continue executing the processing steps.
The above description is only a preferred embodiment of the application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the application is not limited to the technical solutions formed by the particular combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept — for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the application.
Claims (16)
1. A method for outputting information, comprising:
acquiring a pre-processed certificate photo of a driver;
acquiring a preset set of classification identifiers, wherein each classification identifier has a corresponding fatigue threshold and first vector, and each component of the first vector is used to characterize the location, within an image containing a face, of a different part of the face under the class indicated by the classification identifier;
determining a target classification identifier in the classification identifier set based on the certificate photo and the first vectors corresponding to the classification identifiers in the classification identifier set;
performing the following processing steps: acquiring a face image captured of the driver; for each face part identifier in a preset set of face part identifiers, extracting from the face image a face part region containing the face part indicated by the face part identifier, wherein the face part identifier corresponds to a state identifier in a preset set of state identifiers; analyzing the extracted face part region to determine whether the face part it contains is in the state indicated by the state identifier corresponding to the face part identifier; if the current time has reached a specified time, determining whether the driver is fatigued based on the fatigue threshold corresponding to the target classification identifier and the states of the face parts contained in the face part regions extracted during a preset time period preceding the current time; and, in response to determining that the driver is fatigued, outputting a prompt message; and
if the current time has not reached the specified time, continuing to execute the processing steps.
2. The method according to claim 1, wherein determining a target classification identifier in the classification identifier set based on the certificate photo and the first vectors corresponding to the classification identifiers in the classification identifier set comprises:
analyzing the certificate photo to generate a second vector, wherein each component of the second vector is used to characterize the location, within the certificate photo, of a different face part; and
determining the similarity between the second vector and the first vector corresponding to each classification identifier in the classification identifier set, and determining the classification identifier corresponding to the first vector with the highest similarity to the second vector as the target classification identifier.
3. The method according to claim 1, wherein each state identifier also has a corresponding score; and
determining whether the driver is fatigued based on the fatigue threshold corresponding to the target classification identifier and the states of the face parts contained in the face part regions extracted during the preset time period preceding the current time comprises:
selecting target face part regions from the face part regions extracted during the preset time period preceding the current time, and calculating the sum of the scores corresponding to the state identifiers of the states that the face parts contained in the target face part regions are in, wherein the state of a face part contained in a target face part region is a state indicated by a state identifier in the state identifier set; and
determining whether the sum is less than the fatigue threshold corresponding to the target classification identifier, and if not, determining that the driver is fatigued.
4. The method according to claim 1, wherein each classification identifier also has a corresponding group of pre-trained convolutional neural networks, each convolutional neural network in the group corresponds to a different state identifier in the state identifier set, and each convolutional neural network also has a corresponding probability threshold; and any one of the convolutional neural networks is used to characterize the correspondence between a face part region and the probability that the face part contained in the region is in the state indicated by the state identifier corresponding to that convolutional neural network.
5. The method according to claim 4, wherein analyzing the extracted face part region to determine whether the face part it contains is in the state indicated by the state identifier corresponding to the face part identifier comprises:
inputting the face part region into a target convolutional neural network to obtain the probability that the face part contained in the region is in the state indicated by the state identifier corresponding to the target convolutional neural network, wherein the target convolutional neural network is the convolutional neural network, in the group corresponding to the target classification identifier, that corresponds to the state identifier corresponding to the face part identifier; and
determining whether the obtained probability is less than the probability threshold corresponding to the target convolutional neural network, and if not, determining that the face part contained in the region is in the state indicated by the state identifier corresponding to the target convolutional neural network.
6. The method according to claim 1, wherein, for the convolutional neural network corresponding to any state identifier in the group of convolutional neural networks corresponding to any classification identifier, the convolutional neural network is trained by the following training steps:
acquiring a preset training sample corresponding to both the classification identifier and the state identifier, wherein the training sample includes a sample image showing the face part indicated by the face part identifier corresponding to the state identifier and an annotation of the sample image, the annotation including a data label indicating whether the face part in the sample image is in the state indicated by the state identifier; and
training the convolutional neural network, using a machine learning method, based on the sample image, the data label, a preset classification loss function and the back-propagation algorithm.
7. The method according to claim 6, wherein, for any classification identifier, the sample images in the training samples corresponding to both the classification identifier and a state identifier in the state identifier set are generated by the following steps:
acquiring a preset driver identifier set corresponding to the classification identifier, wherein each driver identifier has a corresponding group of video identifier pairs, each video identifier pair in the group corresponds to a different state identifier in the state identifier set, and each video identifier pair includes a first video identifier and a second video identifier, the images in the video indicated by the first video identifier showing the face part indicated by the face part identifier corresponding to the pair's state identifier in the state indicated by that state identifier, and the images in the video indicated by the second video identifier showing that face part not in that state; and
for each state identifier in the state identifier set, extracting, from the images contained in the videos respectively indicated by the first video identifier and the second video identifier corresponding to the state identifier, face part regions containing the face part indicated by the face part identifier corresponding to the state identifier, and generating sample images based on the extracted face part regions.
8. The method according to claim 7, wherein the driver identifier set belongs to a preset group of driver identifier sets, and the group of driver identifier sets is generated by the following steps:
acquiring a set of pre-processed certificate photos, wherein each certificate photo has a corresponding driver identifier;
analyzing each certificate photo in the certificate photo set and generating a feature vector corresponding to the certificate photo, wherein each component of the feature vector is used to characterize the location, within the certificate photo, of a different face part contained in the certificate photo;
clustering the certificate photos in the certificate photo set based on the generated feature vectors to obtain at least one class cluster, and setting a classification identifier for each class cluster;
placing the driver identifiers corresponding to the certificate photos in the same class cluster into the same driver identifier set, and establishing the correspondence between the classification identifier of the class cluster and the driver identifier set; and
combining the resulting driver identifier sets into the group of driver identifier sets.
9. The method according to claim 8, wherein the first vector corresponding to a classification identifier is generated by the following steps:
for the class cluster corresponding to the classification identifier, averaging the values of the components at corresponding positions in the feature vectors respectively corresponding to the certificate photos in the class cluster, and generating the first vector corresponding to the classification identifier from the obtained averages.
10. An apparatus for outputting information, comprising:
a first acquisition unit, configured to acquire a pre-processed certificate photo of a driver;
a second acquisition unit, configured to acquire a preset set of classification identifiers, wherein each classification identifier has a corresponding fatigue threshold and first vector, and each component of the first vector is used to characterize the location, within an image containing a face, of a different part of the face under the class indicated by the classification identifier;
a determining unit, configured to determine a target classification identifier in the classification identifier set based on the certificate photo and the first vectors corresponding to the classification identifiers in the classification identifier set;
a first processing unit, configured to perform the following processing steps: acquiring a face image captured of the driver; for each face part identifier in a preset set of face part identifiers, extracting from the face image a face part region containing the face part indicated by the face part identifier, wherein the face part identifier corresponds to a state identifier in a preset set of state identifiers; analyzing the extracted face part region to determine whether the face part it contains is in the state indicated by the state identifier corresponding to the face part identifier; if the current time has reached a specified time, determining whether the driver is fatigued based on the fatigue threshold corresponding to the target classification identifier and the states of the face parts contained in the face part regions extracted during a preset time period preceding the current time; and, in response to determining that the driver is fatigued, outputting a prompt message; and
a second processing unit, configured to continue executing the processing steps if the current time has not reached the specified time.
11. The device according to claim 10, wherein the determining unit comprises:
a generating subunit, configured to analyze the certificate photograph and generate a second vector, wherein each component of the second vector characterizes the position of a different facial part within the certificate photograph;
a first determining subunit, configured to determine the similarity between the second vector and the first vector corresponding to each classification identifier in the classification identifier set, and to determine, as the target classification identifier, the classification identifier corresponding to the first vector having the highest similarity to the second vector.
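Claim 11 does not fix a particular similarity measure, so the sketch below assumes cosine similarity as one plausible choice; the function names and the dictionary layout mapping classification identifiers to first vectors are illustrative assumptions, not part of the claim.

```python
import math

def cosine_similarity(v1, v2):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2)

def select_target_class(second_vector, first_vectors):
    """Return the classification identifier whose first vector is most
    similar to the second vector derived from the certificate photograph."""
    return max(first_vectors,
               key=lambda cid: cosine_similarity(second_vector, first_vectors[cid]))
```

Any vector similarity (e.g. negative Euclidean distance) could be substituted in `cosine_similarity` without changing the selection logic.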
12. The device according to claim 10, wherein each status identifier further has a corresponding score; and
the first processing unit comprises:
a computing subunit, configured to select target facial-part regions from the facial-part regions extracted within the preset time period preceding the current time, and to compute the sum of the scores of the status identifiers corresponding to the states of the facial parts contained in the target facial-part regions, wherein the state of the facial part contained in each target facial-part region is a state indicated by a status identifier in the status identifier set;
a second determining subunit, configured to determine whether the sum is less than the fatigue threshold corresponding to the target classification identifier, and, if not, to determine that the driver is in a fatigue state.
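The score-summation decision of claim 12 can be sketched as follows. The status identifier strings and the score values are made-up examples; the claim only specifies that each status identifier carries a score and that the driver is judged fatigued when the sum is not below the class-specific fatigue threshold.

```python
def is_fatigued(observed_status_ids, scores, fatigue_threshold):
    """Sum the scores of the status identifiers observed for the target
    facial-part regions in the time window; per claim 12, the driver is
    judged fatigued when the sum is NOT less than the fatigue threshold
    corresponding to the target classification identifier."""
    total = sum(scores[s] for s in observed_status_ids)
    return total >= fatigue_threshold
```

A higher threshold for a given classification identifier thus makes the fatigue judgment less sensitive for drivers in that class.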
13. The device according to claim 10, wherein each classification identifier further has a corresponding group of pre-trained convolutional neural networks, each convolutional neural network in the group corresponding to a different status identifier in the status identifier set, and each convolutional neural network further has a corresponding probability threshold; any given convolutional neural network characterizes the correspondence between a facial-part region and the probability that the facial part contained in that region is in the state indicated by the status identifier corresponding to that convolutional neural network.
14. The device according to claim 13, wherein the first processing unit comprises:
an input subunit, configured to input the facial-part region into a target convolutional neural network to obtain the probability that the facial part contained in the region is in the state indicated by the status identifier corresponding to the target convolutional neural network, wherein the target convolutional neural network is the network, within the group of convolutional neural networks corresponding to the target classification identifier, that corresponds to the status identifier associated with the facial-part identifier;
a third determining subunit, configured to determine whether the obtained probability is less than the probability threshold corresponding to the target convolutional neural network, and, if not, to determine that the facial part contained in the facial-part region is in the state indicated by the status identifier corresponding to the target convolutional neural network.
15. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-9.
16. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711139967.8A CN107832721B (en) | 2017-11-16 | 2017-11-16 | Method and apparatus for outputting information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107832721A true CN107832721A (en) | 2018-03-23 |
CN107832721B CN107832721B (en) | 2021-12-07 |
Family
ID=61652717
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711139967.8A Active CN107832721B (en) | 2017-11-16 | 2017-11-16 | Method and apparatus for outputting information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107832721B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012146823A1 (en) * | 2011-04-29 | 2012-11-01 | Nokia Corporation | Method, apparatus and computer program product for blink detection in media content |
CN104112334A (en) * | 2013-04-16 | 2014-10-22 | 百度在线网络技术(北京)有限公司 | Fatigue driving early warning method and fatigue driving early warning system |
CN104183091A (en) * | 2014-08-14 | 2014-12-03 | 苏州清研微视电子科技有限公司 | System for adjusting sensitivity of fatigue driving early warning system in self-adaptive mode |
CN104794855A (en) * | 2014-01-22 | 2015-07-22 | 径卫视觉科技(上海)有限公司 | Driver's attention comprehensive assessment system |
CN105185038A (en) * | 2015-10-20 | 2015-12-23 | 华东交通大学 | Safety driving system based on Android smart phone |
TW201608532A (en) * | 2014-08-26 | 2016-03-01 | 國立臺南大學 | Method and device for warning driver |
CN105769120A (en) * | 2016-01-27 | 2016-07-20 | 深圳地平线机器人科技有限公司 | Fatigue driving detection method and device |
CN106203394A (en) * | 2016-07-26 | 2016-12-07 | 浙江捷尚视觉科技股份有限公司 | Fatigue driving safety monitoring method based on human eye state detection |
CN106446811A (en) * | 2016-09-12 | 2017-02-22 | 北京智芯原动科技有限公司 | Deep-learning-based driver's fatigue detection method and apparatus |
CN107038422A (en) * | 2017-04-20 | 2017-08-11 | 杭州电子科技大学 | The fatigue state recognition method of deep learning is constrained based on space geometry |
CN107194346A (en) * | 2017-05-19 | 2017-09-22 | 福建师范大学 | A kind of fatigue drive of car Forecasting Methodology |
CN107220595A (en) * | 2017-05-03 | 2017-09-29 | 北京航空航天大学 | Fatigue monitoring system based on " face characteristic extraction " |
CN107229922A (en) * | 2017-06-12 | 2017-10-03 | 西南科技大学 | A kind of fatigue driving monitoring method and device |
- 2017-11-16: CN CN201711139967.8A patent/CN107832721B/en, status: Active
Non-Patent Citations (4)
Title |
---|
EARN TZEH TAN 等: "OPTIMIZATION OF NEURAL NETWORK ARCHITECTURE FOR THE APPLICATION OF DRIVER FATIGUE MONITORING SYSTEM", 《ARPN JOURNAL OF ENGINEERING AND APPLIED SCIENCES》 * |
YANG YING 等: "The Monitoring Method of Driver’s Fatigue Based on Neural Network", 《PROCEEDINGS OF THE 2007 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION》 * |
WANG XUESONG et al.: "Eye Indicators and Fatigue Grading Based on Driving Simulation Experiments", 《同济大学学报(自然科学版)》 (Journal of Tongji University, Natural Science) * |
HUANG CHUNYU et al.: "Research on Fatigue Driving Monitoring Methods Based on Image Recognition", 《长春理工大学学报(自然科学版)》 (Journal of Changchun University of Science and Technology, Natural Science Edition) * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109389038A (en) * | 2018-09-04 | 2019-02-26 | 阿里巴巴集团控股有限公司 | A kind of detection method of information, device and equipment |
US20200074216A1 (en) * | 2018-09-04 | 2020-03-05 | Alibaba Group Holding Limited | Information detection method, apparatus, and device |
US20200167595A1 (en) * | 2018-09-04 | 2020-05-28 | Alibaba Group Holding Limited | Information detection method, apparatus, and device |
US11250291B2 (en) * | 2018-09-04 | 2022-02-15 | Advanced New Technologies, Co., Ltd. | Information detection method, apparatus, and device |
CN109522853A (en) * | 2018-11-22 | 2019-03-26 | 湖南众智君赢科技有限公司 | Face datection and searching method towards monitor video |
CN109522853B (en) * | 2018-11-22 | 2019-11-19 | 湖南众智君赢科技有限公司 | Face datection and searching method towards monitor video |
CN111709264A (en) * | 2019-03-18 | 2020-09-25 | 北京市商汤科技开发有限公司 | Driver attention monitoring method and device and electronic equipment |
CN110390626A (en) * | 2019-07-02 | 2019-10-29 | 深兰科技(上海)有限公司 | A kind of image processing method and device of convolutional neural networks |
WO2021035983A1 (en) * | 2019-08-26 | 2021-03-04 | 平安科技(深圳)有限公司 | Method for training face-based driving risk prediction model, driving risk prediction method based on face, and related devices |
Also Published As
Publication number | Publication date |
---|---|
CN107832721B (en) | 2021-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11487995B2 (en) | Method and apparatus for determining image quality | |
CN107832721A (en) | Method and apparatus for output information | |
CN111340008B (en) | Method and system for generation of counterpatch, training of detection model and defense of counterpatch | |
CN107609536A (en) | Information generating method and device | |
WO2019169688A1 (en) | Vehicle loss assessment method and apparatus, electronic device, and storage medium | |
CN109409297B (en) | Identity recognition method based on dual-channel convolutional neural network | |
CN108182409B (en) | Living body detection method, living body detection device, living body detection equipment and storage medium | |
CN103902961B (en) | Face recognition method and device | |
CN110909780A (en) | Image recognition model training and image recognition method, device and system | |
CN107590807A (en) | Method and apparatus for detection image quality | |
CN107578034A (en) | information generating method and device | |
CN106919921B (en) | Gait recognition method and system combining subspace learning and tensor neural network | |
CN107832735A (en) | Method and apparatus for identifying face | |
CN110058699B (en) | User behavior identification method based on intelligent mobile device sensor | |
CN112115866A (en) | Face recognition method and device, electronic equipment and computer readable storage medium | |
CN106709528A (en) | Method and device of vehicle reidentification based on multiple objective function deep learning | |
CN107944398A (en) | Based on depth characteristic association list diagram image set face identification method, device and medium | |
CN109670457A (en) | A kind of driver status recognition methods and device | |
CN113627256B (en) | False video inspection method and system based on blink synchronization and binocular movement detection | |
CN110633624A (en) | Machine vision human body abnormal behavior identification method based on multi-feature fusion | |
CN109977867A (en) | A kind of infrared biopsy method based on machine learning multiple features fusion | |
CN112396588A (en) | Fundus image identification method and system based on countermeasure network and readable medium | |
CN115761409A (en) | Fire detection method, device, equipment and medium based on deep learning | |
CN110210382A (en) | A kind of face method for detecting fatigue driving and device based on space-time characteristic identification | |
CN114140844A (en) | Face silence living body detection method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||