CN106127123B - Method for detecting face of driver in day and night driving in real time based on RGB-I
Publication number: CN106127123B
Authority: CN (China)
Legal status: Active
Classifications
- G06V40/161 — Detection; Localisation; Normalisation (G: PHYSICS → G06: COMPUTING; CALCULATING OR COUNTING → G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING → G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data → G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands → G06V40/16: Human faces, e.g. facial parts, sketches or expressions)
- G06V40/172 — Classification, e.g. identification (same G06V40/16 hierarchy)
Abstract
The invention discloses a method for detecting the face of a driver in real time during day and night driving based on RGB-I, comprising a model training step and a driver face recognition step. The model training step comprises: S1, preprocessing a driver face library: the driver face pictures are divided into two groups, resized to 40 × 40 and 28 × 28 respectively, and then converted to grayscale; S2, establishing an R-Convnet model trained on RGB images and an I-Convnet model trained on Infrared images, wherein the R-Convnet model consists of an R40-Convnet model and an R28-Convnet model, the I-Convnet model consists of an I40-Convnet model and an I28-Convnet model, and all four Convnet models are obtained by removing the fully connected layer from a CNN. The driver face recognition step comprises: S3, collecting images of the driver with the RGB camera in the daytime and the Infrared camera at night; and S4, detecting with the trained cascaded R40-Convnet and R28-Convnet models in the daytime, and with the trained cascaded I40-Convnet and I28-Convnet models at night. The invention improves both the speed and the accuracy of driver face detection at night and in harsh driving environments.
Description
Technical Field
The invention relates to a method for detecting the face of a driver in real time, and in particular to a method for detecting the face of a driver in real time during day and night driving based on RGB-I.
Background
In recent years, living standards have risen, the number of private cars has grown, and the incidence of traffic accidents has grown with it, making vehicle safety a focus of social attention. Safety issues include driver fatigue detection, drunk-driving detection, driver emotion detection, driver behavior detection, and the like, and locating the face (face detection) is the first step in driver fatigue detection and expression detection.
Face detection refers to determining the position, size, and pose of all faces (if any) in an input image. The face detection problem originally grew out of face recognition, whose study dates back to the 1960s and 1970s and which has gradually matured after decades of development. Face detection is a key link in an automatic face recognition system, but early face recognition research targeted face images taken under strong constraints (such as images without background), where the face position is easy to obtain, so face detection was not treated as a problem in its own right. In recent years, with the development of applications such as electronic commerce, face recognition has become one of the most promising biometric authentication means, and these applications require an automatic face recognition system to cope with general, unconstrained images; face detection has therefore come to be regarded as an independent subject and has drawn the attention of researchers.
At present there is a large body of literature on face detection, but most of it addresses multiple viewing angles against complex backgrounds and cannot run in real time, and research specifically on driver face detection remains scarce. Aimed at driver face detection, the invention provides an RGB-I-based method for detecting the face of a driver in real time during day and night driving, in particular using an RGB-I camera at night or in harsh driving environments, which greatly improves detection accuracy and speed.
Disclosure of Invention
To solve the problem of detecting the face of a driver in real time both day and night, the invention provides a method for detecting the face of a driver in real time based on RGB-I, with the following technical scheme:
a method for detecting the face of a driver in real time during day and night driving based on RGB-I comprises a model training step and a driver face recognition step;
the step of model training comprises:
s1, preprocessing a driver face library;
s2, establishing an R-Convnet model based on RGB image training and an I-Convnet model based on Infrared image training;
the driver face recognition step includes:
s3, acquiring images of the driver day and night with the RGB-Infrared camera: acquiring RGB images of the driver in real time in the daytime, and acquiring Infrared images of the driver in real time at night;
s4, using the trained R-Convnet model for recognition in the daytime and the trained I-Convnet model for recognition at night; the R-Convnet model is a cascade of the R40-Convnet and R28-Convnet models; the I-Convnet model is a cascade of the I40-Convnet and I28-Convnet models.
Further, the preprocessing in the step S1 includes driver RGB face library preprocessing and driver Infrared face library preprocessing;
the driver RGB face library preprocessing comprises: dividing the RGB face pictures of the driver into two groups: one group is resized to 40 × 40 and then converted to grayscale; the other group is resized to 28 × 28 and then converted to grayscale;
the driver Infrared face library preprocessing comprises: dividing the Infrared face pictures of the driver into two groups: one group is resized to 40 × 40 and then converted to grayscale and saved; the other group is resized to 28 × 28 and then converted to grayscale and saved.
Further, the training method of the R40-Convnet model in step S2 comprises: first setting the parameters of a Convnet detector, where the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 2 and 5, the learning rate is 0.01, the batch size is 200, and the number of training iterations is 100; the 40 × 40 RGB face pictures obtained by the preprocessing in step S1 are then input into the Convnet detector, and once the training count is reached the R40-Convnet model is obtained;
the training method of the R28-Convnet model comprises: first setting the parameters of a Convnet detector, where the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 3 and 2, the learning rate is 0.01, the batch size is 100, and the number of training iterations is 50; the 28 × 28 RGB face pictures obtained by the preprocessing in step S1 are then input into the Convnet detector, and once the training count is reached the R28-Convnet model is obtained.
Further, the training method of the I40-Convnet model in step S2 comprises: first setting the parameters of a Convnet detector, where the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 2 and 5, the learning rate is 0.01, the batch size is 200, and the number of training iterations is 100; the 40 × 40 Infrared face pictures obtained by the preprocessing in step S1 are then input into the Convnet detector, and once the training count is reached the I40-Convnet model is obtained.
The training method of the I28-Convnet model comprises: first setting the parameters of a Convnet detector, where the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 3 and 2, the learning rate is 0.01, the batch size is 100, and the number of training iterations is 50; the 28 × 28 Infrared face pictures obtained by the preprocessing in step S1 are then input into the Convnet detector, and once the training count is reached the I28-Convnet model is obtained.
Further, the Convnet detector is obtained by removing the fully connected layer from a CNN.
Further, the implementation of step S4 includes:
in the daytime, a 40 × 40 window is slid over each frame of the RGB video with a stride of 2; each windowed image is input into the R40-Convnet model to detect whether it contains a face; if so, the image is resized to 28 × 28 and input into the next cascaded R28-Convnet model; if it again contains a face, the window position is saved, otherwise it is discarded, and the window continues sliding until the entire test picture has been scanned; the picture is then scaled by a factor of 0.8 and the above steps are repeated until the picture is smaller than 40 × 40; at this point many windows overlap at the face position, and they are merged into a single window by a non-maximum suppression algorithm, which yields the driver's face;
at night, a 40 × 40 window is slid over each frame of the Infrared video with a stride of 2; each windowed image is input into the I40-Convnet model to detect whether it contains a face; if so, the image is resized to 28 × 28 and input into the next I28-Convnet model; if it again contains a face, the window position is saved, otherwise it is discarded, and the window continues sliding until the entire test picture has been scanned; the picture is then scaled by a factor of 0.8 and the above steps are repeated until the picture is smaller than 40 × 40; at this point many windows overlap at the face position, and they are merged into a single window by a non-maximum suppression algorithm, which yields the driver's face.
The invention has the beneficial effects that:
1. The Infrared camera is introduced to compensate for the poor clarity of RGB images captured at night, successfully solving the problem of real-time driver face detection day and night; in particular, high-accuracy real-time face detection is achieved with RGB-I at night and in harsh driving environments.
2. The Convnet model, obtained by modifying the CNN model, greatly reduces driver face detection time;
3. Candidate regions obtained with a sliding window are checked for a face by the cascaded Convnet models, which improves both detection speed and accuracy.
4. The invention successfully addresses the shortcomings of day-and-night driver face detection in accuracy and real-time performance, achieves clear day-and-night monitoring of the driver by means of RGB-I, and, by detecting the driver's face accurately and in real time with the Convnet detector, is of great significance for driver fatigue detection and expression detection.
Drawings
FIG. 1 is a flow chart of driver face detection based on RGB-I images.
Fig. 2 is a schematic diagram of training R40-Convnet model based on 40 × 40 RGB pictures.
Fig. 3 is a schematic diagram of training R28-Convnet model based on 28 × 28 RGB pictures.
Fig. 4 is a schematic diagram of training the I40-Convnet model based on 40 × 40 Infrared pictures.
Fig. 5 is a schematic diagram of training the I28-Convnet model based on 28 × 28 Infrared pictures.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
FIG. 1 shows the overall idea of the invention. An RGB-Infrared camera captures images of the driver day and night. In the daytime, RGB images of the driver are acquired in real time; a 40 × 40 window is slid over each frame with a stride of 2, and each windowed image is input into the trained R40-Convnet model to detect whether it is a face. If so, the image is resized to 28 × 28 and input into the next trained R28-Convnet model; if it is again judged a face, the window position is saved, otherwise the window continues sliding, until the entire test picture has been scanned. The picture is then scaled by a factor of 0.8 and the above steps are repeated until the picture is smaller than 40 × 40. At this point many windows overlap at the face position; they are merged into a single window by a non-maximum suppression algorithm, which is the detected driver face. At night, an Infrared image of the driver is acquired in real time and the trained I40-Convnet and I28-Convnet models are used to detect the driver's face, with the same procedure as for the RGB image. A minimal sketch of this day/night switch is given below.
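For illustration only, the following Python sketch routes a frame to the RGB cascade in the daytime and to the Infrared cascade at night; it is not taken from the patent, and the function names (`detect_driver_face`, `detect_with_cascade`) are hypothetical. `detect_with_cascade` is the sliding-window / pyramid / NMS routine sketched in section 3.1 below.

```python
import cv2  # OpenCV, used here only for grayscale conversion

def detect_driver_face(frame, is_daytime, rgb_cascade, ir_cascade):
    """Route a frame to the cascaded R40/R28 models in the daytime or to the
    cascaded I40/I28 models at night, as in Fig. 1.  Each cascade is a
    (net40, net28) pair of trained Convnet detectors."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # both cascades take grayscale input
    net_pair = rgb_cascade if is_daytime else ir_cascade
    return detect_with_cascade(gray, net_pair)
```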
FIGS. 2-5 are schematic diagrams of the training of the R40-Convnet, R28-Convnet, I40-Convnet, and I28-Convnet models, respectively. As shown, the four Convnet models are similar to a CNN, with the fully connected layer removed from the CNN model. This increases detection speed while preserving accuracy, improving the real-time performance of face detection.
The method of the invention for detecting the face of a driver in real time during day and night driving based on RGB-I comprises the following steps:
1. preprocessing a driver face library:
1.1 driver RGB face library preprocessing:
Dividing the RGB face pictures of the driver into two groups: one group is resized to 40 × 40, converted to grayscale, and saved; the other group is resized to 28 × 28, converted to grayscale, and saved. The purpose is to train an R40-Convnet detector with a 40 × 40 input and an R28-Convnet detector with a 28 × 28 input (both for detecting whether a face is present).
1.2 Infrared face library preprocessing of the driver:
Dividing the Infrared face pictures of the driver into two groups: one group is resized to 40 × 40, converted to grayscale, and saved; the other group is resized to 28 × 28, converted to grayscale, and saved. The purpose is to train an I40-Convnet detector with a 40 × 40 input and an I28-Convnet detector with a 28 × 28 input. A minimal preprocessing sketch follows.
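A minimal sketch of this preprocessing, assuming OpenCV and flat folders of face pictures; the folder names and the helper name are hypothetical, since the patent does not prescribe a storage layout:

```python
import os
import cv2  # OpenCV for resizing and grayscale conversion

def preprocess_face_library(src_dir, dst_dir, size):
    """Resize every face picture in src_dir to size x size, convert it to
    grayscale and save it to dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = cv2.imread(os.path.join(src_dir, name))
        if img is None:  # skip non-image files
            continue
        img = cv2.resize(img, (size, size))
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        cv2.imwrite(os.path.join(dst_dir, name), gray)

# One 40x40 group and one 28x28 group, for both the RGB and the Infrared library
for lib in ("rgb_faces", "infrared_faces"):  # hypothetical folder names
    for size in (40, 28):
        preprocess_face_library(lib, f"{lib}_{size}", size)
```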
2. Establishing an R-Convnet model trained on RGB images and an I-Convnet model trained on Infrared images.
2.1 establishing an R40-Convnet model and an R28-Convnet model based on RGB image training:
2.1.1 training the R40-Convnet model:
the Convnet detector is obtained by removing a full connection layer on the basis of CNN, and as shown in fig. 2, a model firstly sets parameters of the Convnet detector, wherein the sizes of convolution kernels are 5 × 5 and 3 × 3 respectively, the number of convolution kernels is 2 and 5, the learning efficiency is 0.01, the size of batch is 200, the training times is 100, and then an RGB face picture of 40 obtained by preprocessing 1.1 is input into the model to reach the training times, so that an R40-Connet model is obtained. The values set maximize the accuracy of the model.
2.1.2 training the R28-Convnet model:
As shown in fig. 3, the parameters of the Convnet detector are set first: the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 3 and 2, the learning rate is 0.01, the batch size is 100, and the number of training iterations is 50. The 28 × 28 RGB face pictures obtained by the preprocessing in 1.1 are then input into the model; once the training count is reached, the R28-Convnet model is obtained.
2.2 establishing an I40-Convnet model and an I28-Convnet model based on Infrared image training:
2.2.1 training I40-Convnet model:
As shown in fig. 4, the parameters of the Convnet detector are set first: the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 2 and 5, the learning rate is 0.01, the batch size is 200, and the number of training iterations is 100. The 40 × 40 Infrared face pictures obtained by the preprocessing in 1.2 are then input into the model; once the training count is reached, the I40-Convnet model training is complete.
2.2.2 training I28-Convnet model:
As shown in fig. 5, the parameters of the Convnet detector are set first: the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 3 and 2, the learning rate is 0.01, the batch size is 100, and the number of training iterations is 50. The 28 × 28 Infrared face pictures obtained by the preprocessing in 1.2 are then input into the model; once the training count is reached, the I28-Convnet model training is complete.
3. Face recognition based on RGB images and Infrared images:
3.1 recognizing the face of the driver based on the R-Convnet model of the RGB image:
For each frame of the RGB video, a 40 × 40 window is slid with a stride of 2. Each windowed image is input into the R40-Convnet detector to check whether it contains a face; if so, the image is resized to 28 × 28 and input into the next R28-Convnet detector. If it again contains a face, the window position is saved, otherwise it is discarded, and the window continues sliding until the entire test picture has been scanned. The picture is then scaled by a factor of 0.8 and the above steps are repeated until the picture is smaller than 40 × 40. At this point many windows overlap at the face position; they are merged into a single window by a non-maximum suppression algorithm, which yields the driver's face. This greatly improves the speed and accuracy of daytime driver face detection. A sketch of the sliding-window cascade over the image pyramid follows.
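A sketch of this sliding-window cascade over the 0.8 image pyramid, assuming the Convnet detectors expose a Keras-style `predict` returning a face probability thresholded at 0.5 (the threshold and the [0, 1] normalization are assumptions; the patent only says each detector decides face / non-face):

```python
import numpy as np
import cv2

def detect_with_cascade(gray, cascade, step=2, scale=0.8):
    """Scan the frame with a 40x40 window (stride 2), verify candidates with
    the 40x40 net and then the 28x28 net, shrink the image by 0.8 and repeat
    until it is smaller than 40x40.  Returns candidate boxes (x1, y1, x2, y2)
    in original-image coordinates, to be merged by non-maximum suppression."""
    net40, net28 = cascade                      # trained pair (R40/R28 or I40/I28)
    img = gray.astype(np.float32) / 255.0       # assumed [0, 1] normalization
    boxes, factor = [], 1.0
    while img.shape[0] >= 40 and img.shape[1] >= 40:
        h, w = img.shape
        for y in range(0, h - 40 + 1, step):
            for x in range(0, w - 40 + 1, step):
                win = img[y:y + 40, x:x + 40]
                # Stage 1: 40x40 detector (per-window predict kept for clarity;
                # a real implementation would batch the windows)
                if net40.predict(win[None, :, :, None], verbose=0)[0, 0] < 0.5:
                    continue
                # Stage 2: resize the candidate to 28x28 and confirm
                win28 = cv2.resize(win, (28, 28))
                if net28.predict(win28[None, :, :, None], verbose=0)[0, 0] < 0.5:
                    continue
                # Map the accepted window back to original-image coordinates
                boxes.append([x / factor, y / factor,
                              (x + 40) / factor, (y + 40) / factor])
        factor *= scale
        img = cv2.resize(img, None, fx=scale, fy=scale)   # next pyramid level
    return np.array(boxes)
```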
3.2 identifying the face of the driver based on the I-Convnet model of the Infrared image:
For each frame of the Infrared video, a 40 × 40 window is slid with a stride of 2. Each windowed image is input into the I40-Convnet detector to check whether it contains a face; if so, the image is resized to 28 × 28 and input into the next I28-Convnet detector. If it again contains a face, the window position is saved, otherwise it is discarded, and the window continues sliding until the entire test picture has been scanned. The picture is then scaled by a factor of 0.8 and the above steps are repeated until the picture is smaller than 40 × 40. At this point many windows overlap at the face position; they are merged into a single window by a non-maximum suppression algorithm, which yields the driver's face. This greatly improves the speed and accuracy of night-time driver face detection.
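The overlapping windows collected over all scales can be merged with a standard greedy non-maximum suppression routine. The sketch below keeps the boxes in their detection order because the patent does not describe confidence scores, and the 0.3 IoU threshold is an assumption:

```python
import numpy as np

def non_max_suppression(boxes, iou_thresh=0.3):
    """Greedy NMS: merge the overlapping windows accumulated over all scales
    into a single detection per face."""
    if len(boxes) == 0:
        return boxes
    boxes = np.asarray(boxes, dtype=np.float32)
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = np.arange(len(boxes))   # no scores available, keep detection order
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the kept box with the remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]   # drop boxes overlapping the kept one
    return boxes[keep]
```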
The detailed description above is only a specific description of possible embodiments of the invention and is not intended to limit its scope; equivalent embodiments or modifications made without departing from the technical spirit of the invention shall fall within its scope.
Claims (1)
1. A method for detecting the face of a driver in real time during day and night driving based on RGB-I, characterized by comprising a model training step and a driver face recognition step;
the step of model training comprises:
s1, preprocessing a driver face library;
s2, establishing an R-Convnet model based on RGB image training and an I-Convnet model based on Infrared image training;
the driver face recognition step includes:
s3, acquiring images of the driver day and night with the RGB-Infrared camera: acquiring RGB images of the driver in real time in the daytime, and acquiring Infrared images of the driver in real time at night;
s4, using the trained R-Convnet model for recognition in the daytime and the trained I-Convnet model for recognition at night; the R-Convnet model is a cascade of the R40-Convnet and R28-Convnet models; the I-Convnet model is a cascade of the I40-Convnet and I28-Convnet models;
the preprocessing in the step S1 includes driver RGB face library preprocessing and driver Infrared face library preprocessing;
the driver RGB face library preprocessing comprises: dividing the RGB face pictures of the driver into two groups: one group is resized to 40 × 40 and then converted to grayscale; the other group is resized to 28 × 28 and then converted to grayscale;
the driver Infrared face library preprocessing comprises: dividing the Infrared face pictures of the driver into two groups: one group is resized to 40 × 40 and then converted to grayscale and saved; the other group is resized to 28 × 28 and then converted to grayscale and saved;
the training method of the R40-Convnet model in step S2 comprises: first setting the parameters of a Convnet detector, where the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 2 and 5, the learning rate is 0.01, the batch size is 200, and the number of training iterations is 100; the 40 × 40 RGB face pictures obtained by the preprocessing in step S1 are then input into the Convnet detector, and once the training count is reached the R40-Convnet model is obtained;
the training method of the R28-Convnet model comprises: first setting the parameters of a Convnet detector, where the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 3 and 2, the learning rate is 0.01, the batch size is 100, and the number of training iterations is 50; the 28 × 28 RGB face pictures obtained by the preprocessing in step S1 are then input into the Convnet detector, and once the training count is reached the R28-Convnet model is obtained;
the training method of the I40-Convnet model in step S2 comprises: first setting the parameters of a Convnet detector, where the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 2 and 5, the learning rate is 0.01, the batch size is 200, and the number of training iterations is 100; the 40 × 40 Infrared face pictures obtained by the preprocessing in step S1 are then input into the Convnet detector, and once the training count is reached the I40-Convnet model is obtained;
the training method of the I28-Convnet model comprises: first setting the parameters of a Convnet detector, where the convolution kernel sizes are 5 × 5 and 3 × 3, the numbers of convolution kernels are 3 and 2, the learning rate is 0.01, the batch size is 100, and the number of training iterations is 50; the 28 × 28 Infrared face pictures obtained by the preprocessing in step S1 are then input into the Convnet detector, and once the training count is reached the I28-Convnet model is obtained;
the Convnet detector is obtained by removing the fully connected layer from a CNN;
the step S4 is implemented by:
in the daytime, a 40 × 40 window is slid over each frame of the RGB video with a stride of 2; each windowed image is input into the R40-Convnet model to detect whether it contains a face; if so, the image is resized to 28 × 28 and input into the next cascaded R28-Convnet model; if it again contains a face, the window position is saved, otherwise it is discarded, and the window continues sliding until the entire test picture has been scanned; the picture is then scaled by a factor of 0.8 and the above steps are repeated until the picture is smaller than 40 × 40; at this point many windows overlap at the face position, and they are merged into a single window by a non-maximum suppression algorithm, which yields the driver's face;
at night, a 40 × 40 window is slid over each frame of the Infrared video with a stride of 2; each windowed image is input into the I40-Convnet model to detect whether it contains a face; if so, the image is resized to 28 × 28 and input into the next I28-Convnet model; if it again contains a face, the window position is saved, otherwise it is discarded, and the window continues sliding until the entire test picture has been scanned; the picture is then scaled by a factor of 0.8 and the above steps are repeated until the picture is smaller than 40 × 40; at this point many windows overlap at the face position, and they are merged into a single window by a non-maximum suppression algorithm, which yields the driver's face.
Priority Application (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201610436625.1A | 2016-06-16 | 2016-06-16 | Method for detecting face of driver in day and night driving in real time based on RGB-I

Publications (2)
Publication Number | Publication Date
---|---
CN106127123A | 2016-11-16
CN106127123B | 2019-12-31