CN109145684B - Head state monitoring method based on region best matching feature points - Google Patents

Head state monitoring method based on region best matching feature points

Info

Publication number
CN109145684B
Authority
CN
China
Prior art keywords
feature points
frame
pairs
template
head state
Prior art date
Legal status
Active
Application number
CN201710465201.2A
Other languages
Chinese (zh)
Other versions
CN109145684A (en)
Inventor
李小霞
张宇
李菲
Current Assignee
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Southwest University of Science and Technology filed Critical Southwest University of Science and Technology
Priority to CN201710465201.2A priority Critical patent/CN109145684B/en
Publication of CN109145684A publication Critical patent/CN109145684A/en
Application granted granted Critical
Publication of CN109145684B publication Critical patent/CN109145684B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a head state monitoring method based on region best matching feature points, which comprises the following steps: step 1, collecting an infrared video frame and carrying out face detection; step 2, planning a feature point detection area on the basis of the maximum face frame; step 3, selecting the face in the correct head state as a template, and extracting all SURF feature points of the template detection area; step 4, extracting the SURF feature points of the monitoring frame detection area, and selecting the three pairs of feature points that best match the template; and step 5, judging the head state of the monitoring frame according to the position information of the three pairs of best-matching feature points. The method has the following characteristics: 1) good real-time performance: only a single infrared frame is needed, and only three pairs of matched feature points are used in the calculation, which reduces the complexity of the algorithm; 2) high reliability: planning the feature point detection area reduces mismatching, extracting three pairs of matched feature points avoids the randomness caused by a single match, and the loss of feature points is effectively avoided.

Description

Head state monitoring method based on region best matching feature points
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a head state monitoring method based on region best matching feature points.
Background
With the development of computer vision technology, intelligent video monitoring is gradually being applied to driver assistance. Fatigue driving monitoring is an important part of a driver assistance system; it has attracted wide attention from research institutions and researchers at home and abroad, and partial research results have been obtained, such as the "fatigue driving warning system" of BYD, the "fatigue recognition system" of Volkswagen, the Mercedes-Benz "Attention Assist" system, and the Volvo "driver safety warning system". These fatigue monitoring methods generally fall into two categories: (1) machine-vision methods that monitor the face, expression and behaviour of the driver through a camera; (2) sensor-based methods that monitor parameters such as the heart rate of the driver and the driving route of the vehicle. However, in actual testing and use, problems such as excessive missed-detection and false-detection rates, poor adaptability to the environment, and insufficient real-time performance have been exposed, and the practical requirements are difficult to meet, so further research is of great significance.
In the monitoring of fatigue driving, the behaviour of the driver is an important reference index. Incorrect states such as prolonged head deviation or head lowering prevent the driver from fully judging road conditions and easily lead to traffic accidents. Monitoring the head state makes it possible to monitor whether the driver's attention is concentrated, and plays an important role in ensuring driving safety.
At present, several video-based methods exist for monitoring driver behaviour: the Bayesian classifier method, the neural network method, the PCA method combined with projection information of the moving object, the skin colour model method, and so on. The Bayesian classifier and neural network methods require large numbers of samples, the sample size and sample representativeness strongly affect recognition accuracy, and the requirement of real-time updating cannot be met; the PCA method, the skin colour model and similar methods also suffer from low recognition accuracy, poor real-time performance, complex algorithms, and poor robustness and adaptability.
Disclosure of Invention
In order to reliably monitor the head state of a driver in real time, the invention provides a head state monitoring method based on region best matching feature points. The method is based on an active infrared video. A detection region for the head is planned on the basis of face detection, the SURF feature points of the detection region are extracted in real time and matched with the SURF feature points in the detection region of the face (template) in the correct head state, and the three best-matching pairs of SURF feature points are selected. A criterion for judging whether the head state is correct is then established according to the positional relationship between these three pairs of best-matching SURF feature points. The judgment reliability is high, and the speed meets the real-time requirement.
The technical solution of the invention comprises the following steps:
step 1, collecting an infrared video frame, carrying out face detection by using an AdaBoost algorithm, and extracting a maximum face frame;
step 2, planning a feature point detection area on the basis of the maximum face frame, and removing most interference information to obtain a required key detection area;
step 3, selecting the face in the correct head state as a template, and extracting all SURF feature points of a template detection area;
step 4, extracting the SURF feature points of the monitoring frame detection area, and selecting the three pairs of feature points that best match the template;
and step 5, judging the head state of the monitoring frame according to the position information of the three pairs of best-matching feature points.
The infrared video in step 1 is less affected by illumination than visible-light video. Under the same conditions, fewer feature points are extracted from the infrared video than from visible-light video, which reduces the number of matches between feature points and helps improve the computation speed.
In step 2, the feature point detection area is planned according to the proportions of the face detection frame. Let the width and height of the face detection frame be W_f and H_f respectively. In order to ensure that the feature points extracted in the (near) correct state all come from the face, the centre point of the planned region is the same as the centre point of the maximum face detection frame, the width of the planned region is obtained by removing W_f/5 from each side of the maximum face detection frame, and its height is the same as that of the maximum face detection frame. This strategy reduces misjudgement and improves the reliability of the method.
Step 3 extracts SURF features for head state monitoring in the active infrared video, which differs from the existing uses of SURF features such as image stitching, tracking and retrieval.
In step 4, the three pairs of feature points that best match the template are selected by computing, pairwise, the inner products of the feature vectors of all SURF feature points of the monitoring frame and of the template detection area, and selecting the three pairs of SURF feature points with the largest inner products. The purpose is to improve the reliability of monitoring and avoid the randomness caused by a single match; the feature points with the largest inner products are more likely to be true matching points.
In step 5, the head state of the monitoring frame is judged by first defining the parallelism M between the three pairs of best-matching feature points and the normalized overall offset distance D and setting corresponding thresholds T_M and T_D; decision-level fusion is then carried out to obtain a decision fusion parameter R; finally the R values of 10 consecutive frames are accumulated to obtain R_s, which is compared with a threshold T_R to distinguish whether the head state is correct. In the (near) correct state, the matching points of the monitoring frame and the template detection area are mostly correct and the connecting lines between the best-matching points are approximately parallel, so the parallelism M can be used as one of the criteria; the normalized overall offset distance D makes the quantification more accurate, and combining the two through decision-level fusion makes the judgment of the head state more reliable. The method differs from the common approach of matching a few fixed feature points (such as the eyes and nose) to monitor the head state, and has higher reliability and flexibility, because the matched SURF feature points in the method change from frame to frame, which effectively avoids the loss of feature points.
Compared with the prior art, the invention has the following remarkable advantages: 1) good real-time performance: no large-scale sample statistics are needed, only a single infrared frame is required, and only three pairs of matched SURF feature points are used to calculate the parallelism and the normalized overall offset distance, which reduces the complexity of the algorithm; 2) high reliability: the feature point detection area is planned on the basis of the maximum face frame, which reduces mismatching; three pairs of best-matching feature points are extracted, which avoids the randomness caused by a single match and effectively avoids the loss of feature points; and decision-level fusion and judgment are carried out by combining the parallelism between the best-matching feature points and the normalized overall offset distance.
Drawings
FIG. 1 is a flow chart of head state discrimination according to the present invention;
FIG. 2 is a schematic diagram of the feature point detection area planning of the present invention;
FIG. 3 shows a timing experiment for extracting SIFT and SURF feature points on the original images according to the present invention;
FIG. 4 shows a timing experiment for extracting SURF feature points within the face frame and within the detection region according to the present invention;
FIG. 5 shows the three pairs of best-matching feature points in the detection regions of the template and of different monitoring frames according to the present invention;
FIG. 6 is a schematic view of the combined coordinates of the template and the detection area of the monitoring frame according to the present invention;
FIG. 7 is a schematic view of the separated coordinates of the template and the detection area of the monitoring frame according to the present invention;
FIG. 8 shows the results of the head state monitoring experiment according to the present invention.
Detailed Description
The invention will be further explained with reference to the drawings and the specific embodiments.
A head state monitoring method based on region best matching feature points is provided; the flow chart is shown in FIG. 1. The method comprises infrared image acquisition, face detection, feature point detection region planning, SURF feature extraction for the template and the monitoring frame, selection of the region best-matching feature points, and head state discrimination.
The method comprises the following specific steps:
step 1, collecting an infrared image, carrying out face detection by using an AdaBoost algorithm, and extracting a maximum face frame;
the active infrared camera is used for acquiring video frames, 6 infrared light supplementing diodes are arranged around the camera, and the central wavelength of a camera filter is 850 nm.
Step 2, planning the feature point detection area on the basis of the maximum face frame, eliminating most interference information and obtaining a required key detection area, wherein the specific planning method comprises the following steps:
As shown in fig. 2, the detection area of the feature points is planned according to the proportions of the face detection frame. Let the width and height of the face detection frame be W_f and H_f respectively. In order to ensure that the feature points extracted in the (near) correct state all come from the face, the centre point of the planned region is the same as the centre point of the maximum face detection frame, the width of the planned region is obtained by removing W_f/5 from each side of the maximum face detection frame, and its height is the same as that of the maximum face detection frame; that is, the width and height of the final key detection area are 3W_f/5 and H_f respectively. This strategy reduces misjudgement and improves the reliability of the method.
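A small Python sketch of the region planning described above, following the 3W_f/5 by H_f rule; the function name is illustrative and not part of the patent.

    def detection_region(face_box):
        """Plan the key detection region from the maximum face frame (x, y, w, h).

        The region keeps the same centre and height as the face frame and
        removes w/5 from each side, so its width is 3w/5.
        """
        x, y, w, h = face_box
        margin = w // 5
        return (x + margin, y, w - 2 * margin, h)

    # Example: a 300x400 face frame at (100, 50) gives a 180x400 region at (160, 50).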
step 3, selecting the face in the correct head state as a template, and extracting all SURF feature points of a template detection area;
feature point detection is carried out on the template detection area by adopting an SURF algorithm, and three steps of feature point detection based on a Hessian matrix, scale space representation and feature point positioning are carried out; and finally, direction angles of the feature points are distributed, and the feature points are described by using a feature point descriptor based on a Haar wavelet. Fig. 3 is a time-consuming experiment of extracting SIFT and SURF feature points over 10 consecutive original images (each frame of image size 640 x 480) in a video segment. The average time for extracting the SIFT features is 559ms, the average time for extracting the SURF features is only 96ms, and the speed for extracting the SURF features is 5.8 times as high as that of the SIFT features.
Fig. 4 shows a timing experiment for extracting SURF feature points within the face frame and within the detection area. The experimental results show that after the detection area is planned according to the face frame, the extraction of facial feature points is preserved as far as possible while the speed is improved; over the 10 frames, the SURF feature extraction time within the detection area averages 20 ms per image.
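For illustration, a minimal sketch of the SURF extraction in the planned detection area, assuming OpenCV's contrib module xfeatures2d is available (SURF is not included in all OpenCV builds); the Hessian threshold value is an assumption.

    import cv2

    # SURF lives in the opencv-contrib xfeatures2d module; it may be absent from
    # builds compiled without non-free algorithms. The Hessian threshold is an assumed value.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    def surf_features(gray_image, region):
        """Extract SURF keypoints and 64-D descriptors inside the planned detection region."""
        x, y, w, h = region
        roi = gray_image[y:y + h, x:x + w]
        keypoints, descriptors = surf.detectAndCompute(roi, None)
        return keypoints, descriptors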
Step 4, extracting the SURF feature points of the monitoring frame detection area, and selecting the three pairs of feature points that best match the template;
As shown in fig. 5, the left and right images of each group show the feature point distributions in the detection areas of the template and of the monitoring frame respectively; the small circles marked in different colours are the extracted feature points, and 49 SURF feature points are extracted in the template detection area. The best-matching feature points are selected by computing, pairwise, the inner products of the feature vectors of all SURF feature points of the template and of the monitoring frame detection area, and selecting the three pairs of SURF feature points with the largest inner products.
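A minimal sketch of this selection step, assuming the SURF descriptors are stored as NumPy arrays with one row per feature point; the one-pair-per-point constraint is an added assumption, since the text only states that the three pairs with the largest inner products are chosen.

    import numpy as np

    def best_three_pairs(template_desc, frame_desc):
        """Select the three template/monitoring-frame descriptor pairs with the largest inner products.

        template_desc: (N, 64) SURF descriptors of the template detection area.
        frame_desc:    (M, 64) SURF descriptors of the monitoring frame detection area.
        Returns a list of (template_index, frame_index) pairs.
        """
        scores = template_desc @ frame_desc.T        # all pairwise inner products, shape (N, M)
        order = np.argsort(scores, axis=None)[::-1]  # flattened indices, decreasing inner product
        pairs, used_t, used_f = [], set(), set()
        for idx in order:
            i, j = np.unravel_index(idx, scores.shape)
            # keep each feature point in at most one pair (an assumption; the text only
            # says the three pairs with the largest inner products are chosen)
            if i in used_t or j in used_f:
                continue
            pairs.append((int(i), int(j)))
            used_t.add(i)
            used_f.add(j)
            if len(pairs) == 3:
                break
        return pairs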
Fig. 5 also shows the positional relationship of the three best-matching pairs of feature points between the template and the monitoring frame. Fig. 5 a) and fig. 5 b) are the feature point matching results of the 25th and 30th frames respectively, which reflect a relatively correct head state: the connecting lines between the three pairs of best-matching points are roughly parallel, and the positions of each pair of matching points in their respective pictures are essentially the same; that is, in the correct state the three pairs of best-matching feature points match correctly. Fig. 5 c) is the feature point matching result of the 91st frame: although the connecting lines of the three pairs of points are roughly parallel, the positions of the matching points in each pair in their respective coordinates are obviously different, which indicates that when there is a large difference in position coordinates between matching points, the current head state is likely to be incorrect. Fig. 5 d) reflects an incorrect head pose, which manifests as inattention during driving; here the connecting lines between the three pairs of best-matching feature points are clearly not parallel, and the positions of the matched feature points in their respective coordinates differ significantly. In summary, the parallelism and the offset distance of the three pairs of best-matching feature points can be used as the basis for judging the head state.
Step 5, judging the head state of the monitoring frame according to the position information of the three pairs of best-matching feature points;
First, the parallelism M between the three pairs of best-matching feature points and the normalized overall offset distance D are defined, and corresponding thresholds T_M and T_D are set; decision-level fusion is then carried out to obtain the decision fusion parameter R; finally the R values of 10 consecutive frames are accumulated to obtain R_s, which is compared with the threshold T_R to distinguish whether the head state is correct.
In order to define the parallelism and the offset distance, two image coordinate systems are established to represent the positions of the three pairs of best-matching feature points: the coordinates obtained by combining the template and the detection area of the monitoring frame (fig. 6, referred to as the synthesized coordinates, with the origin at the upper-left corner), and the coordinates obtained by separating the template and the detection area of the monitoring frame (fig. 7, referred to as the separated coordinates, each with the origin at its upper-left corner).
In the synthesized coordinates, as shown in fig. 6, the three best-matching points in the template are P_1(x_1, y_1), P_2(x_2, y_2) and P_3(x_3, y_3), and the corresponding three best-matching points in the monitoring frame are P_1'(x_1', y_1'), P_2'(x_2', y_2') and P_3'(x_3', y_3'), where all coordinate values are taken in the synthesized coordinates. The three pairs of feature points form three matching vectors I_1, I_2 and I_3:

I_i = (x_i' - x_i, y_i' - y_i), i = 1, 2, 3    (1)
Then the cosine of the angle between each pair of these matching vectors can be expressed as:

cos θ_12 = (I_1 · I_2) / (|I_1| |I_2|), cos θ_13 = (I_1 · I_3) / (|I_1| |I_3|), cos θ_32 = (I_3 · I_2) / (|I_3| |I_2|)    (2)
The cosine of the angle between two vectors reflects their difference in direction: when the cosine is 1, the angle is 0 degrees (the vector segments are parallel or coincident); between 0 and 180 degrees the cosine decreases as the angle increases; when the cosine is 0 the angle is 90 degrees (the vector segments are perpendicular); and when the cosine is -1 the angle is 180 degrees (the vector segments are parallel or coincident but point in opposite directions). The degree of parallelism between two vectors can therefore be expressed by the absolute value of the cosine of the angle:

M_12 = |cos θ_12|, M_13 = |cos θ_13|, M_32 = |cos θ_32|    (3)
where M_12, M_13 and M_32 represent the degree of parallelism between vectors I_1 and I_2, I_1 and I_3, and I_3 and I_2 respectively; each lies in the range [0, 1], and a larger value indicates closer parallelism. On this basis a variable M can be defined to reflect the overall parallelism of the three vectors:

M = M_12 + M_13 + M_32    (4)

where M lies in the range [0, 3]. A threshold T_M is set in this interval such that when M < T_M the current head state is considered not to match the template, and otherwise the head state matches the template; in this example T_M = 2.95.
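A small sketch of the parallelism computation, following the matching vectors and equations (1)-(4) as reconstructed above; the threshold T_M = 2.95 is the value stated in the text.

    import numpy as np

    def parallelism(template_pts, frame_pts):
        """Overall parallelism M of the three matching vectors (equations (1)-(4)).

        template_pts, frame_pts: arrays of shape (3, 2) holding the three
        best-matching points in the synthesized coordinates.
        """
        vectors = np.asarray(frame_pts, float) - np.asarray(template_pts, float)  # I_1, I_2, I_3
        m = 0.0
        for a, b in [(0, 1), (0, 2), (2, 1)]:  # pairs (I_1, I_2), (I_1, I_3), (I_3, I_2)
            cos_ab = vectors[a] @ vectors[b] / (
                np.linalg.norm(vectors[a]) * np.linalg.norm(vectors[b]) + 1e-12)
            m += abs(cos_ab)  # parallelism of one vector pair, in [0, 1]
        return m              # M lies in [0, 3]

    T_M = 2.95  # head state considered mismatched with the template when M < T_M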
In the separated coordinates, as shown in fig. 7, the three best-matching points in the template region are P_1(x_1, y_1), P_2(x_2, y_2) and P_3(x_3, y_3), and the corresponding matching points in the detection region of the monitoring frame are P_1'(x_1', y_1'), P_2'(x_2', y_2') and P_3'(x_3', y_3'). The Euclidean distance between each pair of feature points is then:

d_i = sqrt((x_i - x_i')^2 + (y_i - y_i')^2), i = 1, 2, 3    (5)
where d_1, d_2 and d_3 are the Euclidean distances between the matching point pairs P_1 and P_1', P_2 and P_2', and P_3 and P_3' respectively; the Euclidean distance between a pair of matching points reflects the offset of that pair. Because the template size varies in practical applications, the distances need to be normalized, and the normalized overall offset distance D of the three pairs of best-matching feature points is defined as:
D = (d_1 + d_2 + d_3) / (3 · sqrt(W_fm^2 + H_fm^2))    (6)
where W_fm and H_fm are respectively the width and height of the face image in the template, and d_i is the Euclidean distance of each of the three pairs of best-matching feature points. The normalized overall offset distance reflects the overall offset of the matching points. A threshold T_D is set: when D > T_D, the offset is considered too large and the current head state is regarded as abnormal; in this example T_D = 1/3.
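A small sketch of the normalized overall offset distance, using the diagonal normalization assumed in equation (6) above; W_fm and H_fm are the width and height of the face image in the template.

    import numpy as np

    def normalized_offset(template_pts, frame_pts, w_fm, h_fm):
        """Normalized overall offset distance D of the three best-matching pairs.

        template_pts, frame_pts: (3, 2) arrays in the separated coordinates.
        w_fm, h_fm: width and height of the face image in the template.
        The normalization by the template diagonal follows the reconstruction in (6).
        """
        d = np.linalg.norm(np.asarray(template_pts, float) - np.asarray(frame_pts, float), axis=1)
        return float(d.sum() / (3.0 * np.hypot(w_fm, h_fm)))

    T_D = 1.0 / 3.0  # head state regarded as abnormal when D > T_D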
Decision-level fusion combines the parallelism and the normalized overall offset distance, making the judgment more reliable. Because the parallelism index M is affected less in the abnormal state than the offset distance index D, the decision-level fusion strategy integrates the two indexes for a comprehensive judgment and defines a fusion decision parameter R (default value 0):
R = [M < T_M] + [D > T_D]    (7)

where [·] equals 1 when the condition in brackets holds and 0 otherwise.
The R values of 10 consecutive monitoring frames are accumulated to obtain R_s, whose maximum value is 20. A threshold T_R is set, and the criterion is that the head state is abnormal when R_s > T_R; in this example T_R = 5. The experimental results are shown in fig. 8, from which it can be seen that the discrimination reliability of the method is high.
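Finally, a sketch of the decision-level fusion and the 10-frame accumulation, using the indicator-sum form of R assumed in equation (7) (consistent with R_s having a maximum of 20 over 10 frames); the sliding-window implementation is an illustrative choice.

    from collections import deque

    T_M, T_D, T_R = 2.95, 1.0 / 3.0, 5
    window = deque(maxlen=10)  # R values of the last 10 monitoring frames

    def fusion_parameter(m, d):
        """Fusion decision parameter R of one monitoring frame (reconstruction of equation (7))."""
        return int(m < T_M) + int(d > T_D)

    def head_state_abnormal(m, d):
        """Accumulate R over 10 consecutive frames and compare R_s with T_R."""
        window.append(fusion_parameter(m, d))
        r_s = sum(window)  # R_s is at most 20
        return r_s > T_R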

Claims (2)

1. A head state monitoring method based on region best matching feature points comprises the following steps:
step 1, collecting an infrared video frame, carrying out face detection by using an AdaBoost algorithm, and extracting a maximum face frame;
step 2, planning a feature point detection area on the basis of the maximum face frame to obtain a required key detection area: the planning is carried out according to the proportions of the face detection frame; the width and height of the face detection frame are W_f and H_f respectively, the centre point of the planned region is the same as the centre point of the maximum face detection frame, the width of the planned region is obtained by removing W_f/5 from each side of the maximum face detection frame, and its height is the same as that of the maximum face detection frame;
step 3, selecting the face in the correct head state as a template, and extracting all SURF feature points of a template detection area;
step 4, extracting the SURF feature points of the monitoring frame detection area, and selecting the three pairs of feature points that best match the template: computing, pairwise, the inner products of the feature vectors of all SURF feature points of the monitoring frame and of the template detection area, and selecting the three pairs of SURF feature points with the largest inner products;
and step 5, judging the head state of the monitoring frame according to the position information of the three pairs of best-matching feature points: first defining the parallelism M between the three pairs of best-matching feature points and the normalized overall offset distance D and setting corresponding thresholds T_M and T_D, then carrying out decision-level fusion to obtain the decision fusion parameter R, and finally accumulating the R values of 10 consecutive monitoring frames to obtain R_s and comparing it with the threshold T_R, the criterion being that the head state is abnormal when R_s > T_R, so that whether the head state is correct can be distinguished.
2. The method of claim 1, wherein the head state of the monitoring frame is judged in step 5 as follows:
the parallelism M between the three pairs of best-matching feature points and the normalized overall offset distance D are defined, and the corresponding thresholds T_M and T_D are set, wherein the parallelism M is calculated as follows:
the template and the detection area of the monitoring frame are combined into one coordinate system with the origin at the upper-left corner; the three best-matching feature points in the template are P_1(x_1, y_1), P_2(x_2, y_2) and P_3(x_3, y_3), and the corresponding points in the monitoring frame are P_1'(x_1', y_1'), P_2'(x_2', y_2') and P_3'(x_3', y_3'); the three pairs of feature points form three matching vectors I_1, I_2 and I_3:

I_i = (x_i' - x_i, y_i' - y_i), i = 1, 2, 3    (1)
the absolute values of the cosines of the angles between these three matching vectors are:

M_12 = |I_1 · I_2| / (|I_1| |I_2|), M_13 = |I_1 · I_3| / (|I_1| |I_3|), M_32 = |I_3 · I_2| / (|I_3| |I_2|)    (2)

where M_12, M_13 and M_32 represent the degree of parallelism between vectors I_1 and I_2, I_1 and I_3, and I_3 and I_2 respectively, each lying in the range [0, 1], with larger values indicating closer parallelism; a variable M is defined to reflect the overall parallelism of the three vectors:

M = M_12 + M_13 + M_32    (3)

M lies in the range [0, 3]; a threshold T_M = 2.95 is set in this interval such that when M < T_M the current head state is considered not to match the template, and otherwise the head state matches the template;
and wherein the normalized overall offset distance D is calculated as follows: a coordinate system is established separately for the template and for the detection area of the monitoring frame, each with the origin at its upper-left corner; the Euclidean distances between the three pairs of best-matching feature points P_i(x_i, y_i) and P_i'(x_i', y_i') are:

d_i = sqrt((x_i - x_i')^2 + (y_i - y_i')^2), i = 1, 2, 3    (4)
and the normalized overall offset distance D is defined as:

D = (d_1 + d_2 + d_3) / (3 · sqrt(W_fm^2 + H_fm^2))    (5)

where W_fm and H_fm are respectively the width and height of the face image in the template and d_i is the Euclidean distance of each matched pair of feature points; a threshold T_D = 1/3 is set, and when D > T_D the offset is considered too large and the current head state is regarded as abnormal.
CN201710465201.2A 2017-06-19 2017-06-19 Head state monitoring method based on region best matching feature points Active CN109145684B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710465201.2A CN109145684B (en) 2017-06-19 2017-06-19 Head state monitoring method based on region best matching feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710465201.2A CN109145684B (en) 2017-06-19 2017-06-19 Head state monitoring method based on region best matching feature points

Publications (2)

Publication Number Publication Date
CN109145684A CN109145684A (en) 2019-01-04
CN109145684B true CN109145684B (en) 2022-02-18

Family

ID=64804258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710465201.2A Active CN109145684B (en) 2017-06-19 2017-06-19 Head state monitoring method based on region best matching feature points

Country Status (1)

Country Link
CN (1) CN109145684B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110110672A (en) * 2019-05-10 2019-08-09 广东工业大学 A kind of facial expression recognizing method, device and equipment
CN113469201A (en) * 2020-03-31 2021-10-01 阿里巴巴集团控股有限公司 Image acquisition equipment offset detection method, image matching method, system and equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982316A (en) * 2012-11-05 2013-03-20 安维思电子科技(广州)有限公司 Driver abnormal driving behavior recognition device and method thereof
CN104573657A (en) * 2015-01-09 2015-04-29 安徽清新互联信息科技有限公司 Blind driving detection method based on head lowing characteristics

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070080967A1 (en) * 2005-10-11 2007-04-12 Animetrics Inc. Generation of normalized 2D imagery and ID systems via 2D to 3D lifting of multifeatured objects
US8885877B2 (en) * 2011-05-20 2014-11-11 Eyefluence, Inc. Systems and methods for identifying gaze tracking scene reference locations
CN104573658B (en) * 2015-01-09 2018-09-18 安徽清新互联信息科技有限公司 A kind of blind based on support vector machines drives detection method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982316A (en) * 2012-11-05 2013-03-20 安维思电子科技(广州)有限公司 Driver abnormal driving behavior recognition device and method thereof
CN104573657A (en) * 2015-01-09 2015-04-29 安徽清新互联信息科技有限公司 Blind driving detection method based on head lowing characteristics

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions;Vincent Drouard等;《IEEE Transactions on Image Processing 2017》;20170116;第26卷(第3期);1428-1440 *
Multi-pose face detection based on facial features and the AdaBoost algorithm; Ruan Jinxin et al.; Journal of Computer Applications; 20100401; vol. 30, no. 4; 967-970 *

Also Published As

Publication number Publication date
CN109145684A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN108791299B (en) Driving fatigue detection and early warning system and method based on vision
CN109460699B (en) Driver safety belt wearing identification method based on deep learning
CN108960065B (en) Driving behavior detection method based on vision
US20230154207A1 (en) Driver fatigue detection method and system based on combining a pseudo-3d convolutional neural network and an attention mechanism
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN104616438B (en) A kind of motion detection method of yawning for fatigue driving detection
Omidyeganeh et al. Intelligent driver drowsiness detection through fusion of yawning and eye closure
CN102436715B (en) Detection method for fatigue driving
CN110751051B (en) Abnormal driving behavior detection method based on machine vision
US7692549B2 (en) Method and system for detecting operator alertness
KR101653278B1 (en) Face tracking system using colar-based face detection method
CN104123549B (en) Eye positioning method for real-time monitoring of fatigue driving
CN108596087B (en) Driving fatigue degree detection regression model based on double-network result
CN104013414A (en) Driver fatigue detecting system based on smart mobile phone
CN105488453A (en) Detection identification method of no-seat-belt-fastening behavior of driver based on image processing
CN102306293A (en) Method for judging driver exam in actual road based on facial image identification technology
CN111553214B (en) Method and system for detecting smoking behavior of driver
CN110991348A (en) Face micro-expression detection method based on optical flow gradient amplitude characteristics
CN107194381A (en) Driver status monitoring system based on Kinect
CN109145684B (en) Head state monitoring method based on region best matching feature points
CN105989614A (en) Dangerous object detection method based on fusion of multi-source visual information
CN113179389A (en) System and method for identifying crane jib of power transmission line dangerous vehicle
CN117115752A (en) Expressway video monitoring method and system
Lee et al. Low computational vehicle lane changing prediction using drone traffic dataset
CN112926364A (en) Head posture recognition method and system, automobile data recorder and intelligent cabin

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant