CN111553217A - Driver call monitoring method and system - Google Patents

Driver call monitoring method and system

Info

Publication number
CN111553217A
Authority
CN
China
Prior art keywords
driver
face
hand
detection
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010314035.8A
Other languages
Chinese (zh)
Inventor
闫保中
何伟
韩旭东
张镇
贾瑞涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN202010314035.8A priority Critical patent/CN111553217A/en
Publication of CN111553217A publication Critical patent/CN111553217A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a driver call monitoring method and system, which specifically comprise the following steps: collecting real-time driving video of the driver and locating the face; selecting feature points according to the face orientation and initializing the model to locate the facial feature points, completing the mouth movement judgment; completing the positioning of the driver's hand using a hand positioning algorithm based on a face mask; obtaining the driver detection area of interest, extracting HOG features in the area, reducing the dimension of the HOG features with PCA, extracting LBP features, and combining them into a new PCA-HOG + LBP feature; classifying the PCA-HOG + LBP features with an SVM classifier; and judging whether the driver is using the phone from the detected mouth movement, hand position and driving posture, giving the driver an early warning through the system. The invention can stably judge the call-making behavior of the driver in different driving environments and give timely early warning, thereby effectively reducing the occurrence of traffic accidents.

Description

Driver call monitoring method and system
Technical Field
The invention belongs to the technical field of safe driving, relates to a driver call detection method and system, and particularly relates to a driver call monitoring method and system based on multiple characteristic parameters.
Background
Studies have shown that 80% to 90% of road traffic accidents are caused by human factors, with traffic accidents caused by drivers' own mistakes accounting for 70%-80%. The various behavior states of the driver are the most important factors contributing to traffic accidents. In daily life, drivers need to answer calls while driving for various reasons, and answering a call or playing with a mobile phone while driving poses a great hidden danger to traffic safety. Therefore, if dangerous driving behaviors can be accurately identified and early warnings given, such situations can be largely avoided and the occurrence of traffic accidents reduced.
For monitoring the call-making behavior of a driver, current research mainly uses visual-feature-based methods, directly applying machine learning to single frames to determine whether the driver is using a mobile phone. However, this approach introduces relatively large errors in the detection process. During driving, a driver often performs physiological actions such as smoothing hair or touching an ear, which resemble the action of making a call; when machine learning alone is used to classify these actions in practical applications, misjudgments often occur and the detection rate of the algorithm drops. The detection rate and real-time performance of driver call monitoring technology in practical applications are therefore poor, which limits its practical use, and the technology needs further improvement.
Disclosure of Invention
In view of the prior art, the invention aims to provide a driver call monitoring method and system that can stably and accurately judge the call-making behavior of the driver in different driving environments, give timely early warning of the driver's violation behavior, and effectively reduce the occurrence of traffic accidents.
In order to solve the technical problem, the invention provides a driver call monitoring method, which comprises the following steps:
s1: acquiring a real-time video image of driving of a driver by using a camera;
s2: using the improved human face rapid detection algorithm to position the human face position;
s3: using an improved face characteristic point extraction algorithm to obtain face characteristic points of a driver, and calculating the mouth shape of the driver;
s4: positioning the hand position of the driver by adopting a hand positioning algorithm based on a face mask;
s5: acquiring a detection area of the driving posture of the driver according to the face position, extracting PCA-HOG + LBP features of the detection area, and classifying the driving posture of the driver;
s6: using the mouth shape, hand position and driving posture detected in real time in steps S3, S4 and S5 as judgment factors for whether the driver is making a call; when the judgment factor is larger than a given threshold value, the driver is judged to be making a call and the system sends an alarm.
The invention also includes:
the improved fast face detection algorithm for locating the face position in the S2 specifically comprises:
s2.1: firstly, recording faces detected by Adaboost for three times, averaging the sizes of the faces detected for three times to obtain the size of an average face, and storing the average face obtained as a basis for filtering a skin color detection result;
s2.2: establishing a skin color model, detecting the obtained real-time image by using the established skin color model, filtering out non-skin color areas and only reserving skin color areas;
s2.3: filtering the result of skin color detection by using the average face size, filtering the area which does not meet the face size, only reserving the area of the skin color area which meets the difference with the average face size within a given range, reducing the detection area and accelerating the detection speed;
s2.4: performing face detection in the filtered region using the Adaboost algorithm and judging whether a face is detected; if no face is detected, jumping back to S1; if a face is detected, storing the position coordinates of the region and executing S3, with the last stored position detected preferentially in each subsequent detection.
In S3, obtaining the facial feature points of the driver using the improved face feature point extraction algorithm and calculating the mouth shape of the driver specifically comprises:
s3.1: extracting HOG features from the detected driver face picture and performing preliminary head pose estimation with a trained SVM classifier, the preliminary poses comprising five postures: head forward, head left, head right, head up and head down;
s3.2: matching the corresponding initialized AAM model according to the head posture obtained in the S3.1;
s3.3: performing AAM fitting calculation by utilizing reverse combination;
s3.4: repeating S3.3, and continuously performing iterative fitting to fit the model to a position coincident with the human face to obtain feature points;
s3.5: selecting four feature points including two feature points of the left and right mouth corners and two feature points of the middlemost of the upper and lower lips to form a diamond, calculating the ratio of the diagonals of the diamond, and comparing the ratio with a given threshold value to determine whether the mouth of the driver is open or closed.
In S4, a hand positioning algorithm based on a face mask is adopted, and positioning the hand position of the driver specifically comprises the following steps:
performing an 'and' operation between the skin-color-extracted picture and a skin color mask picture: since the pixel value of the skin color mask picture is 0 in the face region, the pixel value of the corresponding region is also 0 after the 'and' operation, while the pixel value of the mask outside the face region is 1, so the original pixel values there remain unchanged; performing graying and binarization on the picture after the 'and' operation, with the remaining region being the hand region; and calculating the minimum enclosing rectangle of the remaining region to obtain the hand position.
In S5, acquiring a detection area of the driving posture of the driver according to the face position, extracting PCA-HOG + LBP features of the detection area, and classifying the driving posture of the driver specifically comprises:
s5.1, expanding the located rectangular frame of the driver's face outward by a certain ratio to form a new rectangular frame that can contain the position of the driver's hand when making a call; forming the detection area specifically comprises the following steps:
using the position coordinates of the driver's face determined in S2, let point P1 be the coordinate of the upper left corner of the face rectangle, P2 the coordinate of the upper right corner, and P3 the coordinate of the lower right corner; P3 is expanded toward the lower right corner to obtain P4, and the rectangle formed by P1 and P4 is the detection area for the driver's left-hand call; the coordinate point of the lower left corner of the face rectangle is expanded toward the lower left corner to obtain P5, and the rectangle formed by P2 and P5 is the detection area for the driver's right-hand call; P1, P2, P3, P4 and P5 are (x1, y1), (x2, y2), (x3, y3), (x4, y4) and (x5, y5) respectively, where P1, P2 and P3 were already acquired during face detection in S2, and the points P4 and P5 are calculated by the following formula:
(Formula: calculation of the coordinates of P4 and P5 from P1, P2 and P3 using the expansion coefficients a1 and a2.)
wherein a1 is the set transverse expansion coefficient and a2 is the set longitudinal expansion coefficient;
s5.2: extracting PCA-HOG features and LBP features of the detection area, and fusing the two in series into a new PCA-HOG + LBP feature to replace a single feature;
s5.3: and classifying the fused PCA-HOG + LBP characteristics by using an SVM classifier, and judging the driving state of the driver, including a left-hand mobile phone state, a right-hand mobile phone state and a hand-free mobile phone state.
The judgment factors in S6 are specifically:
J = (1/n) Σ_{i=1}^{n} [x1·α(i) + x2·β(i) + x3·γ(i)]
wherein α(i), i = 1, 2, 3, …, n, represents the driving posture classification result of the ith frame image; the result takes two values, 0 and 1, where 0 indicates that the driver is not using a mobile phone and 1 indicates that the driver is using a mobile phone; x1 is the weight of the driving posture in the final result; β(i) represents the hand position detection result of the ith frame image, a result of 1 indicating that the hand is in the call detection area; x2 represents the weight of the hand position detection in the final result; γ(i) represents the mouth shape detection result of the ith frame image, taking the values 1 and 0; x3 represents the weight of the mouth movement in the final result; and n represents the number of frames to be cumulatively detected.
The invention also includes:
the detection system has a man-machine interaction interface, detects the call-making behavior of the stored video or detects the call-making behavior of the driver in real time, and can give out early warning to the driver when the behavior that the driver uses a mobile phone in the driving process is detected.
The invention has the beneficial effects that:
1. the face detection algorithm is improved so that the face position can be determined rapidly and accurately;
2. the improved AAM facial feature point extraction algorithm roughly classifies the face orientation and matches faces of different orientations with different initialization models, which reduces the influence of face orientation on feature point positioning, reduces the number of fittings in the extraction process, accelerates extraction, and gives the extracted feature points higher precision;
3. when judging the driving posture of the driver, the PCA-HOG feature and the LBP feature are fused instead of using a single feature, making the recognition more robust;
4. a hand position positioning method is provided that effectively locates the driver's hands, providing a basis for the final behavior judgment;
5. multiple factors, including the driver's mouth movement, hand position and driving posture, are combined to judge the call-making behavior, with different weights given to different factors; this reduces interference from physiological actions such as scratching the ear or smoothing hair during driving, giving a better monitoring effect.
Drawings
Fig. 1 is a flow chart of a driver call monitoring method according to an embodiment of the invention.
FIG. 2 is a flow chart of a driver face detection algorithm according to an embodiment of the invention.
FIG. 3 is a diagram illustrating various facial feature point initialization models according to an embodiment of the invention.
FIG. 4 is a schematic diagram of a face feature point location algorithm according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of the distribution of human face feature points according to an embodiment of the invention.
FIG. 6 is a schematic diagram of a hand location algorithm according to an embodiment of the present invention.
FIG. 7 is a schematic diagram of hand position analysis according to an embodiment of the invention.
Fig. 8(a) is a schematic diagram of the distribution of the detection areas for left-hand call of the driver according to the embodiment of the invention.
Fig. 8(b) is a schematic diagram of the distribution of the detection areas for right-hand call made by the driver according to the embodiment of the present invention.
FIG. 9 is a schematic view of a driver call monitoring system interface in accordance with an embodiment of the present invention.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings.
The invention relates to a driver call monitoring method based on multiple characteristic parameters, which comprises the following steps: a camera collects real-time driving video of the driver, and an improved face positioning algorithm quickly locates the face. The face orientation is then judged, different feature point initialization models are selected for different face orientations to locate the facial feature points, and the facial feature points are used to judge the driver's mouth movement. A hand positioning algorithm based on a face mask is proposed to complete the positioning of the driver's hand. The driver detection area of interest is obtained from the located face; HOG features are extracted from this detection area, their dimension is reduced using PCA (principal component analysis), LBP (local binary pattern) features of the area are extracted, and the dimension-reduced HOG features and the LBP features are combined into a new PCA-HOG + LBP feature. An SVM classifier classifies the PCA-HOG + LBP features of the driver's driving posture. The detected mouth movement, hand position and driving posture of the driver are combined to judge the call-making behavior, a final result of whether the driver is making a call while driving is given, and the developed system gives the driver an early warning. The technical scheme of the invention can stably judge the call-making behavior of the driver in different driving environments and give timely early warning of the driver's violation behavior, thereby effectively reducing the occurrence of traffic accidents.
The invention comprises the following steps:
s1, acquiring a real-time video image of driver driving by using a camera;
s2, positioning the face position by using the improved face rapid detection algorithm;
s3, obtaining 68 feature point coordinates by using an improved human face feature point positioning algorithm, and calculating the mouth shape of the driver;
s4, positioning the hand position of the driver by adopting a hand positioning algorithm based on the face mask;
s5, acquiring a detection area of the driving posture of the driver according to the face position, extracting PCA-HOG + LBP features of the detection area, and classifying the driving posture of the driver;
and S6, judging the telephone calling behavior of the driver by integrating multiple factors by using the mouth movement, the hand position and the driving posture which are detected in real time in the steps S3, S4 and S5 as judgment factors for judging whether the driver calls or not.
Step S1 acquires real-time video images with the camera. A common monitoring camera is arranged on the front windshield of the automobile to obtain real-time images of the driver; images are captured from the video stream and preprocessed. The preprocessing techniques include adaptive median filtering and Laplacian-based image enhancement.
Step S2 improves the conventional face localization algorithm to speed up face detection and improve the real-time performance of the detection algorithm. First, three successfully detected faces are recorded and their sizes averaged to obtain the average face size; skin color is combined with the Adaboost algorithm, and the average face is used to filter out regions whose size differs too much from the average face, reducing the face detection range. At the same time, the position coordinates of the last successfully detected face are recorded in the system, and the last recorded position is detected preferentially before each face detection.
Step S3 improves the AAM-based face feature point algorithm, and proposes a mouth shape calculation method. The method specifically comprises the following steps:
HOG features of the face image are extracted first, then an SVM face orientation classifier is applied, and different facial feature point initialization models are adopted according to the face orientation result. The facial feature points are then extracted using the optimized AAM algorithm.
The mouth shape is then calculated using the extracted 68 facial feature points. The 68 feature points are numbered; four of the mouth feature points are selected to form a diamond, the ratio of the diagonals of the diamond is calculated, and the ratio is compared with a specific threshold value to determine whether the driver's mouth is open or closed.
Step S4 provides a hand positioning algorithm based on the skin color mask, which locates the driver's hand as a basis for judging the call-making behavior. The driver's skin color regions are screened with the skin color model, the driver's face is located with the face positioning algorithm of S2, the face area is removed using the located face, and the remaining skin color area is the hand area, completing the hand positioning.
Step S5 proposes to perform series fusion of multiple features and classify the driving posture of the driver, which specifically includes:
first, the located rectangular frame of the driver's face is expanded outward by a certain ratio to form a new rectangular frame that can contain the position of the driver's hand when making a call, forming the detection area of interest;
then extracting PCA-HOG features and LBP features of the region of interest, and fusing the two features in series into a new PCA-HOG + LBP feature instead of a single feature;
and finally, classifying the fused PCA-HOG + LBP characteristics by using an SVM classifier, and judging the driving state of the driver.
Step S6 sets a threshold value using the mouth movement, hand position, and driving posture of the driver at a certain ratio as a determination factor, and when the final result is greater than the threshold value, it is determined that the driver is using the mobile phone, and the system issues an alarm.
The system is developed by combining OpenCV and Qt, and a human-machine interaction interface is drawn; the system can detect the call-making behavior in stored video and can detect the driver's call-making behavior in real time. The driver call monitoring is implemented according to steps S1 to S6, and when the system detects that the driver is using a mobile phone while driving, it gives the driver an early warning.
Fig. 1 is a flowchart of a driver call monitoring method according to an embodiment of the present invention, which includes the following steps:
and S1, acquiring the image of the driver through the camera, and preprocessing the image.
Specifically, the camera is a common monitoring camera arranged in front of the automobile's front windshield. It is used to acquire images of the driver; images are captured from the video stream and then preprocessed. The preprocessing techniques include adaptive median filtering and Laplacian-based image enhancement. Adaptive median filtering replaces the center pixel with the median of all pixel gray values in the template, and the template size can be adjusted to suppress impulse noise while disturbing smooth regions as little as possible. Laplacian image enhancement traverses the whole image and applies the Laplacian operator to each pixel, computing the enhanced value from the gray values of the pixel's neighborhood.
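As a concrete illustration, a minimal OpenCV sketch of this preprocessing stage follows; the 3 x 3 kernel sizes are assumptions, and a fixed-size median filter stands in for the adaptive variant described above:

```python
import cv2
import numpy as np

def preprocess(frame):
    # Median filtering suppresses impulse noise; a true adaptive median
    # filter would additionally grow the window where noise persists.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(gray, 3)
    # Laplacian of the neighborhood; subtracting it sharpens edges.
    lap = cv2.Laplacian(denoised, cv2.CV_16S, ksize=3)
    enhanced = denoised.astype(np.int16) - lap
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```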
And S2, marking the position of the face by using an improved face detection algorithm.
Specifically, as shown in fig. 2, for the driver image to be detected, the average face size of the driver is first calculated and used to filter the detection area. The sizes of three face regions successfully detected by the Adaboost algorithm are recorded and averaged; the average is taken as the average face size and recorded as S. Skin color detection of the driver is completed using the skin color model, and the area of each detected skin color region is recorded as Si. If the size difference between a skin color region and the average face is too large, i.e. the region does not satisfy 0.7S < Si < 1.3S, the region is considered not to contain a face and is filtered out of the target area.
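A sketch of this pre-filtering step follows; the YCrCb threshold values are illustrative assumptions, since the patent does not specify its skin color model parameters:

```python
import cv2
import numpy as np

def face_candidates(frame, avg_face_area):
    # Skin segmentation in YCrCb; the Cr/Cb bounds below are common
    # illustrative values, not values from the patent.
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Keep only regions whose area Si satisfies 0.7*S < Si < 1.3*S.
        if 0.7 * avg_face_area < w * h < 1.3 * avg_face_area:
            kept.append((x, y, w, h))
    return kept
```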
The subsequent face detection steps are:
(1) face detection is performed in the filtered skin color region until a face is detected in some frame; the face region is recorded as R1;
(2) R1 is expanded 1.2-fold to obtain R2, and R2 is taken as the region of interest of the next frame, in which face detection is performed; if a face is detected, the region detected within R2 is assigned to R1 and step (2) is executed again, and if no face is detected, jump to step (3);
(3) Adaboost face detection is continued 2-3 more times in the R2 region of the video frame; if the driver's face still cannot be detected, jump back to step (1); otherwise the detection result is assigned to R1 again and step (2) is executed.
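The R1/R2 scheme of steps (1)-(3) could be sketched as follows, with OpenCV's Haar cascade standing in for the Adaboost detector; the function name and the detector parameters are assumptions:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def track_face(gray, prev_roi, miss_count):
    # gray: single-channel frame; prev_roi: last face rectangle R1 or None;
    # miss_count: consecutive misses inside the expanded region R2.
    img_h, img_w = gray.shape[:2]
    if prev_roi is not None:
        x, y, w, h = prev_roi
        dx, dy = int(0.1 * w), int(0.1 * h)   # 1.2-fold expansion -> R2
        x0, y0 = max(x - dx, 0), max(y - dy, 0)
        x1, y1 = min(x + w + dx, img_w), min(y + h + dy, img_h)
        faces = cascade.detectMultiScale(gray[y0:y1, x0:x1], 1.1, 3)
        if len(faces) > 0:
            fx, fy, fw, fh = faces[0]
            return (x0 + fx, y0 + fy, fw, fh), 0   # new R1, reset misses
        if miss_count < 3:                         # retry 2-3 times in R2
            return prev_roi, miss_count + 1
    # Fall back to searching the whole (skin-filtered) frame, step (1).
    faces = cascade.detectMultiScale(gray, 1.1, 3)
    return (tuple(faces[0]) if len(faces) > 0 else None), 0
```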
And S3, obtaining the positions of the 68 facial feature points using the improved facial feature point positioning algorithm, and calculating the mouth shape of the driver.
When the traditional AAM facial feature point extraction algorithm extracts facial feature points, it uses a fixed initialized face model; when the initialized model deviates too much from the target position, the fitted model struggles to converge to the correct position, converging slowly and inaccurately. Different face models are therefore trained for different head postures.
As shown in fig. 3, five initialization models of the face are trained, for head forward, head left, head right, head up and head down, and a different initialization model is selected for each of the five head postures.
The head pose estimation works as follows: the HOG feature divides the target image into small blocks (cells), and the gradient histograms of the pixels in each block are combined into a vector representing the image. HOG has no rotation or scale invariance and is therefore much faster to compute. It is also sensitive to shape changes and represents the face contour well, which makes it particularly suitable for the face orientation classification task. After the HOG features are extracted, the HOG feature vectors are used for subsequent SVM classification, finally yielding the face orientation classifier.
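One way such a face orientation classifier could be assembled with OpenCV's HOGDescriptor and SVM is sketched below; the 64 x 64 window and the block/cell layout are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

POSES = ["front", "left", "right", "up", "down"]   # the five head postures
# winSize, blockSize, blockStride, cellSize, nbins - assumed layout.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_vector(face_gray):
    return hog.compute(cv2.resize(face_gray, (64, 64))).ravel()

def train_pose_classifier(face_crops, labels):
    feats = np.array([hog_vector(f) for f in face_crops], dtype=np.float32)
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(feats, cv2.ml.ROW_SAMPLE, np.array(labels, dtype=np.int32))
    return svm

def estimate_pose(svm, face_gray):
    f = hog_vector(face_gray).astype(np.float32).reshape(1, -1)
    return POSES[int(svm.predict(f)[1][0][0])]
```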
Specifically, the feature point extraction process is as shown in fig. 4:
1) extracting HOG characteristics of an input picture, and performing primary estimation on the head posture by using a trained SVM classifier;
2) matching a corresponding initialization AAM model according to the estimated head posture;
3) performing AAM fitting calculation by utilizing reverse combination;
4) repeating step 3) multiple times, continuously performing iterative fitting until the model fits a position coinciding with the face;
5) and acquiring the characteristic points through the final fitting result.
The mouth movement is calculated from the 68 facial feature points. Specifically, as shown in fig. 5, the mouth has 20 feature points, numbered 48 to 67. When analyzing the driver's mouth movement, not all 20 mouth feature points are needed; a few feature points suffice to reflect the movement. The 4 feature points that best reflect the opening and closing of the mouth are taken as the analysis objects: the two feature points of the left and right mouth corners and the two feature points at the very middle of the upper and lower lips. The four points are combined into a diamond, and the ratio of the diamond's two diagonals is used as the criterion for mouth opening and closing.
The criterion is as follows: let the diagonals be h and w; when h/w is greater than 0.8 the driver's mouth is open, and when h/w is less than 0.8 it is not, thereby judging whether the driver performs a mouth-opening action.
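Under the common 68-point numbering (corners at indices 48 and 54, lip midpoints at 51 and 57; consistent with the 48-67 mouth range given above, though the exact indices are an assumption), the criterion can be sketched as:

```python
import numpy as np

def mouth_open(landmarks, thresh=0.8):
    # Diamond from the two mouth corners and the two lip midpoints.
    left, right = np.array(landmarks[48]), np.array(landmarks[54])
    top, bottom = np.array(landmarks[51]), np.array(landmarks[57])
    w = np.linalg.norm(right - left)    # horizontal diagonal
    h = np.linalg.norm(bottom - top)    # vertical diagonal
    return (h / w) > thresh             # True -> mouth judged open
```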
And S4, positioning the hand position of the driver by adopting a hand positioning algorithm based on the face mask, and taking the hand position as a characteristic for finally judging whether the driver makes a call.
The target area is the hand position. Since the face is also skin-colored, similar to the skin color of the hand, the face is generally included in the skin color detection result; the face region therefore needs to be excluded. A hand region positioning algorithm based on a face mask is proposed, which finally segments the driver's hand separately.
As shown in fig. 6, the present invention first assumes that the image after skin color extraction is α(x, y), where α(x, y) is the skin color region after morphological processing and area filtering and α(x0, y0) is the value at point (x0, y0). β(u, v) is the image in which a face has been detected; point (u0, v0) is taken as the coordinate of the upper right point of the face rectangle, and point (u1, v1) is the coordinate of the lower left point of the face rectangle. The face mask is γ(u, v), with the expression:
γ(u, v) = 0 if (u, v) lies inside the face rectangle bounded by (u0, v0) and (u1, v1); γ(u, v) = 1 otherwise.
At this time, the pixel values of the skin color mask in the face region are all 0 and the skin color of the remaining non-face regions is all set to 1; λ(x, y) is the output image processed by the skin color mask, so λ(x, y) can be expressed as:
λ(x,y)=γ(x,y)&α(x,y)
That is, an 'and' operation is performed between the skin-color-extracted picture and the skin color mask picture: since the mask's pixel value in the face region is 0, the corresponding region's pixel value is also 0 after the 'and' operation, while the mask's pixel value outside the face region is 1, so those pixel values remain unchanged. After the 'and' operation, λ(x, y) undergoes graying, binarization and similar image processing operations; the remaining region is the hand region, and the minimum enclosing rectangle of the remaining region is computed to obtain the required hand position.
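A minimal sketch of this masking step follows; the helper name and the choice of keeping the largest remaining blob are assumptions:

```python
import cv2
import numpy as np

def locate_hand(skin_mask, face_rect):
    # Build gamma(u, v): 0 inside the face rectangle, 1 elsewhere.
    mask = np.ones_like(skin_mask)
    x, y, w, h = face_rect
    mask[y:y + h, x:x + w] = 0
    # The 'and' operation: lambda = gamma & alpha.
    hand_only = cv2.bitwise_and(skin_mask, skin_mask, mask=mask)
    _, binary = cv2.threshold(hand_only, 0, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)  # (x, y, w, h) rectangle of the hand region
```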
Finally, the hand position is analyzed; the hand position state is divided into in-detection-area and not-in-detection-area, judged as follows:
as shown in fig. 7: let the abscissa of the vertex at the top right corner of the face frame be y1The vertex abscissa of the lower right corner is y3The abscissa of the detected upper right vertex of the driver's hand position frame is y2. When y is3<y2<y1When the mobile phone is in the call detection area, the hand is in the call detection area; when y is2<y3When the hand position is not in the call detection area. The influence of background factors on the final result can be effectively reduced by adding the position factors of the hands into the final judgment of calling.
S5, obtaining a detection area of the driving posture of the driver according to the face position, extracting PCA-HOG + LBP features of the detection area, and classifying the driving posture of the driver.
As shown in fig. 8(a) and 8(b), the driving posture detection region is determined as follows: using the driver's face position determined in S2, let point P1 be the coordinate of the upper left corner of the face rectangle, P2 the coordinate of the upper right corner, and P3 the coordinate of the lower right corner. The rectangle formed by P1 and P4 is the driver's left-hand call detection area, and the area formed by P2 and P5 is the driver's right-hand call detection area. P1, P2, P3, P4 and P5 are (x1, y1), (x2, y2), (x3, y3), (x4, y4) and (x5, y5) respectively, where P1, P2 and P3 are known quantities already acquired during face detection in S2; the points P4 and P5 therefore need to be calculated, by the following formula:
(Formula: calculation of the coordinates of P4 and P5 from P1, P2 and P3 using the expansion coefficients a1 and a2.)
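Since the exact expansion formula appears only in a figure of the original document, the sketch below shows one plausible realization; the coefficient values and the use of face width/height as the expansion base are assumptions:

```python
def call_detection_areas(p1, p2, p3, a1=1.0, a2=1.5):
    # p1, p2, p3: upper-left, upper-right, lower-right corners of the face box.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    w, h = x2 - x1, y3 - y2                 # face width and height
    p4 = (x3 + a1 * w, y3 + a2 * h)         # lower-right expansion point P4
    p5 = (x1 - a1 * w, y3 + a2 * h)         # lower-left expansion point P5
    left_area = (p1, p4)    # rectangle spanned by P1 and P4: left-hand calls
    right_area = (p2, p5)   # rectangle spanned by P2 and P5: right-hand calls
    return left_area, right_area
```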
the method comprises the following specific steps of classifying driving postures of a driver:
(1) extracting HOG features, with the parameters set as follows: block 32 x 32, cell 16 x 16, block stride 16 x 16, nbins 9;
(2) reducing the dimension of the HOG feature to 1000 dimensions with PCA, obtaining a lower-dimensional PCA-HOG feature;
(3) extracting LBP characteristics, wherein the size of the set region block is 32 x 32 in the LBP characteristic extraction process;
(4) serially fusing the dimension-reduced PCA-HOG feature and the LBP feature to obtain a new PCA-HOG + LBP feature;
(5) inputting the PCA-HOG + LBP characteristics to carry out SVM classifier training to obtain a classifier;
(6) inputting a test image and classifying with the obtained classifier; the classification results fall into three classes: the left-hand phone state, the right-hand phone state and the no-phone state.
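An illustrative sketch of steps (1)-(6) with scikit-image and scikit-learn follows; it assumes grayscale crops large enough for the stated block layout, enough training samples for a 1000-component PCA, and LBP histogram settings (nri_uniform, 59 bins) that are themselves assumptions:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def fused_features(images, pca=None):
    # (1)-(2): HOG, then PCA down to 1000 dimensions.
    hogs = np.array([hog(im, orientations=9, pixels_per_cell=(16, 16),
                         cells_per_block=(2, 2)) for im in images])
    if pca is None:
        pca = PCA(n_components=1000).fit(hogs)
    # (3): LBP histogram per image (59 bins for 8-neighbor nri_uniform codes).
    lbps = np.array([np.histogram(
        local_binary_pattern(im, P=8, R=1, method="nri_uniform"),
        bins=59, range=(0, 59))[0] for im in images])
    # (4): serial fusion of PCA-HOG and LBP.
    return np.hstack([pca.transform(hogs), lbps]), pca

def train_posture_classifier(images, labels):
    # (5): labels are 0/1/2 for the left-hand, right-hand, no-phone states.
    X, pca = fused_features(images)
    return SVC(kernel="linear").fit(X, labels), pca
```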
The HOG and LBP features complement each other to a certain extent; combining them addresses the low recognition efficiency that a single feature descriptor suffers under complex driving backgrounds such as partial occlusion of the detection area and frequent lighting changes.
And S6, using the mouth movement, hand position and driving posture detected in real time in steps S3, S4 and S5 as judgment factors, and judging the driver's call-making behavior by combining multiple factors.
Specifically, the logical discrimination takes the image detection results accumulated over a period of time as the final result, combining the three salient features of the call-making process: driving posture, hand position and mouth movement. The discrimination formula is as follows:
J = (1/n) Σ_{i=1}^{n} [x1·α(i) + x2·β(i) + x3·γ(i)]
wherein α(i), i = 1, 2, 3, …, n, represents the driving posture classification result of the ith frame image; the result takes only the values 0 and 1, where 0 indicates that the driver is not using a mobile phone and 1 indicates that the driver is using a mobile phone, and x1 represents the weight of the driving posture in the final result; β(i) represents the hand position detection result of the ith frame image, a result of 1 similarly indicating that the hand is in the call detection area, and x2 represents the weight of the hand position detection in the final result; γ(i) represents the mouth movement detection result of the ith frame image, 1 indicating a mouth-opening action and 0 indicating none, and x3 represents the weight of the mouth movement in the final result; n represents the number of frames to be cumulatively detected. J is the final result to be judged for the current frame image, and the threshold of the final judgment result is set as T; if
J > T
the driver is judged to have the action of making a call.
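A sketch of this accumulated decision, following the formula above, is given below; the per-frame results and the weights x1, x2, x3 (whose numerical values the text does not specify) are passed in by the caller:

```python
def call_decision(alpha, beta, gamma, x1, x2, x3, T):
    # alpha, beta, gamma: per-frame 0/1 results over the last n frames for
    # driving posture, hand position and mouth movement respectively.
    n = len(alpha)
    J = sum(x1 * a + x2 * b + x3 * g
            for a, b, g in zip(alpha, beta, gamma)) / n
    return J > T    # True -> the driver is judged to be making a call
```

For example, with hypothetical weights x1 = 0.5, x2 = 0.3, x3 = 0.2 and n = 30 frames, a call is flagged only when the weighted evidence accumulated over the window exceeds T.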
System development is carried out by combining OpenCV and Qt, and a human-machine interaction interface is drawn; as shown in fig. 9, the system can detect the call-making behavior in stored video and can detect the driver's call-making behavior in real time. The driver call monitoring is implemented according to steps S1 to S6, and when the system detects that the driver is using a mobile phone while driving, it gives the driver an early warning.
The specific implementation mode of the invention also comprises:
the invention discloses a driver call monitoring method, which comprises the following steps:
s1, acquiring a real-time video image of driver driving by using a camera;
s2, positioning the face position by using the improved face rapid detection algorithm;
s3, obtaining 68 feature point coordinates by using an improved human face feature point positioning algorithm, and calculating the mouth shape of the driver;
s4, positioning the hand position of the driver by adopting a hand positioning algorithm based on the face mask;
s5, acquiring a detection area of the driving posture of the driver according to the face position, extracting PCA-HOG + LBP features of the detection area, and classifying the driving posture of the driver;
and S6, using the mouth movement, hand position and driving posture detected in real time in steps S3, S4 and S5 as judgment factors, and judging the driver's call-making behavior by combining multiple factors.
The improved fast face detection algorithm in step S2 specifically includes:
s2.1: firstly, recording the face successfully detected with Adaboost three times, averaging the sizes of the three detected faces to obtain the average face size, and storing the obtained average face in the system as the basis for filtering the skin color detection result;
s2.2, establishing a skin color model, detecting the obtained real-time image by using the established skin color model, filtering out non-skin color areas and only reserving the skin color areas;
s2.3, filtering the skin color detection result with the average face size, filtering out regions that do not match the face size and keeping only skin color regions that match the average face size, reducing the detection area and accelerating detection;
and S2.4, performing face detection in the filtered area with the Adaboost algorithm; if a face is detected, the position of the area is stored and the vicinity of that area is detected first at the next detection, narrowing the detection area, and if no face is detected, a global search is performed in the filtered area.
In the step S3, the facial feature points of the driver are obtained through an improved face feature point extraction algorithm, and calculating the mouth shape of the driver specifically includes:
s3.1, extracting HOG features from the detected driver face picture and performing preliminary head pose estimation with a trained SVM classifier, the preliminary poses comprising five postures: head forward, head left, head right, head up and head down;
s3.2, matching the corresponding initialized AAM model according to the estimated head posture;
s3.3, performing AAM fitting calculation by utilizing reverse combination;
s3.4, repeating S3.3 multiple times, continuously performing iterative fitting to fit the model to the position coinciding with the face, obtaining the 68 feature points;
and S3.5, analyzing the mouth movement of the driver by using the acquired characteristics.
Step S3 analyzes the mouth shape using the acquired 68 facial feature points: the two feature points of the left and right mouth corners and the two feature points at the very middle of the upper and lower lips form a diamond, and the driver's mouth movement is analyzed through the ratio of the diamond's diagonals. The diagonals are h and w, where h is the vertical diagonal and w the horizontal diagonal; when h/w > 0.8 a mouth-opening action is indicated, and when h/w < 0.8 the driver has no mouth-opening action, thereby judging whether the driver performs a mouth-opening action.
Step S4 adopts a hand positioning algorithm based on a face mask, and positioning the hand position of the driver specifically includes:
the target area is the hand position, because the human face is also the skin color, similar to the skin color of the hand, the human face is generally included in the skin color detection process, the human face area is required to be excluded, and the 'and' operation is carried out on the picture after the skin color is extracted and the skin color mask picture, because the skin color mask picture pixel value of the human face area is 0, after the 'and' operation is carried out, the corresponding area pixel value is also 0, and the area pixel value except the skin color of the human face mask picture is 1, after the 'and' operation is carried out, the original pixel value is kept unchanged. And (3) carrying out a series of image processing operations such as graying, binarization and the like on the picture after the operation of 'and', wherein the last remaining area is the hand area, and calculating the minimum outsourcing rectangle of the last remaining area, namely the required hand position.
Step S5 is to classify the driving posture of the driver, and the specific process includes:
s5.1, expanding the located rectangular frame of the driver's face outward by a certain ratio to form a new rectangular frame that can contain the position of the driver's hand when making a call; the specific steps of forming the detection area of interest are as follows:
using the driver's face position determined in S2, let point P1 be the coordinate of the upper left corner of the face rectangle, P2 the coordinate of the upper right corner, and P3 the coordinate of the lower right corner. The rectangle formed by P1 and P4 is the driver's left-hand call detection area, and the area formed by P2 and P5 is the driver's right-hand call detection area. P1, P2, P3, P4 and P5 are (x1, y1), (x2, y2), (x3, y3), (x4, y4) and (x5, y5) respectively, where P1, P2 and P3 are known quantities already acquired during face detection in S2; the coordinates of points P4 and P5 therefore need to be calculated.
S5.2, extracting PCA-HOG features and LBP features of the region of interest and fusing the two features in series into a new PCA-HOG + LBP feature to replace a single feature;
and S5.3, classifying the fused PCA-HOG + LBP characteristics by using an SVM classifier, and judging the driving state of the driver.
In step S6, the driver's mouth movement, hand position and driving posture are used in a certain proportion as judgment factors and a threshold is set; when the final result is greater than the threshold, the driver is judged to be using the mobile phone and the system issues an alarm. The specific formula for the final result is:
J = (1/n) Σ_{i=1}^{n} [x1·α(i) + x2·β(i) + x3·γ(i)]
the system adopting the driver call monitoring method specifically comprises the following steps: the system is developed by combining OpenCV and Qt, a man-machine interaction interface is drawn, the system can detect the call-making behavior of the stored video and can detect the call-making behavior of the driver in real time. The implementation of the call monitoring of the driver is carried out according to the steps from 1 to 6, and when the fact that the driver uses the mobile phone in the driving process is monitored, the system can give out early warning to the driver.
The above description is only a preferred embodiment of the present invention and does not specifically limit the scope of the present invention. Although the foregoing preferred embodiments have been described in some detail, it should be understood by those skilled in the art that various changes in detail or structure may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A driver call monitoring method, comprising the steps of:
s1: acquiring a real-time video image of driving of a driver by using a camera;
s2: using the improved human face rapid detection algorithm to position the human face position;
s3: using an improved face characteristic point extraction algorithm to obtain face characteristic points of a driver, and calculating the mouth shape of the driver;
s4: positioning the hand position of the driver by adopting a hand positioning algorithm based on a face mask;
s5: acquiring a detection area of the driving posture of the driver according to the face position, extracting PCA-HOG + LBP features of the detection area, and classifying the driving posture of the driver;
s6: using the mouth shape, hand position and driving posture detected in real time in steps S3, S4 and S5 as judgment factors for whether the driver is making a call; when the judgment factor is larger than a given threshold value, the driver is judged to be making a call and the system sends an alarm.
2. A driver call monitoring method as claimed in claim 1, wherein: the improved fast face detection algorithm for locating the face position in S2 specifically includes:
s2.1: firstly, recording faces detected by Adaboost for three times, averaging the sizes of the faces detected for three times to obtain the size of an average face, and storing the average face obtained as a basis for filtering a skin color detection result;
s2.2: establishing a skin color model, detecting the obtained real-time image by using the established skin color model, filtering out non-skin color areas and only reserving skin color areas;
s2.3: filtering the result of skin color detection by using the average face size, filtering the area which does not meet the face size, only reserving the area of the skin color area which meets the difference with the average face size within a given range, reducing the detection area and accelerating the detection speed;
s2.4: performing face detection in the filtered region using the Adaboost algorithm and judging whether a face is detected; if no face is detected, jumping back to S1; if a face is detected, storing the position coordinates of the region and executing S3, with the last stored position detected preferentially in each subsequent detection.
3. A driver call monitoring method as claimed in claim 2, wherein: s3, obtaining the face feature points of the driver by using the improved face feature point extraction algorithm, and calculating the mouth shape of the driver specifically as follows:
s3.1: extracting HOG features from the detected driver face picture and performing preliminary head pose estimation with a trained SVM classifier, the preliminary poses comprising five postures: head forward, head left, head right, head up and head down;
s3.2: matching the corresponding initialized AAM model according to the head posture obtained in the S3.1;
s3.3: performing AAM fitting calculation by utilizing reverse combination;
s3.4: repeating S3.3, and continuously performing iterative fitting to fit the model to a position coincident with the human face to obtain feature points;
s3.5: selecting four feature points including two feature points of the left and right mouth corners and two feature points of the middlemost of the upper and lower lips to form a diamond, calculating the ratio of the diagonals of the diamond, and comparing the ratio with a given threshold value to determine whether the mouth of the driver is open or closed.
4. A driver call monitoring method as claimed in claim 3, wherein: s4, positioning the hand position of the driver by adopting a hand positioning algorithm based on the face mask specifically comprises the following steps:
performing an 'and' operation between the skin-color-extracted picture and a skin color mask picture: since the pixel value of the skin color mask picture is 0 in the face region, the pixel value of the corresponding region is also 0 after the 'and' operation, while the pixel value of the mask outside the face region is 1, so the original pixel values there remain unchanged; performing graying and binarization on the picture after the 'and' operation, with the remaining region being the hand region; and calculating the minimum enclosing rectangle of the remaining region to obtain the hand position.
5. The driver call monitoring method according to claim 4, wherein: S5, acquiring a detection area of the driving posture of the driver according to the face position, extracting PCA-HOG + LBP features of the detection area, and classifying the driving posture of the driver specifically comprises:
s5.1, expanding the located rectangular frame of the driver's face outward by a certain ratio to form a new rectangular frame that can contain the position of the driver's hand when making a call; forming the detection area specifically comprises the following steps:
using the position coordinates of the driver's face determined in S2, let point P1 be the coordinate of the upper left corner of the face rectangle, P2 the coordinate of the upper right corner, and P3 the coordinate of the lower right corner; P3 is expanded toward the lower right corner to obtain P4, and the rectangle formed by P1 and P4 is the detection area for the driver's left-hand call; the coordinate point of the lower left corner of the face rectangle is expanded toward the lower left corner to obtain P5, and the rectangle formed by P2 and P5 is the detection area for the driver's right-hand call; P1, P2, P3, P4 and P5 are (x1, y1), (x2, y2), (x3, y3), (x4, y4) and (x5, y5) respectively, where P1, P2 and P3 were already acquired during face detection in S2, and the points P4 and P5 are calculated by the following formula:
(Formula: calculation of the coordinates of P4 and P5 from P1, P2 and P3 using the expansion coefficients a1 and a2.)
wherein a1 is the set transverse expansion coefficient and a2 is the set longitudinal expansion coefficient;
s5.2: extracting PCA-HOG features and LBP features of the detection area, and fusing the two in series into a new PCA-HOG + LBP feature to replace a single feature;
s5.3: and classifying the fused PCA-HOG + LBP characteristics by using an SVM classifier, and judging the driving state of the driver, including a left-hand mobile phone state, a right-hand mobile phone state and a hand-free mobile phone state.
6. The driver call monitoring method according to claim 5, wherein: s6, the determination factor is specifically:
J = (1/n) Σ_{i=1}^{n} [x1·α(i) + x2·β(i) + x3·γ(i)]
wherein α(i), i = 1, 2, 3, …, n, represents the driving posture classification result of the ith frame image; the result takes two values, 0 and 1, where 0 indicates that the driver is not using a mobile phone and 1 indicates that the driver is using a mobile phone; x1 is the weight of the driving posture in the final result; β(i) represents the hand position detection result of the ith frame image, a result of 1 indicating that the hand is in the call detection area; x2 represents the weight of the hand position detection in the final result; γ(i) represents the mouth shape detection result of the ith frame image, taking the values 1 and 0; x3 represents the weight of the mouth movement in the final result; and n represents the number of frames to be cumulatively detected.
7. A detection system using the driver call monitoring method of any one of claims 1 to 6, characterized in that: the system has a man-machine interaction interface, detects the call-making behavior of the stored video or detects the call-making behavior of the driver in real time, and can give out early warning to the driver when the behavior that the driver uses a mobile phone in the driving process is monitored.
CN202010314035.8A 2020-04-20 2020-04-20 Driver call monitoring method and system Pending CN111553217A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010314035.8A CN111553217A (en) 2020-04-20 2020-04-20 Driver call monitoring method and system

Publications (1)

Publication Number Publication Date
CN111553217A true CN111553217A (en) 2020-08-18

Family

ID=72003875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010314035.8A Pending CN111553217A (en) 2020-04-20 2020-04-20 Driver call monitoring method and system

Country Status (1)

Country Link
CN (1) CN111553217A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180253094A1 (en) * 2013-03-27 2018-09-06 Pixart Imaging Inc. Safety monitoring apparatus and method thereof for human-driven vehicle
CN103514442A (en) * 2013-09-26 2014-01-15 华南理工大学 Video sequence face identification method based on AAM model
US20150186714A1 (en) * 2013-12-30 2015-07-02 Alcatel-Lucent Usa Inc. Driver behavior monitoring systems and methods for driver behavior monitoring
CN104573724A (en) * 2015-01-09 2015-04-29 安徽清新互联信息科技有限公司 Method for monitoring call making and receiving behaviors of driver
CN104573659A (en) * 2015-01-09 2015-04-29 安徽清新互联信息科技有限公司 Driver call-making and call-answering monitoring method based on svm
CN104966059A (en) * 2015-06-15 2015-10-07 安徽创世科技有限公司 Method for detecting phoning behavior of driver during driving based on intelligent monitoring system
CN106682601A (en) * 2016-12-16 2017-05-17 华南理工大学 Driver violation conversation detection method based on multidimensional information characteristic fusion
CN107220624A (en) * 2017-05-27 2017-09-29 东南大学 A kind of method for detecting human face based on Adaboost algorithm
CN108509902A (en) * 2018-03-30 2018-09-07 湖北文理学院 A kind of hand-held telephone relation behavioral value method during driver drives vehicle
CN108564034A (en) * 2018-04-13 2018-09-21 湖北文理学院 The detection method of operating handset behavior in a kind of driver drives vehicle
CN110334600A (en) * 2019-06-03 2019-10-15 武汉工程大学 A kind of multiple features fusion driver exception expression recognition method
CN110728185A (en) * 2019-09-10 2020-01-24 西安工业大学 Detection method for judging existence of handheld mobile phone conversation behavior of driver

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
DAN WANG et al.: "Detecting Driver Use of Mobile Phone Based on In-Car Camera", 2014 Tenth International Conference on Computational Intelligence and Security *
WEN JIALI: "Research on a Face Alignment Method Based on the Simultaneous Inverse Compositional Algorithm", China Master's Theses Full-text Database, Information Science and Technology *
PAN SHIJI: "Research on Intelligent Traffic Violation Monitoring Algorithms and Software System Implementation", China Master's Theses Full-text Database, Engineering Science and Technology II *
WANG DAN: "Machine-Vision-Based Detection of Driver Phone-Call Behavior", China Master's Theses Full-text Database, Engineering Science and Technology II *
GOU XINKE et al.: "Static Gesture Recognition Based on Feature Fusion", Computer and Digital Engineering *
XU SHUHUAN et al.: "AdaBoost Face Detection Method Based on Skin-Color Features", Computer Systems and Applications *
DENG WANGHUA: "Research and Implementation of a Fatigue Driving Detection Algorithm Based on Facial Features", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112966655A (en) * 2021-03-29 2021-06-15 高新兴科技集团股份有限公司 Office area mobile phone playing behavior identification method and device and computing equipment
CN115861984A (en) * 2023-02-27 2023-03-28 联友智连科技有限公司 Driver fatigue detection method and system
CN115861984B (en) * 2023-02-27 2023-06-02 联友智连科技有限公司 Driver fatigue detection method and system

Similar Documents

Publication Publication Date Title
US7953253B2 (en) Face detection on mobile devices
CN105205480B (en) Human-eye positioning method and system in a kind of complex scene
US7643659B2 (en) Facial feature detection on mobile devices
CN106682601B A kind of driver's violation call detection method based on multidimensional information feature fusion
CN102324025B (en) Human face detection and tracking method based on Gaussian skin color model and feature analysis
KR100668303B1 (en) Method for detecting face based on skin color and pattern matching
CN103400110B (en) Abnormal face detecting method before ATM cash dispenser
CN109840565A (en) A kind of blink detection method based on eye contour feature point aspect ratio
CN107330371A (en) Acquisition methods, device and the storage device of the countenance of 3D facial models
CN102214309B (en) Special human body recognition method based on head and shoulder model
CN110175530A A kind of face-based image scoring method and system
CN113158850B (en) Ship driver fatigue detection method and system based on deep learning
CN113449606B (en) Target object identification method and device, computer equipment and storage medium
CN111553214B (en) Method and system for detecting smoking behavior of driver
CN110728185B (en) Detection method for judging existence of handheld mobile phone conversation behavior of driver
CN110910339B (en) Logo defect detection method and device
CN108197534A A kind of head pose detection method, electronic equipment and storage medium
CN111158491A (en) Gesture recognition man-machine interaction method applied to vehicle-mounted HUD
CN111209818A (en) Video individual identification method, system, equipment and readable storage medium
CN111158457A (en) Vehicle-mounted HUD (head Up display) human-computer interaction system based on gesture recognition
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
CN111553217A (en) Driver call monitoring method and system
CN106874848A (en) A kind of pedestrian detection method and system
CN111881732B SVM (support vector machine)-based face quality evaluation method
CN114724190A (en) Mood recognition method based on pet posture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200818