CN111310657B - Driver face monitoring method, device, terminal and computer readable storage medium

Info

Publication number
CN111310657B
Authority
CN
China
Prior art keywords
face
accumulation threshold
frame accumulation
feature
feature fusion
Prior art date
Legal status
Active
Application number
CN202010092216.0A
Other languages
Chinese (zh)
Other versions
CN111310657A (en)
Inventor
邱静
徐林浩
何天翼
Current Assignee
Beijing China Tsp Technology Co ltd
Original Assignee
Beijing China Tsp Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing China Tsp Technology Co ltd filed Critical Beijing China Tsp Technology Co ltd
Priority to CN202010092216.0A
Publication of CN111310657A
Application granted
Publication of CN111310657B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses a driver face monitoring method, a device, a terminal and a computer-readable storage medium. The method comprises the following steps: after the monitoring system is started, the driver's face is detected in real time within a determined face search area by a face detection model, and a face recognition frame value and a feature fusion frame value are each incremented every time a face is detected; whenever the face recognition frame value exceeds the current face recognition frame accumulation threshold, a face recognition model is started to obtain the corresponding face feature vector, and the face recognition frame value is cleared and accumulation restarts; if the driver's identity is correct, then whenever the feature fusion frame value exceeds the current feature fusion frame accumulation threshold, a face feature point detection model is started to obtain the corresponding face feature points, and the feature fusion frame value is cleared and accumulation restarts. The technical scheme of the invention avoids unnecessary computation and greatly reduces the computational load on the vehicle-mounted SOC, thereby enabling the system to run efficiently.

Description

Driver face monitoring method, device, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of driver monitoring technologies, and in particular to a method, an apparatus, a terminal, and a computer-readable storage medium for monitoring a driver's face.
Background
With the rapid development of ADAS (advanced driver assistance system) technology, driver monitoring systems (DMS) are increasingly being fitted to automobiles, and vision processing based on vehicle-mounted cameras is currently a widely adopted solution. The key problem in vision-based processing is the extraction of face features; whether the adopted scheme can fuse face features efficiently and stably, and thus output effective driver state information, determines whether it can be deployed in practice.
At present, a common approach is to migrate face recognition applications from other scenarios onto the vehicle-mounted SOC, typically face feature detection that runs on mobile phones. The general steps of this approach are: train a convolutional neural network model for face detection on a general face data set (such as WIDER FACE); then apply the trained model to a face recognition data set (such as LFW), perform face alignment, and train a face recognition convolutional neural network; following the same idea, train on task-specific data sets (such as a fatigue detection data set, an attention detection data set and an abnormal action data set) to obtain convolutional neural network models for fatigue detection, attention detection, abnormal action detection and so on.
However, the above solution does not take the specific application scenario into account, and its accuracy and stability are often unsatisfactory under the highly variable illumination conditions of the cabin. It also ignores the facts that the computing power of a vehicle-mounted SOC is generally lower than that of a mobile phone SOC, that vehicle-mounted equipment has strict power consumption requirements, and that a driver monitoring system must run for long periods, so the pressure on later algorithm optimization is high.
Disclosure of Invention
In view of this, it is an object of the present invention to overcome at least one of the deficiencies in the prior art by providing a method, apparatus, terminal and computer readable storage medium for monitoring a driver's face.
An embodiment of the present invention provides a method for monitoring a face of a driver, including:
after the monitoring system is started, detecting the driver's face in real time within a determined face search area through a face detection model, and incrementing a face recognition frame value and a feature fusion frame value each time a face is detected;
whenever the face recognition frame value exceeds the current face recognition frame accumulation threshold, starting a face recognition model to obtain a corresponding face feature vector, and clearing the face recognition frame value and restarting accumulation; judging whether the driver's identity is correct according to the corresponding face feature vector, inputting the two adjacent face feature vectors thus obtained into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold, and updating the current face recognition frame accumulation threshold to the new face recognition frame accumulation threshold;
if the driver's identity is correct, whenever the feature fusion frame value exceeds the current feature fusion frame accumulation threshold, starting a face feature point detection model to obtain corresponding face feature points, and clearing the feature fusion frame value and restarting accumulation; inputting the detected face and the face feature points into a feature fusion model to obtain corresponding feature information, inputting each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold, and updating the current feature fusion frame accumulation threshold to the new feature fusion frame accumulation threshold, wherein the feature information includes a fatigue level value and an attention deviation value;
and executing a preset operation according to the obtained feature information.
Further, in the above method for monitoring a face of a driver, before the real-time detection of the face of the driver in the determined face search area by the face detection model, the method further includes:
when the system is in an initial starting-up stage, initializing a camera data capturing module and loading the face detection model;
judging whether a historical face searching area stored before the last shutdown exists or not, and if so, carrying out face searching by using the historical face searching area through the face detection model;
if a face is found, enlarging the currently obtained face detection frame and taking the enlarged face detection frame as the determined face search area;
and then loading the face recognition model, the face feature point detection model, the feature fusion model and the frame accumulation threshold decision model.
Further, in the above method for monitoring a face of a driver, the method further includes:
if the historical face search area does not exist, performing the face search over the full image area of the camera;
if no face is found in the historical face search area, expanding the historical face search area and searching again.
Further, in the above method for monitoring a face of a driver, the initializing the camera data capturing module includes:
loading a pre-created shared memory buffer composed of a plurality of buffers of the same size, wherein the shared memory buffer is used to store images captured by the camera, which is invoked through the platform's camera data capture interface;
and loading a pre-created context of the capturing device and a capturing thread, wherein the capturing thread is used for acquiring the image frames captured by the camera from the shared memory buffer area at a fixed frame rate.
Further, in the above method for monitoring a face of a driver, the frame accumulation threshold decision model includes a calculation formula of a face recognition frame accumulation threshold and a calculation formula of a feature fusion frame accumulation threshold, where the calculation formula of the face recognition frame accumulation threshold is used to calculate a new face recognition frame accumulation threshold according to the input two adjacent face feature vectors; the calculation formula of the feature fusion frame accumulation threshold is used for calculating a new feature fusion frame accumulation threshold according to the input feature information;
the calculation formula of the face recognition frame accumulation threshold value is as follows:
$$\cos\theta=\frac{\sum_{i=1}^{n}A_iB_i}{\sqrt{\sum_{i=1}^{n}A_i^{2}}\,\sqrt{\sum_{i=1}^{n}B_i^{2}}}\qquad e=\begin{cases}N_1,&\cos\theta>C_1\\N_2,&C_2\le\cos\theta\le C_1\\N_0,&\cos\theta<C_2\end{cases}$$
wherein A_i represents the i-th component of the historical face feature vector A from the previous moment; B_i represents the i-th component of the face feature vector B at the current moment; n represents the vector dimension; cos θ represents the cosine similarity; e represents the output face recognition frame accumulation threshold; C_1 and C_2 are two cosine similarity thresholds used for grading; N_1 and N_2 respectively represent the face recognition frame accumulation thresholds corresponding to the different levels; and N_0 represents the original face recognition frame accumulation threshold.
Further, in the above-mentioned driver face monitoring method, the characteristic information includes a fatigue degree value and an attention deviation value; the calculation formula of the feature fusion frame accumulation threshold is as follows:
$$c_k=M_1a_k+M_2b_k+M_3c_{k-1}\qquad d_k=\begin{cases}X_1,&c_k>K_1\\X_2,&K_2<c_k\le K_1\\X_3,&K_3<c_k\le K_2\\X_0,&c_k\le K_3\end{cases}$$
wherein a_i represents the i-th fatigue level value; b_i represents the i-th attention deviation value; M_1 represents the coefficient of the fatigue level value; M_2 represents the coefficient of the attention deviation value; M_3 represents an attenuation factor; c_k represents the historical face feature fusion value carried over from the previous k outputs; d_k represents the feature fusion frame accumulation threshold output at the k-th time; K_1, K_2 and K_3 are three historical face feature fusion value thresholds used for grading; X_1, X_2 and X_3 respectively represent the feature fusion frame accumulation thresholds corresponding to the different levels; and X_0 represents the original feature fusion frame accumulation threshold.
Further, in the above method for monitoring a face of a driver, the face detection model, the face recognition model, the face feature point detection model and the feature fusion model are all obtained by training in advance based on a deep convolutional neural network by using collected multi-scene face data of the driver.
Another embodiment of the present invention provides a driver face monitoring apparatus, including:
the face detection module, used to detect the driver's face in real time within a determined face search area through a face detection model after the monitoring system is started, and to increment a face recognition frame value and a feature fusion frame value each time a face is detected;
the face recognition module, used to start a face recognition model to obtain a corresponding face feature vector whenever the face recognition frame value exceeds the current face recognition frame accumulation threshold, and to clear the face recognition frame value and restart accumulation; to judge whether the driver's identity is correct according to the corresponding face feature vector; and to input the two adjacent face feature vectors thus obtained into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold and update the current face recognition frame accumulation threshold to the new face recognition frame accumulation threshold;
the feature acquisition module, used, if the driver's identity is correct, to start a face feature point detection model to obtain corresponding face feature points whenever the feature fusion frame value exceeds the current feature fusion frame accumulation threshold, and to clear the feature fusion frame value and restart accumulation; to input the detected face and the face feature points into a feature fusion model to obtain corresponding feature information, wherein the feature information includes a fatigue level value and an attention deviation value; and to input each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold and update the currently stored feature fusion frame accumulation threshold to the new feature fusion frame accumulation threshold;
And the operation execution module is used for executing preset operation according to the obtained characteristic information.
A further embodiment of the present invention proposes a terminal, comprising a processor and a memory, wherein the memory stores a computer program and the processor is configured to execute the computer program to implement the above driver face monitoring method.
Yet another embodiment of the present invention proposes a computer-readable storage medium storing a computer program which, when executed, implements the above driver face monitoring method.
The technical scheme of the embodiment of the invention has the following beneficial effects:
In the method provided by the embodiment of the invention, a trained face detection model detects the face in real time within the obtained face search area, and a frame accumulation threshold decision model, calibrated through operation on a real vehicle, yields a dynamic frame accumulation threshold that balances performance and resource occupation. The face recognition model and the face feature point detection model are activated only when the corresponding frame accumulation value exceeds the current threshold, which avoids unnecessary computation and greatly reduces the computational load on the vehicle-mounted SOC, so that the monitoring system runs efficiently without changing the overall architecture of existing driver monitoring systems.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are required for the embodiments will be briefly described, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope of the present invention. Like elements are numbered alike in the various figures.
Fig. 1 is a schematic flow chart of a method for monitoring a face of a driver according to an embodiment of the present invention;
FIG. 2 is a schematic diagram showing a second flow of a method for monitoring a face of a driver according to an embodiment of the present invention;
FIG. 3 is a schematic view showing a third flow of a method for monitoring a face of a driver according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a shared memory buffer area of a driver face monitoring method according to an embodiment of the present invention;
fig. 5 shows a schematic structural diagram of a driver face monitoring apparatus according to an embodiment of the present invention.
Description of main reference numerals:
10-a driver face monitoring device; 110-a face detection module; 120-face recognition module; 130-a feature acquisition module; 140-an operation execution module.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
The terms "comprises," "comprising," "including," or any other variation thereof, are intended to cover a specific feature, number, step, operation, element, component, or combination of the foregoing, which may be used in various embodiments of the present invention, and are not intended to first exclude the presence of or increase the likelihood of one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the invention belong. Terms such as those defined in commonly used dictionaries will be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Example 1
Referring to fig. 1, the present embodiment provides a driver face monitoring method, which can be applied during driving to monitor states such as whether the driver is fatigued or suffering a sudden illness, thereby improving driving safety. The method is described in detail below.
Step S11, after the monitoring system is started, the face of the driver is detected in real time in the determined face searching area through the face detection model.
Exemplarily, if the monitoring system detects a valid driver face in the determined face search area, step S12 is performed. If no face is detected, the face search area is adjusted and detection continues; step S12 is not performed until a valid face is detected.
Step S12, each time a valid face is detected, the face recognition frame value and the feature fusion frame value are each incremented by one.
Of these two counters, one counts face recognition frames and works with the dynamically adjusted face recognition frame accumulation threshold; the other counts feature fusion frames and works with the dynamically adjusted feature fusion frame accumulation threshold. In step S12, each time a valid face is detected, both values are incremented by one. A valid face is one detected by the pre-trained face detection model.
Step S13, judging whether the face recognition frame value exceeds the current face recognition frame accumulation threshold.
For step S13, if the accumulated face recognition frame value is greater than the current face recognition frame accumulation threshold, steps S14 and S15 are performed; if it is smaller than or equal to the current face recognition frame accumulation threshold, accumulation continues until the threshold is exceeded. It will be appreciated that steps S14 and S15 may be performed simultaneously or in the listed order, which is not limited herein.
Step S14, if the threshold is exceeded, the face recognition model is started to obtain the corresponding face feature vector.
The system inputs the detected valid face into a pre-trained face recognition model for face recognition and obtains the corresponding face feature vector. The face feature vector is then compared with the driver's pre-stored face features to determine whether the driver's identity is correct, i.e., step S16 is performed.
It can be appreciated that the face recognition model is not started while the face recognition frame value is accumulating from zero toward the face recognition frame accumulation threshold; only when the accumulated face recognition frame value exceeds the current face recognition frame accumulation threshold is the face recognition model started to perform face recognition once and output one face feature vector. In other words, the face recognition operation is performed once every certain number of frames, and that number depends on the size of the face recognition frame accumulation threshold.
Step S15, if the threshold is exceeded, the face recognition frame value is cleared and accumulation restarts.
For step S15, when the face recognition frame value accumulates beyond the current face recognition frame accumulation threshold, the face recognition frame value is cleared and counting of valid face detections restarts.
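To make the gating concrete, the following is a minimal C++ sketch of the frame-accumulation mechanism of steps S12 to S15 (the feature fusion frame value of steps S19 to S21 works the same way). It is illustrative only; all names and the driver loop are placeholders, not identifiers from the patent.

```cpp
#include <cstdint>

// One frame-accumulation gate: counts frames containing a valid face and
// fires when the count exceeds the current accumulation threshold.
struct FrameGate {
    std::uint32_t count = 0;   // accumulated frame value
    std::uint32_t threshold;   // current frame accumulation threshold (dynamic)

    explicit FrameGate(std::uint32_t initial) : threshold(initial) {}

    // Called once per frame in which a valid face was detected.
    // Returns true when the gated model should run on this frame.
    bool tick() {
        if (++count > threshold) {
            count = 0;         // clear the frame value, then re-accumulate (S15/S21)
            return true;
        }
        return false;
    }
};

// Per-frame driver (sketch): two independent gates, as in step S12.
// FrameGate recogGate(N0), fusionGate(X0);
// if (validFaceDetected) {
//     if (recogGate.tick())  { /* start face recognition model (S14) */ }
//     if (fusionGate.tick()) { /* start feature point detection + fusion (S20, S22) */ }
// }
```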
Step S16, judging whether the driver's identity is correct according to the obtained face feature vector.
For step S16, if the identity is correct, the next step is to acquire the driver's feature information, such as the fatigue degree and attention deviation, so as to judge whether the driver is fatigued or inattentive. If the identity is incorrect, optionally, an alarm or the like may be issued immediately, and the subsequent, relatively heavy feature fusion computation is not needed.
Step S17, inputting the two adjacent face feature vectors thus obtained into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold.
In this embodiment, the frame accumulation threshold decision model consists of two parts: a calculation formula for the face recognition frame accumulation threshold and a calculation formula for the feature fusion frame accumulation threshold. The former calculates a new face recognition frame accumulation threshold from the two adjacent input face feature vectors; the latter calculates a new feature fusion frame accumulation threshold from the input feature information.
Illustratively, after two adjacent face feature vectors are obtained, the next face recognition frame accumulation threshold may be determined from their cosine similarity. Generally, if the similarity of two adjacent face feature vectors is high, the next face recognition frame accumulation threshold may be somewhat larger, i.e., the face recognition model is started only once per larger number of frames. Of course, other constraints can be added in practice to dynamically adjust the face recognition frame accumulation threshold.
The face recognition frame accumulation threshold is calculated by the following formula:

$$\cos\theta=\frac{\sum_{i=1}^{n}A_iB_i}{\sqrt{\sum_{i=1}^{n}A_i^{2}}\,\sqrt{\sum_{i=1}^{n}B_i^{2}}}\qquad e=\begin{cases}N_1,&\cos\theta>C_1\\N_2,&C_2\le\cos\theta\le C_1\\N_0,&\cos\theta<C_2\end{cases}$$

wherein A_i represents the i-th component of the historical face feature vector A from the previous moment; B_i represents the i-th component of the face feature vector B at the current moment; n represents the vector dimension; cos θ represents the cosine similarity between these two adjacent face feature vectors; e represents the output face recognition frame accumulation threshold; C_1 and C_2 represent two cosine similarity thresholds, whose values divide three intervals corresponding to three levels; N_1 and N_2 respectively represent the face recognition frame accumulation thresholds corresponding to the different levels; and N_0 represents the original face recognition frame accumulation threshold (i.e., a preset default). For example, if the computed cosine similarity is greater than C_1, the threshold e takes the value N_1; if it is smaller than C_2, e takes the value N_0. The computed value, N_1 (or N_0), then replaces the current face recognition frame accumulation threshold.
N_0 to N_2 above are the face recognition frame accumulation thresholds corresponding to the three level intervals, and C_1 and C_2 are the two cosine similarity thresholds; these parameters can be obtained by testing and calibration on a real vehicle. Illustratively, video data are collected by the vehicle-mounted camera, covering as many scenarios as possible, including interference conditions. Each frame is then labeled, an objective function that is jointly optimal in accuracy and frame rate is constructed, and the parameter values are solved for by an optimization algorithm such as a genetic algorithm. The grading is not limited to three levels and may be chosen according to the practical situation; three levels are preferred in this embodiment, mainly based on practical experience, since too many levels make the amount of calibration data excessively large and the model difficult to converge, while too few levels cannot meet the requirements.
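A minimal C++ sketch of this decision rule is given below. The constants stand in for the calibrated values C_1, C_2, N_0, N_1 and N_2, which the patent obtains from real-vehicle calibration; the numbers here are assumptions chosen only to make the example runnable.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr double C1 = 0.95, C2 = 0.80;            // cosine similarity thresholds (assumed)
constexpr std::uint32_t N0 = 5, N1 = 30, N2 = 15; // per-level accumulation thresholds (assumed)

// Cosine similarity between the previous (A) and current (B) face feature vectors.
double cosineSimilarity(const std::vector<double>& a, const std::vector<double>& b) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb));
}

// e = N1 if cos(theta) > C1; N2 if C2 <= cos(theta) <= C1; N0 if cos(theta) < C2.
std::uint32_t nextRecognitionThreshold(const std::vector<double>& prev,
                                       const std::vector<double>& curr) {
    const double cosTheta = cosineSimilarity(prev, curr);
    if (cosTheta > C1)  return N1;  // face very stable: recognize less often
    if (cosTheta >= C2) return N2;  // moderately stable
    return N0;                      // changing fast: fall back to the original threshold
}
```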
Step S18, updating the current face recognition frame accumulation threshold to the new face recognition frame accumulation threshold.
In step S18, whenever a value e is output by the above formula, it replaces the current face recognition frame accumulation threshold and serves as the new face recognition frame accumulation threshold for the next start of the face recognition model. For example, if the formula outputs N_1, the face recognition model is next started once the newly accumulated face recognition frame count exceeds N_1.
Step S19, if the driver's identity is correct, judging whether the feature fusion frame value exceeds the current feature fusion frame accumulation threshold.
Similarly to the face recognition frame value, on the premise that the driver's identity is correct, it is judged whether the feature fusion frame value exceeds the current feature fusion frame accumulation threshold; if so, steps S20 and S21 are executed, and if not, accumulation continues.
Step S20, if the threshold is exceeded, a face feature point detection model is started to obtain the corresponding face feature points.
The system inputs the recognized driver face into a pre-trained face feature point detection model, which outputs the corresponding face feature point detection frames, such as region frames for the eyes, mouth and other features; step S22 is then performed. In general, when a person is fatigued, the facial features, particularly the eyes, show fairly obvious state changes, so detecting the face feature points helps to judge the driver's state and behavior.
Step S21, if the threshold is exceeded, the feature fusion frame value is cleared and accumulation restarts.
For step S21, when the feature fusion frame value accumulates beyond the current feature fusion frame accumulation threshold, the feature fusion frame value is cleared and counting of valid face detections restarts.
Step S22, inputting the detected face and the face feature points into a feature fusion model to obtain corresponding feature information.
For step S22, the obtained face detection frame and the region frames of feature points such as the eyes and mouth obtained in step S20 are input into a pre-trained feature fusion model, which outputs the corresponding feature information. Exemplarily, such feature information includes, but is not limited to, a fatigue level value, an attention deviation value, abnormal actions and the like.
It will be appreciated that the feature fusion model is not started while the feature fusion frame value is accumulating from zero toward the feature fusion frame accumulation threshold; only when the accumulated feature fusion frame value exceeds the current feature fusion frame accumulation threshold is the feature fusion model started to perform feature fusion once and obtain the corresponding feature information.
Step S23, inputting each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold.
For example, if the feature information includes a fatigue level value and an attention deviation value, the feature fusion frame accumulation threshold is calculated by the following formula:

$$c_k=M_1a_k+M_2b_k+M_3c_{k-1}\qquad d_k=\begin{cases}X_1,&c_k>K_1\\X_2,&K_2<c_k\le K_1\\X_3,&K_3<c_k\le K_2\\X_0,&c_k\le K_3\end{cases}$$

wherein a_i represents the i-th fatigue level value; b_i represents the i-th attention deviation value; M_1 represents the coefficient of the fatigue level value; M_2 represents the coefficient of the attention deviation value; M_3 represents an attenuation factor; c_k represents the historical face feature fusion value carried over from the previous k outputs; d_k represents the feature fusion frame accumulation threshold output at the k-th time; K_1, K_2 and K_3 respectively represent three historical face feature fusion value thresholds, whose values divide four intervals corresponding to four levels; X_1, X_2 and X_3 respectively represent the feature fusion frame accumulation thresholds corresponding to the different levels; and X_0 represents the original feature fusion frame accumulation threshold.
X_0 to X_3 above are the feature fusion frame accumulation thresholds corresponding to the four level intervals, and K_1 to K_3 are the three historical face feature fusion value thresholds; these parameters can likewise be obtained by testing and calibration on a real vehicle. The calibration method is similar to that of the face recognition frame accumulation threshold described above and is therefore not repeated here. In this embodiment four levels are preferred, but more or fewer levels may be used, depending on the practical situation.
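By analogy, the feature fusion branch can be sketched in C++ as below. The four-level piecewise selection follows the description above; the recurrence used for the fused value c_k and all constants are assumptions made for illustration (the patent gives the exact expression in its original formula), with smaller thresholds assigned to higher fatigue/deviation levels so the system responds faster when the driver's state worsens.

```cpp
#include <cstdint>

constexpr double M1 = 0.6, M2 = 0.4, M3 = 0.2;            // coefficients + attenuation (assumed)
constexpr double K1 = 0.8, K2 = 0.5, K3 = 0.2;            // grading thresholds (assumed)
constexpr std::uint32_t X0 = 15, X1 = 3, X2 = 6, X3 = 10; // per-level thresholds (assumed)

// Fused value: weighted fatigue and attention terms plus an attenuated
// history term. Assumed form: c_k = M1*a_k + M2*b_k + M3*c_{k-1}.
double fusedValue(double fatigue, double attention, double prevFused) {
    return M1 * fatigue + M2 * attention + M3 * prevFused;
}

// d_k: pick the accumulation threshold for the level interval that c_k falls in.
// Higher fused value (more fatigue / larger deviation) -> smaller threshold,
// so feature fusion runs more frequently (assumed ordering).
std::uint32_t nextFusionThreshold(double c) {
    if (c > K1) return X1;
    if (c > K2) return X2;
    if (c > K3) return X3;
    return X0;  // original feature fusion frame accumulation threshold
}
```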
Step S24, updating the current feature fusion frame accumulation threshold to the new feature fusion frame accumulation threshold.
In step S24, the value calculated and output by the above formula is used as the new feature fusion frame accumulation threshold for the next start of the feature fusion model. The principle is the same as for the face recognition frame value described above and is therefore not repeated here.
Considering that the driver's facial state changes slowly most of the time while the vehicle is running, running inference with the face recognition model and the feature fusion model on every captured frame of image data would occupy a large share of system resources and impose a heavy load. Conversely, a fixed frame accumulation threshold, while avoiding a large amount of computation, cannot track scenes in which the facial features change rapidly, so the monitoring system could not handle emergencies and would respond with a large delay. To solve these problems, this embodiment proposes the above frame accumulation threshold decision model based on historical face feature data to dynamically calculate the face recognition frame accumulation threshold and the feature fusion frame accumulation threshold, which balances the competing demands of heavy computation and quick response to facial feature changes, and enables efficient operation of the monitoring system.
Step S25, a preset operation is executed according to the obtained characteristic information.
Illustratively, for the obtained feature information, the preset operation may include, but is not limited to, saving and displaying the feature information. Optionally, if some obtained feature value, such as the fatigue level value, exceeds a preset threshold, the preset operation may also be an alarm to alert the relevant personnel.
In this embodiment, for the face detection model, the face recognition model, the face feature point detection model, and the feature fusion model, these models may be obtained by training in advance based on different neural networks, such as a convolutional neural network, a deep convolutional neural network, and the like.
For example, multi-scene driver face data sets can be acquired through the vehicle-mounted camera, comprising face images with different fatigue level values and attention deviation values (such as looking straight ahead, looking left and looking right) under different illumination conditions; the data sets are labeled with face frames, face feature points (such as the eyes and mouth) and state behaviors (such as fatigue degree, attention and abnormal actions).
Then, a face detection model is trained on the face frame labels, and a convolutional neural network is trained on the face feature point labels to obtain the face feature point detection model, whose output gives feature point region frames for the eyes, mouth and so on. In addition, the face detection frame and the region frames for the eyes, mouth and so on are used as the input of a convolutional neural network, with state behaviors such as fatigue degree, attention deviation and abnormal actions as the output, and the feature fusion model is obtained by training. Meanwhile, a convolutional neural network is trained on the face recognition data set to obtain the face recognition model used to recognize the driver's identity.
It can be understood that, by dynamically determining the face recognition frame accumulation threshold and the feature fusion frame accumulation threshold on the basis of the frame accumulation threshold decision model, and starting the corresponding model only when the corresponding condition is met, the required amount of computation is greatly reduced, which in turn lowers the monitoring system's performance requirements on the vehicle-mounted SOC. The driver face monitoring system of this embodiment can thus run efficiently on a resource-constrained vehicle-mounted SOC, which solves the prior-art problem of high resource occupation caused by complex long-running background computation; no additional vision processor needs to be added for separate computation, so the increase in integration complexity and cost brought by extra hardware is avoided, yielding better economic benefits.
Example 2
Referring to fig. 2, based on the method of embodiment 1, the driver face monitoring method of this embodiment further includes, before step S11: when the system is in the initial start-up stage, using historical face search data to quickly locate the face search area, so that the face search area can be determined with less computation and the occupation of system resources is reduced.
Exemplarily, as shown in fig. 2, the method mainly includes the following steps:
step S1, initializing a camera data capturing module and loading a face detection model when the system is in an initial starting-up stage.
When the driver face monitoring system is in the initial stage of starting up, the camera data capturing module associated with the camera is initialized, and a face detection model is loaded for face detection after the initialization is successful.
Step S2, judging whether a historical face search area saved before the last shutdown exists.
Since the driver's face usually stays within a relatively fixed area while driving, checking for historical face search data makes it possible to determine the face search area quickly. If such an area exists, face detection is preferentially performed there, i.e., step S3 is performed, which reduces the resource occupation required for searching over a wide area. If not, an extended search is performed over the entire image area, i.e., step S6 is performed.
Step S3, if the historical face search area exists, performing the face search there through the face detection model. For step S3, if a face is found, step S4 is executed; if no face is found, step S5 is performed.
Step S4, if a face is found, enlarging the currently obtained face detection frame and taking the enlarged face detection frame as the determined face search area.
Exemplarily, if a face is detected in the historical face search area, the face detection frame may be enlarged appropriately to obtain a larger frame, which serves as the new face search area; step S7 is then performed.
Step S5, if no face is found in the historical face search area, expanding the historical face search area and searching again.
Exemplarily, the range is expanded on the basis of the historical face search area to further search for the position of the face.
Step S6, if the historical face search area does not exist, performing the face search over the full image area of the camera.
Step S7, loading the face recognition model, the face feature point detection model, the feature fusion model and the frame accumulation threshold decision model.
For example, when the face search area is determined quickly in step S4, the process may jump directly to step S7; in the cases of steps S5 and S6, the face search area is determined eventually, after which the models may be loaded. It will be appreciated that these models are trained in the same way as the corresponding models in embodiment 1, so the description is not repeated here.
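The start-up search logic of steps S2 to S6 can be outlined as follows. This is a hedged C++ sketch under assumed types; Rect, the expansion factors and the detector callback are placeholders rather than the patent's implementation.

```cpp
#include <optional>

struct Rect { int x, y, w, h; };   // axis-aligned box, placeholder type

// Grow a box about its center by `factor`, clamped to the image bounds.
Rect expand(const Rect& r, double factor, int imgW, int imgH) {
    const int dw = static_cast<int>(r.w * (factor - 1.0) / 2.0);
    const int dh = static_cast<int>(r.h * (factor - 1.0) / 2.0);
    Rect out{r.x - dw, r.y - dh, r.w + 2 * dw, r.h + 2 * dh};
    if (out.x < 0) { out.w += out.x; out.x = 0; }
    if (out.y < 0) { out.h += out.y; out.y = 0; }
    if (out.x + out.w > imgW) out.w = imgW - out.x;
    if (out.y + out.h > imgH) out.h = imgH - out.y;
    return out;
}

// detect() runs the face detection model over a region and returns the face
// box if one is found. Expansion factors 1.5 and 2.0 are assumptions.
Rect determineSearchArea(const std::optional<Rect>& savedArea,  // S2: saved before last shutdown
                         int imgW, int imgH,
                         std::optional<Rect> (*detect)(const Rect&)) {
    const Rect fullImage{0, 0, imgW, imgH};
    const Rect firstTry = savedArea.value_or(fullImage);         // S3, or S6 when nothing saved
    if (auto face = detect(firstTry)) {
        return expand(*face, 1.5, imgW, imgH);                   // S4: enlarged detection frame
    }
    if (savedArea) {                                             // S5: widen the saved area
        const Rect wider = expand(*savedArea, 2.0, imgW, imgH);
        if (auto face = detect(wider)) return expand(*face, 1.5, imgW, imgH);
    }
    return fullImage;                                            // fall back to the full image
}
```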
In one embodiment, as shown in fig. 3, for the step S1, the initialization of the camera data capturing module mainly includes:
step S101, loading a pre-created shared memory buffer area composed of a plurality of buffer areas with the same size. The shared memory buffer area is used for storing images captured by the cameras called by the camera data capturing interfaces of the platform.
Step S102, loading the context of the capture device and the capture thread which are created in advance. The capturing thread is used for acquiring image frames captured by the camera from the shared memory buffer area at a fixed frame rate.
In this embodiment, the camera data capturing module uses the camera capture interface of the system platform, which avoids unnecessary copying of camera data and saves both the resources and the time that copying would consume. Preferably, a shared memory buffer can be created in advance at the bottom layer of the system, and a capture device context and a capture thread are then created so that other modules can access the camera image data stored in the shared memory buffer.
For example, as shown in fig. 4, the shared memory buffer may be composed of several buffers of the same size, each of which can store one complete image. The capture thread may be set to capture at a fixed frame rate, chosen according to the camera's image capture frame rate; typically, the fixed frame rate is lower than the image capture frame rate.
For steps S101 and S102, loading the pre-created shared memory buffer, capture device context and capture thread during initialization enables subsequent cross-module sharing of the camera data. This saves the time spent copying data and improves data acquisition efficiency; in particular, on a resource-constrained system, sharing data across modules prevents large amounts of memory from being consumed by data copies.
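As a rough illustration of steps S101 and S102, the sketch below pre-allocates a pool of equal-sized frame buffers and runs a capture thread at a fixed rate. The buffer count, frame size, rate and the stubbed platform capture call are all assumptions; a real system would bind the pool to the platform's zero-copy camera capture interface (e.g. on QNX) rather than copy into it.

```cpp
#include <atomic>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

constexpr std::size_t kBufferCount = 4;               // number of equal-sized buffers (assumed)
constexpr std::size_t kFrameBytes  = 1280 * 720 * 2;  // one complete image (assumed format)

// Pre-created shared buffer pool: each slot holds one complete image (step S101).
struct BufferPool {
    std::vector<std::vector<std::uint8_t>> slots =
        std::vector<std::vector<std::uint8_t>>(kBufferCount,
                                               std::vector<std::uint8_t>(kFrameBytes));
    std::atomic<std::size_t> latest{0};               // index of the newest frame
};

// Capture thread body (step S102): fetch frames at a fixed rate, lower than
// the camera's own capture rate, cycling through the buffer slots.
void captureLoop(BufferPool& pool, std::atomic<bool>& running) {
    const auto period = std::chrono::milliseconds(100); // fixed frame rate, 10 fps (assumed)
    std::size_t next = 0;
    while (running.load()) {
        // platformCaptureInto(pool.slots[next].data(), kFrameBytes);  // hypothetical platform call
        pool.latest.store(next);                      // publish the newest frame index
        next = (next + 1) % kBufferCount;
        std::this_thread::sleep_for(period);
    }
}
```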
In this embodiment, the driver face monitoring system may employ a QNX system or the like, taking advantage of the QNX platform's support for zero-copy sharing of camera data and for quick start. Of course, other systems may be employed as long as they satisfy two conditions: first, zero-copy sharing of camera data between modules; second, quick start, so that the DMS can begin working within 2 seconds. The initial start-up stage of the system can be understood as the stage right after this quick start, and it is this quick-start characteristic that makes rapid localization of the face search area possible.
It can be understood that, in addition to the key measure of embodiment 1, namely dynamically determining the face recognition frame accumulation threshold and the feature fusion frame accumulation threshold based on the frame accumulation threshold decision model to achieve efficient operation of the monitoring system, this embodiment starts from two further aspects:
firstly, by means of the efficient device capture interface of a system platform such as QNX, image data are stored in a pre-created shared memory buffer, so that the data obtained from the camera can be shared across modules without copying, and the synchronization of the image data is guaranteed;
secondly, considering that the driver's face appears in the same position area most of the time, the quick-start characteristic of the QNX system or the like is exploited: in the initial start-up stage, the face search area is located quickly with the help of historical face data, so that the face detection frame can be obtained with a smaller amount of computation most of the time.
Based on these two points, unnecessary computation can be further reduced, so that efficient operation of the driver face monitoring system can be achieved without increasing hardware cost.
Example 3
Referring to fig. 5, based on the above-mentioned method for monitoring the face of the driver in embodiment 1, the present embodiment proposes a device 10 for monitoring the face of the driver, which includes:
the face detection module 110, used to detect the driver's face in real time within a determined face search area through a face detection model after the monitoring system is started, and to increment a face recognition frame value and a feature fusion frame value each time a face is detected;
the face recognition module 120, used to start a face recognition model to obtain a corresponding face feature vector whenever the face recognition frame value exceeds the current face recognition frame accumulation threshold, and to clear the face recognition frame value and restart accumulation; to judge whether the driver's identity is correct according to the corresponding face feature vector; and to input the two adjacent face feature vectors thus obtained into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold and update the current face recognition frame accumulation threshold to the new face recognition frame accumulation threshold;
the feature acquisition module 130, used, if the driver's identity is correct, to start a face feature point detection model to obtain corresponding face feature points whenever the feature fusion frame value exceeds the current feature fusion frame accumulation threshold, and to clear the feature fusion frame value and restart accumulation; to input the detected face and the face feature points into a feature fusion model to obtain corresponding feature information, wherein the feature information includes a fatigue level value and an attention deviation value; and to input each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold and update the currently stored feature fusion frame accumulation threshold to the new feature fusion frame accumulation threshold;
and the operation execution module 140, used to execute a preset operation according to the obtained feature information.
It is to be understood that the above-described driver face monitoring apparatus 10 corresponds to the driver face monitoring method of embodiment 1. Any of the alternatives in embodiment 1 are also applicable to this embodiment and will not be described in detail here.
The invention also provides a terminal, for example a vehicle-mounted electronic device, comprising a memory and a processor; the memory stores a computer program, and by running the computer program the processor causes the terminal device to perform the functions of each module in the above driver face monitoring method or driver face monitoring apparatus.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created according to the use of the terminal, etc. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The present invention also provides a computer readable storage medium storing the computer program for use in the above terminal.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules or units in various embodiments of the invention may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention.

Claims (8)

1. A method for monitoring a face of a driver, comprising:
after the monitoring system is started, detecting the driver's face in real time within a determined face search area through a face detection model, and incrementing a face recognition frame value and a feature fusion frame value each time a face is detected;
whenever the face recognition frame value exceeds the current face recognition frame accumulation threshold, starting a face recognition model to obtain a corresponding face feature vector, and clearing the face recognition frame value and restarting accumulation; judging whether the driver's identity is correct according to the corresponding face feature vector, inputting the two adjacent face feature vectors thus obtained into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold, and updating the current face recognition frame accumulation threshold to the new face recognition frame accumulation threshold;
if the driver's identity is correct, whenever the feature fusion frame value exceeds the current feature fusion frame accumulation threshold, starting a face feature point detection model to obtain corresponding face feature points, and clearing the feature fusion frame value and restarting accumulation; inputting the detected face and the face feature points into a feature fusion model to obtain corresponding feature information, inputting each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold, and updating the current feature fusion frame accumulation threshold to the new feature fusion frame accumulation threshold, wherein the feature information includes a fatigue level value and an attention deviation value;
executing a preset operation according to the obtained feature information;
the frame accumulation threshold decision model comprises a calculation formula of a face recognition frame accumulation threshold and a calculation formula of a feature fusion frame accumulation threshold, wherein the calculation formula of the face recognition frame accumulation threshold is used for calculating a new face recognition frame accumulation threshold according to the input two adjacent face feature vectors; the calculation formula of the feature fusion frame accumulation threshold is used for calculating a new feature fusion frame accumulation threshold according to the input feature information;
the calculation formula of the face recognition frame accumulation threshold is:

$$\cos\theta=\frac{\sum_{i=1}^{n}A_iB_i}{\sqrt{\sum_{i=1}^{n}A_i^{2}}\;\sqrt{\sum_{i=1}^{n}B_i^{2}}},\qquad E=\begin{cases}N_1, & \cos\theta\ge C_1\\ N_2, & C_2\le\cos\theta<C_1\\ N_0, & \cos\theta<C_2\end{cases}$$

wherein A_i is the i-th component of the historical face feature vector A at the previous moment; B_i is the i-th component of the face feature vector B at the current moment; n is the vector dimension; cos θ is the cosine similarity; E is the output face recognition frame accumulation threshold; C_1 and C_2 are the two cosine similarity thresholds used for grading; N_1 and N_2 are the face recognition frame accumulation thresholds corresponding to the different grades; and N_0 is the original face recognition frame accumulation threshold;

the calculation formula of the feature fusion frame accumulation threshold is:

$$c_k=\sum_{i=1}^{k}\bigl(M_1a_i+M_2b_i\bigr)M_3^{\,k-i},\qquad d_k=\begin{cases}X_1, & c_k\ge K_1\\ X_2, & K_2\le c_k<K_1\\ X_3, & K_3\le c_k<K_2\\ X_0, & c_k<K_3\end{cases}$$

wherein a_i is the i-th fatigue level value; b_i is the i-th attention deviation value; M_1 is the fatigue level value coefficient; M_2 is the attention deviation value coefficient; M_3 is an attenuation factor; c_k is the historical face feature fusion value over the previous k times; d_k is the feature fusion frame accumulation threshold of the previous k outputs; K_1, K_2, and K_3 are the three historical face feature fusion value thresholds used for grading; X_1, X_2, and X_3 are the feature fusion frame accumulation thresholds corresponding to the different grades; and X_0 is the original feature fusion frame accumulation threshold.
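To make the two formulas above concrete, the following Python sketch implements one plausible reading of the frame accumulation threshold decision model. Every numeric constant (C1, C2, N0 to N2, K1 to K3, X0 to X3, M1 to M3) is an illustrative placeholder, since the patent does not publish values, and the exponential-decay form of the fused value c_k is an assumption consistent with M3 being an attenuation factor.

```python
import math

# Placeholder constants; the patent defines the symbols but not their values.
C1, C2 = 0.90, 0.75              # cosine-similarity grade boundaries (C1 > C2)
N0, N1, N2 = 30, 120, 60         # face recognition frame accumulation thresholds
K1, K2, K3 = 0.8, 0.5, 0.3       # fused-value grade boundaries (K1 > K2 > K3)
X0, X1, X2, X3 = 50, 10, 20, 35  # feature fusion frame accumulation thresholds
M1, M2, M3 = 0.6, 0.4, 0.9       # fatigue weight, attention weight, decay factor

def face_recognition_threshold(A, B):
    """New face recognition frame accumulation threshold E from the previous
    face feature vector A and the current face feature vector B."""
    dot = sum(a * b for a, b in zip(A, B))
    cos_theta = dot / (math.sqrt(sum(a * a for a in A)) *
                       math.sqrt(sum(b * b for b in B)))
    if cos_theta >= C1:   # identity stable: re-verify less often
        return N1
    if cos_theta >= C2:   # moderately stable
        return N2
    return N0             # similarity dropped: fall back to the original rate

def feature_fusion_threshold(history):
    """New feature fusion frame accumulation threshold d_k from the history
    of (fatigue_level, attention_deviation) pairs, oldest first; older
    samples are attenuated by M3."""
    k = len(history)
    c_k = sum((M1 * a + M2 * b) * M3 ** (k - 1 - i)
              for i, (a, b) in enumerate(history))
    if c_k >= K1:         # highest risk grade: fuse features most frequently
        return X1
    if c_k >= K2:
        return X2
    if c_k >= K3:
        return X3
    return X0             # low risk: original cadence
```

Under these placeholders, a driver whose embedding stays stable (cos θ ≥ C1) is re-verified only every N1 detections, while a rising fused fatigue score pushes d_k down toward X1, so the feature fusion model runs more often exactly when the estimated risk is high.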
2. The driver face monitoring method according to claim 1, wherein, before the step of detecting the face of the driver in real time in the determined face search area through the face detection model, the method further comprises:

in the initial start-up stage of the system, initializing a camera data capture module and loading the face detection model;

judging whether a historical face search area saved before the last shutdown exists, and if so, performing a face search within the historical face search area through the face detection model;

if a face is found, enlarging the currently obtained face detection box and taking the enlarged face detection box as the determined face search area;

and then loading the face recognition model, the face feature point detection model, the feature fusion model, and the frame accumulation threshold decision model.
3. The driver face monitoring method according to claim 2, further comprising:

if no historical face search area exists, performing the face search over the full image area of the camera;

if no face is found in the historical face search area, expanding the historical face search area and searching again.
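Claims 2 and 3 together describe a coarse-to-fine startup search: try the historical area saved before shutdown, widen it on a miss, and fall back to the full camera image. A minimal sketch of that control flow follows; the Rect type, the detector.detect(frame, roi=...) interface, and the 1.5x scale factor are assumptions for illustration, not part of the claims.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def expand(r, scale, bound):
    """Grow rect r about its center by `scale`, clipped to `bound`
    (assumed to start at the origin, i.e. the full image)."""
    cx, cy = r.x + r.w / 2, r.y + r.h / 2
    w, h = r.w * scale, r.h * scale
    x, y = max(0, int(cx - w / 2)), max(0, int(cy - h / 2))
    return Rect(x, y, min(int(w), bound.w - x), min(int(h), bound.h - y))

def determine_search_area(detector, frame, history_area, scale=1.5):
    """Startup search strategy of claims 2-3: historical area first, a
    widened retry on a miss, then a full-image fallback; the enlarged
    detection box becomes the determined face search area."""
    full = Rect(0, 0, frame.shape[1], frame.shape[0])  # numpy-style image
    box = None
    if history_area is not None:
        box = detector.detect(frame, roi=history_area)
        if box is None:  # claim 3: widen the historical area and retry
            box = detector.detect(frame, roi=expand(history_area, scale, full))
    if box is None:      # claim 3: no history, or still no face: full image
        box = detector.detect(frame, roi=full)
    return expand(box, scale, full) if box is not None else None
```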
4. The driver face monitoring method according to claim 2, wherein the initializing of the camera data capture module comprises:

loading a pre-created shared memory buffer composed of a plurality of buffer areas of the same size, wherein the shared memory buffer is used for storing images captured by a camera invoked through the platform's camera data capture interface;

and loading a pre-created capture device context and a capture thread, wherein the capture thread is used for acquiring the image frames captured by the camera from the shared memory buffer at a fixed frame rate.
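The buffering scheme of claim 4 can be approximated in a few lines. The sketch below uses an in-process deque of fixed length as a stand-in for the pre-created shared memory buffer, and a daemon thread that polls an assumed camera.read() interface at a fixed frame rate; a real implementation would use the platform's capture API and true shared memory.

```python
import threading
import time
from collections import deque

class CaptureBuffer:
    """Fixed number of same-size slots filled by a capture thread at a
    fixed frame rate, per claim 4; `camera.read()` is an assumed interface."""

    def __init__(self, camera, slots=4, fps=25):
        self.camera = camera
        self.frames = deque(maxlen=slots)  # ring of pre-created slots
        self.period = 1.0 / fps
        self.lock = threading.Lock()
        self.thread = threading.Thread(target=self._capture, daemon=True)

    def start(self):
        self.thread.start()

    def _capture(self):
        while True:
            frame = self.camera.read()     # platform camera data capture interface
            with self.lock:
                self.frames.append(frame)  # oldest slot is overwritten
            time.sleep(self.period)        # hold the fixed frame rate

    def latest(self):
        """Most recent frame, as the detection loop would consume it."""
        with self.lock:
            return self.frames[-1] if self.frames else None
```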
5. The driver face monitoring method according to claim 1, wherein the face detection model, the face recognition model, the face feature point detection model, and the feature fusion model are each obtained by pre-training deep convolutional neural networks on collected multi-scene driver face data.
6. A driver face monitoring apparatus, comprising:
a face detection module, configured to detect the face of a driver in real time in a determined face search area through a face detection model after a monitoring system is started, and to separately accumulate a face recognition frame value and a feature fusion frame value each time a face is detected;

a face recognition module, configured to start a face recognition model to obtain a corresponding face feature vector each time the face recognition frame value exceeds a current face recognition frame accumulation threshold, and to reset the face recognition frame value before resuming the accumulated count; to judge whether the identity of the driver is correct according to the corresponding face feature vector; and to input the two adjacent face feature vectors thus obtained into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold, and update the current face recognition frame accumulation threshold to the new face recognition frame accumulation threshold;

a feature acquisition module, configured to, if the identity of the driver is correct, start a face feature point detection model to obtain corresponding face feature points each time the feature fusion frame value exceeds a current feature fusion frame accumulation threshold, and to reset the feature fusion frame value before resuming the accumulated count; to input the detected face and the face feature points into a feature fusion model to obtain corresponding feature information; and to input each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold, and update the currently stored feature fusion frame accumulation threshold to the new feature fusion frame accumulation threshold; wherein the feature information includes a fatigue level value and an attention deviation value;

an operation execution module, configured to execute a preset operation according to the obtained feature information;
the frame accumulation threshold decision model comprises a calculation formula of a face recognition frame accumulation threshold and a calculation formula of a feature fusion frame accumulation threshold, wherein the calculation formula of the face recognition frame accumulation threshold is used for calculating a new face recognition frame accumulation threshold according to the input two adjacent face feature vectors; the calculation formula of the feature fusion frame accumulation threshold is used for calculating a new feature fusion frame accumulation threshold according to the input feature information;
the calculation formula of the face recognition frame accumulation threshold is:

$$\cos\theta=\frac{\sum_{i=1}^{n}A_iB_i}{\sqrt{\sum_{i=1}^{n}A_i^{2}}\;\sqrt{\sum_{i=1}^{n}B_i^{2}}},\qquad E=\begin{cases}N_1, & \cos\theta\ge C_1\\ N_2, & C_2\le\cos\theta<C_1\\ N_0, & \cos\theta<C_2\end{cases}$$

wherein A_i is the i-th component of the historical face feature vector A at the previous moment; B_i is the i-th component of the face feature vector B at the current moment; n is the vector dimension; cos θ is the cosine similarity; E is the output face recognition frame accumulation threshold; C_1 and C_2 are the two cosine similarity thresholds used for grading; N_1 and N_2 are the face recognition frame accumulation thresholds corresponding to the different grades; and N_0 is the original face recognition frame accumulation threshold;

the calculation formula of the feature fusion frame accumulation threshold is:

$$c_k=\sum_{i=1}^{k}\bigl(M_1a_i+M_2b_i\bigr)M_3^{\,k-i},\qquad d_k=\begin{cases}X_1, & c_k\ge K_1\\ X_2, & K_2\le c_k<K_1\\ X_3, & K_3\le c_k<K_2\\ X_0, & c_k<K_3\end{cases}$$

wherein a_i is the i-th fatigue level value; b_i is the i-th attention deviation value; M_1 is the fatigue level value coefficient; M_2 is the attention deviation value coefficient; M_3 is an attenuation factor; c_k is the historical face feature fusion value over the previous k times; d_k is the feature fusion frame accumulation threshold of the previous k outputs; K_1, K_2, and K_3 are the three historical face feature fusion value thresholds used for grading; X_1, X_2, and X_3 are the feature fusion frame accumulation thresholds corresponding to the different grades; and X_0 is the original feature fusion frame accumulation threshold.
7. A terminal, comprising a processor and a memory, wherein the memory stores a computer program and the processor is configured to execute the computer program to implement the driver face monitoring method according to any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that it stores a computer program which, when executed, implements the driver face monitoring method according to any one of claims 1 to 5.
CN202010092216.0A 2020-02-14 2020-02-14 Driver face monitoring method, device, terminal and computer readable storage medium Active CN111310657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010092216.0A CN111310657B (en) 2020-02-14 2020-02-14 Driver face monitoring method, device, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111310657A CN111310657A (en) 2020-06-19
CN111310657B true CN111310657B (en) 2023-07-07

Family

ID=71149044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010092216.0A Active CN111310657B (en) 2020-02-14 2020-02-14 Driver face monitoring method, device, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111310657B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6535639B1 (en) * 1999-03-12 2003-03-18 Fuji Xerox Co., Ltd. Automatic video summarization using a measure of shot importance and a frame-packing method
US7072521B1 (en) * 2000-06-19 2006-07-04 Cadwell Industries, Inc. System and method for the compression and quantitative measurement of movement from synchronous video
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
CN101551934A (en) * 2009-05-15 2009-10-07 东北大学 Device and method for monitoring fatigue driving of driver
CN102254148A (en) * 2011-04-18 2011-11-23 周曦 Method for identifying human faces in real time under multi-person dynamic environment
CN103770733A (en) * 2014-01-15 2014-05-07 中国人民解放军国防科学技术大学 Method and device for detecting safety driving states of driver
EP3109114A1 (en) * 2014-01-15 2016-12-28 National University of Defense Technology Method and device for detecting safe driving state of driver
CN206271049U (en) * 2016-12-07 2017-06-20 西安蒜泥电子科技有限责任公司 A kind of human face scanning instrument synchronization system device
CN106686314A (en) * 2017-01-18 2017-05-17 广东欧珀移动通信有限公司 Control method, control device and electronic device
CN106874864A (en) * 2017-02-09 2017-06-20 广州中国科学院软件应用技术研究所 A kind of outdoor pedestrian's real-time detection method
CN107832694A (en) * 2017-10-31 2018-03-23 北京赛思信安技术股份有限公司 A kind of key frame of video extraction algorithm
CN110728234A (en) * 2019-10-12 2020-01-24 爱驰汽车有限公司 Driver face recognition method, system, device and medium

Also Published As

Publication number Publication date
CN111310657A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
KR102476022B1 (en) Face detection method and apparatus thereof
US8995714B2 (en) Information creation device for estimating object position and information creation method and program for estimating object position
WO2016144431A1 (en) Systems and methods for object tracking
CN112016349A (en) Parking space detection method and device and electronic equipment
CN113012383B (en) Fire detection alarm method, related system, related equipment and storage medium
CN111757008B (en) Focusing method, device and computer readable storage medium
US9053355B2 (en) System and method for face tracking
CN111401196A (en) Method, computer device and computer readable storage medium for self-adaptive face clustering in limited space
CN109308704B (en) Background eliminating method, device, computer equipment and storage medium
CN104243796A (en) Photographing apparatus, photographing method, template creation apparatus, and template creation method
CN113112525A (en) Target tracking method, network model, and training method, device, and medium thereof
CN111699509B (en) Object detection device, object detection method, and recording medium
CN111310657B (en) Driver face monitoring method, device, terminal and computer readable storage medium
CN110799984A (en) Tracking control method, device and computer readable storage medium
CN117152453A (en) Road disease detection method, device, electronic equipment and storage medium
CN116363628A (en) Mark detection method and device, nonvolatile storage medium and computer equipment
CN113642442B (en) Face detection method and device, computer readable storage medium and terminal
JP6399122B2 (en) Face detection apparatus and control method thereof
CN116189119A (en) Lane departure early warning method and device
CN114140822A (en) Pedestrian re-identification method and device
JP2010009234A (en) Eye image processing device
CN109993078A (en) Image-recognizing method, device and the equipment of vehicle environment
CN115761616B (en) Control method and system based on storage space self-adaption
US10713517B2 (en) Region of interest recognition
JP7154071B2 (en) Driving state monitoring support system, driving state monitoring support method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant