CN111310657A - Driver face monitoring method, device, terminal and computer readable storage medium - Google Patents

Driver face monitoring method, device, terminal and computer readable storage medium

Info

Publication number: CN111310657A
Application number: CN202010092216.0A
Authority: CN (China)
Prior art keywords: face, accumulation threshold, value, driver, frame
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111310657B (en)
Inventors: 邱静, 徐林浩, 何天翼
Assignee (current and original): Beijing China Tsp Technology Co., Ltd.
Application filed by Beijing China Tsp Technology Co., Ltd.; priority to CN202010092216.0A; application granted and published as CN111310657B.

Classifications

    • G06V 20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness (under G06V 20/59: context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions)
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06V 40/168: Feature extraction; face representation (under G06V 40/16: human faces)
    • G06V 40/172: Classification, e.g. identification (under G06V 40/16: human faces)
    • G06N 3/045: Combinations of networks (under G06N 3/02: neural networks)
    • G06N 3/08: Learning methods (under G06N 3/02: neural networks)
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

An embodiment of the invention discloses a driver face monitoring method, device, terminal and computer readable storage medium. The method comprises: after the system is started, detecting the driver's face in real time within a determined face search area through a face detection model, and incrementing a face recognition frame value and a feature fusion frame value each time a face is detected; whenever the face recognition frame value exceeds the current face recognition frame accumulation threshold, starting the face recognition model to obtain the corresponding face feature vector, then clearing the face recognition frame value and resuming accumulation; if the driver's identity is correct, whenever the feature fusion frame value exceeds the current feature fusion frame accumulation threshold, starting the face feature point detection model to obtain the corresponding face feature points, then clearing the feature fusion frame value and resuming accumulation. The technical scheme of the invention avoids unnecessary computation and greatly reduces the computational load on the vehicle SOC, thereby enabling the system to run efficiently.

Description

Driver face monitoring method, device, terminal and computer readable storage medium
Technical Field
The invention relates to the technical field of driving monitoring, in particular to a method, a device, a terminal and a computer readable storage medium for monitoring the face of a driver.
Background
With the rapid development of ADAS (advanced driver assistance system) technology, driver monitoring systems (DMS) are increasingly being installed in automobiles, and vision processing based on an in-vehicle camera is the most widely adopted scheme at present. The core problem of vision-based processing is the extraction of facial features; whether a scheme can fuse facial features efficiently and stably, and thereby output effective driver state information, determines whether it can be deployed in practice.
At present, a common scheme is to port face recognition applications from other scenarios onto the vehicle-mounted SOC, a typical example being facial feature detection designed to run on mobile phones. The usual steps of this scheme are: train on a general face data set (such as WIDER FACE) to obtain a convolutional neural network model for face detection; apply the trained model to a face recognition data set (such as LFW), perform face alignment, and train further to obtain a face recognition convolutional neural network; and, following the same idea, train on task-specific data sets (such as a fatigue detection data set, an attention detection data set, and an abnormal-action data set) to obtain convolutional neural network models for fatigue detection, attention detection, abnormal-action detection, and so on.
However, this scheme does not take the specific application scenario into account, and in the face of the highly variable illumination conditions of a vehicle cabin, its accuracy and stability are often unsatisfactory. It also ignores that the computing capacity of a vehicle-mounted SOC is generally lower than that of a mobile phone SOC, that the power-consumption requirements on vehicle-mounted equipment are strict, and that a driver monitoring system must operate for long periods, all of which put heavy pressure on later algorithm optimization.
Disclosure of Invention
In view of the above, the present invention is directed to overcoming at least one of the deficiencies in the prior art by providing a driver face monitoring method, device, terminal and computer-readable storage medium.
One embodiment of the present invention provides a method for monitoring a face of a driver, including:
when the monitoring system is started, detecting the driver's face in real time within a determined face search area through a face detection model, and incrementing a face recognition frame value and a feature fusion frame value each time a face is detected;
whenever the face recognition frame value exceeds the current face recognition frame accumulation threshold, starting a face recognition model to obtain a corresponding face feature vector, clearing the face recognition frame value, and resuming accumulation; judging whether the driver's identity is correct according to the face feature vector, inputting two adjacent face feature vectors into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold, and updating the current face recognition frame accumulation threshold to the new one;
if the driver's identity is correct, whenever the feature fusion frame value exceeds the current feature fusion frame accumulation threshold, starting a face feature point detection model to obtain corresponding face feature points, clearing the feature fusion frame value, and resuming accumulation; inputting the detected face and the face feature points into a feature fusion model to obtain corresponding feature information, inputting each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold, and updating the current feature fusion frame accumulation threshold to the new one; wherein the feature information comprises a fatigue degree value and an attention deviation value;
and executing a preset operation according to the obtained feature information.
Further, before the real-time detection of the driver's face in the determined face search area through the face detection model, the above driver face monitoring method further includes:
when the system is in the initial startup stage, initializing a camera data capture module and loading the face detection model;
judging whether a historical face search area saved before the last shutdown exists, and if so, searching for a face in the historical face search area through the face detection model;
if a face is found, enlarging the size of the currently obtained face detection frame and using the enlarged face detection frame as the determined face search area;
and then loading the face recognition model, the face feature point detection model, the feature fusion model and the frame accumulation threshold decision model.
Further, the above driver face monitoring method further includes:
if the historical face search area does not exist, performing the face search over the full image area of the camera;
and if no face is found in the historical face search area, expanding the historical face search area and searching again.
Further, in the above driver face monitoring method, the initialization of the camera data capture module includes:
loading a pre-created shared memory cache region composed of several buffers of the same size, wherein the shared memory cache region is used to store images captured by the camera invoked through the platform's camera data capture interface;
and loading a pre-created capture device context and a capture thread, wherein the capture thread is used to fetch the image frames captured by the camera from the shared memory cache region at a fixed frame rate.
Further, in the above driver face monitoring method, the frame accumulation threshold decision model comprises a calculation formula for the face recognition frame accumulation threshold and a calculation formula for the feature fusion frame accumulation threshold. The former calculates a new face recognition frame accumulation threshold from two adjacent input face feature vectors; the latter calculates a new feature fusion frame accumulation threshold from the input feature information.
The calculation formula for the face recognition frame accumulation threshold is:

$$\cos\theta = \frac{\sum_{i=1}^{N} A_i B_i}{\sqrt{\sum_{i=1}^{N} A_i^2}\sqrt{\sum_{i=1}^{N} B_i^2}}, \qquad e = \begin{cases} N_1, & \cos\theta > C_1 \\ N_2, & C_2 < \cos\theta \le C_1 \\ N_0, & \cos\theta \le C_2 \end{cases}$$

wherein A_i denotes the i-th component of the historical face feature vector A from the previous moment; B_i denotes the i-th component of the face feature vector B at the current moment; N denotes the vector dimension; cos θ denotes the cosine similarity; e denotes the output face recognition frame accumulation threshold; C_1 and C_2 denote the two cosine similarity thresholds used for grading; N_1 and N_2 denote the face recognition frame accumulation thresholds corresponding to the different levels; and N_0 denotes the original face recognition frame accumulation threshold.
Further, in the above driver face monitoring method, the feature information comprises a fatigue degree value and an attention deviation value, and the calculation formula for the feature fusion frame accumulation threshold is:

$$X = \begin{cases} X_1, & c > K_1 \\ X_2, & K_2 < c \le K_1 \\ X_3, & K_3 < c \le K_2 \\ X_0, & c \le K_3 \end{cases}$$

where the current face feature fusion value c is computed from the fatigue degree values and attention deviation values, weighted by their respective coefficients, together with an attenuated history term; a_i denotes the i-th fatigue degree value; b_i denotes the i-th attention deviation value; M_1 denotes the fatigue degree value coefficient; M_2 denotes the attention deviation value coefficient; M_3 denotes an attenuation factor; c_k denotes the historical face feature fusion values of the previous k outputs; d_k denotes the feature fusion frame accumulation thresholds of the previous k outputs; K_1, K_2 and K_3 denote the three historical face feature fusion value thresholds used for grading; X_1, X_2 and X_3 denote the feature fusion frame accumulation thresholds corresponding to the different levels; and X_0 denotes the original feature fusion frame accumulation threshold.
Further, in the above driver face monitoring method, the face detection model, the face recognition model, the face feature point detection model and the feature fusion model are obtained by pre-training on collected multi-scene driver face data based on deep convolutional neural networks.
Another embodiment of the present invention provides a driver face monitoring apparatus, comprising:
a face detection module, configured to detect the driver's face in real time in the determined face search area through the face detection model after the monitoring system is started, and to increment the face recognition frame value and the feature fusion frame value each time a face is detected;
a face recognition module, configured to start a face recognition model to obtain a corresponding face feature vector whenever the face recognition frame value exceeds the current face recognition frame accumulation threshold, and to clear the face recognition frame value and resume accumulation; to judge whether the driver's identity is correct according to the face feature vector; and to input two adjacent face feature vectors into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold and update the current one accordingly;
a feature acquisition module, configured, if the driver's identity is correct, to start a face feature point detection model to obtain corresponding face feature points whenever the feature fusion frame value exceeds the current feature fusion frame accumulation threshold, and to clear the feature fusion frame value and resume accumulation; to input the detected face and face feature points into a feature fusion model to obtain corresponding feature information, the feature information comprising a fatigue degree value and an attention deviation value; and to input each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold and update the currently stored one accordingly;
and an operation execution module, configured to execute a preset operation according to the obtained feature information.
Another embodiment of the present invention provides a terminal, comprising a processor and a memory, the memory storing a computer program and the processor being configured to execute the computer program to implement the above driver face monitoring method.
Yet another embodiment of the present invention provides a computer-readable storage medium storing a computer program that, when executed, implements the driver face monitoring method described above.
The technical scheme of the embodiment of the invention has the following beneficial effects:
the method provided by the embodiment of the invention utilizes a trained face detection model to carry out face real-time detection in an obtained face search area, and based on a frame accumulation threshold decision model, a dynamic frame accumulation threshold which can take both performance and resource occupation into consideration is obtained by running and calibrating on a real vehicle, and the face recognition model and the face characteristic point detection model are activated only when the corresponding frame accumulation value exceeds the current threshold, so that unnecessary operation is avoided, the operation pressure of a vehicle-mounted SOC is greatly reduced, the efficient operation of a monitoring system is realized, the improvement of the overall architecture of the existing driver monitoring system is not needed, and the like.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention. Like components are numbered similarly in the various figures.
FIG. 1 is a first flowchart of a driver face monitoring method according to an embodiment of the present invention;
FIG. 2 is a second flowchart of a driver face monitoring method according to an embodiment of the present invention;
FIG. 3 is a third flowchart of a driver face monitoring method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a shared memory cache region in a driver face monitoring method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a driver face monitoring apparatus according to an embodiment of the present invention.
Description of the main element symbols:
10-driver face monitoring means; 110-a face detection module; 120-a face recognition module; 130-a feature acquisition module; 140-operation execution module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Hereinafter, the terms "including", "having", and their derivatives, as used in various embodiments of the present invention, are intended only to indicate specific features, numbers, steps, operations, elements, components, or combinations thereof, and should not be construed as excluding the existence, or possible addition, of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
Example 1
Referring to FIG. 1, the present embodiment provides a driver face monitoring method that can be applied to monitoring states such as whether the driver is fatigued or suddenly ill while driving, so as to improve driving safety. The driver face monitoring method is described in detail below.
Step S11: after the monitoring system is started, the driver's face is detected in real time in the determined face search area through the face detection model.
Exemplarily, if the monitoring system detects a valid driver face in the determined face search area, step S12 is executed. If no face is detected, the face search area is moved and detection continues; step S12 is not executed until a valid face is detected.
Step S12: each time a valid face is detected, the face recognition frame value and the feature fusion frame value are incremented.
Exemplarily, two counters are maintained: one is the face recognition frame value, used together with the dynamically adjusted face recognition frame accumulation threshold; the other is the feature fusion frame value, used together with the dynamically adjusted feature fusion frame accumulation threshold. In step S12, each time a valid face is detected, both values are incremented by one. A valid face is one detected by the pre-trained face detection model.
Step S13: judge whether the face recognition frame value exceeds the current face recognition frame accumulation threshold.
For step S13, if the accumulated face recognition frame value is greater than the current face recognition frame accumulation threshold, steps S14 and S15 are performed. If it is less than or equal to the threshold, accumulation continues until the threshold is exceeded. Steps S14 and S15 may be executed simultaneously or in a set order, which is not limited here.
Step S14: if the threshold is exceeded, the face recognition model is started to obtain the corresponding face feature vector.
Exemplarily, the system inputs the detected valid face into a pre-trained face recognition model, which outputs a corresponding face feature vector. Next, the face feature vector is compared with the pre-stored face of the driver to determine whether the driver's identity is correct, i.e., step S16 is executed.
It can be understood that the face recognition model is not started while the face recognition frame value accumulates from zero toward the face recognition frame accumulation threshold; only when the accumulated value exceeds the current threshold is the face recognition model started to perform face recognition once, outputting one face feature vector. In other words, the face recognition operation is performed once every certain number of frames, and that number depends on the size of the face recognition frame accumulation threshold.
Step S15: if the threshold is exceeded, the face recognition frame value is cleared and accumulation restarts.
For step S15, whenever the face recognition frame value accumulates beyond the current face recognition frame accumulation threshold, the value is cleared and accumulation of the number of valid face detections restarts.
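Taken together, steps S11 to S15 form a simple per-frame gating loop. The following C++ fragment is a minimal sketch of that loop only; the detectFace and runFaceRecognition stubs and all numeric values are illustrative stand-ins for the patent's trained models and calibrated thresholds, not an actual implementation.

```cpp
#include <cstdint>

// Stubs standing in for the patent's trained networks; real
// implementations would wrap the detection/recognition models.
bool detectFace(const std::uint8_t* /*frame*/) { return true; }    // face detection model (stub)
void runFaceRecognition(const std::uint8_t* /*frame*/) {}          // face recognition model (stub)

struct FrameGate {
    int faceRecFrames = 0;   // face recognition frame value
    int fusionFrames  = 0;   // feature fusion frame value
    int faceRecThresh = 30;  // current face recognition frame accumulation threshold (example)
    int fusionThresh  = 15;  // current feature fusion frame accumulation threshold (example)
};

// Called once per captured frame; implements the gating of steps S11-S15.
void onFrame(const std::uint8_t* frame, FrameGate& g) {
    if (!detectFace(frame)) return;        // S11: no valid face, counters untouched

    ++g.faceRecFrames;                     // S12: accumulate both frame values
    ++g.fusionFrames;

    if (g.faceRecFrames > g.faceRecThresh) {
        runFaceRecognition(frame);         // S14: start the face recognition model once
        g.faceRecFrames = 0;               // S15: clear, then re-accumulate
        // Steps S17-S18 would now update g.faceRecThresh via the
        // frame accumulation threshold decision model.
    }
}
```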
Step S16: judge whether the driver's identity is correct according to the obtained face feature vector.
In step S16, if the identity is correct, the method proceeds to acquire the driver's feature information, such as the fatigue degree and attention deviation, so as to judge whether the driver is fatigued or inattentive. If the identity is incorrect, optionally, an alarm may be issued immediately, without performing the subsequent, now unnecessary, feature fusion operations.
Step S17: input the two adjacent face feature vectors into the frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold.
In this embodiment, the frame accumulation threshold decision model consists of two parts: a calculation formula for the face recognition frame accumulation threshold and a calculation formula for the feature fusion frame accumulation threshold. The former calculates a new face recognition frame accumulation threshold from the two adjacent input face feature vectors; the latter calculates a new feature fusion frame accumulation threshold from the input feature information.
Exemplarily, after two adjacent face feature vectors are acquired, the next face recognition frame accumulation threshold can be determined from their cosine similarity. Generally, if the similarity between two adjacent face feature vectors is high, the next face recognition frame accumulation threshold may be somewhat larger, i.e., the face recognition model is started at longer frame intervals. Of course, other constraints may be added in practical applications to dynamically adjust the face recognition frame accumulation threshold.
Exemplarily, the calculation formula for the face recognition frame accumulation threshold is:

$$\cos\theta = \frac{\sum_{i=1}^{N} A_i B_i}{\sqrt{\sum_{i=1}^{N} A_i^2}\sqrt{\sum_{i=1}^{N} B_i^2}}, \qquad e = \begin{cases} N_1, & \cos\theta > C_1 \\ N_2, & C_2 < \cos\theta \le C_1 \\ N_0, & \cos\theta \le C_2 \end{cases}$$

wherein A_i denotes the i-th component of the historical face feature vector A from the previous moment; B_i denotes the i-th component of the face feature vector B at the current moment; N denotes the vector dimension; cos θ denotes the cosine similarity between the two adjacent face feature vectors; e denotes the output face recognition frame accumulation threshold; C_1 and C_2 denote the two cosine similarity thresholds that divide the range into three intervals, i.e., three levels; N_1 and N_2 denote the face recognition frame accumulation thresholds corresponding to the different levels; and N_0 denotes the original face recognition frame accumulation threshold (i.e., the preset default). For example, if the cosine similarity calculated this time is greater than C_1, the face recognition frame accumulation threshold e is set to N_1; if it is less than C_2, e is set to N_0. The computed value then replaces the current face recognition frame accumulation threshold.
The values N_0 to N_2, the face recognition frame accumulation thresholds corresponding to the three level intervals, and C_1 and C_2, the two cosine similarity thresholds, can be obtained by testing and calibration on a real vehicle. Exemplarily, video data is collected through the vehicle-mounted camera; the data should cover as many scenarios as possible, including interference. Each frame is then labeled, an objective function jointly optimizing accuracy and frame rate is constructed, and the values are calibrated with an optimization algorithm such as a genetic algorithm. The grading need not be limited to three levels and can be chosen according to the actual situation. In this embodiment three levels are preferred, mainly based on practical experience: with too many levels, the amount of data required for calibration becomes too large and the model converges poorly; with too few, the requirements cannot be met.
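As a concrete illustration, the sketch below computes the cosine similarity of two adjacent feature vectors and grades it into the three levels described above. All numeric constants are placeholder assumptions; the patent obtains C_1, C_2 and N_0 to N_2 by real-vehicle calibration.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Cosine similarity between the previous feature vector A and the
// current feature vector B, as in the formula above.
double cosineSimilarity(const std::vector<double>& A, const std::vector<double>& B) {
    double dot = 0.0, normA = 0.0, normB = 0.0;
    for (std::size_t i = 0; i < A.size(); ++i) {
        dot   += A[i] * B[i];
        normA += A[i] * A[i];
        normB += B[i] * B[i];
    }
    double denom = std::sqrt(normA) * std::sqrt(normB);
    return denom > 0.0 ? dot / denom : 0.0;  // guard against zero vectors
}

// Three-level grading of the next face recognition frame accumulation
// threshold e. C1 > C2; all numbers are placeholders, not the patent's
// calibrated values.
int nextFaceRecThreshold(double cosTheta) {
    const double C1 = 0.95, C2 = 0.80;    // cosine similarity thresholds (assumed)
    const int N1 = 60, N2 = 30, N0 = 10;  // per-level thresholds, N0 = original (assumed)
    if (cosTheta > C1) return N1;         // very similar: recognize less often
    if (cosTheta > C2) return N2;
    return N0;                            // low similarity: fall back to the original threshold
}
```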
Step S18: update the current face recognition frame accumulation threshold to the new face recognition frame accumulation threshold.
In step S18, the value e output by the above formula replaces the current face recognition frame accumulation threshold and serves as the new threshold for the next start of the face recognition model. For example, if the current face recognition frame accumulation threshold is N_3 and the formula outputs N_4, the face recognition model is started once the next accumulated frame value exceeds N_4.
Step S19: if the driver's identity is correct, judge whether the feature fusion frame value exceeds the current feature fusion frame accumulation threshold.
Similarly to the face recognition frame value, on the premise that the driver's identity is correct, it is judged whether the feature fusion frame value exceeds the current feature fusion frame accumulation threshold; if so, steps S20 and S21 are executed. If not, accumulation continues.
Step S20: if so, the face feature point detection model is started to obtain the corresponding face feature points.
Exemplarily, the system inputs the recognized driver face into a pre-trained face feature point detection model, which outputs the corresponding face feature point frames, such as region frames for the eyes and mouth, before step S22 is performed. Generally, when a person is fatigued, the facial features, especially the eyes, show obvious state changes, so detecting the face feature points helps in judging the driver's state and behaviour.
Step S21: if the threshold is exceeded, the feature fusion frame value is cleared and accumulation restarts.
For step S21, whenever the feature fusion frame value accumulates beyond the current feature fusion frame accumulation threshold, the value is cleared and accumulation of the number of valid face detections restarts.
Step S22: input the detected face and the face feature points into the feature fusion model to obtain the corresponding feature information.
In step S22, the obtained face detection frame and the region frames of feature points such as the eyes and mouth obtained in step S20 are input into the pre-trained feature fusion model, which outputs the corresponding feature information. Exemplarily, the feature information includes, but is not limited to, a fatigue degree value, an attention deviation value, and abnormal actions.
It can be understood that the feature fusion model is not started while the feature fusion frame value accumulates from zero toward the feature fusion frame accumulation threshold; only when the accumulated value exceeds the current threshold is the feature fusion model started to perform feature fusion once, yielding one set of feature information.
Step S23: input each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold.
Exemplarily, if the feature information includes a fatigue degree value and an attention deviation value, the calculation formula for the feature fusion frame accumulation threshold is:

$$X = \begin{cases} X_1, & c > K_1 \\ X_2, & K_2 < c \le K_1 \\ X_3, & K_3 < c \le K_2 \\ X_0, & c \le K_3 \end{cases}$$

where the current face feature fusion value c is computed from the fatigue degree values and attention deviation values, weighted by their respective coefficients, together with an attenuated history term; a_i denotes the i-th fatigue degree value; b_i denotes the i-th attention deviation value; M_1 denotes the fatigue degree value coefficient; M_2 denotes the attention deviation value coefficient; M_3 denotes an attenuation factor; c_k denotes the historical face feature fusion values of the previous k outputs; d_k denotes the feature fusion frame accumulation thresholds of the previous k outputs; K_1, K_2 and K_3 denote the three historical face feature fusion value thresholds that divide the range into four intervals, i.e., four levels; X_1, X_2 and X_3 denote the feature fusion frame accumulation thresholds corresponding to the different levels; and X_0 denotes the original feature fusion frame accumulation threshold.
The values X_0 to X_3, the feature fusion frame accumulation thresholds corresponding to the four level intervals, and K_1 to K_3, the three historical face feature fusion value thresholds, can likewise be obtained by testing and calibration on a real vehicle. The calibration method is similar to that for the face recognition frame accumulation threshold and is not repeated here. In this embodiment four levels are preferred, but more or fewer levels may be used depending on the actual situation.
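The fusion-side decision can be sketched in the same way. Because the patent reproduces this formula only as an image, the weighted-sum form of the fusion value c below is an assumption inferred from the variable definitions; only the four-interval grading against K_1 to K_3 follows directly from the text, and every numeric constant is a placeholder.

```cpp
#include <vector>

// Assumed form of the face feature fusion value: weighted current fatigue
// and attention-deviation values plus an attenuated history term. The
// patent's exact formula is given only as an image and may differ (e.g.
// it also involves the previously output thresholds d_k, omitted here).
double fusionValue(const std::vector<double>& fatigue,    // a_i
                   const std::vector<double>& attention,  // b_i
                   const std::vector<double>& history,    // c_k, previous fusion values
                   double M1, double M2, double M3) {
    double c = 0.0;
    for (double a : fatigue)   c += M1 * a;   // fatigue term, weighted by M1
    for (double b : attention) c += M2 * b;   // attention-deviation term, weighted by M2
    double h = 0.0;
    for (double ck : history)  h += ck;       // previous k fusion values
    return c + M3 * h;                        // M3 attenuates the history term
}

// Four-level grading against K1 > K2 > K3; the X values are placeholders
// for the real-vehicle calibrated thresholds.
int nextFusionThreshold(double c) {
    const double K1 = 3.0, K2 = 2.0, K3 = 1.0;    // assumed grading thresholds
    const int X1 = 5, X2 = 10, X3 = 20, X0 = 15;  // assumed per-level thresholds
    if (c > K1) return X1;   // strong fatigue/deviation signal: fuse more often
    if (c > K2) return X2;
    if (c > K3) return X3;
    return X0;               // otherwise the original threshold
}
```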
Step S24: update the current feature fusion frame accumulation threshold to the new feature fusion frame accumulation threshold.
In step S24, the value calculated by the above formula serves as the new feature fusion frame accumulation threshold for the next start of the feature fusion model. The principle is the same as for the face recognition frame value and is not repeated here.
Considering that a driver's facial feature state changes slowly under most driving conditions, running the face recognition model and feature fusion model on every captured image frame would impose high system resource occupancy and load. Conversely, a fixed frame accumulation threshold would avoid a large amount of computation but might fail to track scenes in which the facial features change rapidly, leaving the monitoring system unable to handle emergencies and introducing large response delays. To solve these problems, this embodiment proposes the frame accumulation threshold decision model, based on historical facial feature data, to dynamically calculate the face recognition frame accumulation threshold and the feature fusion frame accumulation threshold, balancing the demand for low computation against quick response to facial feature changes and thereby enabling efficient operation of the monitoring system.
Step S25: execute a preset operation according to the obtained feature information.
For example, the preset operation may include, but is not limited to, saving and displaying the feature information. Optionally, if a piece of feature information such as the fatigue degree value exceeds a preset threshold, the preset operation may also be an alarm, reminding the relevant personnel to pay attention.
In this embodiment, the above face detection model, face recognition model, face feature point detection model and feature fusion model may, exemplarily, be obtained by pre-training based on neural networks such as convolutional neural networks and deep convolutional neural networks.
For example, a large multi-scene driver face data set can be acquired through the vehicle-mounted camera, including face images under different illumination conditions with different fatigue degree values and attention deviation values (such as looking straight ahead, looking left, and looking right), and the data set can be annotated with face frames, face feature points (such as the eyes and mouth), and state behaviours (such as fatigue degree, attention, and abnormal actions).
The face detection model can then be trained from the face frame annotations; a face feature point detection model is trained from the face feature point annotations, and its output yields the region frames for features such as the eyes and mouth. In addition, the face detection frame and the eye, mouth and other region frames are used as the input of a convolutional neural network, with state behaviours such as fatigue degree, attention deviation and abnormal actions as the output, to train the feature fusion model. Meanwhile, a convolutional neural network is trained on a face recognition data set to obtain the face recognition model for recognizing the driver's identity.
It can be understood that dynamically determining the face recognition frame accumulation threshold and the feature fusion frame accumulation threshold via the frame accumulation threshold decision model, and starting the corresponding model only when its condition is met, greatly reduces the required amount of computation and thereby lowers the performance demands the monitoring system places on the vehicle-mounted SOC. The driver face monitoring system of this embodiment can therefore run efficiently on a resource-limited vehicle-mounted SOC, avoiding the high resource occupancy caused by long-running complex background computation in the prior art. No additional vision processor needs to be added for separate computation, so the integration complexity and cost that extra hardware would bring are avoided, yielding good economic benefits.
Example 2
Referring to FIG. 2, based on the method of embodiment 1, the driver face monitoring method of this embodiment further includes, before step S11: when the system is in the initial startup stage, using historical face search data to quickly locate the face search area, so that the area can be determined with less computation and the occupancy of system resources is reduced.
Exemplarily, as shown in fig. 2, the method mainly comprises the following steps:
step S1, when the system is in the initial startup phase, the camera data capturing module is initialized and the face detection model is loaded.
Exemplarily, when the driver face monitoring system is in an initial stage of startup, a camera data capturing module associated with a camera is initialized, and a face detection model is loaded for face detection after the initialization is successful.
Step S2, determine whether there is a historical face search area saved before the previous shutdown.
Since a driver usually has a face in a relatively fixed area during driving, it is possible to determine whether or not there is historical face search data to quickly determine whether or not the face search area exists. If so, the face detection in the area can be preferentially performed, that is, step S3 is executed, so as to reduce the resource occupation required when searching in a wide range. Of course, if not, the extended range search is performed in the entire image area, that is, step S5 is performed.
Step S3: if the historical face search area exists, the face search is performed in it through the face detection model. For step S3, if a face is found, step S4 is executed; if no face is found, step S5 is executed.
Step S4: if a face is found, the size of the currently obtained face detection frame is enlarged, and the enlarged face detection frame is used as the determined face search area.
Exemplarily, if the presence of a face is detected in the historical face search area, the size of the face detection frame may be appropriately enlarged to obtain a larger frame serving as the new face search area, as in the sketch below; step S7 is then executed.
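Such a search area can be derived from the detection box with a simple margin expansion; the 30% margin below is an arbitrary illustration, not a value specified by the patent.

```cpp
#include <algorithm>

struct Rect { int x, y, w, h; };

// Expand a face detection box by `factor` of its size on each side,
// clamped to the image bounds, to form the new face search area.
// factor = 0.3 is an illustrative choice only.
Rect expandToSearchArea(const Rect& face, int imgW, int imgH, double factor = 0.3) {
    int dx = static_cast<int>(face.w * factor);
    int dy = static_cast<int>(face.h * factor);
    Rect r;
    r.x = std::max(0, face.x - dx);
    r.y = std::max(0, face.y - dy);
    r.w = std::min(imgW - r.x, face.w + 2 * dx);   // keep the box inside the image
    r.h = std::min(imgH - r.y, face.h + 2 * dy);
    return r;
}
```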
Step S5: if no face is found in the historical face search area, the historical face search area is expanded and searched again.
Exemplarily, the search range is extended with the historical face search area as reference so as to further locate the face.
Step S6: if there is no historical face search area, the face search is performed over the full image area of the camera.
Step S7: load the face recognition model, the face feature point detection model, the feature fusion model and the frame accumulation threshold decision model.
Exemplarily, when the face search area is determined quickly in step S4, execution can jump directly to step S7; in the cases of steps S5 and S6, the models are loaded after the face search area is finally determined. The training of these models is the same as in embodiment 1 and is not repeated here.
In one embodiment, as shown in FIG. 3, the initialization of the camera data capture module in step S1 mainly includes:
Step S101: load a pre-created shared memory cache region composed of several buffers of the same size. The shared memory cache region stores the images captured by the camera invoked through the platform's camera data capture interface.
Step S102: load the pre-created capture device context and capture thread. The capture thread fetches the image frames captured by the camera from the shared memory cache region at a fixed frame rate.
In this embodiment, the camera data capture module uses the camera capture interface of the system platform to avoid unnecessary copying of camera data, saving resource occupancy and data acquisition time. Preferably, the shared memory cache region is created in advance at the bottom layer of the system, after which the capture device context and capture thread are created so that other modules can access the camera image data stored in the shared memory cache region.
For example, as shown in FIG. 4, the shared memory cache region may consist of several buffers of the same size, each able to store one complete frame of image data. The capture thread may fetch at a fixed frame rate, set according to the camera's image capture frame rate; generally, the fixed frame rate is lower than the capture frame rate.
For steps S101 and S102, loading the pre-created shared memory cache region, capture device context and capture thread during initialization enables subsequent cross-module sharing of camera data. This saves data copying time and improves acquisition efficiency, and, especially on resource-limited systems, avoids the large memory occupancy that copying would incur. A minimal sketch of such a buffer pool and capture thread follows.
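QNX is POSIX-compliant, so a buffer pool of this kind can be sketched with standard shm_open/mmap calls plus a fixed-rate capture thread. This is a minimal sketch under that assumption: the shared-memory name, buffer geometry and frame rate are hypothetical, and a real implementation would have the platform's camera capture interface write frames directly into the buffers.

```cpp
#include <fcntl.h>      // shm_open
#include <sys/mman.h>   // mmap
#include <unistd.h>     // ftruncate, close
#include <atomic>
#include <chrono>
#include <cstdint>
#include <thread>

constexpr int    kBufCount = 4;                   // several equal-size buffers
constexpr size_t kBufSize  = 1280 * 720 * 3 / 2;  // one NV12 720p frame (assumed format)

struct SharedPool {
    uint8_t* base = nullptr;       // start of the mapped region
    std::atomic<int> latest{-1};   // index of the newest complete frame
    uint8_t* buffer(int i) { return base + i * kBufSize; }
};

// Create and map the shared memory cache region ("/dms_frames" is an assumed name).
bool initPool(SharedPool& pool) {
    int fd = shm_open("/dms_frames", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return false;
    if (ftruncate(fd, kBufCount * kBufSize) != 0) { close(fd); return false; }
    void* p = mmap(nullptr, kBufCount * kBufSize,
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                     // the mapping keeps the object alive
    if (p == MAP_FAILED) return false;
    pool.base = static_cast<uint8_t*>(p);
    return true;
}

// Capture thread: publishes frames at a fixed rate (25 fps here, assumed
// to be below the camera's own capture rate).
void captureLoop(SharedPool& pool, std::atomic<bool>& running) {
    const auto period = std::chrono::milliseconds(40);
    int i = 0;
    while (running) {
        // A real system would have the platform camera interface write
        // directly into pool.buffer(i); no extra copy is made here.
        pool.latest.store(i, std::memory_order_release);
        i = (i + 1) % kBufCount;
        std::this_thread::sleep_for(period);
    }
}
```

A consumer module would map the same shared-memory name and read the buffer indicated by latest, so the image data crosses module boundaries without being copied.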
In this embodiment, the driver face monitoring system may adopt the QNX system, making use of the zero-copy camera data sharing and fast startup supported by the QNX platform. Of course, other systems may be used as long as they satisfy two conditions: first, support for zero-copy camera data sharing among multiple modules; second, support for fast startup, so that the DMS can begin working within 2 s. The initial startup stage of the system can be understood as the stage right after fast startup, whose speed is exploited to quickly locate the face search area.
It can be understood that, in addition to the key point of embodiment 1, which dynamically determines the face recognition frame accumulation threshold and the feature fusion frame accumulation threshold through the frame accumulation threshold decision model to achieve efficient operation of the monitoring system, this embodiment also proceeds from two further aspects:
first, by means of an efficient device capture interface such as that of the QNX platform, a shared memory cache region is created for storing image data, so that camera data can be shared across modules without copying while keeping the image data synchronized;
second, considering that the driver's face appears in the same position area most of the time, the fast startup of systems such as QNX is exploited so that, in the initial startup stage, historical face data can be combined to quickly locate the face search area, and the face detection frame can thus usually be obtained with a smaller amount of computation.
Based on these two points, unnecessary computation can be further reduced, achieving efficient operation of the driver face monitoring system without increasing hardware cost.
Example 3
Referring to fig. 5, based on the method for monitoring the face of the driver in embodiment 1, the present embodiment provides a device 10 for monitoring the face of the driver, including:
the face detection module 110 is configured to, after the monitoring system is started, perform real-time detection on the face of the driver in the determined face search area through the face detection model, and perform accumulation counting on the face recognition frame value and the feature fusion frame value respectively when the face is detected each time;
the face recognition module 120 is configured to start a face recognition model to obtain a corresponding face feature vector whenever the face recognition frame value exceeds a current face recognition frame accumulation threshold, and perform re-accumulation counting after clearing the face recognition frame value; judging whether the identity of the driver is correct according to the corresponding face feature vectors, inputting two adjacent face feature vectors into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold, and updating the current face recognition frame accumulation threshold to the new face recognition frame accumulation threshold;
a feature obtaining module 130, configured to, if the identity of the driver is correct, start a face feature point detection model to obtain a corresponding face feature point whenever the feature fusion frame value exceeds a current feature fusion frame accumulation threshold, and perform re-accumulation counting after clearing the feature fusion frame value; inputting the detected human face and the human face characteristic points into a characteristic fusion model to obtain corresponding characteristic information, inputting each characteristic information into the frame accumulation threshold decision model to obtain a new characteristic fusion frame accumulation threshold, and updating the currently stored characteristic fusion frame accumulation threshold into the new characteristic fusion frame accumulation threshold; wherein the characteristic information comprises a fatigue degree value and an attention deviation value;
and an operation executing module 140, configured to execute a preset operation according to the obtained feature information.
It is to be understood that the driver face monitoring apparatus 10 described above corresponds to the driver face monitoring method of embodiment 1. Any of the options in embodiment 1 are also applicable to this embodiment, and will not be described in detail here.
The invention also provides a terminal, such as a vehicle-mounted electronic device, comprising a memory and a processor. The memory stores a computer program, and by running the computer program the processor causes the terminal device to perform the above driver face monitoring method or the functions of the modules in the above driver face monitoring apparatus.
The memory may include a program storage area and a data storage area, where the program storage area may store an operating system and the application program required for at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Furthermore, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The present invention also provides a computer-readable storage medium for storing the computer program used in the above-mentioned terminal.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part that contributes beyond the prior art, can be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (10)

1. A method for monitoring a face of a driver, comprising:
when the monitoring system is started, the real-time detection of the face of the driver is carried out in the determined face searching area through the face detection model, and the face recognition frame value and the feature fusion frame value are respectively accumulated and counted when the face is detected each time;
when the face recognition frame value exceeds the current face recognition frame accumulation threshold value, starting a face recognition model to obtain a corresponding face characteristic vector, and carrying out accumulated counting again after clearing the face recognition frame value; judging whether the identity of the driver is correct according to the corresponding face feature vectors, inputting two adjacent face feature vectors into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold, and updating the current face recognition frame accumulation threshold to the new face recognition frame accumulation threshold;
if the identity of the driver is correct, when the feature fusion frame value exceeds the current feature fusion frame accumulation threshold, starting a face feature point detection model to obtain corresponding face feature points, and clearing the feature fusion frame value before accumulating the count again; inputting the detected face and the face feature points into a feature fusion model to obtain corresponding feature information, inputting each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold, and updating the current feature fusion frame accumulation threshold to the new feature fusion frame accumulation threshold; wherein the feature information comprises a fatigue degree value and an attention deviation value;
and executing a preset operation according to the obtained feature information.
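For orientation, the control flow of claim 1 can be sketched in Python. This is a minimal illustrative sketch, not the patented implementation: the Models bundle, every callable name, and the starting thresholds of 30 and 15 frames are assumptions introduced here.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Models:
    """Hypothetical bundle of the models and decision rules in claim 1."""
    detect_face: Callable              # frame -> face crop, or None
    recognize_face: Callable           # face -> feature vector
    verify_identity: Callable          # feature vector -> bool
    detect_landmarks: Callable         # face -> face feature points
    fuse_features: Callable            # (face, landmarks) -> (fatigue, attention)
    decide_rec_threshold: Callable     # (prev_vec, cur_vec) -> int
    decide_fusion_threshold: Callable  # (fatigue, attention) -> int

def monitor_driver(frames: Iterable, m: Models,
                   rec_threshold: int = 30, fusion_threshold: int = 15) -> None:
    rec_count = 0        # face recognition frame value
    fusion_count = 0     # feature fusion frame value
    prev_vector = None
    identity_ok = False

    for frame in frames:
        face = m.detect_face(frame)
        if face is None:
            continue
        rec_count += 1                     # accumulate both counters per detection
        fusion_count += 1

        if rec_count > rec_threshold:
            rec_count = 0                  # clear, then re-accumulate
            vector = m.recognize_face(face)
            identity_ok = m.verify_identity(vector)
            if prev_vector is not None:    # two adjacent feature vectors
                rec_threshold = m.decide_rec_threshold(prev_vector, vector)
            prev_vector = vector

        if identity_ok and fusion_count > fusion_threshold:
            fusion_count = 0               # clear, then re-accumulate
            landmarks = m.detect_landmarks(face)
            fatigue, attention = m.fuse_features(face, landmarks)
            fusion_threshold = m.decide_fusion_threshold(fatigue, attention)
            print(f"fatigue={fatigue:.2f} attention={attention:.2f}")  # preset op
```

The design point the sketch makes concrete: the expensive models (recognition, feature points, fusion) run only once per threshold-many detections, and the decision model retunes both cadences after every run.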
2. The method for monitoring the face of the driver according to claim 1, wherein before the real-time detection of the face of the driver in the determined face search area through the face detection model, the method further comprises:
when the system is in an initial startup stage, initializing a camera data capturing module and loading the face detection model;
judging whether a historical face search area stored before the last shutdown exists, and if so, searching for a face through the face detection model in the historical face search area;
if a face is found, enlarging the currently obtained face detection frame and taking the enlarged face detection frame as the determined face search area;
and then loading the face recognition model, the face feature point detection model, the feature fusion model and the frame accumulation threshold decision model.
3. The method for monitoring the face of a driver according to claim 2, further comprising:
if the historical face search area does not exist, performing the face search in the full image area of the camera;
and if no face is found in the historical face search area, expanding the historical face search area and searching again.
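Claims 2 and 3 together describe a coarse-to-fine startup search: the saved region first, then an enlarged version of it, then the full image. A minimal sketch, assuming a numpy-style image, a `detect(image, region)` callable returning a box or None, and an illustrative 1.5x enlargement factor (the claims do not fix the factor):

```python
def determine_search_area(image, detect, saved_region=None, scale=1.5):
    """Startup face search per claims 2 and 3. `detect(image, region)` is
    assumed to return a face box (x, y, w, h) or None; `image` is assumed
    to be a numpy-style array with shape (height, width, ...)."""
    h_img, w_img = image.shape[:2]

    def enlarge(box, factor):
        # Grow a box about its center, clipped to the image bounds.
        x, y, w, h = box
        cx, cy = x + w / 2.0, y + h / 2.0
        w, h = w * factor, h * factor
        x, y = max(0.0, cx - w / 2.0), max(0.0, cy - h / 2.0)
        return (x, y, min(w, w_img - x), min(h, h_img - y))

    candidates = []
    if saved_region is not None:
        candidates.append(saved_region)                  # historical area (claim 2)
        candidates.append(enlarge(saved_region, scale))  # expanded area (claim 3)
    candidates.append((0, 0, w_img, h_img))              # full image (claim 3)

    for region in candidates:
        face = detect(image, region)
        if face is not None:
            # The enlarged detection frame becomes the search area (claim 2).
            return enlarge(face, scale)
    return None
```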
4. The method for monitoring the face of a driver as claimed in claim 2, wherein the initialization of the camera data capturing module comprises:
loading a pre-created shared memory buffer region composed of a plurality of buffer regions of the same size, wherein the shared memory buffer region is used for storing images captured by a camera invoked through a camera data capturing interface of the platform;
and loading a pre-created capture device context and a capture thread, wherein the capture thread is used for acquiring the image frames captured by the camera from the shared memory buffer region at a fixed frame rate.
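Claim 4's capture path is essentially a producer/consumer over a pool of equally sized buffers drained at a fixed rate. A sketch using Python threading as a stand-in for the platform's shared memory and capture interface; the slot count, slot size, and 30 fps rate are illustrative assumptions:

```python
import threading
import time

class FrameRing:
    """Stand-in for the pre-created shared memory region of claim 4: a
    fixed pool of equally sized slots written by the camera side and
    read by the capture thread."""
    def __init__(self, num_slots: int = 4, slot_bytes: int = 1280 * 720 * 2):
        self.slots = [bytearray(slot_bytes) for _ in range(num_slots)]
        self.latest = -1                   # index of the newest written slot
        self.lock = threading.Lock()

    def write(self, data: bytes) -> None:  # producer: camera capture interface
        with self.lock:
            idx = (self.latest + 1) % len(self.slots)
            self.slots[idx][:len(data)] = data
            self.latest = idx

    def read_latest(self):                 # consumer: capture thread
        with self.lock:
            return None if self.latest < 0 else bytes(self.slots[self.latest])

def capture_loop(ring: FrameRing, handle_frame, fps: float = 30.0,
                 stop: threading.Event = None) -> None:
    """Fixed-rate consumer corresponding to the claimed capture thread."""
    stop = stop or threading.Event()
    period = 1.0 / fps
    while not stop.is_set():
        frame = ring.read_latest()
        if frame is not None:
            handle_frame(frame)
        time.sleep(period)                 # fixed frame rate
```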
5. The method for monitoring the face of the driver according to claim 1, wherein the frame accumulation threshold decision model comprises a calculation formula of the face recognition frame accumulation threshold and a calculation formula of the feature fusion frame accumulation threshold; the calculation formula of the face recognition frame accumulation threshold is used for calculating a new face recognition frame accumulation threshold according to two adjacent input face feature vectors, and the calculation formula of the feature fusion frame accumulation threshold is used for calculating a new feature fusion frame accumulation threshold according to the input feature information;
the calculation formula of the face recognition frame accumulation threshold is as follows:
$$\cos\theta=\frac{\sum_{i=1}^{n}A_iB_i}{\sqrt{\sum_{i=1}^{n}A_i^{2}}\;\sqrt{\sum_{i=1}^{n}B_i^{2}}},\qquad E=\begin{cases}N_1, & \cos\theta\ge C_1\\ N_2, & C_2\le\cos\theta<C_1\\ N_0, & \cos\theta<C_2\end{cases}$$

wherein $A_i$ represents the ith component of the historical face feature vector $A$ at the previous moment; $B_i$ represents the ith component of the face feature vector $B$ at the current moment; $n$ represents the vector dimension; $\cos\theta$ represents the cosine similarity; $E$ represents the output face recognition frame accumulation threshold; $C_1$ and $C_2$ are the two cosine similarity thresholds used for grading; $N_1$ and $N_2$ represent the face recognition frame accumulation thresholds corresponding to the different grades; and $N_0$ represents the original face recognition frame accumulation threshold.
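Under the reading above (the filed formula is an image, so the piecewise grading is reconstructed from the variable definitions), the update reduces to a cosine similarity followed by a three-way grading. A sketch with illustrative constants, assuming $C_1 > C_2$ and that a well-matched face permits a sparser recognition cadence:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0

def new_recognition_threshold(prev_vec, cur_vec,
                              C1=0.9, C2=0.7, N1=60, N2=30, N0=15):
    """Grade the similarity of two adjacent face feature vectors into a
    new face recognition frame accumulation threshold E. All constants
    are illustrative: the idea is that a stable, well-matched face
    tolerates a sparser (larger) recognition cadence."""
    cos_theta = cosine_similarity(prev_vec, cur_vec)
    if cos_theta >= C1:
        return N1          # high similarity: recognize least often
    if cos_theta >= C2:
        return N2          # medium similarity
    return N0              # low similarity: fall back to the original cadence
```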
6. The method for monitoring the face of the driver according to claim 5, wherein the feature information comprises a fatigue degree value and an attention deviation value, and the calculation formula of the feature fusion frame accumulation threshold is as follows:
$$c=M_1\sum_{i}a_i+M_2\sum_{i}b_i+M_3\sum_{k}\frac{c_k}{d_k},\qquad X=\begin{cases}X_1, & c\ge K_1\\ X_2, & K_2\le c<K_1\\ X_3, & K_3\le c<K_2\\ X_0, & c<K_3\end{cases}$$

wherein $a_i$ represents the ith fatigue degree value; $b_i$ represents the ith attention deviation value; $M_1$ represents the fatigue degree value coefficient; $M_2$ represents the attention deviation value coefficient; $M_3$ represents the attenuation factor; $c_k$ represents the historical face feature fusion value of the kth previous output; $d_k$ represents the feature fusion frame accumulation threshold of the kth previous output; $X$ represents the output feature fusion frame accumulation threshold; $K_1$, $K_2$ and $K_3$ are the three historical face feature fusion value thresholds used for grading; $X_1$, $X_2$ and $X_3$ represent the feature fusion frame accumulation thresholds corresponding to the different grades; and $X_0$ represents the original feature fusion frame accumulation threshold.
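Matching the reconstruction above (again, the filed formula is an image; both the fusion expression and every constant below are assumptions), a fusion-threshold update might look like:

```python
def new_fusion_threshold(fatigue_vals, attention_vals,
                         hist_fusion, hist_thresholds,
                         M1=0.6, M2=0.4, M3=0.1,
                         K1=2.0, K2=1.0, K3=0.5,
                         X1=10, X2=20, X3=40, X0=60):
    """Combine the current fatigue/attention readings with history into a
    fusion value c, then grade c into a new feature fusion frame
    accumulation threshold X. Every coefficient and threshold here is an
    illustrative placeholder."""
    c = (M1 * sum(fatigue_vals)
         + M2 * sum(attention_vals)
         + M3 * sum(ck / dk for ck, dk in zip(hist_fusion, hist_thresholds)))
    if c >= K1:
        return X1          # most alarming state: check most often
    if c >= K2:
        return X2
    if c >= K3:
        return X3
    return X0              # calm state: original (sparsest) cadence
```

Note the direction of the grading in this sketch: a larger fusion value (more fatigue or attention deviation) maps to a smaller threshold, i.e. more frequent feature checks.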
7. The method for monitoring the face of the driver according to claim 1, wherein the face detection model, the face recognition model, the face feature point detection model and the feature fusion model are obtained by pre-training on collected multi-scene face data of the driver based on deep convolutional neural networks.
8. A driver face monitoring apparatus, comprising:
the face detection module is used for detecting the face of the driver in real time in the determined face search area through the face detection model after the monitoring system is started, and accumulating a face recognition frame value and a feature fusion frame value respectively each time a face is detected;
the face recognition module is used for starting a face recognition model to obtain a corresponding face feature vector when the face recognition frame value exceeds the current face recognition frame accumulation threshold, and clearing the face recognition frame value before accumulating the count again; judging whether the identity of the driver is correct according to the corresponding face feature vector, inputting two adjacent face feature vectors into a frame accumulation threshold decision model to obtain a new face recognition frame accumulation threshold, and updating the current face recognition frame accumulation threshold to the new face recognition frame accumulation threshold;
the feature acquisition module is used for, if the identity of the driver is correct, starting a face feature point detection model to obtain corresponding face feature points when the feature fusion frame value exceeds the current feature fusion frame accumulation threshold, and clearing the feature fusion frame value before accumulating the count again; inputting the detected face and the face feature points into a feature fusion model to obtain corresponding feature information, inputting each piece of feature information into the frame accumulation threshold decision model to obtain a new feature fusion frame accumulation threshold, and updating the currently stored feature fusion frame accumulation threshold to the new feature fusion frame accumulation threshold; wherein the feature information comprises a fatigue degree value and an attention deviation value;
and the operation execution module is used for executing a preset operation according to the obtained feature information.
9. A terminal, comprising: a processor and a memory, the memory storing a computer program for execution by the processor to implement the driver face monitoring method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores a computer program that, when executed, implements the driver face monitoring method according to any one of claims 1 to 7.
CN202010092216.0A 2020-02-14 2020-02-14 Driver face monitoring method, device, terminal and computer readable storage medium Active CN111310657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010092216.0A CN111310657B (en) 2020-02-14 2020-02-14 Driver face monitoring method, device, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111310657A true CN111310657A (en) 2020-06-19
CN111310657B CN111310657B (en) 2023-07-07

Family

ID=71149044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010092216.0A Active CN111310657B (en) 2020-02-14 2020-02-14 Driver face monitoring method, device, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111310657B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6535639B1 (en) * 1999-03-12 2003-03-18 Fuji Xerox Co., Ltd. Automatic video summarization using a measure of shot importance and a frame-packing method
US7072521B1 (en) * 2000-06-19 2006-07-04 Cadwell Industries, Inc. System and method for the compression and quantitative measurement of movement from synchronous video
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
CN101551934A (en) * 2009-05-15 2009-10-07 东北大学 Device and method for monitoring fatigue driving of driver
CN102254148A (en) * 2011-04-18 2011-11-23 周曦 Method for identifying human faces in real time under multi-person dynamic environment
CN103770733A (en) * 2014-01-15 2014-05-07 中国人民解放军国防科学技术大学 Method and device for detecting safety driving states of driver
EP3109114A1 (en) * 2014-01-15 2016-12-28 National University of Defense Technology Method and device for detecting safe driving state of driver
CN206271049U (en) * 2016-12-07 2017-06-20 西安蒜泥电子科技有限责任公司 A kind of human face scanning instrument synchronization system device
CN106686314A (en) * 2017-01-18 2017-05-17 广东欧珀移动通信有限公司 Control method, control device and electronic device
CN106874864A (en) * 2017-02-09 2017-06-20 广州中国科学院软件应用技术研究所 A kind of outdoor pedestrian's real-time detection method
CN107832694A (en) * 2017-10-31 2018-03-23 北京赛思信安技术股份有限公司 A kind of key frame of video extraction algorithm
CN110728234A (en) * 2019-10-12 2020-01-24 爱驰汽车有限公司 Driver face recognition method, system, device and medium

Also Published As

Publication number Publication date
CN111310657B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN109584507B (en) Driving behavior monitoring method, device, system, vehicle and storage medium
JP5755012B2 (en) Information processing apparatus, processing method thereof, program, and imaging apparatus
JP6617085B2 (en) Object situation estimation system, object situation estimation apparatus, object situation estimation method, and object situation estimation program
CN110826530A (en) Face detection using machine learning
KR102476022B1 (en) Face detection method and apparatus thereof
EP3910507A1 (en) Method and apparatus for waking up screen
US20220309623A1 (en) Method and apparatus for processing video
KR102095152B1 (en) A method of recognizing a situation and apparatus performing the same
CN111401196A (en) Method, computer device and computer readable storage medium for self-adaptive face clustering in limited space
CN111783797B (en) Target detection method, device and storage medium
CN113191318A (en) Target detection method and device, electronic equipment and storage medium
JP2018005839A (en) Image processing apparatus and image processing method
CN111699509B (en) Object detection device, object detection method, and recording medium
CN112990009A (en) End-to-end-based lane line detection method, device, equipment and storage medium
CN110799984A (en) Tracking control method, device and computer readable storage medium
CN111310657A (en) Driver face monitoring method, device, terminal and computer readable storage medium
CN112200109A (en) Face attribute recognition method, electronic device, and computer-readable storage medium
CN111507999A (en) FDSST algorithm-based target tracking method and device
US20180060647A1 (en) Image processing apparatus, non-transitory computer readable medium, and image processing method
CN110062235B (en) Background frame generation and update method, system, device and medium
CN111127345A (en) Image processing method and device, electronic equipment and computer readable storage medium
JP7298171B2 (en) Image compression device and image compression method
CN117197249B (en) Target position determining method, device, electronic equipment and storage medium
KR20220079426A (en) Object tracking method using dynamic fov and apparatus thereof
US10713517B2 (en) Region of interest recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant